Compare commits


40 Commits

Author SHA1 Message Date
39901f2295 Fix lower precision check for MKLDNN on Windows (#122645)
Fixes #120788

Pull Request resolved: https://github.com/pytorch/pytorch/pull/121618
Approved by: https://github.com/xuhancn, https://github.com/jgong5, https://github.com/mingfeima, https://github.com/seemethere

(cherry picked from commit 03717430cc54609189cc7df593b2c96a99fb7f55)

Co-authored-by: CaoE <e.cao@intel.com>
2024-03-25 17:33:04 -04:00
9e6f42d369 Pin protobuf to 3.20.2 on macOS (#121918) (#122207)
The newer protobuf 5.26.0, released on March 13th, is causing failures in `test_hparams_*` from `test_tensorboard`, where the stringified metadata escapes double quotes incorrectly. For example, 3bc2bb6781.  This looks like an upstream issue in TensorBoard, which doesn't yet work with this brand-new protobuf version: https://github.com/tensorflow/tensorboard/blob/master/tensorboard/pip_package/requirements.txt#L29

The package has already been pinned in Docker (https://github.com/pytorch/pytorch/blob/main/.ci/docker/requirements-ci.txt#L155), so it should be pinned on macOS too.  Eventually we want to have just one requirements.txt file.
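A pin like this can be verified at runtime with the standard library; the sketch below is a hypothetical helper (not part of the PR), using only `importlib.metadata`:

```python
# Sketch (hypothetical helper, stdlib only): look up the installed version
# of a package, e.g. to confirm the protobuf==3.20.2 pin described above.
from importlib.metadata import version, PackageNotFoundError

def installed_version(package):
    """Return the installed version string, or None if not installed."""
    try:
        return version(package)
    except PackageNotFoundError:
        return None
```

For example, `installed_version("protobuf") == "3.20.2"` would confirm the pin took effect in a given environment.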

Fixes https://github.com/pytorch/pytorch/issues/122008
Fixes https://github.com/pytorch/pytorch/issues/121927
Fixes https://github.com/pytorch/pytorch/issues/121946
Pull Request resolved: https://github.com/pytorch/pytorch/pull/121918
Approved by: https://github.com/kit1980

(cherry picked from commit 5f601a41e0a8c91ecf7ca5e4b95d752166ed9093)

Co-authored-by: Huy Do <huydhn@gmail.com>
2024-03-19 11:41:52 -07:00
13a5142f56 Fix MSVC 14.38 - VS 2022 Build (#122120)
Fixes #115922

This PR was split out from the existing https://github.com/pytorch/pytorch/pull/116926 in order to apply the suggestions from that review.

`scalar_t`, which is defined as `c10::impl::ScalarTypeToCPPType<ScalarType::Half>::t`, appears to be causing the issue with `Visual Studio 2022 17.8.4` (which ships with `MSVC 14.38.33130`).

Error message:
```
aten\src\ATen/cpu/vec/vec_base.h(150): fatal error C1001: Internal compiler error.
(compiler file 'D:\a_work\1\s\src\vctools\Compiler\CxxFE\sl\p1\c\toinil.c', line 910)
```

---

A related line (the `scalar_t` definition) was previously added as a workaround for a similar issue: [Fix compile error for vs2022](https://github.com/pytorch/pytorch/pull/85958)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/117497
Approved by: https://github.com/ezyang, https://github.com/malfet

(cherry picked from commit fa86fa7a61e7cb85e1d193ed69d41757abe43310)

Co-authored-by: Ozan Aydin <148207261+ozanMSFT@users.noreply.github.com>
2024-03-18 16:47:46 -04:00
c1f8ec5a6f chore: add unit test to verify split_by_tags output_type (#121262) (#122122)
Add a test case as per https://github.com/pytorch/pytorch/pull/120361#issuecomment-1979163324

Pull Request resolved: https://github.com/pytorch/pytorch/pull/121262
Approved by: https://github.com/atalman

(cherry picked from commit 0a1b3be2163ea99633f95c4927bd816eb713e9bd)

Co-authored-by: Dheeraj Peri <peri.dheeraj@gmail.com>
2024-03-18 12:59:48 -07:00
abe172eeaf fix: set codegen in _SplitterBase partitioner (#120361) (#122121)
For graphs with a single output, torch.export / torch.compile graph modules are expected to return a single torch.Tensor rather than a tuple.
However, after running the `_SplitterBase` partitioner on such a graph module (obtained from torch.export / torch.compile), the resulting graph module returns a tuple of tensors, in this case `(output,)`.

This PR sets codegen on the graphs produced by the `_SplitterBase` partitioner. Setting this ensures pytree unflatten nodes are added automatically, so single outputs are returned directly instead of being wrapped in a tuple.
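The output-contract change can be illustrated with plain-Python stand-ins (no torch; both function names are made up):

```python
# Plain-Python stand-ins for the output contract described above.

def split_without_codegen(x):
    # Without codegen set, the split graph module wraps even a single
    # output in a tuple: (output,)
    return (x * 2,)

def split_with_codegen(x):
    # With pytree codegen set, unflatten restores the original
    # single-output contract, returning the value directly.
    return x * 2

assert split_without_codegen(3) == (6,)
assert split_with_codegen(3) == 6
```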

Pull Request resolved: https://github.com/pytorch/pytorch/pull/120361
Approved by: https://github.com/angelayi

(cherry picked from commit 15add24bf28477843a7e13d9deaa4beb39473900)

Co-authored-by: Dheeraj Peri <peri.dheeraj@gmail.com>
2024-03-18 12:59:39 -07:00
49022c752e Fix missing permission in create release workflow (#118681) (#120518)
Fixes https://github.com/pytorch/pytorch/actions/runs/7715417683/job/21029944543
Pull Request resolved: https://github.com/pytorch/pytorch/pull/118681
Approved by: https://github.com/clee2000, https://github.com/seemethere, https://github.com/atalman, https://github.com/malfet

(cherry picked from commit 48f876143af4920cba34735429fa1f8ba75d42ca)

Co-authored-by: Huy Do <huydhn@gmail.com>
2024-03-15 18:14:06 -07:00
5ba8a77a69 [Release only] Disable triton build workflows (#121934) 2024-03-14 18:30:15 -04:00
da3f59012f [CPP] Update GCC minversion check to 9 or newer (#120126) (#121419)
It's already a requirement for building PyTorch, but it should also be a
requirement for linking extensions against it, as mixing compilers can lead
to runtime crashes: the `std::optional` template layout is incompatible
between gcc-9 and older compilers.

Also, update the minimum supported clang version to 9.x (used to build Android), as clang-5 is clearly not C++17 compliant.
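The kind of version gate this adds can be sketched as follows (hypothetical helper, not PyTorch's actual check):

```python
# Minimal sketch of a "reject GCC older than 9" gate, parsing the last
# token of a `gcc --version` first line. Helper name and parsing are
# illustrative, not PyTorch's real implementation.
MIN_GCC = (9, 0)

def gcc_version_ok(version_string):
    # version_string like "gcc (Ubuntu 9.4.0) 9.4.0" -> take last token
    parts = version_string.strip().split()[-1].split(".")
    major_minor = tuple(int(p) for p in parts[:2])
    return major_minor >= MIN_GCC

assert gcc_version_ok("gcc (Ubuntu 9.4.0) 9.4.0")
assert not gcc_version_ok("gcc (GCC) 8.5.0")
```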

Fixes https://github.com/pytorch/pytorch/issues/120020

Pull Request resolved: https://github.com/pytorch/pytorch/pull/120126
Approved by: https://github.com/Skylion007

(cherry picked from commit 3ad067fe2b969d17773e9ada918c67da829bb5cc)

Co-authored-by: Nikita Shulga <2453524+malfet@users.noreply.github.com>
2024-03-13 16:23:04 -07:00
d37ef499da Windows Dynamo Error Removal CI Check (#121026)
Link to landed trunk PR (if applicable):
* https://github.com/pytorch/pytorch/pull/115969

Criteria Category:
* Low risk critical fixes for backwards compatibility

Approved-by: PaliC, thiagocrepaldi
2024-03-12 12:43:53 -04:00
3184b6f719 [FSDP][StateDict] Allow FULL_STATE_DICT option for 2D (#120837) (#121250)
Fixes #120722

TL;DR for the issue:
As users are expected to use get_model_state_dict for state_dict retrieval, I think it's fine to remove the warning and RuntimeError.
More context in #120722.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/120837
Approved by: https://github.com/Skylion007

Co-authored-by: wz337 <wz337@cornell.edu>
2024-03-08 08:14:19 -05:00
56a20680f0 Fix make triton command on release branch (#121169) (#121229)
Fixes #120044

Should fix the build-from-source instructions on the release branch: https://github.com/pytorch/pytorch#from-source

Note that we are using the /test/ channel here to make sure it works before the actual release is completed.

Test main:
```
make triton
pip3 uninstall -y triton
WARNING: Skipping triton as it is not installed.
Looking in indexes: https://download.pytorch.org/whl/nightly/
Collecting pytorch-triton==3.0.0+a9bc1a3647
  Downloading https://download.pytorch.org/whl/nightly/pytorch_triton-3.0.0%2Ba9bc1a3647-cp310-cp310-linux_x86_64.whl (239.0 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 239.0/239.0 MB 8.7 MB/s eta 0:00:00
Requirement already satisfied: filelock in /home/atalman/miniconda3/envs/py310/lib/python3.10/site-packages (from pytorch-triton==3.0.0+a9bc1a3647) (3.13.1)
Installing collected packages: pytorch-triton
  Attempting uninstall: pytorch-triton
    Found existing installation: pytorch-triton 2.2.0
    Uninstalling pytorch-triton-2.2.0:
      Successfully uninstalled pytorch-triton-2.2.0
Successfully installed pytorch-triton-3.0.0+a9bc1a3647
```

Test release/2.2:
```
make triton
pip3 uninstall -y triton
WARNING: Skipping triton as it is not installed.
Looking in indexes: https://download.pytorch.org/whl/test/
Collecting pytorch-triton==2.2.0
  Using cached https://download.pytorch.org/whl/test/pytorch_triton-2.2.0-cp310-cp310-linux_x86_64.whl (183.1 MB)
Requirement already satisfied: filelock in /home/atalman/miniconda3/envs/py310/lib/python3.10/site-packages (from pytorch-triton==2.2.0) (3.13.1)
Installing collected packages: pytorch-triton
  Attempting uninstall: pytorch-triton
    Found existing installation: pytorch-triton 3.0.0+a9bc1a3647
    Uninstalling pytorch-triton-3.0.0+a9bc1a3647:
      Successfully uninstalled pytorch-triton-3.0.0+a9bc1a3647
Successfully installed pytorch-triton-2.2.0
```
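The branch-dependent index selection shown in the two logs above can be sketched as (URLs taken from the logs; the helper name is made up):

```python
# Sketch of the branch -> pip index mapping demonstrated in the logs above
# (helper name is made up; this is not the Makefile's actual code).
def triton_index_url(branch):
    if branch.startswith("release/"):
        # Release branches install the pinned triton from the test channel.
        return "https://download.pytorch.org/whl/test/"
    # main installs the nightly pytorch-triton build.
    return "https://download.pytorch.org/whl/nightly/"

assert triton_index_url("main").endswith("/nightly/")
assert triton_index_url("release/2.2").endswith("/test/")
```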
Pull Request resolved: https://github.com/pytorch/pytorch/pull/121169
Approved by: https://github.com/seemethere
2024-03-07 12:49:44 -05:00
f938615548 Don't use size on TensorVariable when doing out resize test (#121232)
Fixes https://github.com/pytorch/pytorch/issues/120482
Fixes https://github.com/pytorch/pytorch/issues/120511

Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/120567
Approved by: https://github.com/Skylion007

(cherry picked from commit 0f20cc1e0e474caec9183548e07cbaa5388bcdb3)

Co-authored-by: Edward Z. Yang <ezyang@meta.com>
2024-03-07 11:24:58 -05:00
6c8c5ad5ea [RelEng] Define BUILD_BUNDLE_PTXAS (#119750) (#119988)
Co-authored-by: Nikita Shulga <nshulga@meta.com>
Fixes https://github.com/pytorch/pytorch/issues/119054
resolved: https://github.com/pytorch/pytorch/pull/119750
2024-02-15 13:19:00 -05:00
f00f0ab0e4 fix compile DTensor.from_local in trace_rule_look up (#119659) (#119941)
resolved: https://github.com/pytorch/pytorch/pull/119659
2024-02-15 12:46:55 -05:00
077791bb6b Revert "Update state_dict.py to propagate cpu offload (#117453)" (#119995) 2024-02-15 12:45:22 -05:00
3eaaeeb45a Update state_dict.py to propagate cpu offload (#117453) (#119916)
resolved: https://github.com/pytorch/pytorch/pull/117453
2024-02-15 10:14:52 -05:00
0aa3fd32fe HSDP + TP integration bug fixes (#119819)
Co-authored-by: Andrew Gu <andgu@fb.com>
resolved: https://github.com/pytorch/pytorch/pull/112435
resolved: https://github.com/pytorch/pytorch/pull/118620
Fixed `device_mesh` and auto wrap (#119064)
Fixes https://github.com/pytorch/pytorch/issues/118906.
resolved: https://github.com/pytorch/pytorch/pull/119064
resolved: https://github.com/pytorch/pytorch/pull/118638
Fixes https://github.com/pytorch/pytorch/issues/118639.
resolved: https://github.com/pytorch/pytorch/pull/119481
2024-02-14 15:46:31 -05:00
eef51a6bee [Inductor] Skip triton templates for mixedmm on SM70- (#118591) (#119894)
As it results in numerical errors, see https://github.com/pytorch/pytorch/issues/117144

Fixes https://github.com/pytorch/pytorch/issues/117144

Pull Request resolved: https://github.com/pytorch/pytorch/pull/118591
Approved by: https://github.com/jansel

Co-authored-by: Nikita Shulga <nshulga@meta.com>
2024-02-14 12:23:24 -08:00
940358f12f [dtensor] fix dtensor _to_copy op for mix precision (#116426) (#119687)
Co-authored-by: Wanchao Liang <wanchaol@users.noreply.github.com>
fix dtensor _to_copy op for mix precision (#116426)
resolved: https://github.com/pytorch/pytorch/pull/116426
2024-02-14 14:01:54 -05:00
24e4751650 [state_dict] Calls wait() for the DTensor to_local() result (#118197) (#119692)
Co-authored-by: Chien-Chin Huang <chienchin@fb.com>
Co-authored-by: Yue Dong <yoyoyod@meta.com>
resolved: https://github.com/pytorch/pytorch/pull/118197
Follow-up fix to address numerical correctness concerns identified in PR #118197; we should only wait on `AsyncCollectiveTensor`.
resolved: https://github.com/pytorch/pytorch/pull/119716
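The "only wait on `AsyncCollectiveTensor`" rule can be illustrated with plain-Python stand-ins (the class and helper below are illustrative, not the real torch types):

```python
# Plain-Python stand-in for the rule above: only async collective results
# carry a pending communication that must complete; plain values pass
# through untouched.
class AsyncCollectiveTensor:
    def __init__(self):
        self.waited = False

    def wait(self):
        self.waited = True
        return self

def maybe_wait(value):
    if isinstance(value, AsyncCollectiveTensor):
        return value.wait()
    return value
```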
2024-02-14 13:59:06 -05:00
dcaeed36eb [DCP][state_dict] Fix the issue that get_state_dict/set_state_dict ig… (#119807)
Fixes https://github.com/pytorch/pytorch/issues/119535.
resolved: https://github.com/pytorch/pytorch/pull/119573
2024-02-14 12:14:01 -05:00
4f882a5f32 Properly preserve SymInt input invariant when splitting graphs (#117406) (#118067)
Co-authored-by: Edward Z. Yang <ezyang@meta.com>
Fixes https://github.com/pytorch/pytorch/issues/111636
Fixes https://github.com/pytorch/pytorch/issues/108877
Fixes https://github.com/pytorch/pytorch/issues/116956
resolved: https://github.com/pytorch/pytorch/pull/117406
2024-02-14 11:28:54 -05:00
e80c8c2e98 Correctly formatting the example in get_state_dict (#119532) (#119804)
Co-authored-by: jmarin <diyemti@gmail.com>
Fixes #118837
resolved: https://github.com/pytorch/pytorch/pull/119532
2024-02-14 10:15:46 -05:00
445b0f9b63 [DCP][state_dict] DCP state_dict cannot correctly find FQN when the l… (#119691)
Co-authored-by: Chien-Chin Huang <chienchin@fb.com>
resolved: https://github.com/pytorch/pytorch/pull/115592
2024-02-14 10:07:35 -05:00
95ea4e6648 [FSDP][2D] Fix DTensor Extension Bugs (#119690)
Co-authored-by: Wanchao Liang <wanchaol@users.noreply.github.com>
resolved: https://github.com/pytorch/pytorch/pull/116122
resolved: https://github.com/pytorch/pytorch/pull/117020
fixes https://github.com/pytorch/pytorch/issues/117126
resolved: https://github.com/pytorch/pytorch/pull/117336
2024-02-14 10:04:56 -05:00
bbfcfb0302 [FSDP] enable autograd in forward prefetching (#116792) (#119688)
Co-authored-by: Wei (Will) Feng <134637289+weifengpy@users.noreply.github.com>
resolved: https://github.com/pytorch/pytorch/pull/116792
2024-02-14 10:03:11 -05:00
2304d6bfb1 Fix ColwiseParallel typo (#116151) (#119821)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/116151
Approved by: https://github.com/wanchaol

Co-authored-by: Carlos Mocholí <carlossmocholi@gmail.com>
2024-02-13 16:34:45 -08:00
7b436b0d05 Update oneDNN build option for older systems (#118057) (#119773)
Co-authored-by: yanbing-j <yanbing.jiang@intel.com>
Fixes [#116623](https://github.com/pytorch/pytorch/issues/116623).
resolved: https://github.com/pytorch/pytorch/pull/118057
2024-02-13 15:07:55 -05:00
4ae866593d [EZ] Set maximum supported version of Python as 3.12 (#119743) (#119770)
Co-authored-by: Nikita Shulga <2453524+malfet@users.noreply.github.com>
resolved: https://github.com/pytorch/pytorch/pull/119743
2024-02-13 15:06:38 -05:00
bac09b8555 Fix TCP Store Windows (#118860) (#119769)
Co-authored-by: mantaionut <ionut@janeasystems.com>
Fixes #118737
resolved: https://github.com/pytorch/pytorch/pull/118860
2024-02-13 15:05:56 -05:00
b9814bc525 Updated docs for deprecated torch.set_default_tensor_type (#115041) (#119316)
Fixes #113646.
resolved: https://github.com/pytorch/pytorch/pull/115041
2024-02-12 11:57:30 -05:00
6a3a3df103 Clarified sampling process of torch.randn for complex dtypes. (#118315) (#119315)
Fixes #118269.
resolved: https://github.com/pytorch/pytorch/pull/118315
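The sampling being clarified is that, for complex dtypes, torch.randn draws real and imaginary parts independently from N(0, 1/2), so the complex variance E[|z|^2] is 1. A stdlib illustration (no torch):

```python
# Illustration of the documented sampling: real and imaginary parts each
# from N(0, 1/2), giving unit complex variance. Stdlib only, no torch.
import math
import random

def randn_complex(n, seed=0):
    rng = random.Random(seed)
    scale = math.sqrt(0.5)  # std dev of each component is sqrt(1/2)
    return [complex(rng.gauss(0, scale), rng.gauss(0, scale)) for _ in range(n)]

samples = randn_complex(20000)
variance = sum(abs(z) ** 2 for z in samples) / len(samples)
assert 0.9 < variance < 1.1  # unit complex variance, up to sampling noise
```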
2024-02-12 11:55:06 -05:00
b126b0d724 Missing docs for CircularPad2d (#119313)
Fixes #118429
resolved: https://github.com/pytorch/pytorch/pull/118465
2024-02-12 11:54:31 -05:00
d65d0e598e Replaced CHECK with TORCH_CHECK in order to not abort, but throw a RuntimeError instead (#119301)

Fixes #117499.
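The change is in C++, but the behavioral difference has a rough Python analogue: an aborting check kills the process, while `TORCH_CHECK` raises a catchable `RuntimeError`:

```python
# Rough Python analogue of the change: the check raises a catchable
# RuntimeError instead of aborting the whole process.
def torch_check(condition, message="check failed"):
    if not condition:
        raise RuntimeError(message)

caught = None
try:
    torch_check(False, "tensor shape mismatch")
except RuntimeError as e:
    caught = str(e)

assert caught == "tensor shape mismatch"
```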

Cherry-pick of https://github.com/pytorch/pytorch/pull/117653 into release/2.2
Approved by: https://github.com/antoniojkim, https://github.com/JackCaoG, https://github.com/alanwaketan

Co-authored-by: Tobias Ringwald <github@ringwald.email>
2024-02-12 07:32:37 -08:00
a412db0995 [CI] Explicitly specify read-all permissions on the token (#117290) (#119568)
Co-authored-by: Nikita Shulga <nshulga@meta.com>
Co-authored-by: Nikita Shulga <2453524+malfet@users.noreply.github.com>
resolved: https://github.com/pytorch/pytorch/pull/117290
resolved: https://github.com/pytorch/pytorch/pull/117371
2024-02-09 14:30:18 -05:00
e9956badeb Migrate rocm test to using oidc (#117160) (#119565)
Co-authored-by: Huy Do <huydhn@gmail.com>
resolved: https://github.com/pytorch/pytorch/pull/117160
resolved: https://github.com/pytorch/pytorch/pull/117422
2024-02-09 14:29:13 -05:00
574f46da53 [oidc] Migrate Triton wheel upload to oidc (#117648) (#119564)
Co-authored-by: Nikita Shulga <2453524+malfet@users.noreply.github.com>
resolved: https://github.com/pytorch/pytorch/pull/117648
Fix triton wheels build (take 2) (#117706)
resolved: https://github.com/pytorch/pytorch/pull/117706
2024-02-09 14:28:32 -05:00
55d10abc0f Switch nightly binaries to oidc. Remove aws keys (#117416) (#119560)
resolved: https://github.com/pytorch/pytorch/pull/117416
2024-02-09 14:27:54 -05:00
0cd0631716 Fix typo on torch.frombuffer() documentation (#119388) 2024-02-09 13:13:09 -05:00
44ab785f75 Fix typo on Contribution Guide (#119428) (#119505)
Fixes #119427
resolved: https://github.com/pytorch/pytorch/pull/119428
2024-02-09 13:11:35 -05:00
113 changed files with 1295 additions and 853 deletions


@@ -28,3 +28,6 @@ rockset==1.0.3
z3-solver==4.12.2.0
tensorboard==2.13.0
optree==0.9.1
# NB: test_hparams_* from test_tensorboard is failing with protobuf 5.26.0 in
# which the stringify metadata is wrong when escaping double quote
protobuf==3.20.2


@@ -7,6 +7,7 @@
name: !{{ build_environment }}
{%- endblock %}
on:
push:
{%- if branches == "nightly" %}


@@ -53,6 +53,9 @@
{%- macro upload_binaries(config, is_windows=False, has_test=True, use_s3=True) -%}
!{{ config["build_name"] }}-upload: # Uploading
if: ${{ github.repository_owner == 'pytorch' }}
permissions:
id-token: write
contents: read
{%- if has_test %}
needs: !{{ config["build_name"] }}-test
{%- else %}
@@ -65,8 +68,6 @@
{%- endif %}
secrets:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token-test: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml


@@ -59,18 +59,13 @@ on:
github-token:
required: true
description: Github Token
aws-pytorch-uploader-access-key-id:
required: true
description: AWS access key id
aws-pytorch-uploader-secret-access-key:
required: true
description: AWS secret access key
conda-pytorchbot-token:
required: true
description: Conda PyTorchBot token
conda-pytorchbot-token-test:
required: true
description: Conda PyTorchBot token
jobs:
upload:
runs-on: ubuntu-22.04
@@ -104,6 +99,20 @@ jobs:
with:
no-sudo: true
- name: Configure AWS credentials(PyTorch account) for nightly
if: ${{ github.event_name == 'push' && github.event.ref == 'refs/heads/nightly' }}
uses: aws-actions/configure-aws-credentials@v3
with:
role-to-assume: arn:aws:iam::749337293305:role/gha_workflow_nightly_build_wheels
aws-region: us-east-1
- name: Configure AWS credentials(PyTorch account) for RC builds
if: ${{ github.event_name == 'push' && (startsWith(github.event.ref, 'refs/tags/') && !startsWith(github.event.ref, 'refs/tags/ciflow/')) }}
uses: aws-actions/configure-aws-credentials@v3
with:
role-to-assume: arn:aws:iam::749337293305:role/gha_workflow_test_build_wheels
aws-region: us-east-1
- name: Download Build Artifacts
id: download-artifacts
# NB: When the previous build job is skipped, there won't be any artifacts and
@@ -135,8 +144,6 @@ jobs:
PKG_DIR: "${{ runner.temp }}/artifacts"
UPLOAD_SUBFOLDER: "${{ env.DESIRED_CUDA }}"
# When running these on pull_request events these should be blank
AWS_ACCESS_KEY_ID: ${{ secrets.aws-pytorch-uploader-access-key-id }}
AWS_SECRET_ACCESS_KEY: ${{ secrets.aws-pytorch-uploader-secret-access-key }}
CONDA_PYTORCHBOT_TOKEN: ${{ secrets.conda-pytorchbot-token }}
CONDA_PYTORCHBOT_TOKEN_TEST: ${{ secrets.conda-pytorchbot-token-test }}
BUILD_NAME: ${{ inputs.build_name }}


@@ -42,6 +42,10 @@ on:
env:
GIT_DEFAULT_BRANCH: ${{ github.event.repository.default_branch }}
permissions:
id-token: write
contents: read
jobs:
test:
# Don't run on forked repos or empty test matrix
@@ -61,6 +65,17 @@ jobs:
- name: Setup ROCm
uses: ./.github/actions/setup-rocm
- name: configure aws credentials
id: aws_creds
uses: aws-actions/configure-aws-credentials@v4
with:
role-to-assume: arn:aws:iam::308535385114:role/gha_workflow_s3_and_ecr_read_only
aws-region: us-east-1
- name: Login to Amazon ECR
id: login-ecr
uses: aws-actions/amazon-ecr-login@v2
- name: Calculate docker image
id: calculate-docker-image
uses: pytorch/test-infra/.github/actions/calculate-docker-image@release/2.2


@@ -1,293 +0,0 @@
name: Build Triton wheels
on:
push:
branches:
- release/2.2
tags:
# NOTE: Binary build pipelines should only get triggered on release candidate builds
# Release candidate tags look like: v1.11.0-rc1
- v[0-9]+.[0-9]+.[0-9]+-rc[0-9]+
paths:
- .github/workflows/build-triton-wheel.yml
- .github/scripts/build_triton_wheel.py
- .github/ci_commit_pins/triton.txt
- .ci/docker/ci_commit_pins/triton.txt
- .ci/docker/ci_commit_pins/triton-rocm.txt
pull_request:
paths:
- .github/workflows/build-triton-wheel.yml
- .github/scripts/build_triton_wheel.py
- .github/ci_commit_pins/triton.txt
- .ci/docker/ci_commit_pins/triton.txt
- .ci/docker/ci_commit_pins/triton-rocm.txt
concurrency:
group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.sha }}-${{ github.event_name == 'workflow_dispatch' }}
cancel-in-progress: true
jobs:
build-wheel:
name: "Build Triton Wheel"
runs-on: [self-hosted, linux.2xlarge]
strategy:
fail-fast: false
matrix:
py_vers: [ "3.8", "3.9", "3.10", "3.11", "3.12" ]
device: ["cuda", "rocm"]
include:
- device: "rocm"
rocm_version: "5.7"
- device: "cuda"
rocm_version: ""
timeout-minutes: 40
env:
DOCKER_IMAGE: ${{ matrix.device == 'rocm' && format('pytorch/manylinux-rocm:{0}', matrix.rocm_version) || 'pytorch/manylinux-builder:cpu' }}
PY_VERS: ${{ matrix.py_vers }}
BUILD_DEVICE: ${{ matrix.device }}
steps:
- name: Setup SSH (Click me for login details)
uses: pytorch/test-infra/.github/actions/setup-ssh@release/2.2
with:
github-secret: ${{ secrets.GITHUB_TOKEN }}
- name: Checkout PyTorch
uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.2
with:
submodules: false
- name: Setup Linux
uses: ./.github/actions/setup-linux
- name: Pull Docker image
uses: pytorch/test-infra/.github/actions/pull-docker-image@release/2.2
with:
docker-image: ${{ env.DOCKER_IMAGE }}
- name: Build Triton wheel
env:
IS_RELEASE_TAG: ${{ startsWith(github.event.ref, 'refs/tags/v') }}
run: |
set -x
mkdir -p "${RUNNER_TEMP}/artifacts/"
container_name=$(docker run \
--tty \
--detach \
-v "${GITHUB_WORKSPACE}:/pytorch" \
-v "${RUNNER_TEMP}/artifacts:/artifacts" \
-w /artifacts/ \
"${DOCKER_IMAGE}" \
)
# Determine python executable for given version
case $PY_VERS in
3.8)
PYTHON_EXECUTABLE=/opt/python/cp38-cp38/bin/python
;;
3.9)
PYTHON_EXECUTABLE=/opt/python/cp39-cp39/bin/python
;;
3.10)
PYTHON_EXECUTABLE=/opt/python/cp310-cp310/bin/python
;;
3.11)
PYTHON_EXECUTABLE=/opt/python/cp311-cp311/bin/python
;;
3.12)
PYTHON_EXECUTABLE=/opt/python/cp312-cp312/bin/python
;;
*)
echo "Unsupported python version ${PY_VERS}"
exit 1
;;
esac
BUILD_ROCM=""
if [[ "$BUILD_DEVICE" == "rocm" ]]; then
BUILD_ROCM="--build-rocm"
fi
RELEASE=""
if [[ "${IS_RELEASE_TAG}" == true ]]; then
RELEASE="--release"
fi
docker exec -t "${container_name}" yum install -y zlib-devel zip
docker exec -t "${container_name}" "${PYTHON_EXECUTABLE}" -m pip install -U setuptools==67.4.0
docker exec -t "${container_name}" "${PYTHON_EXECUTABLE}" /pytorch/.github/scripts/build_triton_wheel.py $BUILD_ROCM $RELEASE
docker exec -t "${container_name}" chown -R 1000.1000 /artifacts
- uses: actions/upload-artifact@v3
with:
# NB: Use the same name here and all wheels can be downloaded by referring to the same artifact
name: pytorch-triton-wheel
if-no-files-found: error
path: ${{ runner.temp }}/artifacts/*
- name: Teardown Linux
uses: pytorch/test-infra/.github/actions/teardown-linux@release/2.2
if: always()
upload-wheel:
runs-on: ubuntu-22.04
needs: build-wheel
container:
image: continuumio/miniconda3:4.12.0
environment: ${{ (github.event_name == 'push' && (github.event.ref == 'refs/heads/main' || startsWith(github.event.ref, 'refs/tags/v'))) && 'conda-aws-upload' || '' }}
steps:
- uses: actions/checkout@v3
- name: Download Build Artifacts
uses: actions/download-artifact@v3
with:
name: pytorch-triton-wheel
path: ${{ runner.temp }}/artifacts/
- name: Set DRY_RUN (only for tagged pushes)
if: ${{ github.event_name == 'push' && (github.event.ref == 'refs/heads/main' || startsWith(github.event.ref, 'refs/tags/v')) }}
shell: bash
run: |
echo "DRY_RUN=disabled" >> "$GITHUB_ENV"
- name: Set UPLOAD_CHANNEL (only for tagged pushes)
if: ${{ github.event_name == 'push' && startsWith(github.event.ref, 'refs/tags/v') }}
shell: bash
run: |
set -ex
# reference ends with an RC suffix
if [[ "${GITHUB_REF_NAME}" = *-rc[0-9]* ]]; then
echo "UPLOAD_CHANNEL=test" >> "$GITHUB_ENV"
fi
# NB: This step is gated by DRY_RUN, which is enabled everywhere except main and release branches
- name: Upload binaries
env:
PACKAGE_TYPE: wheel
# The UPLOAD_SUBFOLDER needs to be empty here so that triton wheels are uploaded
# to nightly or test
UPLOAD_SUBFOLDER: ""
PKG_DIR: ${{ runner.temp }}/artifacts
# When running these on pull_request events these should be blank
AWS_ACCESS_KEY_ID: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
shell: bash
run: |
set -ex
bash .circleci/scripts/binary_upload.sh
build-conda:
name: "Build Triton Conda"
runs-on: [self-hosted, linux.2xlarge]
strategy:
fail-fast: false
matrix:
py_vers: [ "3.8", "3.9", "3.10", "3.11" ]
timeout-minutes: 40
env:
DOCKER_IMAGE: pytorch/conda-builder:cpu
PY_VERS: ${{ matrix.py_vers }}
steps:
- name: Setup SSH (Click me for login details)
uses: pytorch/test-infra/.github/actions/setup-ssh@release/2.2
with:
github-secret: ${{ secrets.GITHUB_TOKEN }}
- name: Checkout PyTorch
uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.2
with:
submodules: false
- name: Setup Linux
uses: ./.github/actions/setup-linux
- name: Pull Docker image
uses: pytorch/test-infra/.github/actions/pull-docker-image@release/2.2
with:
docker-image: ${{ env.DOCKER_IMAGE }}
- name: Build Triton conda package
env:
IS_RELEASE_TAG: ${{ startsWith(github.event.ref, 'refs/tags/v') }}
run: |
set -x
mkdir -p "${RUNNER_TEMP}/artifacts/"
container_name=$(docker run \
--tty \
--detach \
-v "${GITHUB_WORKSPACE}:/pytorch" \
-v "${RUNNER_TEMP}/artifacts:/artifacts" \
-w /artifacts/ \
"${DOCKER_IMAGE}" \
)
RELEASE=""
if [[ "${IS_RELEASE_TAG}" == true ]]; then
RELEASE="--release"
fi
docker exec -t "${container_name}" yum install -y llvm11 llvm11-devel llvm11-static llvm11-libs zlib-devel
docker exec -t "${container_name}" python /pytorch/.github/scripts/build_triton_wheel.py --build-conda --py-version="${PY_VERS}" $RELEASE
docker exec -t "${container_name}" chown -R 1000.1000 /artifacts
- uses: actions/upload-artifact@v3
with:
# NB: Use the same name here and all wheels can be downloaded by referring to the same artifact
name: pytorch-triton-conda
if-no-files-found: error
path: ${{ runner.temp }}/artifacts/*
- name: Teardown Linux
uses: pytorch/test-infra/.github/actions/teardown-linux@release/2.2
if: always()
upload-conda:
runs-on: ubuntu-22.04
needs: build-conda
container:
image: continuumio/miniconda3:4.12.0
environment: ${{ (github.event_name == 'push' && (github.event.ref == 'refs/heads/main' || startsWith(github.event.ref, 'refs/tags/v'))) && 'conda-aws-upload' || '' }}
steps:
- uses: actions/checkout@v3
- name: Download Build Artifacts
uses: actions/download-artifact@v3
with:
name: pytorch-triton-conda
path: ${{ runner.temp }}/artifacts/
- name: Set DRY_RUN (only for tagged pushes)
if: ${{ github.event_name == 'push' && (github.event.ref == 'refs/heads/main' || startsWith(github.event.ref, 'refs/tags/v')) }}
shell: bash
run: |
echo "DRY_RUN=disabled" >> "$GITHUB_ENV"
- name: Set UPLOAD_CHANNEL (only for tagged pushes)
if: ${{ github.event_name == 'push' && startsWith(github.event.ref, 'refs/tags/v') }}
shell: bash
run: |
set -ex
# reference ends with an RC suffix
if [[ "${GITHUB_REF_NAME}" = *-rc[0-9]* ]]; then
echo "UPLOAD_CHANNEL=test" >> "$GITHUB_ENV"
fi
# NB: This step is gated by DRY_RUN, which is enabled everywhere except nightly and release branches
- name: Upload binaries to Anaconda
env:
PACKAGE_TYPE: conda
PKG_DIR: ${{ runner.temp }}/artifacts
# When running these on pull_request events these should be blank
CONDA_PYTORCHBOT_TOKEN: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
CONDA_PYTORCHBOT_TOKEN_TEST: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
shell: bash
run: |
set -ex
if [[ "${UPLOAD_CHANNEL:-nightly}" == "nightly" ]]; then
export ANACONDA_API_TOKEN="${CONDA_PYTORCHBOT_TOKEN}"
else
export ANACONDA_API_TOKEN="${CONDA_PYTORCHBOT_TOKEN_TEST}"
fi
bash .circleci/scripts/binary_upload.sh


@@ -15,6 +15,9 @@ jobs:
if: ${{ github.repository == 'pytorch/pytorch' }}
name: Create Release
runs-on: ubuntu-latest
# https://github.com/softprops/action-gh-release?tab=readme-ov-file#permissions
permissions:
contents: write
steps:
- uses: malfet/checkout@silent-checkout
with:


@@ -27,6 +27,8 @@ env:
ALPINE_IMAGE: 308535385114.dkr.ecr.us-east-1.amazonaws.com/tool/alpine
AWS_DEFAULT_REGION: us-east-1
permissions: read-all
jobs:
docker-build:
runs-on: [self-hosted, linux.2xlarge]


@@ -28,6 +28,8 @@ env:
USE_BUILDX: 1
WITH_PUSH: ${{ github.event_name == 'push' && (github.event.ref == 'refs/heads/nightly' || (startsWith(github.event.ref, 'refs/tags/') && !startsWith(github.event.ref, 'refs/tags/ciflow/'))) }}
permissions: read-all
jobs:
generate-matrix:
if: github.repository_owner == 'pytorch'


@@ -4,6 +4,7 @@
# Generation script: .github/scripts/generate_ci_workflows.py
name: linux-aarch64-binary-manywheel
on:
push:
# NOTE: Meta Employees can trigger new nightlies using: https://fburl.com/trigger_pytorch_nightly_build
@@ -78,6 +79,9 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
manywheel-py3_8-cpu-aarch64-upload: # Uploading
if: ${{ github.repository_owner == 'pytorch' }}
permissions:
id-token: write
contents: read
needs: manywheel-py3_8-cpu-aarch64-test
with:
PYTORCH_ROOT: /pytorch
@@ -92,8 +96,6 @@ jobs:
build_name: manywheel-py3_8-cpu-aarch64
secrets:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token-test: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
@@ -140,6 +142,9 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
manywheel-py3_9-cpu-aarch64-upload: # Uploading
if: ${{ github.repository_owner == 'pytorch' }}
permissions:
id-token: write
contents: read
needs: manywheel-py3_9-cpu-aarch64-test
with:
PYTORCH_ROOT: /pytorch
@@ -154,8 +159,6 @@ jobs:
build_name: manywheel-py3_9-cpu-aarch64
secrets:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token-test: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
@@ -202,6 +205,9 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
manywheel-py3_10-cpu-aarch64-upload: # Uploading
if: ${{ github.repository_owner == 'pytorch' }}
permissions:
id-token: write
contents: read
needs: manywheel-py3_10-cpu-aarch64-test
with:
PYTORCH_ROOT: /pytorch
@@ -216,8 +222,6 @@ jobs:
build_name: manywheel-py3_10-cpu-aarch64
secrets:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token-test: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
@@ -264,6 +268,9 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
manywheel-py3_11-cpu-aarch64-upload: # Uploading
if: ${{ github.repository_owner == 'pytorch' }}
permissions:
id-token: write
contents: read
needs: manywheel-py3_11-cpu-aarch64-test
with:
PYTORCH_ROOT: /pytorch
@@ -278,8 +285,6 @@ jobs:
build_name: manywheel-py3_11-cpu-aarch64
secrets:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token-test: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
@@ -326,6 +331,9 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
manywheel-py3_12-cpu-aarch64-upload: # Uploading
if: ${{ github.repository_owner == 'pytorch' }}
permissions:
id-token: write
contents: read
needs: manywheel-py3_12-cpu-aarch64-test
with:
PYTORCH_ROOT: /pytorch
@@ -340,8 +348,6 @@ jobs:
build_name: manywheel-py3_12-cpu-aarch64
secrets:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token-test: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml


@@ -4,6 +4,7 @@
# Generation script: .github/scripts/generate_ci_workflows.py
name: linux-binary-conda
on:
push:
# NOTE: Meta Employees can trigger new nightlies using: https://fburl.com/trigger_pytorch_nightly_build
@@ -74,6 +75,9 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
conda-py3_8-cpu-upload: # Uploading
if: ${{ github.repository_owner == 'pytorch' }}
permissions:
id-token: write
contents: read
needs: conda-py3_8-cpu-test
with:
PYTORCH_ROOT: /pytorch
@@ -88,8 +92,6 @@ jobs:
build_name: conda-py3_8-cpu
secrets:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token-test: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
@@ -135,6 +137,9 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
conda-py3_8-cuda11_8-upload: # Uploading
if: ${{ github.repository_owner == 'pytorch' }}
permissions:
id-token: write
contents: read
needs: conda-py3_8-cuda11_8-test
with:
PYTORCH_ROOT: /pytorch
@@ -150,8 +155,6 @@ jobs:
build_name: conda-py3_8-cuda11_8
secrets:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token-test: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
@@ -197,6 +200,9 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
conda-py3_8-cuda12_1-upload: # Uploading
if: ${{ github.repository_owner == 'pytorch' }}
permissions:
id-token: write
contents: read
needs: conda-py3_8-cuda12_1-test
with:
PYTORCH_ROOT: /pytorch
@@ -212,8 +218,6 @@ jobs:
build_name: conda-py3_8-cuda12_1
secrets:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token-test: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
@@ -256,6 +260,9 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
conda-py3_9-cpu-upload: # Uploading
if: ${{ github.repository_owner == 'pytorch' }}
permissions:
id-token: write
contents: read
needs: conda-py3_9-cpu-test
with:
PYTORCH_ROOT: /pytorch
@@ -270,8 +277,6 @@ jobs:
build_name: conda-py3_9-cpu
secrets:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token-test: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
@@ -317,6 +322,9 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
conda-py3_9-cuda11_8-upload: # Uploading
if: ${{ github.repository_owner == 'pytorch' }}
permissions:
id-token: write
contents: read
needs: conda-py3_9-cuda11_8-test
with:
PYTORCH_ROOT: /pytorch
@@ -332,8 +340,6 @@ jobs:
build_name: conda-py3_9-cuda11_8
secrets:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token-test: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
@@ -379,6 +385,9 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
conda-py3_9-cuda12_1-upload: # Uploading
if: ${{ github.repository_owner == 'pytorch' }}
permissions:
id-token: write
contents: read
needs: conda-py3_9-cuda12_1-test
with:
PYTORCH_ROOT: /pytorch
@@ -394,8 +403,6 @@ jobs:
build_name: conda-py3_9-cuda12_1
secrets:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token-test: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
@@ -438,6 +445,9 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
conda-py3_10-cpu-upload: # Uploading
if: ${{ github.repository_owner == 'pytorch' }}
permissions:
id-token: write
contents: read
needs: conda-py3_10-cpu-test
with:
PYTORCH_ROOT: /pytorch
@@ -452,8 +462,6 @@ jobs:
build_name: conda-py3_10-cpu
secrets:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token-test: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
@@ -499,6 +507,9 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
conda-py3_10-cuda11_8-upload: # Uploading
if: ${{ github.repository_owner == 'pytorch' }}
permissions:
id-token: write
contents: read
needs: conda-py3_10-cuda11_8-test
with:
PYTORCH_ROOT: /pytorch
@@ -514,8 +525,6 @@ jobs:
build_name: conda-py3_10-cuda11_8
secrets:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token-test: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
@@ -561,6 +570,9 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
conda-py3_10-cuda12_1-upload: # Uploading
if: ${{ github.repository_owner == 'pytorch' }}
permissions:
id-token: write
contents: read
needs: conda-py3_10-cuda12_1-test
with:
PYTORCH_ROOT: /pytorch
@@ -576,8 +588,6 @@ jobs:
build_name: conda-py3_10-cuda12_1
secrets:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token-test: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
@@ -620,6 +630,9 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
conda-py3_11-cpu-upload: # Uploading
if: ${{ github.repository_owner == 'pytorch' }}
permissions:
id-token: write
contents: read
needs: conda-py3_11-cpu-test
with:
PYTORCH_ROOT: /pytorch
@@ -634,8 +647,6 @@ jobs:
build_name: conda-py3_11-cpu
secrets:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token-test: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
@@ -681,6 +692,9 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
conda-py3_11-cuda11_8-upload: # Uploading
if: ${{ github.repository_owner == 'pytorch' }}
permissions:
id-token: write
contents: read
needs: conda-py3_11-cuda11_8-test
with:
PYTORCH_ROOT: /pytorch
@@ -696,8 +710,6 @@ jobs:
build_name: conda-py3_11-cuda11_8
secrets:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token-test: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
@@ -743,6 +755,9 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
conda-py3_11-cuda12_1-upload: # Uploading
if: ${{ github.repository_owner == 'pytorch' }}
permissions:
id-token: write
contents: read
needs: conda-py3_11-cuda12_1-test
with:
PYTORCH_ROOT: /pytorch
@@ -758,8 +773,6 @@ jobs:
build_name: conda-py3_11-cuda12_1
secrets:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token-test: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
@@ -802,6 +815,9 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
conda-py3_12-cpu-upload: # Uploading
if: ${{ github.repository_owner == 'pytorch' }}
permissions:
id-token: write
contents: read
needs: conda-py3_12-cpu-test
with:
PYTORCH_ROOT: /pytorch
@@ -816,8 +832,6 @@ jobs:
build_name: conda-py3_12-cpu
secrets:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token-test: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
@@ -863,6 +877,9 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
conda-py3_12-cuda11_8-upload: # Uploading
if: ${{ github.repository_owner == 'pytorch' }}
permissions:
id-token: write
contents: read
needs: conda-py3_12-cuda11_8-test
with:
PYTORCH_ROOT: /pytorch
@@ -878,8 +895,6 @@ jobs:
build_name: conda-py3_12-cuda11_8
secrets:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token-test: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
@@ -925,6 +940,9 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
conda-py3_12-cuda12_1-upload: # Uploading
if: ${{ github.repository_owner == 'pytorch' }}
permissions:
id-token: write
contents: read
needs: conda-py3_12-cuda12_1-test
with:
PYTORCH_ROOT: /pytorch
@@ -940,8 +958,6 @@ jobs:
build_name: conda-py3_12-cuda12_1
secrets:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token-test: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
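The edit repeated across every upload job in these generated workflows is the same: the job gains a `permissions` block requesting an OIDC `id-token`, and the long-lived AWS uploader keys are dropped from `secrets`. A minimal sketch of the resulting job stanza (the job and build names here are illustrative, not taken from the generated files):

```yaml
# Hypothetical upload job showing the pattern applied throughout:
# static AWS credentials are removed, and the job instead requests an
# OIDC id-token so the reusable upload workflow can assume a cloud
# role at runtime.
example-upload:  # Uploading
  if: ${{ github.repository_owner == 'pytorch' }}
  permissions:
    id-token: write   # lets the job mint a short-lived OIDC token
    contents: read
  needs: example-test
  with:
    PYTORCH_ROOT: /pytorch
    build_name: example-build
  secrets:
    github-token: ${{ secrets.GITHUB_TOKEN }}
    conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
    conda-pytorchbot-token-test: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
  uses: ./.github/workflows/_binary-upload.yml
```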


@@ -4,6 +4,7 @@
# Generation script: .github/scripts/generate_ci_workflows.py
name: linux-binary-libtorch-cxx11-abi
on:
push:
branches:


@@ -4,6 +4,7 @@
# Generation script: .github/scripts/generate_ci_workflows.py
name: linux-binary-libtorch-cxx11-abi
on:
push:
# NOTE: Meta Employees can trigger new nightlies using: https://fburl.com/trigger_pytorch_nightly_build
@@ -76,6 +77,9 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
libtorch-cpu-shared-with-deps-cxx11-abi-upload: # Uploading
if: ${{ github.repository_owner == 'pytorch' }}
permissions:
id-token: write
contents: read
needs: libtorch-cpu-shared-with-deps-cxx11-abi-test
with:
PYTORCH_ROOT: /pytorch
@@ -91,8 +95,6 @@ jobs:
build_name: libtorch-cpu-shared-with-deps-cxx11-abi
secrets:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token-test: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
@@ -139,6 +141,9 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
libtorch-cuda11_8-shared-with-deps-cxx11-abi-upload: # Uploading
if: ${{ github.repository_owner == 'pytorch' }}
permissions:
id-token: write
contents: read
needs: libtorch-cuda11_8-shared-with-deps-cxx11-abi-test
with:
PYTORCH_ROOT: /pytorch
@@ -155,8 +160,6 @@ jobs:
build_name: libtorch-cuda11_8-shared-with-deps-cxx11-abi
secrets:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token-test: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
@@ -203,6 +206,9 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
libtorch-cuda12_1-shared-with-deps-cxx11-abi-upload: # Uploading
if: ${{ github.repository_owner == 'pytorch' }}
permissions:
id-token: write
contents: read
needs: libtorch-cuda12_1-shared-with-deps-cxx11-abi-test
with:
PYTORCH_ROOT: /pytorch
@@ -219,8 +225,6 @@ jobs:
build_name: libtorch-cuda12_1-shared-with-deps-cxx11-abi
secrets:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token-test: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
@@ -307,6 +311,9 @@ jobs:
uses: ./.github/actions/teardown-rocm
libtorch-rocm5_6-shared-with-deps-cxx11-abi-upload: # Uploading
if: ${{ github.repository_owner == 'pytorch' }}
permissions:
id-token: write
contents: read
needs: libtorch-rocm5_6-shared-with-deps-cxx11-abi-test
with:
PYTORCH_ROOT: /pytorch
@@ -323,8 +330,6 @@ jobs:
build_name: libtorch-rocm5_6-shared-with-deps-cxx11-abi
secrets:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token-test: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
@@ -411,6 +416,9 @@ jobs:
uses: ./.github/actions/teardown-rocm
libtorch-rocm5_7-shared-with-deps-cxx11-abi-upload: # Uploading
if: ${{ github.repository_owner == 'pytorch' }}
permissions:
id-token: write
contents: read
needs: libtorch-rocm5_7-shared-with-deps-cxx11-abi-test
with:
PYTORCH_ROOT: /pytorch
@@ -427,8 +435,6 @@ jobs:
build_name: libtorch-rocm5_7-shared-with-deps-cxx11-abi
secrets:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token-test: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml


@@ -4,6 +4,7 @@
# Generation script: .github/scripts/generate_ci_workflows.py
name: linux-binary-libtorch-pre-cxx11
on:
push:
branches:


@@ -4,6 +4,7 @@
# Generation script: .github/scripts/generate_ci_workflows.py
name: linux-binary-libtorch-pre-cxx11
on:
push:
# NOTE: Meta Employees can trigger new nightlies using: https://fburl.com/trigger_pytorch_nightly_build
@@ -76,6 +77,9 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
libtorch-cpu-shared-with-deps-pre-cxx11-upload: # Uploading
if: ${{ github.repository_owner == 'pytorch' }}
permissions:
id-token: write
contents: read
needs: libtorch-cpu-shared-with-deps-pre-cxx11-test
with:
PYTORCH_ROOT: /pytorch
@@ -91,8 +95,6 @@ jobs:
build_name: libtorch-cpu-shared-with-deps-pre-cxx11
secrets:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token-test: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
@@ -139,6 +141,9 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
libtorch-cuda11_8-shared-with-deps-pre-cxx11-upload: # Uploading
if: ${{ github.repository_owner == 'pytorch' }}
permissions:
id-token: write
contents: read
needs: libtorch-cuda11_8-shared-with-deps-pre-cxx11-test
with:
PYTORCH_ROOT: /pytorch
@@ -155,8 +160,6 @@ jobs:
build_name: libtorch-cuda11_8-shared-with-deps-pre-cxx11
secrets:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token-test: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
@@ -203,6 +206,9 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
libtorch-cuda12_1-shared-with-deps-pre-cxx11-upload: # Uploading
if: ${{ github.repository_owner == 'pytorch' }}
permissions:
id-token: write
contents: read
needs: libtorch-cuda12_1-shared-with-deps-pre-cxx11-test
with:
PYTORCH_ROOT: /pytorch
@@ -219,8 +225,6 @@ jobs:
build_name: libtorch-cuda12_1-shared-with-deps-pre-cxx11
secrets:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token-test: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
@@ -307,6 +311,9 @@ jobs:
uses: ./.github/actions/teardown-rocm
libtorch-rocm5_6-shared-with-deps-pre-cxx11-upload: # Uploading
if: ${{ github.repository_owner == 'pytorch' }}
permissions:
id-token: write
contents: read
needs: libtorch-rocm5_6-shared-with-deps-pre-cxx11-test
with:
PYTORCH_ROOT: /pytorch
@@ -323,8 +330,6 @@ jobs:
build_name: libtorch-rocm5_6-shared-with-deps-pre-cxx11
secrets:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token-test: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
@@ -411,6 +416,9 @@ jobs:
uses: ./.github/actions/teardown-rocm
libtorch-rocm5_7-shared-with-deps-pre-cxx11-upload: # Uploading
if: ${{ github.repository_owner == 'pytorch' }}
permissions:
id-token: write
contents: read
needs: libtorch-rocm5_7-shared-with-deps-pre-cxx11-test
with:
PYTORCH_ROOT: /pytorch
@@ -427,8 +435,6 @@ jobs:
build_name: libtorch-rocm5_7-shared-with-deps-pre-cxx11
secrets:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token-test: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml


@@ -4,6 +4,7 @@
# Generation script: .github/scripts/generate_ci_workflows.py
name: linux-binary-manywheel
on:
push:
branches:


@@ -4,6 +4,7 @@
# Generation script: .github/scripts/generate_ci_workflows.py
name: linux-binary-manywheel
on:
push:
# NOTE: Meta Employees can trigger new nightlies using: https://fburl.com/trigger_pytorch_nightly_build
@@ -74,6 +75,9 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
manywheel-py3_8-cpu-upload: # Uploading
if: ${{ github.repository_owner == 'pytorch' }}
permissions:
id-token: write
contents: read
needs: manywheel-py3_8-cpu-test
with:
PYTORCH_ROOT: /pytorch
@@ -88,8 +92,6 @@ jobs:
build_name: manywheel-py3_8-cpu
secrets:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token-test: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
@@ -134,6 +136,9 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
manywheel-py3_8-cpu-cxx11-abi-upload: # Uploading
if: ${{ github.repository_owner == 'pytorch' }}
permissions:
id-token: write
contents: read
needs: manywheel-py3_8-cpu-cxx11-abi-test
with:
PYTORCH_ROOT: /pytorch
@@ -149,8 +154,6 @@ jobs:
build_name: manywheel-py3_8-cpu-cxx11-abi
secrets:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token-test: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
@@ -196,6 +199,9 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
manywheel-py3_8-cuda11_8-upload: # Uploading
if: ${{ github.repository_owner == 'pytorch' }}
permissions:
id-token: write
contents: read
needs: manywheel-py3_8-cuda11_8-test
with:
PYTORCH_ROOT: /pytorch
@@ -211,8 +217,6 @@ jobs:
build_name: manywheel-py3_8-cuda11_8
secrets:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token-test: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
@@ -258,6 +262,9 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
manywheel-py3_8-cuda12_1-upload: # Uploading
if: ${{ github.repository_owner == 'pytorch' }}
permissions:
id-token: write
contents: read
needs: manywheel-py3_8-cuda12_1-test
with:
PYTORCH_ROOT: /pytorch
@@ -273,8 +280,6 @@ jobs:
build_name: manywheel-py3_8-cuda12_1
secrets:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token-test: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
@@ -359,6 +364,9 @@ jobs:
uses: ./.github/actions/teardown-rocm
manywheel-py3_8-rocm5_6-upload: # Uploading
if: ${{ github.repository_owner == 'pytorch' }}
permissions:
id-token: write
contents: read
needs: manywheel-py3_8-rocm5_6-test
with:
PYTORCH_ROOT: /pytorch
@@ -374,8 +382,6 @@ jobs:
build_name: manywheel-py3_8-rocm5_6
secrets:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token-test: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
@@ -460,6 +466,9 @@ jobs:
uses: ./.github/actions/teardown-rocm
manywheel-py3_8-rocm5_7-upload: # Uploading
if: ${{ github.repository_owner == 'pytorch' }}
permissions:
id-token: write
contents: read
needs: manywheel-py3_8-rocm5_7-test
with:
PYTORCH_ROOT: /pytorch
@@ -475,8 +484,6 @@ jobs:
build_name: manywheel-py3_8-rocm5_7
secrets:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token-test: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
@@ -519,6 +526,9 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
manywheel-py3_9-cpu-upload: # Uploading
if: ${{ github.repository_owner == 'pytorch' }}
permissions:
id-token: write
contents: read
needs: manywheel-py3_9-cpu-test
with:
PYTORCH_ROOT: /pytorch
@@ -533,8 +543,6 @@ jobs:
build_name: manywheel-py3_9-cpu
secrets:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token-test: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
@@ -579,6 +587,9 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
manywheel-py3_9-cpu-cxx11-abi-upload: # Uploading
if: ${{ github.repository_owner == 'pytorch' }}
permissions:
id-token: write
contents: read
needs: manywheel-py3_9-cpu-cxx11-abi-test
with:
PYTORCH_ROOT: /pytorch
@@ -594,8 +605,6 @@ jobs:
build_name: manywheel-py3_9-cpu-cxx11-abi
secrets:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token-test: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
@@ -641,6 +650,9 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
manywheel-py3_9-cuda11_8-upload: # Uploading
if: ${{ github.repository_owner == 'pytorch' }}
permissions:
id-token: write
contents: read
needs: manywheel-py3_9-cuda11_8-test
with:
PYTORCH_ROOT: /pytorch
@@ -656,8 +668,6 @@ jobs:
build_name: manywheel-py3_9-cuda11_8
secrets:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token-test: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
@@ -703,6 +713,9 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
manywheel-py3_9-cuda12_1-upload: # Uploading
if: ${{ github.repository_owner == 'pytorch' }}
permissions:
id-token: write
contents: read
needs: manywheel-py3_9-cuda12_1-test
with:
PYTORCH_ROOT: /pytorch
@@ -718,8 +731,6 @@ jobs:
build_name: manywheel-py3_9-cuda12_1
secrets:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token-test: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
@@ -804,6 +815,9 @@ jobs:
uses: ./.github/actions/teardown-rocm
manywheel-py3_9-rocm5_6-upload: # Uploading
if: ${{ github.repository_owner == 'pytorch' }}
permissions:
id-token: write
contents: read
needs: manywheel-py3_9-rocm5_6-test
with:
PYTORCH_ROOT: /pytorch
@@ -819,8 +833,6 @@ jobs:
build_name: manywheel-py3_9-rocm5_6
secrets:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token-test: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
@@ -905,6 +917,9 @@ jobs:
uses: ./.github/actions/teardown-rocm
manywheel-py3_9-rocm5_7-upload: # Uploading
if: ${{ github.repository_owner == 'pytorch' }}
permissions:
id-token: write
contents: read
needs: manywheel-py3_9-rocm5_7-test
with:
PYTORCH_ROOT: /pytorch
@@ -920,8 +935,6 @@ jobs:
build_name: manywheel-py3_9-rocm5_7
secrets:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token-test: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
@@ -964,6 +977,9 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
manywheel-py3_10-cpu-upload: # Uploading
if: ${{ github.repository_owner == 'pytorch' }}
permissions:
id-token: write
contents: read
needs: manywheel-py3_10-cpu-test
with:
PYTORCH_ROOT: /pytorch
@@ -978,8 +994,6 @@ jobs:
build_name: manywheel-py3_10-cpu
secrets:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token-test: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
@@ -1024,6 +1038,9 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
manywheel-py3_10-cpu-cxx11-abi-upload: # Uploading
if: ${{ github.repository_owner == 'pytorch' }}
permissions:
id-token: write
contents: read
needs: manywheel-py3_10-cpu-cxx11-abi-test
with:
PYTORCH_ROOT: /pytorch
@@ -1039,8 +1056,6 @@ jobs:
build_name: manywheel-py3_10-cpu-cxx11-abi
secrets:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token-test: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
@@ -1086,6 +1101,9 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
manywheel-py3_10-cuda11_8-upload: # Uploading
if: ${{ github.repository_owner == 'pytorch' }}
permissions:
id-token: write
contents: read
needs: manywheel-py3_10-cuda11_8-test
with:
PYTORCH_ROOT: /pytorch
@@ -1101,8 +1119,6 @@ jobs:
build_name: manywheel-py3_10-cuda11_8
secrets:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token-test: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
@@ -1148,6 +1164,9 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
manywheel-py3_10-cuda12_1-upload: # Uploading
if: ${{ github.repository_owner == 'pytorch' }}
permissions:
id-token: write
contents: read
needs: manywheel-py3_10-cuda12_1-test
with:
PYTORCH_ROOT: /pytorch
@@ -1163,8 +1182,6 @@ jobs:
build_name: manywheel-py3_10-cuda12_1
secrets:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token-test: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
@@ -1249,6 +1266,9 @@ jobs:
uses: ./.github/actions/teardown-rocm
manywheel-py3_10-rocm5_6-upload: # Uploading
if: ${{ github.repository_owner == 'pytorch' }}
permissions:
id-token: write
contents: read
needs: manywheel-py3_10-rocm5_6-test
with:
PYTORCH_ROOT: /pytorch
@@ -1264,8 +1284,6 @@ jobs:
build_name: manywheel-py3_10-rocm5_6
secrets:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token-test: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
@@ -1350,6 +1368,9 @@ jobs:
uses: ./.github/actions/teardown-rocm
manywheel-py3_10-rocm5_7-upload: # Uploading
if: ${{ github.repository_owner == 'pytorch' }}
permissions:
id-token: write
contents: read
needs: manywheel-py3_10-rocm5_7-test
with:
PYTORCH_ROOT: /pytorch
@@ -1365,8 +1386,6 @@ jobs:
build_name: manywheel-py3_10-rocm5_7
secrets:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token-test: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
@@ -1409,6 +1428,9 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
manywheel-py3_11-cpu-upload: # Uploading
if: ${{ github.repository_owner == 'pytorch' }}
permissions:
id-token: write
contents: read
needs: manywheel-py3_11-cpu-test
with:
PYTORCH_ROOT: /pytorch
@@ -1423,8 +1445,6 @@ jobs:
build_name: manywheel-py3_11-cpu
secrets:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token-test: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
@@ -1469,6 +1489,9 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
manywheel-py3_11-cpu-cxx11-abi-upload: # Uploading
if: ${{ github.repository_owner == 'pytorch' }}
permissions:
id-token: write
contents: read
needs: manywheel-py3_11-cpu-cxx11-abi-test
with:
PYTORCH_ROOT: /pytorch
@@ -1484,8 +1507,6 @@ jobs:
build_name: manywheel-py3_11-cpu-cxx11-abi
secrets:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token-test: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
@@ -1531,6 +1552,9 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
manywheel-py3_11-cuda11_8-upload: # Uploading
if: ${{ github.repository_owner == 'pytorch' }}
permissions:
id-token: write
contents: read
needs: manywheel-py3_11-cuda11_8-test
with:
PYTORCH_ROOT: /pytorch
@@ -1546,8 +1570,6 @@ jobs:
build_name: manywheel-py3_11-cuda11_8
secrets:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token-test: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
@@ -1593,6 +1615,9 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
manywheel-py3_11-cuda12_1-upload: # Uploading
if: ${{ github.repository_owner == 'pytorch' }}
permissions:
id-token: write
contents: read
needs: manywheel-py3_11-cuda12_1-test
with:
PYTORCH_ROOT: /pytorch
@@ -1608,8 +1633,6 @@ jobs:
build_name: manywheel-py3_11-cuda12_1
secrets:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token-test: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
@@ -1694,6 +1717,9 @@ jobs:
uses: ./.github/actions/teardown-rocm
manywheel-py3_11-rocm5_6-upload: # Uploading
if: ${{ github.repository_owner == 'pytorch' }}
permissions:
id-token: write
contents: read
needs: manywheel-py3_11-rocm5_6-test
with:
PYTORCH_ROOT: /pytorch
@ -1709,8 +1735,6 @@ jobs:
build_name: manywheel-py3_11-rocm5_6
secrets:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token-test: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
@@ -1795,6 +1819,9 @@ jobs:
uses: ./.github/actions/teardown-rocm
manywheel-py3_11-rocm5_7-upload: # Uploading
if: ${{ github.repository_owner == 'pytorch' }}
permissions:
id-token: write
contents: read
needs: manywheel-py3_11-rocm5_7-test
with:
PYTORCH_ROOT: /pytorch
@@ -1810,8 +1837,6 @@ jobs:
build_name: manywheel-py3_11-rocm5_7
secrets:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token-test: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
@@ -1854,6 +1879,9 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
manywheel-py3_12-cpu-upload: # Uploading
if: ${{ github.repository_owner == 'pytorch' }}
permissions:
id-token: write
contents: read
needs: manywheel-py3_12-cpu-test
with:
PYTORCH_ROOT: /pytorch
@@ -1868,8 +1896,6 @@ jobs:
build_name: manywheel-py3_12-cpu
secrets:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token-test: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
@@ -1914,6 +1940,9 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
manywheel-py3_12-cpu-cxx11-abi-upload: # Uploading
if: ${{ github.repository_owner == 'pytorch' }}
permissions:
id-token: write
contents: read
needs: manywheel-py3_12-cpu-cxx11-abi-test
with:
PYTORCH_ROOT: /pytorch
@@ -1929,8 +1958,6 @@ jobs:
build_name: manywheel-py3_12-cpu-cxx11-abi
secrets:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token-test: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
@@ -1976,6 +2003,9 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
manywheel-py3_12-cuda11_8-upload: # Uploading
if: ${{ github.repository_owner == 'pytorch' }}
permissions:
id-token: write
contents: read
needs: manywheel-py3_12-cuda11_8-test
with:
PYTORCH_ROOT: /pytorch
@@ -1991,8 +2021,6 @@ jobs:
build_name: manywheel-py3_12-cuda11_8
secrets:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token-test: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
@@ -2038,6 +2066,9 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
manywheel-py3_12-cuda12_1-upload: # Uploading
if: ${{ github.repository_owner == 'pytorch' }}
permissions:
id-token: write
contents: read
needs: manywheel-py3_12-cuda12_1-test
with:
PYTORCH_ROOT: /pytorch
@@ -2053,8 +2084,6 @@ jobs:
build_name: manywheel-py3_12-cuda12_1
secrets:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token-test: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
@@ -2139,6 +2168,9 @@ jobs:
uses: ./.github/actions/teardown-rocm
manywheel-py3_12-rocm5_6-upload: # Uploading
if: ${{ github.repository_owner == 'pytorch' }}
permissions:
id-token: write
contents: read
needs: manywheel-py3_12-rocm5_6-test
with:
PYTORCH_ROOT: /pytorch
@@ -2154,8 +2186,6 @@ jobs:
build_name: manywheel-py3_12-rocm5_6
secrets:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token-test: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
@@ -2240,6 +2270,9 @@ jobs:
uses: ./.github/actions/teardown-rocm
manywheel-py3_12-rocm5_7-upload: # Uploading
if: ${{ github.repository_owner == 'pytorch' }}
permissions:
id-token: write
contents: read
needs: manywheel-py3_12-rocm5_7-test
with:
PYTORCH_ROOT: /pytorch
@@ -2255,8 +2288,6 @@ jobs:
build_name: manywheel-py3_12-rocm5_7
secrets:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token-test: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
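The same two-part change repeats in every `*-upload` job across these diffs: the long-lived `aws-pytorch-uploader-*` access keys are removed from the `secrets` passed to the reusable `_binary-upload.yml` workflow, and the job instead declares the `id-token: write` permission so it can authenticate to AWS via GitHub's OIDC token exchange. A minimal sketch of the resulting job shape, assuming the inputs shown in these hunks (the exact inputs of `_binary-upload.yml` beyond those visible here are not shown):

```yaml
manywheel-py3_10-cpu-upload: # Uploading
  if: ${{ github.repository_owner == 'pytorch' }}
  permissions:
    id-token: write   # allows the job to request an OIDC token and assume a cloud role
    contents: read
  needs: manywheel-py3_10-cpu-test
  uses: ./.github/workflows/_binary-upload.yml
  with:
    PYTORCH_ROOT: /pytorch
    build_name: manywheel-py3_10-cpu
  secrets:
    github-token: ${{ secrets.GITHUB_TOKEN }}
    # aws-pytorch-uploader-access-key-id / aws-pytorch-uploader-secret-access-key
    # are no longer passed; the reusable workflow authenticates via OIDC instead.
    conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
    conda-pytorchbot-token-test: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
```

With OIDC, no static AWS credentials live in the repository's secrets; the calling job's `id-token: write` permission lets `_binary-upload.yml` exchange a short-lived token for temporary role credentials at runtime.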


@@ -130,6 +130,9 @@ jobs:
path: "${{ env.PYTORCH_FINAL_PACKAGE_DIR }}"
conda-py3_8-cpu-upload: # Uploading
if: ${{ github.repository_owner == 'pytorch' }}
permissions:
id-token: write
contents: read
needs: conda-py3_8-cpu-build
with:
PYTORCH_ROOT: /pytorch
@@ -145,8 +148,6 @@ jobs:
use_s3: False
secrets:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token-test: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
@@ -246,6 +247,9 @@ jobs:
path: "${{ env.PYTORCH_FINAL_PACKAGE_DIR }}"
conda-py3_9-cpu-upload: # Uploading
if: ${{ github.repository_owner == 'pytorch' }}
permissions:
id-token: write
contents: read
needs: conda-py3_9-cpu-build
with:
PYTORCH_ROOT: /pytorch
@@ -261,8 +265,6 @@ jobs:
use_s3: False
secrets:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token-test: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
@@ -362,6 +364,9 @@ jobs:
path: "${{ env.PYTORCH_FINAL_PACKAGE_DIR }}"
conda-py3_10-cpu-upload: # Uploading
if: ${{ github.repository_owner == 'pytorch' }}
permissions:
id-token: write
contents: read
needs: conda-py3_10-cpu-build
with:
PYTORCH_ROOT: /pytorch
@@ -377,8 +382,6 @@ jobs:
use_s3: False
secrets:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token-test: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
@@ -478,6 +481,9 @@ jobs:
path: "${{ env.PYTORCH_FINAL_PACKAGE_DIR }}"
conda-py3_11-cpu-upload: # Uploading
if: ${{ github.repository_owner == 'pytorch' }}
permissions:
id-token: write
contents: read
needs: conda-py3_11-cpu-build
with:
PYTORCH_ROOT: /pytorch
@@ -493,8 +499,6 @@ jobs:
use_s3: False
secrets:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token-test: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
@@ -594,6 +598,9 @@ jobs:
path: "${{ env.PYTORCH_FINAL_PACKAGE_DIR }}"
conda-py3_12-cpu-upload: # Uploading
if: ${{ github.repository_owner == 'pytorch' }}
permissions:
id-token: write
contents: read
needs: conda-py3_12-cpu-build
with:
PYTORCH_ROOT: /pytorch
@@ -609,8 +616,6 @@ jobs:
use_s3: False
secrets:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token-test: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml


@@ -132,6 +132,9 @@ jobs:
path: "${{ env.PYTORCH_FINAL_PACKAGE_DIR }}"
libtorch-cpu-shared-with-deps-cxx11-abi-upload: # Uploading
if: ${{ github.repository_owner == 'pytorch' }}
permissions:
id-token: write
contents: read
needs: libtorch-cpu-shared-with-deps-cxx11-abi-build
with:
PYTORCH_ROOT: /pytorch
@@ -148,8 +151,6 @@ jobs:
use_s3: False
secrets:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token-test: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml


@@ -129,6 +129,9 @@ jobs:
path: "${{ env.PYTORCH_FINAL_PACKAGE_DIR }}"
wheel-py3_8-cpu-upload: # Uploading
if: ${{ github.repository_owner == 'pytorch' }}
permissions:
id-token: write
contents: read
needs: wheel-py3_8-cpu-build
with:
PYTORCH_ROOT: /pytorch
@@ -144,8 +147,6 @@ jobs:
use_s3: False
secrets:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token-test: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
@@ -246,6 +247,9 @@ jobs:
path: "${{ env.PYTORCH_FINAL_PACKAGE_DIR }}"
wheel-py3_9-cpu-upload: # Uploading
if: ${{ github.repository_owner == 'pytorch' }}
permissions:
id-token: write
contents: read
needs: wheel-py3_9-cpu-build
with:
PYTORCH_ROOT: /pytorch
@@ -261,8 +265,6 @@ jobs:
use_s3: False
secrets:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token-test: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
@@ -363,6 +365,9 @@ jobs:
path: "${{ env.PYTORCH_FINAL_PACKAGE_DIR }}"
wheel-py3_10-cpu-upload: # Uploading
if: ${{ github.repository_owner == 'pytorch' }}
permissions:
id-token: write
contents: read
needs: wheel-py3_10-cpu-build
with:
PYTORCH_ROOT: /pytorch
@@ -378,8 +383,6 @@ jobs:
use_s3: False
secrets:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token-test: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
@@ -480,6 +483,9 @@ jobs:
path: "${{ env.PYTORCH_FINAL_PACKAGE_DIR }}"
wheel-py3_11-cpu-upload: # Uploading
if: ${{ github.repository_owner == 'pytorch' }}
permissions:
id-token: write
contents: read
needs: wheel-py3_11-cpu-build
with:
PYTORCH_ROOT: /pytorch
@@ -495,8 +501,6 @@ jobs:
use_s3: False
secrets:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token-test: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
@@ -597,6 +601,9 @@ jobs:
path: "${{ env.PYTORCH_FINAL_PACKAGE_DIR }}"
wheel-py3_12-cpu-upload: # Uploading
if: ${{ github.repository_owner == 'pytorch' }}
permissions:
id-token: write
contents: read
needs: wheel-py3_12-cpu-build
with:
PYTORCH_ROOT: /pytorch
@@ -612,8 +619,6 @@ jobs:
use_s3: False
secrets:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token-test: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml


@@ -128,6 +128,9 @@ jobs:
path: "${{ env.PYTORCH_FINAL_PACKAGE_DIR }}"
conda-py3_8-cpu-upload: # Uploading
if: ${{ github.repository_owner == 'pytorch' }}
permissions:
id-token: write
contents: read
needs: conda-py3_8-cpu-build
with:
PYTORCH_ROOT: /pytorch
@@ -143,8 +146,6 @@ jobs:
use_s3: False
secrets:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token-test: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
@@ -244,6 +245,9 @@ jobs:
path: "${{ env.PYTORCH_FINAL_PACKAGE_DIR }}"
conda-py3_9-cpu-upload: # Uploading
if: ${{ github.repository_owner == 'pytorch' }}
permissions:
id-token: write
contents: read
needs: conda-py3_9-cpu-build
with:
PYTORCH_ROOT: /pytorch
@@ -259,8 +263,6 @@ jobs:
use_s3: False
secrets:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token-test: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
@@ -360,6 +362,9 @@ jobs:
path: "${{ env.PYTORCH_FINAL_PACKAGE_DIR }}"
conda-py3_10-cpu-upload: # Uploading
if: ${{ github.repository_owner == 'pytorch' }}
permissions:
id-token: write
contents: read
needs: conda-py3_10-cpu-build
with:
PYTORCH_ROOT: /pytorch
@@ -375,8 +380,6 @@ jobs:
use_s3: False
secrets:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token-test: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
@@ -476,6 +479,9 @@ jobs:
path: "${{ env.PYTORCH_FINAL_PACKAGE_DIR }}"
conda-py3_11-cpu-upload: # Uploading
if: ${{ github.repository_owner == 'pytorch' }}
permissions:
id-token: write
contents: read
needs: conda-py3_11-cpu-build
with:
PYTORCH_ROOT: /pytorch
@@ -491,8 +497,6 @@ jobs:
use_s3: False
secrets:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token-test: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
@@ -592,6 +596,9 @@ jobs:
path: "${{ env.PYTORCH_FINAL_PACKAGE_DIR }}"
conda-py3_12-cpu-upload: # Uploading
if: ${{ github.repository_owner == 'pytorch' }}
permissions:
id-token: write
contents: read
needs: conda-py3_12-cpu-build
with:
PYTORCH_ROOT: /pytorch
@@ -607,8 +614,6 @@ jobs:
use_s3: False
secrets:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token-test: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml


@@ -132,6 +132,9 @@ jobs:
path: "${{ env.PYTORCH_FINAL_PACKAGE_DIR }}"
libtorch-cpu-shared-with-deps-cxx11-abi-upload: # Uploading
if: ${{ github.repository_owner == 'pytorch' }}
permissions:
id-token: write
contents: read
needs: libtorch-cpu-shared-with-deps-cxx11-abi-build
with:
PYTORCH_ROOT: /pytorch
@@ -148,8 +151,6 @@ jobs:
use_s3: False
secrets:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token-test: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml


@@ -129,6 +129,9 @@ jobs:
path: "${{ env.PYTORCH_FINAL_PACKAGE_DIR }}"
wheel-py3_8-cpu-upload: # Uploading
if: ${{ github.repository_owner == 'pytorch' }}
permissions:
id-token: write
contents: read
needs: wheel-py3_8-cpu-build
with:
PYTORCH_ROOT: /pytorch
@@ -144,8 +147,6 @@ jobs:
use_s3: False
secrets:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token-test: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
@@ -246,6 +247,9 @@ jobs:
path: "${{ env.PYTORCH_FINAL_PACKAGE_DIR }}"
wheel-py3_9-cpu-upload: # Uploading
if: ${{ github.repository_owner == 'pytorch' }}
permissions:
id-token: write
contents: read
needs: wheel-py3_9-cpu-build
with:
PYTORCH_ROOT: /pytorch
@@ -261,8 +265,6 @@ jobs:
use_s3: False
secrets:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token-test: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
@@ -363,6 +365,9 @@ jobs:
path: "${{ env.PYTORCH_FINAL_PACKAGE_DIR }}"
wheel-py3_10-cpu-upload: # Uploading
if: ${{ github.repository_owner == 'pytorch' }}
permissions:
id-token: write
contents: read
needs: wheel-py3_10-cpu-build
with:
PYTORCH_ROOT: /pytorch
@@ -378,8 +383,6 @@ jobs:
use_s3: False
secrets:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token-test: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
@@ -480,6 +483,9 @@ jobs:
path: "${{ env.PYTORCH_FINAL_PACKAGE_DIR }}"
wheel-py3_11-cpu-upload: # Uploading
if: ${{ github.repository_owner == 'pytorch' }}
permissions:
id-token: write
contents: read
needs: wheel-py3_11-cpu-build
with:
PYTORCH_ROOT: /pytorch
@@ -495,8 +501,6 @@ jobs:
use_s3: False
secrets:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token-test: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
@@ -597,6 +601,9 @@ jobs:
path: "${{ env.PYTORCH_FINAL_PACKAGE_DIR }}"
wheel-py3_12-cpu-upload: # Uploading
if: ${{ github.repository_owner == 'pytorch' }}
permissions:
id-token: write
contents: read
needs: wheel-py3_12-cpu-build
with:
PYTORCH_ROOT: /pytorch
@@ -612,8 +619,6 @@ jobs:
use_s3: False
secrets:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token-test: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml


@@ -253,6 +253,9 @@ jobs:
.github\scripts\kill_active_ssh_sessions.ps1
conda-py3_8-cpu-upload: # Uploading
if: ${{ github.repository_owner == 'pytorch' }}
permissions:
id-token: write
contents: read
needs: conda-py3_8-cpu-test
with:
PYTORCH_ROOT: ${{ github.workspace }}/pytorch
@@ -266,8 +269,6 @@ jobs:
build_name: conda-py3_8-cpu
secrets:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token-test: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
@@ -494,6 +495,9 @@ jobs:
.github\scripts\kill_active_ssh_sessions.ps1
conda-py3_8-cuda11_8-upload: # Uploading
if: ${{ github.repository_owner == 'pytorch' }}
permissions:
id-token: write
contents: read
needs: conda-py3_8-cuda11_8-test
with:
PYTORCH_ROOT: ${{ github.workspace }}/pytorch
@@ -508,8 +512,6 @@ jobs:
build_name: conda-py3_8-cuda11_8
secrets:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token-test: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
@@ -736,6 +738,9 @@ jobs:
.github\scripts\kill_active_ssh_sessions.ps1
conda-py3_8-cuda12_1-upload: # Uploading
if: ${{ github.repository_owner == 'pytorch' }}
permissions:
id-token: write
contents: read
needs: conda-py3_8-cuda12_1-test
with:
PYTORCH_ROOT: ${{ github.workspace }}/pytorch
@@ -750,8 +755,6 @@ jobs:
build_name: conda-py3_8-cuda12_1
secrets:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token-test: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
@@ -976,6 +979,9 @@ jobs:
.github\scripts\kill_active_ssh_sessions.ps1
conda-py3_9-cpu-upload: # Uploading
if: ${{ github.repository_owner == 'pytorch' }}
permissions:
id-token: write
contents: read
needs: conda-py3_9-cpu-test
with:
PYTORCH_ROOT: ${{ github.workspace }}/pytorch
@@ -989,8 +995,6 @@ jobs:
build_name: conda-py3_9-cpu
secrets:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token-test: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
@@ -1217,6 +1221,9 @@ jobs:
.github\scripts\kill_active_ssh_sessions.ps1
conda-py3_9-cuda11_8-upload: # Uploading
if: ${{ github.repository_owner == 'pytorch' }}
permissions:
id-token: write
contents: read
needs: conda-py3_9-cuda11_8-test
with:
PYTORCH_ROOT: ${{ github.workspace }}/pytorch
@@ -1231,8 +1238,6 @@ jobs:
build_name: conda-py3_9-cuda11_8
secrets:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token-test: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
@@ -1459,6 +1464,9 @@ jobs:
.github\scripts\kill_active_ssh_sessions.ps1
conda-py3_9-cuda12_1-upload: # Uploading
if: ${{ github.repository_owner == 'pytorch' }}
permissions:
id-token: write
contents: read
needs: conda-py3_9-cuda12_1-test
with:
PYTORCH_ROOT: ${{ github.workspace }}/pytorch
@@ -1473,8 +1481,6 @@ jobs:
build_name: conda-py3_9-cuda12_1
secrets:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token-test: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
@@ -1699,6 +1705,9 @@ jobs:
.github\scripts\kill_active_ssh_sessions.ps1
conda-py3_10-cpu-upload: # Uploading
if: ${{ github.repository_owner == 'pytorch' }}
permissions:
id-token: write
contents: read
needs: conda-py3_10-cpu-test
with:
PYTORCH_ROOT: ${{ github.workspace }}/pytorch
@@ -1712,8 +1721,6 @@ jobs:
build_name: conda-py3_10-cpu
secrets:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token-test: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
@ -1940,6 +1947,9 @@ jobs:
.github\scripts\kill_active_ssh_sessions.ps1
conda-py3_10-cuda11_8-upload: # Uploading
if: ${{ github.repository_owner == 'pytorch' }}
permissions:
id-token: write
contents: read
needs: conda-py3_10-cuda11_8-test
with:
PYTORCH_ROOT: ${{ github.workspace }}/pytorch
@ -1954,8 +1964,6 @@ jobs:
build_name: conda-py3_10-cuda11_8
secrets:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token-test: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
@ -2182,6 +2190,9 @@ jobs:
.github\scripts\kill_active_ssh_sessions.ps1
conda-py3_10-cuda12_1-upload: # Uploading
if: ${{ github.repository_owner == 'pytorch' }}
permissions:
id-token: write
contents: read
needs: conda-py3_10-cuda12_1-test
with:
PYTORCH_ROOT: ${{ github.workspace }}/pytorch
@ -2196,8 +2207,6 @@ jobs:
build_name: conda-py3_10-cuda12_1
secrets:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token-test: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
@ -2422,6 +2431,9 @@ jobs:
.github\scripts\kill_active_ssh_sessions.ps1
conda-py3_11-cpu-upload: # Uploading
if: ${{ github.repository_owner == 'pytorch' }}
permissions:
id-token: write
contents: read
needs: conda-py3_11-cpu-test
with:
PYTORCH_ROOT: ${{ github.workspace }}/pytorch
@ -2435,8 +2447,6 @@ jobs:
build_name: conda-py3_11-cpu
secrets:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token-test: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
@ -2663,6 +2673,9 @@ jobs:
.github\scripts\kill_active_ssh_sessions.ps1
conda-py3_11-cuda11_8-upload: # Uploading
if: ${{ github.repository_owner == 'pytorch' }}
permissions:
id-token: write
contents: read
needs: conda-py3_11-cuda11_8-test
with:
PYTORCH_ROOT: ${{ github.workspace }}/pytorch
@ -2677,8 +2690,6 @@ jobs:
build_name: conda-py3_11-cuda11_8
secrets:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token-test: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
@ -2905,6 +2916,9 @@ jobs:
.github\scripts\kill_active_ssh_sessions.ps1
conda-py3_11-cuda12_1-upload: # Uploading
if: ${{ github.repository_owner == 'pytorch' }}
permissions:
id-token: write
contents: read
needs: conda-py3_11-cuda12_1-test
with:
PYTORCH_ROOT: ${{ github.workspace }}/pytorch
@ -2919,8 +2933,6 @@ jobs:
build_name: conda-py3_11-cuda12_1
secrets:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token-test: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
@ -3145,6 +3157,9 @@ jobs:
.github\scripts\kill_active_ssh_sessions.ps1
conda-py3_12-cpu-upload: # Uploading
if: ${{ github.repository_owner == 'pytorch' }}
permissions:
id-token: write
contents: read
needs: conda-py3_12-cpu-test
with:
PYTORCH_ROOT: ${{ github.workspace }}/pytorch
@ -3158,8 +3173,6 @@ jobs:
build_name: conda-py3_12-cpu
secrets:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token-test: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
@ -3386,6 +3399,9 @@ jobs:
.github\scripts\kill_active_ssh_sessions.ps1
conda-py3_12-cuda11_8-upload: # Uploading
if: ${{ github.repository_owner == 'pytorch' }}
permissions:
id-token: write
contents: read
needs: conda-py3_12-cuda11_8-test
with:
PYTORCH_ROOT: ${{ github.workspace }}/pytorch
@ -3400,8 +3416,6 @@ jobs:
build_name: conda-py3_12-cuda11_8
secrets:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token-test: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
@ -3628,6 +3642,9 @@ jobs:
.github\scripts\kill_active_ssh_sessions.ps1
conda-py3_12-cuda12_1-upload: # Uploading
if: ${{ github.repository_owner == 'pytorch' }}
permissions:
id-token: write
contents: read
needs: conda-py3_12-cuda12_1-test
with:
PYTORCH_ROOT: ${{ github.workspace }}/pytorch
@ -3642,8 +3659,6 @@ jobs:
build_name: conda-py3_12-cuda12_1
secrets:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token-test: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml

View File

@ -261,6 +261,9 @@ jobs:
.github\scripts\kill_active_ssh_sessions.ps1
libtorch-cpu-shared-with-deps-debug-upload: # Uploading
if: ${{ github.repository_owner == 'pytorch' }}
permissions:
id-token: write
contents: read
needs: libtorch-cpu-shared-with-deps-debug-test
with:
PYTORCH_ROOT: ${{ github.workspace }}/pytorch
@ -278,8 +281,6 @@ jobs:
build_name: libtorch-cpu-shared-with-deps-debug
secrets:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token-test: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
@ -514,6 +515,9 @@ jobs:
.github\scripts\kill_active_ssh_sessions.ps1
libtorch-cuda11_8-shared-with-deps-debug-upload: # Uploading
if: ${{ github.repository_owner == 'pytorch' }}
permissions:
id-token: write
contents: read
needs: libtorch-cuda11_8-shared-with-deps-debug-test
with:
PYTORCH_ROOT: ${{ github.workspace }}/pytorch
@ -532,8 +536,6 @@ jobs:
build_name: libtorch-cuda11_8-shared-with-deps-debug
secrets:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token-test: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
@ -768,6 +770,9 @@ jobs:
.github\scripts\kill_active_ssh_sessions.ps1
libtorch-cuda12_1-shared-with-deps-debug-upload: # Uploading
if: ${{ github.repository_owner == 'pytorch' }}
permissions:
id-token: write
contents: read
needs: libtorch-cuda12_1-shared-with-deps-debug-test
with:
PYTORCH_ROOT: ${{ github.workspace }}/pytorch
@ -786,8 +791,6 @@ jobs:
build_name: libtorch-cuda12_1-shared-with-deps-debug
secrets:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token-test: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml

View File

@ -261,6 +261,9 @@ jobs:
.github\scripts\kill_active_ssh_sessions.ps1
libtorch-cpu-shared-with-deps-release-upload: # Uploading
if: ${{ github.repository_owner == 'pytorch' }}
permissions:
id-token: write
contents: read
needs: libtorch-cpu-shared-with-deps-release-test
with:
PYTORCH_ROOT: ${{ github.workspace }}/pytorch
@ -278,8 +281,6 @@ jobs:
build_name: libtorch-cpu-shared-with-deps-release
secrets:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token-test: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
@ -514,6 +515,9 @@ jobs:
.github\scripts\kill_active_ssh_sessions.ps1
libtorch-cuda11_8-shared-with-deps-release-upload: # Uploading
if: ${{ github.repository_owner == 'pytorch' }}
permissions:
id-token: write
contents: read
needs: libtorch-cuda11_8-shared-with-deps-release-test
with:
PYTORCH_ROOT: ${{ github.workspace }}/pytorch
@ -532,8 +536,6 @@ jobs:
build_name: libtorch-cuda11_8-shared-with-deps-release
secrets:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token-test: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
@ -768,6 +770,9 @@ jobs:
.github\scripts\kill_active_ssh_sessions.ps1
libtorch-cuda12_1-shared-with-deps-release-upload: # Uploading
if: ${{ github.repository_owner == 'pytorch' }}
permissions:
id-token: write
contents: read
needs: libtorch-cuda12_1-shared-with-deps-release-test
with:
PYTORCH_ROOT: ${{ github.workspace }}/pytorch
@ -786,8 +791,6 @@ jobs:
build_name: libtorch-cuda12_1-shared-with-deps-release
secrets:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token-test: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml

View File

@ -254,6 +254,9 @@ jobs:
.github\scripts\kill_active_ssh_sessions.ps1
wheel-py3_8-cpu-upload: # Uploading
if: ${{ github.repository_owner == 'pytorch' }}
permissions:
id-token: write
contents: read
needs: wheel-py3_8-cpu-test
with:
PYTORCH_ROOT: ${{ github.workspace }}/pytorch
@ -267,8 +270,6 @@ jobs:
build_name: wheel-py3_8-cpu
secrets:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token-test: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
@ -496,6 +497,9 @@ jobs:
.github\scripts\kill_active_ssh_sessions.ps1
wheel-py3_8-cuda11_8-upload: # Uploading
if: ${{ github.repository_owner == 'pytorch' }}
permissions:
id-token: write
contents: read
needs: wheel-py3_8-cuda11_8-test
with:
PYTORCH_ROOT: ${{ github.workspace }}/pytorch
@ -510,8 +514,6 @@ jobs:
build_name: wheel-py3_8-cuda11_8
secrets:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token-test: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
@ -739,6 +741,9 @@ jobs:
.github\scripts\kill_active_ssh_sessions.ps1
wheel-py3_8-cuda12_1-upload: # Uploading
if: ${{ github.repository_owner == 'pytorch' }}
permissions:
id-token: write
contents: read
needs: wheel-py3_8-cuda12_1-test
with:
PYTORCH_ROOT: ${{ github.workspace }}/pytorch
@ -753,8 +758,6 @@ jobs:
build_name: wheel-py3_8-cuda12_1
secrets:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token-test: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
@ -980,6 +983,9 @@ jobs:
.github\scripts\kill_active_ssh_sessions.ps1
wheel-py3_9-cpu-upload: # Uploading
if: ${{ github.repository_owner == 'pytorch' }}
permissions:
id-token: write
contents: read
needs: wheel-py3_9-cpu-test
with:
PYTORCH_ROOT: ${{ github.workspace }}/pytorch
@ -993,8 +999,6 @@ jobs:
build_name: wheel-py3_9-cpu
secrets:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token-test: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
@ -1222,6 +1226,9 @@ jobs:
.github\scripts\kill_active_ssh_sessions.ps1
wheel-py3_9-cuda11_8-upload: # Uploading
if: ${{ github.repository_owner == 'pytorch' }}
permissions:
id-token: write
contents: read
needs: wheel-py3_9-cuda11_8-test
with:
PYTORCH_ROOT: ${{ github.workspace }}/pytorch
@ -1236,8 +1243,6 @@ jobs:
build_name: wheel-py3_9-cuda11_8
secrets:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token-test: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
@ -1465,6 +1470,9 @@ jobs:
.github\scripts\kill_active_ssh_sessions.ps1
wheel-py3_9-cuda12_1-upload: # Uploading
if: ${{ github.repository_owner == 'pytorch' }}
permissions:
id-token: write
contents: read
needs: wheel-py3_9-cuda12_1-test
with:
PYTORCH_ROOT: ${{ github.workspace }}/pytorch
@ -1479,8 +1487,6 @@ jobs:
build_name: wheel-py3_9-cuda12_1
secrets:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token-test: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
@ -1706,6 +1712,9 @@ jobs:
.github\scripts\kill_active_ssh_sessions.ps1
wheel-py3_10-cpu-upload: # Uploading
if: ${{ github.repository_owner == 'pytorch' }}
permissions:
id-token: write
contents: read
needs: wheel-py3_10-cpu-test
with:
PYTORCH_ROOT: ${{ github.workspace }}/pytorch
@ -1719,8 +1728,6 @@ jobs:
build_name: wheel-py3_10-cpu
secrets:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token-test: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
@ -1948,6 +1955,9 @@ jobs:
.github\scripts\kill_active_ssh_sessions.ps1
wheel-py3_10-cuda11_8-upload: # Uploading
if: ${{ github.repository_owner == 'pytorch' }}
permissions:
id-token: write
contents: read
needs: wheel-py3_10-cuda11_8-test
with:
PYTORCH_ROOT: ${{ github.workspace }}/pytorch
@ -1962,8 +1972,6 @@ jobs:
build_name: wheel-py3_10-cuda11_8
secrets:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token-test: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
@ -2191,6 +2199,9 @@ jobs:
.github\scripts\kill_active_ssh_sessions.ps1
wheel-py3_10-cuda12_1-upload: # Uploading
if: ${{ github.repository_owner == 'pytorch' }}
permissions:
id-token: write
contents: read
needs: wheel-py3_10-cuda12_1-test
with:
PYTORCH_ROOT: ${{ github.workspace }}/pytorch
@ -2205,8 +2216,6 @@ jobs:
build_name: wheel-py3_10-cuda12_1
secrets:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token-test: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
@ -2432,6 +2441,9 @@ jobs:
.github\scripts\kill_active_ssh_sessions.ps1
wheel-py3_11-cpu-upload: # Uploading
if: ${{ github.repository_owner == 'pytorch' }}
permissions:
id-token: write
contents: read
needs: wheel-py3_11-cpu-test
with:
PYTORCH_ROOT: ${{ github.workspace }}/pytorch
@ -2445,8 +2457,6 @@ jobs:
build_name: wheel-py3_11-cpu
secrets:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token-test: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
@ -2674,6 +2684,9 @@ jobs:
.github\scripts\kill_active_ssh_sessions.ps1
wheel-py3_11-cuda11_8-upload: # Uploading
if: ${{ github.repository_owner == 'pytorch' }}
permissions:
id-token: write
contents: read
needs: wheel-py3_11-cuda11_8-test
with:
PYTORCH_ROOT: ${{ github.workspace }}/pytorch
@ -2688,8 +2701,6 @@ jobs:
build_name: wheel-py3_11-cuda11_8
secrets:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token-test: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
@ -2917,6 +2928,9 @@ jobs:
.github\scripts\kill_active_ssh_sessions.ps1
wheel-py3_11-cuda12_1-upload: # Uploading
if: ${{ github.repository_owner == 'pytorch' }}
permissions:
id-token: write
contents: read
needs: wheel-py3_11-cuda12_1-test
with:
PYTORCH_ROOT: ${{ github.workspace }}/pytorch
@ -2931,8 +2945,6 @@ jobs:
build_name: wheel-py3_11-cuda12_1
secrets:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token-test: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
@ -3158,6 +3170,9 @@ jobs:
.github\scripts\kill_active_ssh_sessions.ps1
wheel-py3_12-cpu-upload: # Uploading
if: ${{ github.repository_owner == 'pytorch' }}
permissions:
id-token: write
contents: read
needs: wheel-py3_12-cpu-test
with:
PYTORCH_ROOT: ${{ github.workspace }}/pytorch
@ -3171,8 +3186,6 @@ jobs:
build_name: wheel-py3_12-cpu
secrets:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token-test: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
@ -3400,6 +3413,9 @@ jobs:
.github\scripts\kill_active_ssh_sessions.ps1
wheel-py3_12-cuda11_8-upload: # Uploading
if: ${{ github.repository_owner == 'pytorch' }}
permissions:
id-token: write
contents: read
needs: wheel-py3_12-cuda11_8-test
with:
PYTORCH_ROOT: ${{ github.workspace }}/pytorch
@ -3414,8 +3430,6 @@ jobs:
build_name: wheel-py3_12-cuda11_8
secrets:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token-test: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
@ -3643,6 +3657,9 @@ jobs:
.github\scripts\kill_active_ssh_sessions.ps1
wheel-py3_12-cuda12_1-upload: # Uploading
if: ${{ github.repository_owner == 'pytorch' }}
permissions:
id-token: write
contents: read
needs: wheel-py3_12-cuda12_1-test
with:
PYTORCH_ROOT: ${{ github.workspace }}/pytorch
@ -3657,8 +3674,6 @@ jobs:
build_name: wheel-py3_12-cuda12_1
secrets:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token-test: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml

View File

@ -10,6 +10,8 @@ concurrency:
group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref_name }}-${{ github.ref_type == 'branch' && github.sha }}-${{ github.event_name == 'workflow_dispatch' }}
cancel-in-progress: true
permissions: read-all
jobs:
linux-focal-cuda12_1-py3_10-gcc9-inductor-build:
name: cuda12.1-py3.10-gcc9-sm80

View File

@ -61,6 +61,8 @@ concurrency:
group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref_name }}-${{ github.ref_type == 'branch' && github.sha }}-${{ github.event_name == 'workflow_dispatch' }}-${{ github.event_name == 'schedule' }}
cancel-in-progress: true
permissions: read-all
jobs:
linux-focal-cuda12_1-py3_10-gcc9-inductor-build:
name: cuda12.1-py3.10-gcc9-sm80

View File

@ -14,6 +14,9 @@ concurrency:
group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref_name }}-${{ github.ref_type == 'branch' && github.sha }}-${{ github.event_name == 'workflow_dispatch' }}
cancel-in-progress: true
permissions: read-all
jobs:
linux-focal-cuda12_1-py3_10-gcc9-periodic-dynamo-benchmarks-build:
name: cuda12.1-py3.10-gcc9-sm86-periodic-dynamo-benchmarks

View File

@ -13,6 +13,8 @@ concurrency:
group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref_name }}-${{ github.ref_type == 'branch' && github.sha }}-${{ github.event_name == 'workflow_dispatch' }}
cancel-in-progress: true
permissions: read-all
jobs:
linux-focal-cuda12_1-py3_10-gcc9-inductor-build:
name: cuda12.1-py3.10-gcc9-sm86

View File

@ -11,6 +11,7 @@ on:
- landchecks/*
workflow_dispatch:
permissions: read-all
# The names of steps that actually test the code should be suffixed with `(nonretryable)`.
# When any other step fails, its job will be retried once by retryBot.
jobs:

View File

@ -10,6 +10,8 @@ concurrency:
group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref_name }}-${{ github.ref_type == 'branch' && github.sha }}-${{ github.event_name == 'workflow_dispatch' }}
cancel-in-progress: true
permissions: read-all
jobs:
macos-12-py3-arm64-build:
name: macos-12-py3-arm64

View File

@ -20,6 +20,8 @@ concurrency:
group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref_name }}-${{ github.ref_type == 'branch' && github.sha }}-${{ github.event_name == 'workflow_dispatch' }}-${{ github.event_name == 'schedule' }}-${{ github.event.schedule }}
cancel-in-progress: true
permissions: read-all
jobs:
parallelnative-linux-jammy-py3_8-gcc11-build:
name: parallelnative-linux-jammy-py3.8-gcc11
@ -235,6 +237,9 @@ jobs:
]}
linux-focal-rocm5_7-py3_8-test:
permissions:
id-token: write
contents: read
name: linux-focal-rocm5.7-py3.8
uses: ./.github/workflows/_rocm-test.yml
needs: linux-focal-rocm5_7-py3_8-build

View File

@ -17,6 +17,8 @@ concurrency:
group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.sha }}-${{ github.event_name == 'workflow_dispatch' }}-${{ github.event_name == 'schedule' }}
cancel-in-progress: true
permissions: read-all
jobs:
linux-jammy-py3_8-gcc11-build:
name: linux-jammy-py3.8-gcc11

View File

@ -15,6 +15,8 @@ concurrency:
group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref_name }}-${{ github.ref_type == 'branch' && github.sha }}-${{ github.event_name == 'workflow_dispatch' }}-${{ github.event_name == 'schedule' }}
cancel-in-progress: true
permissions: read-all
jobs:
linux-focal-rocm5_7-py3_8-build:
name: linux-focal-rocm5.7-py3.8
@ -31,6 +33,9 @@ jobs:
]}
linux-focal-rocm5_7-py3_8-test:
permissions:
id-token: write
contents: read
name: linux-focal-rocm5.7-py3.8
uses: ./.github/workflows/_rocm-test.yml
needs: linux-focal-rocm5_7-py3_8-build

View File

@ -18,6 +18,8 @@ concurrency:
group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref_name }}-${{ github.ref_type == 'branch' && github.sha }}-${{ github.event_name == 'workflow_dispatch' }}-${{ github.event_name == 'schedule' }}-${{ github.event.schedule }}
cancel-in-progress: true
permissions: read-all
jobs:
linux-focal-cuda12_1-py3-gcc9-slow-gradcheck-build:
name: linux-focal-cuda12.1-py3-gcc9-slow-gradcheck
@ -98,6 +100,9 @@ jobs:
]}
linux-focal-rocm5_6-py3_8-test:
permissions:
id-token: write
contents: read
name: linux-focal-rocm5.6-py3.8
uses: ./.github/workflows/_rocm-test.yml
needs: linux-focal-rocm5_6-py3_8-build

View File

@ -21,6 +21,9 @@ jobs:
stale:
if: ${{ github.repository == 'pytorch/pytorch' }}
runs-on: ubuntu-latest
permissions:
contents: read
pull-requests: write
steps:
- uses: actions/github-script@v6

View File

@ -16,6 +16,8 @@ concurrency:
group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref_name }}-${{ github.ref_type == 'branch' && github.sha }}-${{ github.event_name == 'workflow_dispatch' }}-${{ github.event_name == 'schedule' }}
cancel-in-progress: true
permissions: read-all
jobs:
# Build PyTorch with BUILD_CAFFE2=ON
caffe2-linux-jammy-py3_8-gcc11-build:
@ -188,6 +190,9 @@ jobs:
]}
linux-focal-rocm5_7-py3_8-test:
permissions:
id-token: write
contents: read
name: linux-focal-rocm5.7-py3.8
uses: ./.github/workflows/_rocm-test.yml
needs: linux-focal-rocm5_7-py3_8-build

View File

@ -13,6 +13,8 @@ concurrency:
group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref_name }}-${{ github.ref_type == 'branch' && github.sha }}-${{ github.event_name == 'workflow_dispatch' }}-${{ github.event_name == 'schedule' }}-${{ github.event.schedule }}
cancel-in-progress: true
permissions: read-all
jobs:
# There must be at least one job here to satisfy GitHub action workflow syntax
introduction:

View File

@ -12,6 +12,8 @@ concurrency:
group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref_name }}-${{ github.ref_type == 'branch' && github.sha }}-${{ github.event_name == 'workflow_dispatch' }}
cancel-in-progress: true
permissions: read-all
jobs:
# There must be at least one job here to satisfy GitHub action workflow syntax
introduction:

View File

@ -8,6 +8,8 @@ on:
- cron: 37 7 * * 1
workflow_dispatch:
permissions: read-all
jobs:
update-commit-hash:
runs-on: ubuntu-latest

View File

@ -349,6 +349,8 @@ cmake_dependent_option(
"NOT INTERN_BUILD_MOBILE" OFF)
cmake_dependent_option(
BUILD_FUNCTORCH "Build Functorch" ON "BUILD_PYTHON" OFF)
cmake_dependent_option(
BUILD_BUNDLE_PTXAS "Bundle PTX into torch/bin folder" OFF "USE_CUDA" OFF)
option(USE_MIMALLOC "Use mimalloc" OFF)
# Enable third party mimalloc library to improve memory allocation performance on Windows.
@ -1230,3 +1232,12 @@ if(DEFINED USE_CUSTOM_DEBINFO)
set(CMAKE_EXE_LINKER_FLAGS "${CMAKE_EXE_LINKER_FLAGS} -g")
set(CMAKE_SHARED_LINKER_FLAGS "${CMAKE_SHARED_LINKER_FLAGS} -g")
endif()
# Bundle PTXAS if needed
if(BUILD_BUNDLE_PTXAS AND USE_CUDA)
if(NOT EXISTS "${PROJECT_SOURCE_DIR}/build/bin/ptxas")
message(STATUS "Copying PTXAS into the bin folder")
file(COPY "${CUDAToolkit_BIN_DIR}/ptxas" DESTINATION "${PROJECT_BINARY_DIR}")
endif()
install(PROGRAMS "${PROJECT_BINARY_DIR}/ptxas" DESTINATION "${CMAKE_INSTALL_BINDIR}")
endif()

View File

@ -158,7 +158,7 @@ They require JetPack 4.2 and above, and [@dusty-nv](https://github.com/dusty-nv)
#### Prerequisites
If you are installing from source, you will need:
- Python 3.8 or later (for Linux, Python 3.8.1+ is needed)
- A compiler that fully supports C++17, such as clang or gcc (especially for aarch64, gcc 9.4.0 or newer is required)
- A compiler that fully supports C++17, such as clang or gcc (gcc 9.4.0 or newer is required)
We highly recommend installing an [Anaconda](https://www.anaconda.com/download) environment. You will get a high-quality BLAS library (MKL) and you get controlled dependency versions regardless of your Linux distro.

View File

@ -147,9 +147,8 @@ public:
// versions GCC/Clang have buggy determinations on whether or not an
// identifier is odr-used or not, and in any case it's hard to tell if
// a variable is odr-used or not. So best to just cut the problem at the root.
static constexpr size_type size_T = sizeof(T); // Workaround to compile with VS2022.
static constexpr size_type size() {
return VECTOR_WIDTH / size_T;
return VECTOR_WIDTH / sizeof(T);
}
Vectorized() : values{static_cast<T>(0)} {}
Vectorized(T val) {
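The hunk above works around an MSVC 14.38 internal compiler error by hoisting `sizeof(T)` into a named `static constexpr` member (`size_T`) before dividing into it in `size()`. A minimal standalone sketch of the same pattern — the struct name and the `kVectorWidth` constant are illustrative stand-ins, not PyTorch's actual `Vectorized`/`VECTOR_WIDTH`:

```cpp
#include <cassert>
#include <cstddef>

// Illustrative stand-in for VECTOR_WIDTH (bytes per SIMD register).
constexpr std::size_t kVectorWidth = 32;

template <typename T>
struct VecSketch {
  // Hoisting sizeof(T) into a named constant is the VS2022 workaround:
  // MSVC 14.38 ICEs when sizeof(T) appears directly inside size().
  static constexpr std::size_t size_T = sizeof(T);
  static constexpr std::size_t size() { return kVectorWidth / size_T; }
};
```

With a 32-byte width, `VecSketch<float>::size()` yields 8 lanes and `VecSketch<double>::size()` yields 4, matching the direct `VECTOR_WIDTH / sizeof(T)` form the workaround replaces.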

View File

@ -97,7 +97,7 @@ constexpr bool mkldnn_bf16_device_check_arm() {
#if AT_MKLDNN_ENABLED()
inline bool mkldnn_bf16_device_check() {
#if defined(__x86_64__)
#if defined(__x86_64__) || (defined(_M_X64) && !defined(_M_ARM64EC))
// Use ideep to check bf16 on X64 as cpuinfo has no avx_ne_convert check.
return ideep::has_bf16_type_support();
#else
@ -106,7 +106,7 @@ inline bool mkldnn_bf16_device_check() {
}
inline bool mkldnn_fp16_device_check() {
#if defined(__x86_64__)
#if defined(__x86_64__) || (defined(_M_X64) && !defined(_M_ARM64EC))
return ideep::has_fp16_type_support();
#else
return false;
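The hunk above widens the x86-64 detection so the lowered-precision checks also fire under MSVC: GCC/Clang define `__x86_64__`, while MSVC defines `_M_X64` — including under ARM64EC emulation, which the `!defined(_M_ARM64EC)` clause excludes. A hedged sketch of the gating logic (the helper name and the boolean parameter standing in for the ideep capability query are illustrative):

```cpp
#include <cassert>

// Widened x86-64 detection from the hunk above: covers GCC/Clang and
// MSVC x64, but not ARM64EC (ARM64 emulating x64).
#if defined(__x86_64__) || (defined(_M_X64) && !defined(_M_ARM64EC))
constexpr bool kIsX64 = true;
#else
constexpr bool kIsX64 = false;
#endif

// A lowered-precision check then gates on the platform before asking the
// backend (ideep::has_bf16_type_support() in the real code) for support.
inline bool bf16_supported_sketch(bool backend_has_bf16) {
  return kIsX64 && backend_has_bf16;
}
```

On non-x64 targets the function returns `false` unconditionally, mirroring the `#else` branch of the original check.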

View File

@ -12,14 +12,14 @@
#include <utility>
#if !defined(__clang__) && !defined(_MSC_VER) && defined(__GNUC__) && \
__GNUC__ < 5
__GNUC__ < 9
#error \
"You're trying to build PyTorch with a too old version of GCC. We need GCC 5 or later."
"You're trying to build PyTorch with a too old version of GCC. We need GCC 9 or later."
#endif
#if defined(__clang__) && __clang_major__ < 4
#if defined(__clang__) && __clang_major__ < 9
#error \
"You're trying to build PyTorch with a too old version of Clang. We need Clang 4 or later."
"You're trying to build PyTorch with a too old version of Clang. We need Clang 9 or later."
#endif
#if (defined(_MSC_VER) && (!defined(_MSVC_LANG) || _MSVC_LANG < 201703L)) || \

View File

@ -88,7 +88,8 @@ IF(NOT MKLDNN_FOUND)
ELSE()
IF(CMAKE_CXX_COMPILER_ID STREQUAL "GNU" OR CMAKE_CXX_COMPILER_ID STREQUAL "Clang")
IF(CPU_INTEL)
SET(DNNL_ARCH_OPT_FLAGS "-msse4" CACHE STRING "" FORCE)
# Do not specify arch in oneDNN build options, for portability on older systems
SET(DNNL_ARCH_OPT_FLAGS "" CACHE STRING "" FORCE)
ELSEIF(CPU_AARCH64)
SET(DNNL_ARCH_OPT_FLAGS "-mcpu=generic" CACHE STRING "" FORCE)
ENDIF()

View File

@ -303,7 +303,7 @@ Frequently Asked Questions
tasks or pull requests with your environment details is helpful and
appreciated.
- **CI tests failed, what does it mean?** Maybe your PR is based
off a broken main bracnh? You can try to rebase your change on top
off a broken main branch? You can try to rebase your change on top
of the latest main branch. You can also see the current status of
main branch's CI at https://hud.pytorch.org/.
- **What are the most high risk changes?** Anything that touches build

View File

@ -130,6 +130,9 @@ Padding Layers
nn.ConstantPad1d
nn.ConstantPad2d
nn.ConstantPad3d
nn.CircularPad1d
nn.CircularPad2d
nn.CircularPad3d
Non-linear Activations (weighted sum, nonlinearity)
---------------------------------------------------

View File

@ -1,3 +1,11 @@
#!/bin/bash
# Updates Triton to the pinned version for this copy of PyTorch
pip install --index-url https://download.pytorch.org/whl/nightly/ "pytorch-triton==$(cat .ci/docker/triton_version.txt)+$(head -c 10 .ci/docker/ci_commit_pins/triton.txt)"
BRANCH=$(git rev-parse --abbrev-ref HEAD)
TRITON_VERSION="pytorch-triton==$(cat .ci/docker/triton_version.txt)"
DOWNLOAD_PYTORCH_ORG="https://download.pytorch.org/whl"
if [[ "$BRANCH" =~ .*release.* ]]; then
pip install --index-url ${DOWNLOAD_PYTORCH_ORG}/test/ $TRITON_VERSION
else
pip install --index-url ${DOWNLOAD_PYTORCH_ORG}/nightly/ $TRITON_VERSION+$(head -c 10 .ci/docker/ci_commit_pins/triton.txt)
fi
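The branch test in the updated script selects the wheel channel: release branches install the pinned Triton from `test/`, everything else from `nightly/` with the pinned commit suffix. A hedged Python sketch of that selection logic (function name is illustrative):

```python
import re

DOWNLOAD_PYTORCH_ORG = "https://download.pytorch.org/whl"

def triton_index_url(branch):
    # release/* branches use the test channel; all others use nightly,
    # matching the [[ "$BRANCH" =~ .*release.* ]] test in the script
    channel = "test" if re.search(r"release", branch) else "nightly"
    return f"{DOWNLOAD_PYTORCH_ORG}/{channel}/"
```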

View File

@ -1110,7 +1110,7 @@ def main():
with open(os.path.join(cwd, "README.md"), encoding="utf-8") as f:
long_description = f.read()
version_range_max = max(sys.version_info[1], 10) + 1
version_range_max = max(sys.version_info[1], 12) + 1
torch_package_data = [
"py.typed",
"bin/*",

View File

@ -8,6 +8,7 @@ from unittest.mock import patch
import torch
import torch._dynamo
import torch._dynamo.testing
import torch.distributed as dist
import torch.nn as nn
from torch._C import FileCheck
@ -145,11 +146,16 @@ class TestDTensorCompile(torch._dynamo.test_case.TestCase):
# _dt_lib_impl = torch.library.Library("dtensor", "IMPL")
# _dt_lib_impl.impl("from_local", from_local_tensor, "Autograd")
x = torch.ones(1)
x = torch.ones(1, requires_grad=True)
ref = fn(x)
opt_fn = torch.compile(fn, backend="aot_eager", fullgraph=True, dynamic=False)
cnt = torch._dynamo.testing.CompileCounterWithBackend("aot_eager")
opt_fn = torch.compile(fn, backend=cnt, fullgraph=True)
res = opt_fn(x)
# backward should work as well
res.sum().backward()
self.assertEqual(res, ref)
self.assertEqual(cnt.frame_count, 1)
# test if user calls from_local with mesh/placements as kwargs and that should still work
def from_local_kwargs_fn(x):
@ -159,11 +165,10 @@ class TestDTensorCompile(torch._dynamo.test_case.TestCase):
return dt.to_local() + 2
ref = from_local_kwargs_fn(x)
opt_kwargs_fn = torch.compile(
from_local_kwargs_fn, backend="aot_eager", fullgraph=True, dynamic=False
)
opt_kwargs_fn = torch.compile(from_local_kwargs_fn, backend=cnt, fullgraph=True)
res = opt_kwargs_fn(x)
self.assertEqual(res, ref)
self.assertEqual(cnt.frame_count, 2)
def test_dynamo_dtensor_from_local_redistribute(self):
mesh = DeviceMesh(self.device_type, torch.arange(self.world_size))
@ -176,7 +181,8 @@ class TestDTensorCompile(torch._dynamo.test_case.TestCase):
x = torch.ones(1)
ref = fn(x)
opt_fn = torch.compile(fn, backend="aot_eager", fullgraph=True, dynamic=False)
cnt = torch._dynamo.testing.CompileCounterWithBackend("aot_eager")
opt_fn = torch.compile(fn, backend=cnt, fullgraph=True)
res = opt_fn(x)
self.assertEqual(res, ref)
@ -190,7 +196,7 @@ class TestDTensorCompile(torch._dynamo.test_case.TestCase):
x = torch.ones(1)
ref = redistribute_kwargs_fn(x)
opt_kwargs_fn = torch.compile(
redistribute_kwargs_fn, backend="aot_eager", fullgraph=True, dynamic=False
redistribute_kwargs_fn, backend=cnt, fullgraph=True
)
res = opt_kwargs_fn(x)
self.assertEqual(res, ref)
@ -254,9 +260,12 @@ class TestDTensorCompile(torch._dynamo.test_case.TestCase):
parallelize_plan=parallel_plan,
)
compiled_model = torch.compile(model)
cnt = torch._dynamo.testing.CompileCounterWithBackend("inductor")
compiled_model = torch.compile(model, backend=cnt, fullgraph=True)
inp = torch.rand(20, 16).to(self.device_type)
out = compiled_model(inp)
out.sum().backward()
self.assertEqual(cnt.frame_count, 1)
code = run_and_get_triton_code(compiled_model, inp)
# Check that `buf2` is correctly waited on before first use.
@ -327,11 +336,12 @@ class TestDTensorCompileE2E(DTensorTestBase):
torch.manual_seed(rng_seed)
inp = torch.rand(20, 10, device=self.device_type)
out = model(inp)
compiled_mod = torch.compile(
model, backend="aot_eager", fullgraph=True, dynamic=False
)
cnt = torch._dynamo.testing.CompileCounterWithBackend("aot_eager")
compiled_mod = torch.compile(model, backend=cnt, fullgraph=True)
compiled_out = compiled_mod(inp)
compiled_out.sum().backward()
self.assertEqual(compiled_out, out)
self.assertEqual(cnt.frame_count, 1)
@with_comms
@skip_if_lt_x_gpu(4)
@ -377,10 +387,12 @@ class TestDTensorCompileE2E(DTensorTestBase):
)
# TODO: once aot autograd support is ready we can just use default backend
compiled_2d = torch.compile(fsdp_2d, backend="aot_eager", dynamic=False)
cnt = torch._dynamo.testing.CompileCounterWithBackend("aot_eager")
compiled_2d = torch.compile(fsdp_2d, backend=cnt)
compiled_output = compiled_2d(inp)
self.assertEqual(out, compiled_output)
self.assertEqual(cnt.frame_count, 1)
@with_comms
@skip_if_lt_x_gpu(4)

View File

@ -399,6 +399,40 @@ class DistTensorOpsTest(DTensorTestBase):
ref = torch.where(global_tensor > 0, 1, 0)
self.assertEqual(res.full_tensor(), ref)
@with_comms
def test_dtensor_dtype_conversion(self):
device_mesh = DeviceMesh(self.device_type, list(range(self.world_size)))
shard_spec = [Shard(0)]
# by default we start from bf16 dtype
local_tensor = torch.randn(2, 8, dtype=torch.bfloat16)
bf16_sharded_dtensor = DTensor.from_local(local_tensor, device_mesh, shard_spec)
self.assertEqual(bf16_sharded_dtensor.dtype, torch.bfloat16)
self.assertEqual(bf16_sharded_dtensor.to_local().dtype, torch.bfloat16)
# convert to float dtype
fp32_sharded_dtensor = bf16_sharded_dtensor.float()
self.assertEqual(fp32_sharded_dtensor.dtype, torch.float32)
self.assertEqual(fp32_sharded_dtensor.to_local().dtype, torch.float32)
# convert back to bf16 dtype
bf16_sharded_dtensor1 = fp32_sharded_dtensor.type_as(bf16_sharded_dtensor)
self.assertEqual(bf16_sharded_dtensor1.dtype, torch.bfloat16)
self.assertEqual(bf16_sharded_dtensor1.to_local().dtype, torch.bfloat16)
from torch.distributed._tensor.debug import get_sharding_prop_cache_info
# by this point we only have cache misses
hits, misses, _, _ = get_sharding_prop_cache_info()
self.assertEqual(hits, 0)
self.assertEqual(misses, 2)
# convert to fp32 again and see if there's cache hit
fp32_sharded_dtensor1 = bf16_sharded_dtensor1.float()
hits, misses, _, _ = get_sharding_prop_cache_info()
# by now we should have cache hit
self.assertEqual(hits, 1)
self.assertEqual(misses, 2)
if __name__ == "__main__":
run_tests()
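The new test asserts on the sharding-propagation cache counters: the first two dtype conversions are misses, a repeated conversion is a hit. The caching behavior being exercised is analogous to `functools.lru_cache`; this stand-in is illustrative, not DTensor's implementation:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def propagate_sharding(op_name, dtype):
    # stand-in for DTensor's sharding propagation, cached per (op, dtype)
    return (op_name, dtype)

propagate_sharding("to_float32", "bfloat16")  # miss
propagate_sharding("type_as", "float32")      # miss
propagate_sharding("to_float32", "bfloat16")  # hit
info = propagate_sharding.cache_info()
```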

View File

@ -3,10 +3,11 @@
import copy
import sys
from itertools import chain
from typing import Callable
from typing import Callable, Tuple
import torch
import torch.distributed as dist
import torch.nn as nn
from torch.distributed._composable import fully_shard, replicate
from torch.distributed._shard.sharded_tensor import ShardedTensor
from torch.distributed._tensor import DTensor, init_device_mesh
@ -133,7 +134,12 @@ class TestStateDict(DTensorTestBase, VerifyStateDictMixin):
self._verify_osd(model, optim, osd, dist_osd)
def _test_fsdp(
self, use_orig_params: bool, use_composable: bool, use_dtensor: bool
self,
*,
use_orig_params: bool,
use_composable: bool,
use_dtensor: bool,
wrapping: Tuple[nn.Module] = (),
) -> None:
if not use_orig_params and use_composable:
return
@ -149,23 +155,27 @@ class TestStateDict(DTensorTestBase, VerifyStateDictMixin):
orig_model = CompositeParamModel(device=torch.device("cuda"))
orig_optim = torch.optim.Adam(orig_model.parameters(), lr=1e-3)
copy_optim = torch.optim.Adam(orig_model.parameters(), lr=1e-3)
if wrapping:
strategy = set(wrapping)
else:
strategy = {UnitModule}
if use_composable:
dist_model = fully_shard(
copy.deepcopy(orig_model), policy=ModuleWrapPolicy({UnitModule})
copy.deepcopy(orig_model), policy=ModuleWrapPolicy(strategy)
)
else:
if use_dtensor:
device_mesh = init_device_mesh("cuda", (self.world_size,))
dist_model = FSDP(
copy.deepcopy(orig_model),
auto_wrap_policy=ModuleWrapPolicy({UnitModule}),
auto_wrap_policy=ModuleWrapPolicy(strategy),
use_orig_params=use_orig_params,
device_mesh=device_mesh,
)
else:
dist_model = FSDP(
copy.deepcopy(orig_model),
auto_wrap_policy=ModuleWrapPolicy({UnitModule}),
auto_wrap_policy=ModuleWrapPolicy(strategy),
use_orig_params=use_orig_params,
)
@ -182,6 +192,7 @@ class TestStateDict(DTensorTestBase, VerifyStateDictMixin):
"use_orig_params": [True, False],
"use_composable": [True, False],
"use_dtensor": [True, False],
"wrapping": [tuple(), (nn.Linear, UnitModule)],
},
self._test_fsdp,
)
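The `run_subtests`-style harness above expands the option dictionary into a cartesian product of configurations and invokes the test once per combination. A minimal sketch of that expansion (the helper name and exact semantics are assumptions):

```python
from itertools import product

def expand_subtests(params, test_fn):
    # call test_fn once per combination of the parametrized options
    keys = list(params)
    for values in product(*(params[key] for key in keys)):
        test_fn(**dict(zip(keys, values)))

calls = []
expand_subtests(
    {"use_orig_params": [True, False], "use_dtensor": [True, False]},
    lambda **kw: calls.append(kw),
)
```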

View File

@ -313,30 +313,6 @@ class TestFSDPWithDeviceMeshAndDTensor(DTensorTestBase):
with FSDP.state_dict_type(model, StateDictType.LOCAL_STATE_DICT):
optim_state_dict = FSDP.optim_state_dict(model, optim)
with self.assertLogs(
"torch.distributed.fsdp._state_dict_utils", level="WARNING"
) as log:
with FSDP.state_dict_type(model, StateDictType.FULL_STATE_DICT):
state_dict = model.state_dict()
self.assertEqual(len(log.records), 1)
self.assertEqual(len(log.output), 1)
self.assertIn(
"Found both state_dict_type FULL_STATE_DICT and device_mesh.",
log.output[0],
)
with self.assertLogs(
"torch.distributed.fsdp._optim_utils", level="WARNING"
) as log:
with FSDP.state_dict_type(model, StateDictType.FULL_STATE_DICT):
state_dict = FSDP.optim_state_dict(model, optim)
self.assertEqual(len(log.records), 1)
self.assertEqual(len(log.output), 1)
self.assertIn(
"Found both state_dict_type FULL_STATE_DICT and device_mesh.",
log.output[0],
)
instantiate_parametrized_tests(TestFSDPWithDeviceMeshAndDTensor)
if __name__ == "__main__":

View File

@ -1,5 +1,6 @@
# Owner(s): ["oncall: distributed"]
import contextlib
import sys
from enum import Enum
@ -31,7 +32,13 @@ if TEST_WITH_DEV_DBG_ASAN:
class Model(nn.Module):
def __init__(self, with_fsdp, freeze_after_wrap_fsdp):
def __init__(
self,
with_fsdp,
freeze_after_wrap_fsdp,
disable_autograd,
fsdp_kwargs,
):
super().__init__()
self.trunk = nn.Sequential(
nn.Conv2d(3, 64, kernel_size=3),
@ -39,20 +46,32 @@ class Model(nn.Module):
nn.AdaptiveAvgPool2d(output_size=(1, 1)),
nn.Flatten(),
)
self.device = torch.cuda.current_device()
self.head = nn.Linear(64, 10)
if with_fsdp and freeze_after_wrap_fsdp:
self.fsdp_wrap()
self.fsdp_wrap(fsdp_kwargs)
self.autograd_ctx = (
torch.no_grad if disable_autograd else contextlib.nullcontext
)
def fsdp_wrap(self):
self.trunk = FSDP(self.trunk)
self.head = FSDP(self.head)
def fsdp_wrap(self, fsdp_kwargs):
self.trunk = FSDP(self.trunk, **fsdp_kwargs)
self.head = FSDP(self.head, **fsdp_kwargs)
def forward(self, x):
return self.head(self.trunk(x))
with self.autograd_ctx():
x = self.trunk(x)
return self.head(x)
class NestedTrunkModel(nn.Module):
def __init__(self, with_fsdp, freeze_after_wrap_fsdp):
def __init__(
self,
with_fsdp,
freeze_after_wrap_fsdp,
disable_autograd,
fsdp_kwargs,
):
super().__init__()
self.trunk = nn.Sequential(
self._create_block(3, 64, with_fsdp, freeze_after_wrap_fsdp),
@ -64,17 +83,22 @@ class NestedTrunkModel(nn.Module):
nn.Linear(64, 10),
)
if with_fsdp and freeze_after_wrap_fsdp:
self.fsdp_wrap()
self.fsdp_wrap(fsdp_kwargs)
self.autograd_ctx = (
torch.no_grad if disable_autograd else contextlib.nullcontext
)
def fsdp_wrap(self):
def fsdp_wrap(self, fsdp_kwargs):
for name, child in self.trunk.named_children():
wrapped_child = FSDP(child)
wrapped_child = FSDP(child, **fsdp_kwargs)
setattr(self.trunk, name, wrapped_child)
self.trunk = FSDP(self.trunk)
self.head = FSDP(self.head)
self.trunk = FSDP(self.trunk, **fsdp_kwargs)
self.head = FSDP(self.head, **fsdp_kwargs)
def forward(self, x):
return self.head(self.trunk(x))
with self.autograd_ctx():
x = self.trunk(x)
return self.head(x)
def _create_block(
self, in_channels, out_channels, with_fsdp, freeze_after_wrap_fsdp
@ -92,20 +116,53 @@ class FreezingMethod(str, Enum):
class TestFreezingWeights(FSDPTest):
def _create_model(self, with_fsdp, with_nested_trunk, freeze_after_wrap_fsdp):
def _create_model(
self,
with_fsdp,
with_nested_trunk,
freeze_after_wrap_fsdp,
disable_autograd,
fsdp_kwargs,
):
if with_nested_trunk:
model = NestedTrunkModel(with_fsdp, freeze_after_wrap_fsdp)
model = NestedTrunkModel(
with_fsdp, freeze_after_wrap_fsdp, disable_autograd, fsdp_kwargs
)
else:
model = Model(with_fsdp, freeze_after_wrap_fsdp)
model = Model(
with_fsdp, freeze_after_wrap_fsdp, disable_autograd, fsdp_kwargs
)
return model
def _dist_train(
self, with_nested_trunk, freezing_method, freeze_after_wrap_fsdp, with_fsdp
self,
with_nested_trunk,
freezing_method,
freeze_after_wrap_fsdp,
with_fsdp,
disable_autograd,
forward_prefetch,
):
torch.manual_seed(0)
batch = torch.randn(size=(2, 3, 224, 224)).cuda()
model = self._create_model(with_fsdp, with_nested_trunk, freeze_after_wrap_fsdp)
fsdp_kwargs = {
"device_id": self.rank,
"forward_prefetch": forward_prefetch,
}
ddp_kwargs = {
"device_ids": [self.rank],
"find_unused_parameters": True if disable_autograd else False,
}
model = self._create_model(
with_fsdp,
with_nested_trunk,
freeze_after_wrap_fsdp,
disable_autograd,
fsdp_kwargs,
)
model = model.cuda()
# freezing the trunk using requires_grad.
@ -115,10 +172,10 @@ class TestFreezingWeights(FSDPTest):
if with_fsdp:
if not freeze_after_wrap_fsdp:
model.fsdp_wrap()
model = FSDP(model)
model.fsdp_wrap(fsdp_kwargs)
model = FSDP(model, **fsdp_kwargs)
else:
model = DistributedDataParallel(model, device_ids=[self.rank])
model = DistributedDataParallel(model, **ddp_kwargs)
target = torch.tensor([0, 1], dtype=torch.long).cuda()
criterion = nn.CrossEntropyLoss()
@ -145,17 +202,34 @@ class TestFreezingWeights(FSDPTest):
"freezing_method", [FreezingMethod.RequiresGrad, FreezingMethod.GradToNone]
)
@parametrize("freeze_after_wrap_fsdp", [True, False])
@parametrize("disable_autograd", [True, False])
@parametrize("forward_prefetch", [True, False])
def test_freezing_weights(
self, with_nested_trunk, freezing_method, freeze_after_wrap_fsdp
self,
with_nested_trunk,
freezing_method,
freeze_after_wrap_fsdp,
disable_autograd,
forward_prefetch,
):
# DDP
ddp_state = self._dist_train(
with_nested_trunk, freezing_method, freeze_after_wrap_fsdp, with_fsdp=False
with_nested_trunk,
freezing_method,
freeze_after_wrap_fsdp,
with_fsdp=False,
disable_autograd=disable_autograd,
forward_prefetch=False, # does not apply to DDP
)
# FSDP
fsdp_state = self._dist_train(
with_nested_trunk, freezing_method, freeze_after_wrap_fsdp, with_fsdp=True
with_nested_trunk,
freezing_method,
freeze_after_wrap_fsdp,
with_fsdp=True,
disable_autograd=disable_autograd,
forward_prefetch=forward_prefetch,
)
self.assertEqual(

View File

@ -11,6 +11,7 @@ import torch
import torch.distributed as dist
import torch.distributed.fsdp._traversal_utils as traversal_utils
import torch.nn as nn
from torch.distributed.device_mesh import init_device_mesh
from torch.distributed.distributed_c10d import _rank_not_in_group
from torch.distributed.fsdp import (
FullyShardedDataParallel as FSDP,
@ -116,46 +117,6 @@ class TestFSDPHybridShard(FSDPTest):
with err_ctx:
model = FSDP(model, sharding_strategy=ShardingStrategy._HYBRID_SHARD_ZERO2)
@skip_if_lt_x_gpu(2)
def test_hybrid_shard_pg_mismatch_raises(self):
model = MyModel().cuda()
intra_pg = self.process_group
inter_pg = dist.new_group(ranks=[self.rank])
# Mismatched process groups for intra-node
model.lin1 = FSDP(
model.lin1,
process_group=(intra_pg, inter_pg),
sharding_strategy=ShardingStrategy.HYBRID_SHARD,
)
model = FSDP(
model,
process_group=(dist.new_group(), dist.new_group()),
sharding_strategy=ShardingStrategy.HYBRID_SHARD,
)
# Errors during _lazy_init
inp = torch.randn(4, 10)
with self.assertRaisesRegex(
ValueError, "intra-node process groups do not match"
):
model(inp)
# Mismatched process groups for inter-node
model = MyModel().cuda()
model.lin1 = FSDP(
model.lin1,
process_group=(intra_pg, inter_pg),
sharding_strategy=ShardingStrategy.HYBRID_SHARD,
)
model = FSDP(
model,
process_group=(intra_pg, dist.new_group()),
sharding_strategy=ShardingStrategy.HYBRID_SHARD,
)
with self.assertRaisesRegex(
ValueError, "inter-node process groups do not match"
):
model(inp)
@skip_if_lt_x_gpu(4)
def test_hsdp_save_load_state_dict(self):
model = MyModel().cuda()
@ -284,6 +245,7 @@ class TestFSDPHybridShard(FSDPTest):
ShardingStrategyMode.MIXED_HYBRID_FULL_SHARD,
],
"use_orig_params": [False, True],
"use_device_mesh": [False, True],
},
self._test_fsdp_hybrid_shard_basic_setup,
)
@ -293,9 +255,17 @@ class TestFSDPHybridShard(FSDPTest):
hsdp_sharding_strategy: ShardingStrategy,
sharding_strategy_mode: ShardingStrategyMode,
use_orig_params: bool,
use_device_mesh: bool,
):
if use_device_mesh:
device_mesh = init_device_mesh("cuda", (1, self.world_size))
else:
device_mesh = None
hsdp_model = self._init_hsdp_model(
hsdp_sharding_strategy, sharding_strategy_mode, use_orig_params
hsdp_sharding_strategy,
sharding_strategy_mode,
use_orig_params,
hsdp_device_mesh=device_mesh,
)
# All FSDP modules should have state.process_group as the process group over which to
# shard (default process group), and state._inter_node_pg (process group containing only
@ -428,7 +398,9 @@ class TestFSDPHybridShard(FSDPTest):
hsdp_process_groups: Optional[
Tuple[dist.ProcessGroup, dist.ProcessGroup]
] = None,
hsdp_device_mesh: Optional = None,
):
assert hsdp_process_groups is None or hsdp_device_mesh is None
auto_wrap_policy = ModuleWrapPolicy(
{TransformerEncoderLayer, TransformerDecoderLayer},
)
@ -437,6 +409,7 @@ class TestFSDPHybridShard(FSDPTest):
"auto_wrap_policy": auto_wrap_policy,
"sharding_strategy": hsdp_sharding_strategy,
"use_orig_params": use_orig_params,
"device_mesh": hsdp_device_mesh,
}
if sharding_strategy_mode == ShardingStrategyMode.ALL_HYBRID_SHARD:
hsdp_model = TransformerWithSharedParams.init(

View File

@ -8,7 +8,15 @@ import torch
from torch import distributed as dist
from torch.distributed._shard.sharded_tensor.api import ShardedTensor
from torch.distributed._shard.sharding_spec import ChunkShardingSpec
from torch.distributed._tensor import DeviceMesh, DTensor as DT, init_device_mesh, Shard
from torch.distributed._tensor import (
DeviceMesh,
distribute_module,
DTensor,
init_device_mesh,
Replicate,
Shard,
)
from torch.distributed._tensor.debug import CommDebugMode
from torch.distributed.fsdp.fully_sharded_data_parallel import (
CPUOffload,
FullyShardedDataParallel as FSDP,
@ -26,6 +34,7 @@ from torch.testing._internal.common_utils import (
run_tests,
TEST_WITH_DEV_DBG_ASAN,
)
from torch.testing._internal.distributed._tensor.common_dtensor import MLPModule
if not dist.is_available():
print("Distributed not available, skipping tests", file=sys.stderr)
@ -68,6 +77,34 @@ class SimpleModel(torch.nn.Module):
return ["net3.weight", "net3.bias"]
# simple RMSNorm layer for testing
class RMSNormPython(torch.nn.Module):
def __init__(self, dim: int, eps: float = 1e-6):
super().__init__()
self.eps = eps
self.weight = torch.nn.Parameter(torch.ones(dim))
def _norm(self, x):
return x * torch.rsqrt(x.pow(2).mean(-1, keepdim=True) + self.eps)
def forward(self, x):
output = self._norm(x)
return output * self.weight
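`RMSNormPython` normalizes by the root-mean-square of the last dimension. The same formula on a plain list, as a framework-free reference (illustrative only, with the learnable weight fixed at 1):

```python
import math

def rms_norm(values, eps=1e-6):
    # x / sqrt(mean(x^2) + eps), matching RMSNormPython._norm
    mean_sq = sum(v * v for v in values) / len(values)
    scale = 1.0 / math.sqrt(mean_sq + eps)
    return [v * scale for v in values]
```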
def distribute_rmsnorm(module, device_mesh):
def prepare_input_fn(inputs, device_mesh):
shard_tensor = DTensor.from_local(inputs[0], device_mesh, [Shard(0)])
return shard_tensor
def prepare_output_fn(outputs, device_mesh):
return outputs.to_local()
return distribute_module(
module, device_mesh, input_fn=prepare_input_fn, output_fn=prepare_output_fn
)
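`distribute_module` here wraps the module's forward with an input transform (shard the local tensor) and an output transform (convert back to local). Stripped of DTensor, the wrapping pattern is just pre/post hooks; this is a generic sketch, and `distribute_module`'s real signature is richer:

```python
def with_io_transforms(forward, input_fn, output_fn):
    # transform the inputs, run the wrapped forward, then transform the output
    def wrapped(*inputs):
        inputs = input_fn(inputs)
        return output_fn(forward(*inputs))
    return wrapped

double_then_negate = with_io_transforms(
    lambda x: x * 2,
    lambda inputs: inputs,  # identity input transform
    lambda out: -out,       # negate the output
)
```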
class TestTPFSDPIntegration(FSDPTest):
def _get_params_and_sharding_info(
self,
@ -260,8 +297,8 @@ class TestTPFSDPIntegration(FSDPTest):
sequence_parallelize_plan,
)
tp_pg = mesh_2d["tp"].get_group(mesh_dim=0)
assert isinstance(tp_fsdp_model.net1.weight, DT)
assert isinstance(tp_fsdp_model.net2.weight, DT)
assert isinstance(tp_fsdp_model.net1.weight, DTensor)
assert isinstance(tp_fsdp_model.net2.weight, DTensor)
tp_fsdp_model = FSDP(
tp_fsdp_model,
cpu_offload=cpu_offload,
@ -314,6 +351,117 @@ class TestTPFSDPIntegration(FSDPTest):
tp_fsdp_out = tp_fsdp_model(inp)
self.assertEqual(fsdp_out, tp_fsdp_out)
@skip_if_lt_x_gpu(4)
def test_fsdp_tp_extension_grad(self):
"""
Tests TP + FSDP extension with correct gradient (i.e. no ACT)
"""
mesh_2d = init_device_mesh(
"cuda", (self.world_size // 2, 2), mesh_dim_names=["dp", "tp"]
)
class TestModel(torch.nn.Module):
def __init__(self):
super().__init__()
self.mlp = MLPModule("cuda")
self.mlp_norm = RMSNormPython(10)
def forward(self, x):
return self.mlp(self.mlp_norm(x))
model = TestModel().cuda(self.rank)
# Shard with TP and test gradient
tp_mesh = mesh_2d["tp"]
tp_model = parallelize_module(
model,
tp_mesh,
{
"mlp.net1": ColwiseParallel(input_layouts=Shard(0)),
"mlp.net2": RowwiseParallel(output_layouts=Shard(0)),
},
)
distribute_rmsnorm(tp_model.mlp_norm, tp_mesh)
fsdp_2d_model = FSDP(tp_model, device_mesh=mesh_2d["dp"])
comm_mode = CommDebugMode()
with comm_mode:
fsdp_2d_model(torch.rand(2, 10).cuda(self.rank)).sum().backward()
funcol = torch.ops.c10d_functional
comm_counts = comm_mode.get_comm_counts()
self.assertEqual(comm_mode.get_total_counts(), 5)
self.assertEqual(comm_counts[funcol.reduce_scatter_tensor], 2)
self.assertEqual(comm_counts[funcol.all_gather_into_tensor], 2)
self.assertEqual(comm_counts[funcol.all_reduce], 1)
grads = [p.grad for p in fsdp_2d_model.parameters() if p.grad is not None]
for grad in grads:
self.assertFalse(grad.isnan().any().item())
@skip_if_lt_x_gpu(4)
def test_fsdp_tp_sync_module_state(self):
mesh_2d = init_device_mesh(
"cuda", (self.world_size // 2, 2), mesh_dim_names=["dp", "tp"]
)
tp_mesh = mesh_2d["tp"]
dp_mesh = mesh_2d["dp"]
# set random seed for each rank
torch.manual_seed(mesh_2d.get_rank())
class TestModel(torch.nn.Module):
def __init__(self):
super().__init__()
replicated_dt = DTensor.from_local(
torch.randn(8, 8), tp_mesh, [Replicate()], run_check=False
)
replicated_buffer_dt = DTensor.from_local(
torch.randn(8, 8), tp_mesh, [Replicate()], run_check=False
)
self.param = torch.nn.Parameter(replicated_dt)
self.register_buffer("buf", replicated_buffer_dt)
def forward(self, x):
return self.param + self.buf + 1
model = TestModel()
def assert_local_shard_across_ranks(local_tensor, group, check_equal=True):
gathered_tensors = [
torch.empty_like(local_tensor) for _ in range(group.size())
]
dist.all_gather(gathered_tensors, local_tensor, group=group)
# on dp mesh dim local tensor does not equal
tensor_to_compare = gathered_tensors[0]
for tensor in gathered_tensors[1:]:
if check_equal:
self.assertTrue(torch.equal(tensor, tensor_to_compare))
else:
self.assertFalse(torch.equal(tensor, tensor_to_compare))
dp_group = dp_mesh.get_group()
# check on dp mesh dim param local tensor does not equal
local_param = model.param.to_local()
assert_local_shard_across_ranks(local_param, dp_group, check_equal=False)
# check on dp mesh dim buffer local tensor does not equal
local_buf = model.buf.to_local()
assert_local_shard_across_ranks(local_buf, dp_group, check_equal=False)
# wrap with fsdp sync param should sync dp mesh dim
fsdp_mod = FSDP(model, device_mesh=dp_mesh, sync_module_states=True)
with fsdp_mod.summon_full_params(fsdp_mod):
# on dp mesh dim local param does equal after sync_module_states
local_param = fsdp_mod.param.to_local()
assert_local_shard_across_ranks(local_param, dp_group, check_equal=True)
# on dp mesh dim local buf does equal after sync_module_states
local_buf = fsdp_mod.buf.to_local()
assert_local_shard_across_ranks(local_buf, dp_group, check_equal=True)
instantiate_parametrized_tests(TestTPFSDPIntegration)

View File

@ -9,7 +9,7 @@ import torch.nn as nn
from torch.distributed._shard.sharded_tensor import ShardedTensor
from torch.distributed._tensor import DTensor, Replicate, Shard
from torch.distributed.device_mesh import _mesh_resources, init_device_mesh
from torch.distributed.device_mesh import init_device_mesh
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP
from torch.distributed.fsdp.api import (
ShardedOptimStateDictConfig,
@ -50,24 +50,6 @@ class DenseModel(torch.nn.Module):
# TODO: Consolidate DeviceMesh based FSDP and HSDP test cases.
class TestHSDPWithDeviceMeshAndDTensor(DTensorTestBase):
@with_comms
@skip_if_lt_x_gpu(4)
def test_raises_tp_hsdp_not_supported_error(self):
mesh_2d = init_device_mesh(self.device_type, (2, self.world_size // 2))
# manually set a fake parent mesh to mesh_2d
fake_parent_mesh = init_device_mesh(self.device_type, (self.world_size,))
_mesh_resources.child_to_parent_mapping[mesh_2d] = fake_parent_mesh
with self.assertRaisesRegex(
RuntimeError,
r"Hybrid sharding \+ TP is not supported yet.",
):
model = FSDP(
DenseModel().cuda(),
device_mesh=mesh_2d,
sharding_strategy=ShardingStrategy.HYBRID_SHARD,
)
def _create_model(self, device_mesh=None):
if device_mesh:
model = FSDP(

View File

@ -269,6 +269,27 @@ class TestFakeDistributedSingleProc(torch._dynamo.test_case.TestCase):
opt_model()
@patch.object(config, "optimize_ddp", True)
def test_symbol_splitting(self):
class Model(nn.Module):
def __init__(self):
super().__init__()
self.weight1 = nn.Parameter(torch.randn(512, 512))
self.weight2 = nn.Parameter(torch.randn(512, 512))
def forward(self, x):
x = torch.cat([x, x])
y = x @ self.weight1
z = x + y @ self.weight2
return z
model = Model()
model = FakeDDP(model)
opt_model = torch.compile(dynamic=True)(model)
opt_model(torch.randn(20, 512))
# Are these tests failing? Check and see if TestFakeDistributedSingleProc has a
# single process version; if it's just a problem in the Dynamo distributed
# optimizer, you should be able to repro it single process!

View File

@ -2324,6 +2324,29 @@ utils_device.CURRENT_DEVICE == None""".split(
self.assertTrue(same(fn(x, y), opt_fn(x.clone(), y.clone())))
self.assertEqual(cnts.frame_count, 1)
def test_out_variants_with_resizing_on_graph_inputs_with_dynamic(self):
# https://github.com/pytorch/pytorch/issues/120482
class CustomModel(torch.nn.Module):
def __init__(self):
super().__init__()
def forward(self, inputs):
return torch.outer(**inputs)
compile_fn = torch.compile(CustomModel(), fullgraph=True)
shapes = [(2, 1), (6, 1), (4, 1)]
for shape in shapes:
vec1, vec2 = shape
input_tensor1 = torch.randn(vec1)
input_tensor2 = torch.randn(vec2)
out_tensor = torch.empty(shape)
args = {"input": input_tensor1, "vec2": input_tensor2, "out": out_tensor}
res = compile_fn(args)
opt_res = res.clone()  # because this is the out tensor and we mutate it
res = CustomModel()(args)
self.assertEqual(res, opt_res)
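The regression test exercises `torch.outer`'s `out=` variant with buffers that get resized across calls. The `out=` contract (write the result into a caller-provided buffer, resizing it as needed) can be sketched on plain lists; this is an analogy, not torch's actual semantics:

```python
def outer(vec1, vec2, out=None):
    # outer product; with out= the caller's buffer is resized and filled in place
    result = [[a * b for b in vec2] for a in vec1]
    if out is None:
        return result
    out[:] = result  # overwrite (and resize) the existing buffer
    return out

buf = []
outer([1, 2], [3, 4], out=buf)
```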
def test_dict_mutation_side_effect(self):
def fn(d):
d["c"] = d["a"] + d.pop("b")

View File

@ -15,9 +15,10 @@ from torch.testing._internal.common_utils import (
parametrize,
run_tests,
TestCase,
IS_WINDOWS
)
@unittest.skipIf(IS_WINDOWS, "Windows not supported for this test")
@unittest.skipIf(not torchdynamo.is_dynamo_supported(), "dynamo isn't supported")
class ExampleTests(TestCase):
# TODO Maybe we should make this tests actually show up in a file?

View File

@ -23,7 +23,11 @@ from torch._subclasses import FakeTensorMode
from torch.export import Constraint, Dim, export
from torch.fx.experimental.proxy_tensor import make_fx
from torch.testing import FileCheck
from torch.testing._internal.common_utils import run_tests, TestCase
from torch.testing._internal.common_utils import (
run_tests,
TestCase,
IS_WINDOWS,
)
from torch.utils._pytree import (
LeafSpec,
tree_flatten,
@ -95,7 +99,7 @@ class TestDynamismExpression(TestCase):
# Being able to export means shape is preserved as static
export(branch_on_shape, inp)
@unittest.skipIf(IS_WINDOWS, "Windows isn't supported for this case")
@unittest.skipIf(not torchdynamo.is_dynamo_supported(), "dynamo isn't supported")
class TestExport(TestCase):

View File

@ -7,7 +7,7 @@ from functorch.experimental import control_flow
from torch._dynamo.eval_frame import is_dynamo_supported
from torch._export import export
from torch._export.pass_base import _ExportPassBase
from torch.testing._internal.common_utils import run_tests, TestCase
from torch.testing._internal.common_utils import run_tests, TestCase, IS_WINDOWS
@unittest.skipIf(not is_dynamo_supported(), "Dynamo not supported")
@ -37,6 +37,7 @@ class TestPassInfra(TestCase):
self.assertEqual(new_node.op, old_node.op)
self.assertEqual(new_node.target, old_node.target)
@unittest.skipIf(IS_WINDOWS, "Windows not supported")
def test_cond(self) -> None:
class M(torch.nn.Module):
def __init__(self):

View File

@ -9,7 +9,7 @@ from typing import List, Set
import operator
import torch
from torch.testing._internal.common_utils import run_tests, TestCase
from torch.testing._internal.common_utils import run_tests, TestCase, IS_WINDOWS
from torch.testing import FileCheck
from torch._dynamo.eval_frame import is_dynamo_supported
from torch._export import export
@ -26,6 +26,7 @@ from torch._export.passes.functionalize_side_effectful_ops_pass import (
from functorch.experimental.control_flow import cond
from torch.fx.passes.operator_support import OperatorSupport
from torch.fx.passes.infra.partitioner import Partition
from torch.utils import _pytree as pytree
@ -274,6 +275,7 @@ class TestPasses(TestCase):
new_inp = torch.tensor([1, 1, 1, 1])
self.assertEqual(mod(new_inp), ep(new_inp))
@unittest.skipIf(IS_WINDOWS, "Windows not supported")
def test_runtime_assert_inline_constraints_for_cond(self) -> None:
class M(torch.nn.Module):
def __init__(self):

View File

@ -185,7 +185,7 @@ class TestSerialize(TestCase):
self.assertEqual(node.inputs[3].name, "side")
self.assertEqual(node.inputs[3].arg.as_string, "right")
@unittest.skipIf(IS_WINDOWS, "Windows not supported for this test")
@unittest.skipIf(not torchdynamo.is_dynamo_supported(), "dynamo isn't supported")
class TestDeserialize(TestCase):
def check_graph(self, fn, inputs, dynamic_shapes=None, _check_meta=True) -> None:

View File

@ -20,7 +20,11 @@ from torch._export.utils import (
)
from torch.fx.experimental.proxy_tensor import make_fx
from torch.testing import FileCheck
from torch.testing._internal.common_utils import run_tests, TestCase
from torch.testing._internal.common_utils import (
run_tests,
TestCase,
IS_WINDOWS,
)
from torch.utils._pytree import (
LeafSpec,
tree_flatten,
@ -188,6 +192,7 @@ class TestUnflatten(TestCase):
id(getattr(unflattened_module.sub_net, "2")),
)
@unittest.skipIf(IS_WINDOWS, "Windows not supported for this test")
def test_unflatten_preserve_signature(self):
class NestedChild(torch.nn.Module):
def forward(self, zx, y):

View File

@ -10,6 +10,7 @@ from torch._export.serde.upgrade import get_target_version, get_upgraders
from torch.testing._internal.common_utils import (
run_tests,
TestCase,
IS_WINDOWS,
)
TEST_UPGRADERS = {
@ -112,6 +113,7 @@ def div__Scalar_mode_0_3(self: torch.Tensor, other: Any, *, rounding_mode: Opti
custom_op_count = count_op(upgraded.graph, "aten::div__Scalar_mode_0_3")
self.assertEqual(custom_op_count, 1)
@unittest.skipIf(IS_WINDOWS, "Test case not supported on Windows")
def test_div_upgrader_pass_return_new_op_after_retrace(self):
def fn(a: torch.Tensor, b):
return torch.ops.aten.div.Scalar_mode(a, b, rounding_mode='trunc')


@ -9,7 +9,7 @@ from torch._export import export
from torch._export.verifier import SpecViolationError, Verifier
from torch.export.exported_program import InputKind, InputSpec, TensorArgument
from torch.testing._internal.common_utils import run_tests, TestCase
from torch.testing._internal.common_utils import run_tests, TestCase, IS_WINDOWS
@unittest.skipIf(not is_dynamo_supported(), "dynamo isn't supported")
class TestVerifier(TestCase):
@ -50,6 +50,7 @@ class TestVerifier(TestCase):
with self.assertRaises(SpecViolationError):
verifier.check(ep)
@unittest.skipIf(IS_WINDOWS, "Windows not supported for this test")
def test_verifier_higher_order(self) -> None:
def f(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
def true_fn(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
@ -67,6 +68,7 @@ class TestVerifier(TestCase):
verifier = Verifier()
verifier.check(ep)
@unittest.skipIf(IS_WINDOWS, "Windows not supported for this test")
def test_verifier_nested_invalid_module(self) -> None:
def f(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
def true_fn(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:


@ -13,6 +13,7 @@ from torch.testing._internal.common_utils import (
run_tests,
IS_ARM64,
IS_MACOS,
IS_WINDOWS,
IS_X86,
compare_equal_outs_and_grads,
outs_and_grads,
@ -2940,6 +2941,7 @@ class <lambda>(torch.nn.Module):
):
aot_export_module(mod, [inp], trace_joint=True, output_loss_index=1)
@unittest.skipIf(IS_WINDOWS, "Windows isn't supported for this case")
@unittest.skipIf(not torch._dynamo.is_dynamo_supported(), "Cond needs dynamo to run")
def test_aot_export_with_torch_cond(self):
class M(torch.nn.Module):


@ -8,7 +8,7 @@ import torch.utils._pytree as pytree
from functorch.experimental import control_flow
from functorch.experimental.control_flow import UnsupportedAliasMutationException, cond
from torch.fx.experimental.proxy_tensor import make_fx
from torch.testing._internal.common_utils import run_tests, TestCase
from torch.testing._internal.common_utils import run_tests, TestCase, IS_WINDOWS
from torch.testing._internal.common_quantization import skipIfNoDynamoSupport
from torch._subclasses.functional_tensor import FunctionalTensor
@ -77,7 +77,7 @@ class ReduceMod(torch.nn.Module):
return self._reduce(*operands)
@unittest.skipIf(IS_WINDOWS, "Windows not supported for this test")
@skipIfNoDynamoSupport
class TestControlFlow(TestCase):
def setUp(self):
@ -250,6 +250,7 @@ class TestControlFlow(TestCase):
self.assertEqual(true_outs, fake_outs)
@unittest.skipIf(IS_WINDOWS, "Windows not supported for this test")
@skipIfNoDynamoSupport
class TestControlFlowTraced(TestCase):
def setUp(self):


@ -145,3 +145,76 @@ class TestSplitByTags(TestCase):
},
f"{orig_to_split_fqn_mapping=}",
)
class TestSplitOutputType(TestCase):
class TestModule(torch.nn.Module):
def __init__(self):
super().__init__()
self.conv = torch.nn.Conv2d(3, 16, 3, stride=1, bias=True)
self.relu = torch.nn.ReLU()
def forward(self, x):
conv = self.conv(x)
conv = conv * 0.5
relu = self.relu(conv)
return relu
@staticmethod
def trace_and_tag(
module: torch.nn.Module, inputs: torch.Tensor, tags: List[str]
) -> Tuple[torch.fx.GraphModule, Dict[str, List[str]]]:
"""
Trace a simple gm whose nodes carry tags (only call_module nodes shown here):
conv - tag: "red"
mul - tag: "blue"
relu - tag: "green"
At the beginning we have:
gm:
conv
mul
relu
split_gm = split_by_tags(gm, tags)
Then we have:
split_gm:
red:
conv
blue:
mul
green:
relu
"""
tag_node = defaultdict(list)
gm: torch.fx.GraphModule = torch.export.export(module, (inputs,)).module()
# Add tag to all nodes and build dictionary record tag to call_module nodes
for node in gm.graph.nodes:
if "conv" in node.name:
node.tag = tags[0]
tag_node[tags[0]].append(node.name)
elif "mul" in node.name:
node.tag = tags[1]
tag_node[tags[1]].append(node.name)
else:
node.tag = tags[2]
if node.op == "call_module":
tag_node[tags[2]].append(node.name)
return gm, tag_node
def test_split_by_tags(self) -> None:
tags = ["red", "blue", "green"]
module = TestSplitOutputType.TestModule()
inputs = torch.randn((1, 3, 224, 224))
gm, tag_node = TestSplitOutputType.trace_and_tag(module, inputs, tags)
split_gm, orig_to_split_fqn_mapping = split_by_tags(
gm, tags, return_fqn_mapping=True
)
gm_output = module(inputs)
split_gm_output = split_gm(inputs)
self.assertTrue(type(gm_output) == type(split_gm_output))
self.assertTrue(torch.equal(gm_output, split_gm_output))
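The tag-then-split flow exercised by the test above can be sketched without torch: each node carries a tag, and nodes are grouped into partitions by tag. Node names and tags here are illustrative, not a real FX graph.

```python
from collections import defaultdict

# Minimal sketch: assign each node a tag, then group nodes by tag,
# mirroring how split_by_tags produces one submodule per tag.
nodes = ["conv", "mul", "relu"]
tags = {"conv": "red", "mul": "blue", "relu": "green"}

partitions = defaultdict(list)
for node in nodes:
    partitions[tags[node]].append(node)

assert dict(partitions) == {"red": ["conv"], "blue": ["mul"], "green": ["relu"]}
```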


@ -32,3 +32,7 @@ class TestMetaKernel(TestCase):
fc_bias = torch.nn.Linear(2, 2, bias=True, dtype=float16).to("lazy")
out_bias = fc_bias(input)
self.assertEqual(out_bias.dtype, torch.float16)
def test_add_invalid_device(self):
with self.assertRaisesRegex(RuntimeError, '.*not a lazy tensor.*'):
_ = torch.tensor([1], device="cpu") + torch.tensor([1], device="lazy")
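A torch-free sketch of the negative test added above: `assertRaisesRegex` verifies both that a `RuntimeError` is raised and that its message matches the expected pattern. The raised message is a stand-in for the real mixed cpu/lazy add error.

```python
import unittest

class DemoTest(unittest.TestCase):
    def test_raises_with_message(self):
        # Stands in for: torch.tensor([1], device="cpu") + torch.tensor([1], device="lazy")
        with self.assertRaisesRegex(RuntimeError, ".*not a lazy tensor.*"):
            raise RuntimeError("Input tensor is not a lazy tensor: ...")

suite = unittest.defaultTestLoader.loadTestsFromTestCase(DemoTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
assert result.wasSuccessful()
```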


@ -14,7 +14,7 @@ from torch.testing._internal.common_quantization import (
NodeSpec as ns,
QuantizationTestCase,
skipIfNoX86,
skipIfNoDynamoSupport,
skipIfNoInductorSupport,
)
from torch.testing._internal.common_quantized import override_quantized_engine
from enum import Enum
@ -321,7 +321,7 @@ class X86InductorQuantTestCase(QuantizationTestCase):
)
return export_model, prepare_model, convert_model
@skipIfNoDynamoSupport
@skipIfNoInductorSupport
class TestQuantizePT2EX86Inductor(X86InductorQuantTestCase):
@skipIfNoX86
def test_conv2d(self):


@ -1,13 +1,19 @@
# Owner(s): ["oncall: pt2"]
import tempfile
import unittest
import torch
from torch._prims.debug_prims import load_tensor_reader
from torch._subclasses.fake_tensor import FakeTensor, FakeTensorMode
from torch.multiprocessing.reductions import StorageWeakRef
from torch.testing._internal.common_device_type import instantiate_device_type_tests
from torch.testing._internal.common_utils import run_tests, skipIfRocm, TestCase
from torch.testing._internal.common_utils import (
IS_WINDOWS,
run_tests,
skipIfRocm,
TestCase,
)
from torch.utils._content_store import (
ContentStoreReader,
ContentStoreWriter,
@ -15,6 +21,7 @@ from torch.utils._content_store import (
)
@unittest.skipIf(IS_WINDOWS, "Test case not supported on Windows")
class TestContentStore(TestCase):
def test_basic(self, device):
# setup test data


@ -666,7 +666,13 @@ def set_default_device(device):
def set_default_tensor_type(t):
r"""Sets the default ``torch.Tensor`` type to floating point tensor type
r"""
.. warning::
This function is deprecated as of PyTorch 2.1, please use :func:`torch.set_default_dtype()` and
:func:`torch.set_default_device()` as alternatives.
Sets the default ``torch.Tensor`` type to floating point tensor type
``t``. This type will also be used as default floating point type for
type inference in :func:`torch.tensor`.


@ -21,6 +21,7 @@ from .eval_frame import (
explain,
export,
is_dynamo_supported,
is_inductor_supported,
optimize,
optimize_assert,
OptimizedModule,


@ -1,8 +1,15 @@
# mypy: ignore-errors
import sys
from torch._dynamo import register_backend
@register_backend
def inductor(*args, **kwargs):
if sys.platform == "win32":
raise RuntimeError("Windows not yet supported for inductor")
# do import here to avoid loading inductor into memory when it is not used
from torch._inductor.compile_fx import compile_fx


@ -698,8 +698,6 @@ class _NullDecorator(contextlib.nullcontext): # type: ignore[type-arg]
def check_if_dynamo_supported():
if sys.platform == "win32":
raise RuntimeError("Windows not yet supported for torch.compile")
if sys.version_info >= (3, 12):
raise RuntimeError("Python 3.12+ not yet supported for torch.compile")
@ -712,6 +710,21 @@ def is_dynamo_supported():
return False
def check_if_inductor_supported():
check_if_dynamo_supported()
if sys.platform == "win32":
raise RuntimeError("Windows not yet supported for inductor")
def is_inductor_supported():
try:
check_if_inductor_supported()
return True
except Exception:
return False
def optimize(
backend="inductor",
*,


@ -59,7 +59,7 @@ manual_torch_name_rule_map = {
"torch.distributed.is_initialized": TorchInGraphFunctionVariable,
"torch.distributed.get_rank": TorchInGraphFunctionVariable,
"torch.distributed.get_world_size": TorchInGraphFunctionVariable,
"torch.distributed._tensor.DTensor#from_local": TorchInGraphFunctionVariable,
"torch.distributed._tensor.api.DTensor#from_local": TorchInGraphFunctionVariable,
"torch.distributed.distributed_c10d._get_group_tag": TorchInGraphFunctionVariable,
"torch.distributed.distributed_c10d.get_process_group_ranks": TorchInGraphFunctionVariable,
"torch._utils.is_compiling": TorchInGraphFunctionVariable,


@ -575,18 +575,18 @@ Either create the tensor outside the compiled region, or do not set the tensor t
tx.symbolic_locals[name] = tensor_variable.items[idx]
elif isinstance(tensor_variable, TensorVariable):
assert isinstance(kwargs["out"], TensorVariable)
assert "example_value" in kwargs["out"].proxy.node.meta
fake_tensor = tensor_variable.proxy.node.meta["example_value"]
fake_out = kwargs["out"].proxy.node.meta["example_value"]
if (
kwargs["out"].source
and kwargs["out"] in tx.output.graphargs
and kwargs["out"].size != tensor_variable.size
and fake_out.shape != fake_tensor.shape
):
# It's hard to get out variants with resizing on graph inputs work
# properly across dynamo/aot/inductor, just fall back.
unimplemented("out variants with resizing on graph inputs")
assert "example_value" in kwargs["out"].proxy.node.meta
if not torch._prims_common.is_contiguous(
kwargs["out"].proxy.node.meta["example_value"]
):
if not torch._prims_common.is_contiguous(fake_out):
# It's difficult to handle strides correctly in functionalization
# when calling an out= op with a non-contiguous out argument
unimplemented(


@ -2277,6 +2277,20 @@ def caching_device_properties():
device_interface.Worker.get_device_properties()
def _set_triton_ptxas_path() -> None:
if os.environ.get("TRITON_PTXAS_PATH") is not None:
return
ptxas_path = os.path.abspath(
os.path.join(os.path.dirname(__file__), "..", "bin", "ptxas")
)
if not os.path.exists(ptxas_path):
return
if os.path.isfile(ptxas_path) and os.access(ptxas_path, os.X_OK):
os.environ["TRITON_PTXAS_PATH"] = ptxas_path
else:
warnings.warn(f"{ptxas_path} exists but is not an executable")
def _worker_compile(
kernel_name: str, source_code: str, cc: int, device: torch.device
) -> None:
@ -2287,6 +2301,7 @@ def _worker_compile(
def _load_kernel(kernel_name: str, source_code: str) -> ModuleType:
_set_triton_ptxas_path()
kernel = TritonCodeCache.load(kernel_name, source_code)
kernel.precompile()
return kernel
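The `_set_triton_ptxas_path` logic above can be sketched in isolation: an already-set environment variable wins, and a path is exported only when it points at an executable file. A plain dict stands in for `os.environ` so the sketch has no side effects.

```python
import os
import tempfile

def set_ptxas_path(env, ptxas_path):
    # Respect an explicit user setting.
    if env.get("TRITON_PTXAS_PATH") is not None:
        return
    if not os.path.exists(ptxas_path):
        return
    # Export only if it is an executable regular file.
    if os.path.isfile(ptxas_path) and os.access(ptxas_path, os.X_OK):
        env["TRITON_PTXAS_PATH"] = ptxas_path

with tempfile.TemporaryDirectory() as d:
    tool = os.path.join(d, "ptxas")
    with open(tool, "w") as f:
        f.write("#!/bin/sh\n")
    os.chmod(tool, 0o755)
    env = {}
    set_ptxas_path(env, tool)
    assert env["TRITON_PTXAS_PATH"] == tool
```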


@ -1,5 +1,6 @@
import functools
import logging
from typing import Any, Dict, List
from typing import Any, Dict, List, Optional
import torch
from torch._inductor.virtualized import V
@ -259,11 +260,19 @@ def fallback_mixed_mm(mat1, mat2, *, out):
aten_fallback_mixed_mm = ExternKernelChoice(fallback_mixed_mm, None)
@functools.lru_cache(None)
def _is_sm7x_or_older_gpu(index: Optional[int]) -> bool:
props = torch.cuda.get_device_properties(index or 0)
return props.major <= 7
def tuned_mixed_mm(mat1, mat2, mat2_dtype):
m, n, k, layout, mat1, mat2 = mm_args(mat1, mat2, layout=None)
choices = [aten_fallback_mixed_mm.bind((mat1, mat2), layout)]
if mat1.layout.dtype != torch.float32 and not mat2.layout.is_contiguous():
# can't use triton kernel unless one of these is true
if (
mat1.layout.dtype != torch.float32 and not mat2.layout.is_contiguous()
) or _is_sm7x_or_older_gpu(layout.device.index):
# can't use triton kernel unless one of these is true or if running on v100 (numerical issues)
return autotune_select_algorithm("mixed_mm", choices, [mat1, mat2], layout)
if inductor_config.force_mixed_mm:
choices = []
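The compute-capability gate added above is a one-line predicate; here `major` stands in for `torch.cuda.get_device_properties(index or 0).major`:

```python
def is_sm7x_or_older(major: int) -> bool:
    # sm7x and older (e.g. V100) fall back to the ATen kernel
    # because of the numerical issues noted in the comment above.
    return major <= 7

assert is_sm7x_or_older(7)        # V100 (sm70): fall back
assert not is_sm7x_or_older(8)    # A100 (sm80): Triton kernel allowed
```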


@ -92,12 +92,12 @@ factory_common_args = merge_dicts(
parse_kwargs(
"""
dtype (:class:`torch.dtype`, optional): the desired data type of returned tensor.
Default: if ``None``, uses a global default (see :func:`torch.set_default_tensor_type`).
Default: if ``None``, uses a global default (see :func:`torch.set_default_dtype`).
layout (:class:`torch.layout`, optional): the desired layout of returned Tensor.
Default: ``torch.strided``.
device (:class:`torch.device`, optional): the desired device of returned tensor.
Default: if ``None``, uses the current device for the default tensor type
(see :func:`torch.set_default_tensor_type`). :attr:`device` will be the CPU
(see :func:`torch.set_default_device`). :attr:`device` will be the CPU
for CPU tensor types and the current CUDA device for CUDA tensor types.
requires_grad (bool, optional): If autograd should record operations on the
returned tensor. Default: ``False``.
@ -148,7 +148,7 @@ factory_data_common_args = parse_kwargs(
Default: if ``None``, infers data type from :attr:`data`.
device (:class:`torch.device`, optional): the desired device of returned tensor.
Default: if ``None``, uses the current device for the default tensor type
(see :func:`torch.set_default_tensor_type`). :attr:`device` will be the CPU
(see :func:`torch.set_default_device`). :attr:`device` will be the CPU
for CPU tensor types and the current CUDA device for CUDA tensor types.
requires_grad (bool, optional): If autograd should record operations on the
returned tensor. Default: ``False``.
@ -4428,7 +4428,7 @@ elements.
Note that either of the following must be true:
1. :attr:`count` is a positive non-zero number, and the total number of bytes
in the buffer is less than :attr:`offset` plus :attr:`count` times the size
in the buffer is more than :attr:`offset` plus :attr:`count` times the size
(in bytes) of :attr:`dtype`.
2. :attr:`count` is negative, and the length (number of bytes) of the buffer
@ -4993,9 +4993,6 @@ Example::
>>> torch.set_default_dtype(torch.float64)
>>> torch.get_default_dtype() # default is now changed to torch.float64
torch.float64
>>> torch.set_default_tensor_type(torch.FloatTensor) # setting tensor type also affects this
>>> torch.get_default_dtype() # changed to torch.float32, the dtype for torch.FloatTensor
torch.float32
""",
)
@ -9217,8 +9214,22 @@ distribution).
.. math::
\text{{out}}_{{i}} \sim \mathcal{{N}}(0, 1)
For complex dtypes, the tensor is i.i.d. sampled from a `complex normal distribution`_ with zero mean and
unit variance as
.. math::
\text{{out}}_{{i}} \sim \mathcal{{CN}}(0, 1)
This is equivalent to separately sampling the real :math:`(\operatorname{{Re}})` and imaginary
:math:`(\operatorname{{Im}})` part of :math:`\text{{out}}_i` as
.. math::
\operatorname{{Re}}(\text{{out}}_{{i}}) \sim \mathcal{{N}}(0, \frac{{1}}{{2}}),\quad
\operatorname{{Im}}(\text{{out}}_{{i}}) \sim \mathcal{{N}}(0, \frac{{1}}{{2}})
The shape of the tensor is defined by the variable argument :attr:`size`.
Args:
size (int...): a sequence of integers defining the shape of the output tensor.
Can be a variable number of arguments or a collection like a list or tuple.
@ -9239,6 +9250,8 @@ Example::
>>> torch.randn(2, 3)
tensor([[ 1.5954, 2.8929, -1.0923],
[ 1.1719, -0.4709, -0.1996]])
.. _complex normal distribution: https://en.wikipedia.org/wiki/Complex_normal_distribution
""".format(
**factory_common_args
),
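The complex-normal sampling described in the new `randn` docs above can be checked numerically: sampling real and imaginary parts each from N(0, 1/2) yields unit variance overall. A stdlib-only sketch:

```python
import math
import random
import statistics

random.seed(0)
# Re(z) ~ N(0, 1/2), Im(z) ~ N(0, 1/2)  =>  E[|z|^2] = 1
samples = [
    complex(random.gauss(0.0, math.sqrt(0.5)), random.gauss(0.0, math.sqrt(0.5)))
    for _ in range(20000)
]
variance = statistics.fmean(abs(z) ** 2 for z in samples)
assert abs(variance - 1.0) < 0.05
```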
@ -9250,8 +9263,8 @@ add_docstr(
randn_like(input, *, dtype=None, layout=None, device=None, requires_grad=False, memory_format=torch.preserve_format) -> Tensor
Returns a tensor with the same size as :attr:`input` that is filled with
random numbers from a normal distribution with mean 0 and variance 1.
``torch.randn_like(input)`` is equivalent to
random numbers from a normal distribution with mean 0 and variance 1. Please refer to :func:`torch.randn` for the
sampling process of complex dtypes. ``torch.randn_like(input)`` is equivalent to
``torch.randn(input.size(), dtype=input.dtype, layout=input.layout, device=input.device)``.
Args:
@ -10276,7 +10289,7 @@ Keyword args:
device (:class:`torch.device`, optional): the desired device of
returned tensor. Default: if None, uses the current device
for the default tensor type (see
:func:`torch.set_default_tensor_type`). :attr:`device` will be
:func:`torch.set_default_device`). :attr:`device` will be
the CPU for CPU tensor types and the current CUDA device for
CUDA tensor types.
{requires_grad}
@ -10337,7 +10350,7 @@ Keyword args:
device (:class:`torch.device`, optional): the desired device of
returned tensor. Default: if None, uses the current device
for the default tensor type (see
:func:`torch.set_default_tensor_type`). :attr:`device` will be
:func:`torch.set_default_device`). :attr:`device` will be
the CPU for CPU tensor types and the current CUDA device for
CUDA tensor types.
{requires_grad}
@ -10400,7 +10413,7 @@ Keyword args:
device (:class:`torch.device`, optional): the desired device of
returned tensor. Default: if None, uses the current device
for the default tensor type (see
:func:`torch.set_default_tensor_type`). :attr:`device` will be
:func:`torch.set_default_device`). :attr:`device` will be
the CPU for CPU tensor types and the current CUDA device for
CUDA tensor types.
{requires_grad}
@ -10465,7 +10478,7 @@ Keyword args:
device (:class:`torch.device`, optional): the desired device of
returned tensor. Default: if None, uses the current device
for the default tensor type (see
:func:`torch.set_default_tensor_type`). :attr:`device` will be
:func:`torch.set_default_device`). :attr:`device` will be
the CPU for CPU tensor types and the current CUDA device for
CUDA tensor types.
{requires_grad}
@ -10532,7 +10545,7 @@ Keyword args:
device (:class:`torch.device`, optional): the desired device of
returned tensor. Default: if None, uses the current device
for the default tensor type (see
:func:`torch.set_default_tensor_type`). :attr:`device` will be
:func:`torch.set_default_device`). :attr:`device` will be
the CPU for CPU tensor types and the current CUDA device for
CUDA tensor types.
{requires_grad}
@ -10591,7 +10604,7 @@ Keyword args:
Default: if None, infers data type from :attr:`values`.
device (:class:`torch.device`, optional): the desired device of returned tensor.
Default: if None, uses the current device for the default tensor type
(see :func:`torch.set_default_tensor_type`). :attr:`device` will be the CPU
(see :func:`torch.set_default_device`). :attr:`device` will be the CPU
for CPU tensor types and the current CUDA device for CUDA tensor types.
{requires_grad}
{check_invariants}


@ -539,6 +539,7 @@ void TCPStoreMasterDaemon::run() {
int rawSocket = socket.handle();
sockets_.emplace_back(std::move(socket));
tcputil::addPollfd(fds, rawSocket, POLLIN);
addMiscellaneousSocket(rawSocket);
}
queryFds(fds);
}


@ -159,7 +159,7 @@ TORCH_API std::vector<Shape> compute_shape_arange_out(
// From torch.arange docs:
// dtype (torch.dtype, optional) the desired data type of returned tensor.
// Default: if None, uses a global default (see
// torch.set_default_tensor_type()). If dtype is not given, infer the data
// torch.set_default_dtype()). If dtype is not given, infer the data
// type from the other input arguments. If any of start, end, or stop are
// floating-point, the dtype is inferred to be the default dtype, see
// get_default_dtype(). Otherwise, the dtype is inferred to be torch.int64.


@ -358,8 +358,8 @@ LazyTensorPtr TryGetLtcTensor(const at::Tensor& tensor) {
LazyTensorPtr GetLtcTensor(const at::Tensor& tensor) {
auto lazy_tensor = TryGetLtcTensor(tensor);
CHECK(lazy_tensor) << "Input tensor is not a lazy tensor: "
<< tensor.toString();
TORCH_CHECK(
lazy_tensor, "Input tensor is not a lazy tensor: ", tensor.toString());
return lazy_tensor;
}


@ -210,7 +210,7 @@ bool LTCTensorImpl::is_contiguous_custom(c10::MemoryFormat _unused) const {
return tensor_->CurrentTensorData()->is_contiguous();
}
// Only check that the storage is already contiguous.
CHECK(is_contiguous_) << "Non-contiguous storage for lazy tensor";
TORCH_CHECK(is_contiguous_, "Non-contiguous storage for lazy tensor");
// TODO: I don't think logic is right, we should check the requested memory
// format before returning true
return true;


@ -261,7 +261,7 @@ void initLazyBindings(PyObject* module) {
if (tsDataPtr->HasValue()) {
ivalues.emplace_back(tsDataPtr->data());
} else {
CHECK(tsDataPtr->scalar.has_value());
TORCH_CHECK(tsDataPtr->scalar.has_value());
ivalues.emplace_back(tsDataPtr->scalar.value());
}
}


@ -220,7 +220,7 @@ std::vector<torch::lazy::BackendDataPtr> TSBackendImpl::ExecuteComputation(
} else {
// TODO(whc) should this check be made more general? it's written somewhat
// oddly
CHECK(
TORCH_CHECK(
static_cast<c10::DeviceType>(default_device_type_->type) !=
at::kCUDA ||
ts_data->data().device().type() == at::kCUDA);


@ -33,7 +33,7 @@ void TSLoweringContext::Lower(const Node* node) {
// First, we call the node lowering function, which exists for newly
// codegenned or refactored nodes
TSOpVector ops = tsnode->Lower(function_, this);
CHECK(!ops.empty()) << "Failed to lower: " << *node;
TORCH_CHECK(!ops.empty(), "Failed to lower: ", *node);
TORCH_CHECK_EQ(node->num_outputs(), ops.size());
for (size_t i = 0; i < ops.size(); ++i) {
AssignOutputOp(torch::lazy::Output(node, i), ops[i]);


@ -69,14 +69,14 @@ at::Tensor LazyNativeFunctions::_copy_from(
if (!self_tensor) {
// providing a new 'eager' value (self) for an existing lazy tensor (dst)
static bool sync_update = FLAGS_torch_lazy_ts_tensor_update_sync;
CHECK(dst_tensor);
TORCH_CHECK(dst_tensor);
dst_tensor->UpdateFromTensor(self, /*sync=*/sync_update);
} else if (!dst_tensor) {
// materializing a lazy tensor (self) and copying its value into eager
// tensor (dst) detached=false lets us skip a copy in `ToTensor`, which
// should be safe because we are only going to use the tensor for
// dst.copy_()
CHECK(self_tensor);
TORCH_CHECK(self_tensor);
at::Tensor tensor = self_tensor->ToTensor(/*detached=*/false);
at::Tensor typed_tensor =
torch::lazy::CopyTensor(tensor, dst.scalar_type(), /*copy=*/false);
@ -87,7 +87,7 @@ at::Tensor LazyNativeFunctions::_copy_from(
// if dest is not backed by IR (e.g. result of some lazy operation),
// then it should have at::Tensor data backing it instead
auto dst_tensor_data = dst_tensor->CurrentTensorData();
CHECK(dst_tensor_data);
TORCH_CHECK(dst_tensor_data);
auto src_tensor_data = self_tensor->CurrentTensorData();
if (src_tensor_data) {
// both src/dst are simply backed by at::Tensor data, no IR- do a
@ -118,10 +118,10 @@ at::Tensor LazyNativeFunctions::_copy_from_and_resize(
auto dst_tensor = torch::lazy::TryGetLtcTensor(dst);
auto self_tensor = torch::lazy::TryGetLtcTensor(self);
if (!self_tensor) {
CHECK(dst_tensor);
TORCH_CHECK(dst_tensor);
dst_tensor->UpdateFromTensorOut(self);
} else if (!dst_tensor) {
CHECK(self_tensor);
TORCH_CHECK(self_tensor);
at::Tensor tensor = self_tensor->ToTensor(/*detached=*/true);
at::Tensor typed_tensor =
torch::lazy::CopyTensor(tensor, dst.scalar_type(), /*copy=*/false);


@ -91,7 +91,7 @@ TSOpVector TensorList::Lower(
std::shared_ptr<torch::jit::GraphFunction> function,
TSLoweringContext* loctx) const {
std::vector<torch::jit::Value*> tensor_list;
CHECK(!operands().empty());
TORCH_CHECK(!operands().empty());
for (const torch::lazy::Output& operand : operands()) {
tensor_list.emplace_back(loctx->GetOutputOp(operand));
}


@ -39,7 +39,7 @@ def empty(sharding_spec: shard_spec.ShardingSpec,
Keyword args:
dtype (:class:`torch.dtype`, optional): the desired data type of returned tensor.
Default: if ``None``, uses a global default (see :func:`torch.set_default_tensor_type`).
Default: if ``None``, uses a global default (see :func:`torch.set_default_dtype`).
layout (:class:`torch.layout`, optional): the desired layout of returned Tensor.
Default: ``torch.strided``.
requires_grad (bool, optional): If autograd should record operations on the
@ -91,7 +91,7 @@ def ones(sharding_spec: shard_spec.ShardingSpec,
Keyword args:
dtype (:class:`torch.dtype`, optional): the desired data type of returned tensor.
Default: if ``None``, uses a global default (see :func:`torch.set_default_tensor_type`).
Default: if ``None``, uses a global default (see :func:`torch.set_default_dtype`).
layout (:class:`torch.layout`, optional): the desired layout of returned Tensor.
Default: ``torch.strided``.
requires_grad (bool, optional): If autograd should record operations on the
@ -142,7 +142,7 @@ def zeros(sharding_spec: shard_spec.ShardingSpec,
Keyword args:
dtype (:class:`torch.dtype`, optional): the desired data type of returned tensor.
Default: if ``None``, uses a global default (see :func:`torch.set_default_tensor_type`).
Default: if ``None``, uses a global default (see :func:`torch.set_default_dtype`).
layout (:class:`torch.layout`, optional): the desired layout of returned Tensor.
Default: ``torch.strided``.
requires_grad (bool, optional): If autograd should record operations on the
@ -195,7 +195,7 @@ def full(sharding_spec: shard_spec.ShardingSpec,
fill_value (Scalar) the value to fill the output tensor with.
Keyword args:
dtype (:class:`torch.dtype`, optional): the desired data type of returned tensor.
Default: if ``None``, uses a global default (see :func:`torch.set_default_tensor_type`).
Default: if ``None``, uses a global default (see :func:`torch.set_default_dtype`).
layout (:class:`torch.layout`, optional): the desired layout of returned Tensor.
Default: ``torch.strided``.
requires_grad (bool, optional): If autograd should record operations on the
@ -247,7 +247,7 @@ def rand(sharding_spec: shard_spec.ShardingSpec,
Keyword args:
dtype (:class:`torch.dtype`, optional): the desired data type of returned tensor.
Default: if ``None``, uses a global default (see :func:`torch.set_default_tensor_type`).
Default: if ``None``, uses a global default (see :func:`torch.set_default_dtype`).
layout (:class:`torch.layout`, optional): the desired layout of returned Tensor.
Default: ``torch.strided``.
requires_grad (bool, optional): If autograd should record operations on the
@ -301,7 +301,7 @@ def randn(sharding_spec: shard_spec.ShardingSpec,
Keyword args:
dtype (:class:`torch.dtype`, optional): the desired data type of returned tensor.
Default: if ``None``, uses a global default (see :func:`torch.set_default_tensor_type`).
Default: if ``None``, uses a global default (see :func:`torch.set_default_dtype`).
layout (:class:`torch.layout`, optional): the desired layout of returned Tensor.
Default: ``torch.strided``.
requires_grad (bool, optional): If autograd should record operations on the


@ -210,7 +210,7 @@ class ShardedTensor(ShardedTensorBase):
Keyword args:
dtype (:class:`torch.dtype`, optional): the desired data type of returned tensor.
Default: if ``None``, uses a global default (see :func:`torch.set_default_tensor_type`).
Default: if ``None``, uses a global default (see :func:`torch.set_default_dtype`).
layout (:class:`torch.layout`, optional): the desired layout of returned Tensor.
Default: ``torch.strided``.
requires_grad (bool, optional): If autograd should record operations on the


@ -99,7 +99,7 @@ def ones(
Keyword args:
dtype (:class:`torch.dtype`, optional): the desired data type of returned :class:`DTensor`.
Default: if ``None``, uses a global default (see :func:`torch.set_default_tensor_type`).
Default: if ``None``, uses a global default (see :func:`torch.set_default_dtype`).
layout (:class:`torch.layout`, optional): the desired layout of returned DTensor.
Default: ``torch.strided``.
requires_grad (bool, optional): If autograd should record operations on the
@ -142,7 +142,7 @@ def empty(
Keyword args:
dtype (:class:`torch.dtype`, optional): the desired data type of returned :class:`DTensor`.
Default: if ``None``, uses a global default (see :func:`torch.set_default_tensor_type`).\
Default: if ``None``, uses a global default (see :func:`torch.set_default_dtype`).\
layout (:class:`torch.layout`, optional): the desired layout of returned :class:`DTensor`.
Default: ``torch.strided``.
requires_grad (bool, optional): If autograd should record operations on the
@ -188,7 +188,7 @@ def full(
Keyword args:
dtype (:class:`torch.dtype`, optional): the desired data type of returned :class:`DTensor`.
Default: if ``None``, uses a global default (see :func:`torch.set_default_tensor_type`).
Default: if ``None``, uses a global default (see :func:`torch.set_default_dtype`).
layout (:class:`torch.layout`, optional): the desired layout of returned DTensor.
Default: ``torch.strided``.
requires_grad (bool, optional): If autograd should record operations on the
@ -233,7 +233,7 @@ def rand(
Keyword args:
dtype (:class:`torch.dtype`, optional): the desired data type of returned :class:`DTensor`.
Default: if ``None``, uses a global default (see :func:`torch.set_default_tensor_type`).
Default: if ``None``, uses a global default (see :func:`torch.set_default_dtype`).
layout (:class:`torch.layout`, optional): the desired layout of returned DTensor.
Default: ``torch.strided``.
requires_grad (bool, optional): If autograd should record operations on the
@ -277,7 +277,7 @@ def randn(
Keyword args:
dtype (:class:`torch.dtype`, optional): the desired data type of returned :class:`DTensor`.
Default: if ``None``, uses a global default (see :func:`torch.set_default_tensor_type`).
Default: if ``None``, uses a global default (see :func:`torch.set_default_dtype`).
layout (:class:`torch.layout`, optional): the desired layout of returned DTensor.
Default: ``torch.strided``.
requires_grad (bool, optional): If autograd should record operations on the
@ -320,7 +320,7 @@ def zeros(
requires_grad (bool, optional): If autograd should record operations on the
returned :class:`DTensor`. Default: ``False``.
dtype (:class:`torch.dtype`, optional): the desired data type of returned :class:`DTensor`.
Default: if ``None``, uses a global default (see :func:`torch.set_default_tensor_type`).
Default: if ``None``, uses a global default (see :func:`torch.set_default_dtype`).
layout (:class:`torch.layout`, optional): the desired layout of returned :class:`DTensor`.
Default: ``torch.strided``.
device_mesh: :class:`DeviceMesh` type, contains the mesh info of ranks


@ -34,9 +34,27 @@ from torch.distributed.device_mesh import DeviceMesh
aten = torch.ops.aten
@register_op_strategy(
def default_strategy(mesh: DeviceMesh, op_schema: OpSchema) -> StrategyType:
# Default strategy by default just propagate the first input strategy
select_strategy = op_schema.args_schema[0]
assert isinstance(select_strategy, OpStrategy)
default_strategy = []
for strategy in select_strategy.strategies:
# we create new DTensorSpecs even for default strategy to assure that
# the tensor metas are distinct between the arguments and outputs
default_strategy.append(
PlacementStrategy(
output_spec=DTensorSpec(
mesh=strategy.output_spec.mesh,
placements=strategy.output_spec.placements,
)
)
)
return OpStrategy(default_strategy)
register_op_strategy(
[
aten._to_copy.default,
aten.clone.default,
aten.contiguous.default,
aten.copy_.default,
@ -44,17 +62,11 @@ aten = torch.ops.aten
aten.fill_.Scalar,
aten.zero_.default,
]
)
def default_strategy(mesh: DeviceMesh, op_schema: OpSchema) -> StrategyType:
# Default strategy by default just propagate the first input strategy
select_strategy = op_schema.args_schema[0]
assert isinstance(select_strategy, OpStrategy)
return OpStrategy(
[
PlacementStrategy(arg_strategy.output_spec)
for arg_strategy in select_strategy.strategies
]
)
)(default_strategy)
register_op_strategy(
aten._to_copy.default, schema_info=RuntimeSchemaInfo(static_kwargkey=["dtype"])
)(default_strategy)
@register_op_strategy(
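The refactor above turns the decorator into explicit `register_op_strategy(...)(fn)` calls, so one strategy function can be registered several times: once for a list of ops and once more with schema info. A torch-free sketch of that decorator-factory pattern:

```python
# REGISTRY stands in for DTensor's internal op-strategy table.
REGISTRY = {}

def register_op_strategy(ops, schema_info=None):
    op_list = ops if isinstance(ops, list) else [ops]
    def wrapper(fn):
        for op in op_list:
            REGISTRY[op] = (fn, schema_info)
        return fn
    return wrapper

def default_strategy(mesh, op_schema):
    # Stands in for the real strategy: propagate the first input strategy.
    return op_schema

register_op_strategy(["aten.clone", "aten.contiguous"])(default_strategy)
register_op_strategy("aten._to_copy", schema_info={"static_kwargkey": ["dtype"]})(default_strategy)

assert REGISTRY["aten.clone"] == (default_strategy, None)
assert REGISTRY["aten._to_copy"][1] == {"static_kwargkey": ["dtype"]}
```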


@ -1,12 +1,15 @@
import math
from typing import Any, Callable, Dict, Optional, Tuple
from typing import Any, Callable, Dict, Optional, Tuple, TYPE_CHECKING
import torch
import torch.distributed as dist
import torch.nn.functional as F
from torch.distributed import distributed_c10d
from torch.distributed._shard.sharded_tensor import ShardedTensor
from torch.distributed._tensor import DTensor, Replicate
from torch.distributed._functional_collectives import AsyncCollectiveTensor
if dist.is_available() or TYPE_CHECKING:
from torch.distributed import distributed_c10d
from torch.distributed._shard.sharded_tensor import ShardedTensor
from torch.distributed._tensor import DTensor, Replicate
def _all_gather_sharded_tensor(
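The `dist.is_available() or TYPE_CHECKING` guard above keeps the distributed imports from failing at runtime on builds without `torch.distributed`, while still exposing the symbols to static type checkers (for which `TYPE_CHECKING` is `True`). A generic sketch of the same guard, with a stand-in availability check and a hypothetical optional dependency:

```python
from typing import TYPE_CHECKING

def backend_is_available():
    # Stand-in for torch.distributed.is_available(); pretend it's absent.
    return False

if backend_is_available() or TYPE_CHECKING:
    # These imports run under static analysis (TYPE_CHECKING is True there)
    # but are skipped at runtime when the backend is unavailable.
    import json as optional_backend  # hypothetical optional dependency

# Runtime code must only touch the guarded names behind the same check.
HAS_BACKEND = backend_is_available()
```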
@@ -170,7 +173,12 @@ def _gather_state_dict(
device_mesh=value.device_mesh,
placements=placements,
)
# Call `wait()` to force the tensor is synchronous with respect
# to the main stream.
# See the discussion in https://github.com/pytorch/pytorch/pull/117799.
value = value.to_local()
if isinstance(value, AsyncCollectiveTensor):
value = value.wait()
return value
return _iterate_state_dict(
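The comment in the hunk above explains that `to_local()` may hand back an `AsyncCollectiveTensor`, which must be `wait()`-ed so the value is synchronous with the main stream (see the linked discussion in pytorch/pytorch#117799). A self-contained sketch of that guard, using stand-in classes rather than the real torch types:

```python
# Stand-ins for torch's tensor types; the real AsyncCollectiveTensor
# lives in torch.distributed._functional_collectives.
class LocalTensor:
    def __init__(self, data):
        self.data = data

class AsyncCollectiveTensor(LocalTensor):
    """Wraps a result whose collective has not completed yet."""
    def wait(self):
        # Block until the collective finishes, then return the plain value.
        return LocalTensor(self.data)

def to_local_sync(value):
    # Mirror of the guarded pattern in the diff: only async results carry
    # a wait(); already-materialized tensors pass through unchanged.
    if isinstance(value, AsyncCollectiveTensor):
        value = value.wait()
    return value

result = to_local_sync(AsyncCollectiveTensor(42))
```

The `isinstance` check matters because plain local tensors have no `wait()` method, so calling it unconditionally would fail.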


@@ -157,7 +157,7 @@ def _get_fqns(model: nn.Module, name: str, skip_ddp_prefix: bool = True) -> FQNS
if not skip_ddp_prefix:
fqn_obj_names.append(curr_obj_name)
elif isinstance(curr_obj, FSDP):
if obj_names[i + 1] == FLAT_PARAM:
if i < len(obj_names) - 1 and obj_names[i + 1] == FLAT_PARAM:
prefix = ".".join(fqn_obj_names)
flat_param = getattr(curr_obj, FLAT_PARAM)
if prefix:
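The one-line change above guards the `obj_names[i + 1]` lookahead with `i < len(obj_names) - 1`, so checking for the FSDP `FLAT_PARAM` suffix no longer raises `IndexError` when the FSDP module is the last component of the name. A small sketch of the fixed bounds check (the helper and names are hypothetical):

```python
FLAT_PARAM = "_flat_param"

def next_is_flat_param(obj_names, i):
    # Fixed form: verify the index is in range before peeking ahead,
    # mirroring `i < len(obj_names) - 1 and obj_names[i + 1] == FLAT_PARAM`
    # from the diff. Short-circuit `and` skips the lookup when out of range.
    return i < len(obj_names) - 1 and obj_names[i + 1] == FLAT_PARAM

names = ["model", "layer", FLAT_PARAM]
ok = next_is_flat_param(names, 1)    # peeks at names[2], which matches
safe = next_is_flat_param(names, 2)  # last element: no IndexError, just False
```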
@@ -196,7 +196,7 @@ def _verify_options(
Union[str, torch.Tensor], Union[Set[str], torch.Tensor]
] = {}
all_fqns = set()
for name, param in model.named_parameters():
for name, param in chain(model.named_parameters(), model.named_buffers()):
fqns = _get_fqns(model, name)
fqn_param_mapping[param] = fqns
for fqn in fqns:
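Switching the loop above to `chain(model.named_parameters(), model.named_buffers())` makes the FQN mapping cover buffers as well as parameters in a single pass. The same iteration pattern in isolation, with a toy stand-in for `nn.Module`:

```python
from itertools import chain

# Toy stand-in for an nn.Module exposing named parameters and buffers.
class ToyModule:
    def named_parameters(self):
        yield "weight", [1.0]
        yield "bias", [0.0]

    def named_buffers(self):
        yield "running_mean", [0.5]

model = ToyModule()
# One loop over both tensor kinds, as the diff now does; chain() lazily
# concatenates the two generators without materializing either.
all_names = [name for name, _ in chain(model.named_parameters(),
                                       model.named_buffers())]
```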
@@ -395,7 +395,7 @@ def _load_model_state_dict(
if not info.handle_model or not state_dict:
return _IncompatibleKeys({}, {})
for key, _ in model.named_parameters():
for key, _ in chain(model.named_parameters(), model.named_buffers()):
fqns = _get_fqns(model, key)
fqns_with_ddp_prefix = _get_fqns(model, key, skip_ddp_prefix=False)
for fqn, fqn_with_ddp_prefix in zip(fqns, fqns_with_ddp_prefix):
@@ -678,25 +678,25 @@ def get_state_dict(
optimizer parameter IDs to the canonical FQNs.
Example:
>>> # xdoctest: +SKIP
>>> import torch
>>> from torch.distributed.fsdp import FullyShardedDataParallel as FSDP
>>> from torch.nn.parallel import DistributedDataParallel as DDP
>>> from torch.distributed.checkpoint.state_dict import get_state_dict
import torch
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.distributed.checkpoint.state_dict import get_state_dict
fsdp_model = FSDP(copy.deepcopy(model))
fsdp_optim = torch.optim.Adam(model.parameters(), lr=1e-3)
ddp_model = DDP(copy.deepcopy(model))
ddp_optim = torch.optim.Adam(model.parameters(), lr=1e-3)
>>> fsdp_model = FSDP(copy.deepcopy(model))
>>> fsdp_optim = torch.optim.Adam(model.parameters(), lr=1e-3)
>>> ddp_model = DDP(copy.deepcopy(model))
>>> ddp_optim = torch.optim.Adam(model.parameters(), lr=1e-3)
ddp_state_dict, ddp_optim_state_dict = get_state_dict(ddp_model, ddp_optim)
fsdp_state_dict, fsdp_optim_state_dict = get_state_dict(fsdp_model, fsdp_optim)
>>> ddp_state_dict, ddp_optim_state_dict = get_state_dict(ddp_model, ddp_optim)
>>> fsdp_state_dict, fsdp_optim_state_dict = get_state_dict(fsdp_model, fsdp_optim)
# if we simply call ddp_model.state_dict() and fsdp_model.state_dict(),
# the asserts will fail.
assert ddp_state_dict == fsdp_state_dict
assert ddp_optim_state == fsdp_optim_state_dict
>>> # if we simply call ddp_model.state_dict() and fsdp_model.state_dict(),
>>> # the asserts will fail.
>>> assert ddp_state_dict == fsdp_state_dict
>>> assert ddp_optim_state == fsdp_optim_state_dict
Args:
@@ -711,6 +711,8 @@ def get_state_dict(
Returns:
``Tuple`` that contain model state_dict and optimizer state_dict.
:rtype: typing.Tuple[typing.Dict[str, ValueType], OptimizerStateType]
"""
with gc_context():
