Mirror of https://github.com/pytorch/pytorch.git, synced 2025-10-21 21:49:24 +08:00

Compare commits: greenconte... → v2.1.0 (68 commits)
SHA1:
7bcf7da3a2
1841d54370
fca42334be
539a971161
9287a0cf59
c464075d5d
1b4161c686
28220534de
da9639c752
e534243ec2
01fa8c140a
5aae979614
ced78cc2a7
d8db5808ce
889811ab5b
1191449343
6d9fad8474
ed62318bea
ee67c4dd6a
5529b81631
7e23b4907d
71c9d5c3a6
91e414957b
ce3ed7f293
bd372d460b
12b8c26f35
7397cf324c
fa8259db8d
d83c8287ea
ba19c52e31
c5c9536aa7
6b7a777661
ebd3224303
6e4ae13657
265e46e193
da7290dfbd
828992cf13
48246f3dfb
7d6971dcee
5417e23ba8
7a9101951d
03e7f0b99d
c0e7239f43
04c1e07fd7
cb4362ba5f
bddd30ca7a
9cc99906e9
a49fca4dd4
83964c761e
085bd1da62
90452f41e3
35c3d5a080
d07ac50e26
8a3b017769
a82894b0d3
050fc31538
b3cb05b396
fec68a2799
f139dda1cc
5252dfb762
da1ccca830
c9cbdaf24f
f187e42a54
9175987fcc
d8e6594fb8
f82c027774
6d20b39d3f
17f400404f
@@ -1 +1 @@
-05d67b9418cacda0d356c2102d7c1a887948b013
+34f8189eae57a23cc15b4b4f032fe25757e0db8e
@@ -7,18 +7,14 @@ source "$(dirname "${BASH_SOURCE[0]}")/common_utils.sh"

 function install_huggingface() {
   local version
   version=$(get_pinned_commit huggingface)
-  pip_install pandas
-  pip_install scipy
-  pip_install z3-solver
+  pip_install pandas==2.0.3
   pip_install "transformers==${version}"
 }

 function install_timm() {
   local commit
   commit=$(get_pinned_commit timm)
-  pip_install pandas
-  pip_install scipy
-  pip_install z3-solver
+  pip_install pandas==2.0.3
   pip_install "git+https://github.com/rwightman/pytorch-image-models@${commit}"
 }
.ci/docker/common/install_onnx.sh (17 changes; Normal file → Executable file)
@@ -4,6 +4,10 @@ set -ex

 source "$(dirname "${BASH_SOURCE[0]}")/common_utils.sh"

+retry () {
+  "$@" || (sleep 10 && "$@") || (sleep 20 && "$@") || (sleep 40 && "$@")
+}
+
 # A bunch of custom pip dependencies for ONNX
 pip_install \
   beartype==0.10.4 \
@@ -18,22 +22,17 @@ pip_install \
 # onnx-weekly. Otherwise, onnx-weekly could be
 # overwritten by onnx.
 pip_install \
-  onnxruntime==1.15.1 \
   parameterized==0.8.1 \
   pytest-cov==4.0.0 \
   pytest-subtests==0.10.0 \
   tabulate==0.9.0 \
   transformers==4.31.0

-# Using 1.15dev branch for the following not yet released features and fixes.
-# - Segfault fix for shape inference.
-# - Inliner to workaround ORT segfault.
-pip_install onnx-weekly==1.15.0.dev20230717
+pip_install coloredlogs packaging
+retry pip_install -i https://aiinfra.pkgs.visualstudio.com/PublicPackages/_packaging/ORT-Nightly/pypi/simple/ --no-cache-dir --no-input ort-nightly==1.16.0.dev20230908001

-# TODO: change this when onnx-script is on testPypi
-# pip_install onnxscript-preview==0.1.0.dev20230809 --no-deps
-# NOTE: temp change for CI to run on unpublished onnxscript PR.
-pip_install "onnxscript@git+https://github.com/microsoft/onnxscript@f69be19ebd3f2e0d7efe64b0c7be3329cbab3822" --no-deps
+pip_install onnx==1.14.1
+pip_install onnxscript-preview==0.1.0.dev20230828 --no-deps

 # Cache the transformers model to be used later by ONNX tests. We need to run the transformers
 # package to download the model. By default, the model is cached at ~/.cache/huggingface/hub/
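The `retry` helper added above runs a command and, on failure, retries it after increasing sleeps of 10, 20, and 40 seconds. A minimal Python analogue of that backoff pattern (the function name and callable-based interface are mine, not from the script):

```python
import time


def retry(fn, delays=(10, 20, 40), sleep=time.sleep):
    """Call fn(); on failure, sleep per the backoff schedule and retry.

    Mirrors the shell helper: one initial attempt, then one retry per delay.
    A failed command maps to a raised exception here.
    """
    last_exc = None
    for delay in (0, *delays):
        if delay:
            sleep(delay)
        try:
            return fn()
        except Exception as exc:
            last_exc = exc
    raise last_exc
```

With a real command you would wrap `subprocess.run(..., check=True)` in the callable; injecting `sleep` makes the schedule testable without waiting.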
@@ -271,7 +271,12 @@ pytest-cpp==2.3.0
 #Pinned versions: 2.3.0
 #test that import:

-z3-solver
+z3-solver==4.12.2.0
 #Description: The Z3 Theorem Prover Project
 #Pinned versions:
 #test that import:
+
+tensorboard==2.13.0
+#Description: Also included in .ci/docker/requirements-docs.txt
+#Pinned versions:
+#test that import: test_tensorboard
@@ -180,7 +180,7 @@ function install_numpy_pytorch_interop() {

 function clone_pytorch_xla() {
   if [[ ! -d ./xla ]]; then
-    git clone --recursive --quiet https://github.com/pytorch/xla.git
+    git clone --recursive -b r2.1 https://github.com/pytorch/xla.git
     pushd xla
     # pin the xla hash so that we don't get broken by changes to xla
     git checkout "$(cat ../.github/ci_commit_pins/xla.txt)"
@@ -544,6 +544,10 @@ test_without_numpy() {
   python -c "import sys;sys.path.insert(0, 'fake_numpy');from unittest import TestCase;import torch;x=torch.randn(3,3);TestCase().assertRaises(RuntimeError, lambda: x.numpy())"
   # Regression test for https://github.com/pytorch/pytorch/issues/66353
   python -c "import sys;sys.path.insert(0, 'fake_numpy');import torch;print(torch.tensor([torch.tensor(0.), torch.tensor(1.)]))"
+  # Regression test for https://github.com/pytorch/pytorch/issues/109387
+  if [[ "${TEST_CONFIG}" == *dynamo* ]]; then
+    python -c "import sys;sys.path.insert(0, 'fake_numpy');import torch;torch.compile(lambda x:print(x))('Hello World')"
+  fi
   popd
 }
@@ -35,7 +35,7 @@ if [[ "$BUILD_ENVIRONMENT" == *cuda* ]]; then
 fi

 # TODO: Move both of them to Windows AMI
-python -m pip install pytest-rerunfailures==10.3 pytest-cpp==2.3.0
+python -m pip install pytest-rerunfailures==10.3 pytest-cpp==2.3.0 tensorboard==2.13.0

 # Install Z3 optional dependency for Windows builds.
 python -m pip install z3-solver
@@ -62,7 +62,7 @@ git --no-pager log --max-count 1
 popd

 # Clone the Builder main repo
-retry git clone -q https://github.com/pytorch/builder.git "$BUILDER_ROOT"
+retry git clone -q https://github.com/pytorch/builder.git -b release/2.1 "$BUILDER_ROOT"
 pushd "$BUILDER_ROOT"
 echo "Using builder from "
 git --no-pager log --max-count 1
@@ -90,7 +90,7 @@ if [[ "$PACKAGE_TYPE" == conda ]]; then
     if [[ "\${TORCH_CONDA_BUILD_FOLDER}" == "pytorch-nightly" ]]; then
       PYTORCH_CHANNEL="pytorch-nightly"
     fi
-    retry conda install \${EXTRA_CONDA_FLAGS} -yq -c nvidia -c "\${PYTORCH_CHANNEL}" "pytorch-cuda=\${cu_ver}"
+    retry conda install \${EXTRA_CONDA_FLAGS} -yq -c nvidia -c pytorch-test "pytorch-cuda=\${cu_ver}"
   fi
   conda install \${EXTRA_CONDA_FLAGS} -y "\$pkg" --offline
 )
@@ -98,9 +98,9 @@ elif [[ "$PACKAGE_TYPE" != libtorch ]]; then
   if [[ "$(uname -m)" == aarch64 ]]; then
     # Using "extra-index-url" until all needed aarch64 dependencies are
     # added to "https://download.pytorch.org/whl/nightly/"
-    pip install "\$pkg" --extra-index-url "https://download.pytorch.org/whl/nightly/${DESIRED_CUDA}"
+    pip install "\$pkg" --extra-index-url "https://download.pytorch.org/whl/test/${DESIRED_CUDA}"
   else
-    pip install "\$pkg" --index-url "https://download.pytorch.org/whl/nightly/${DESIRED_CUDA}"
+    pip install "\$pkg" --index-url "https://download.pytorch.org/whl/test/${DESIRED_CUDA}"
   fi
   retry pip install -q numpy protobuf typing-extensions
 fi
@@ -11,7 +11,7 @@ PKG_DIR=${PKG_DIR:-/tmp/workspace/final_pkgs}
 # currently set within `designate_upload_channel`
 UPLOAD_CHANNEL=${UPLOAD_CHANNEL:-nightly}
 # Designates what subfolder to put packages into
-UPLOAD_SUBFOLDER=${UPLOAD_SUBFOLDER:-cpu}
+UPLOAD_SUBFOLDER=${UPLOAD_SUBFOLDER:-}
 UPLOAD_BUCKET="s3://pytorch"
 BACKUP_BUCKET="s3://pytorch-backup"
 BUILD_NAME=${BUILD_NAME:-}
@@ -64,12 +64,17 @@ s3_upload() {
   local pkg_type
   extension="$1"
   pkg_type="$2"
-  s3_dir="${UPLOAD_BUCKET}/${pkg_type}/${UPLOAD_CHANNEL}/${UPLOAD_SUBFOLDER}/"
+  s3_root_dir="${UPLOAD_BUCKET}/${pkg_type}/${UPLOAD_CHANNEL}"
+  if [[ -z ${UPLOAD_SUBFOLDER:-} ]]; then
+    s3_upload_dir="${s3_root_dir}/"
+  else
+    s3_upload_dir="${s3_root_dir}/${UPLOAD_SUBFOLDER}/"
+  fi
   (
     for pkg in ${PKG_DIR}/*.${extension}; do
       (
         set -x
-        ${AWS_S3_CP} --no-progress --acl public-read "${pkg}" "${s3_dir}"
+        ${AWS_S3_CP} --no-progress --acl public-read "${pkg}" "${s3_upload_dir}"
       )
     done
   )
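The s3_upload change above makes the subfolder segment optional, so packages can land directly at the channel root when `UPLOAD_SUBFOLDER` is empty. A sketch of the resulting path logic (the function name is mine):

```python
def s3_upload_dir(bucket, pkg_type, channel, subfolder=""):
    """Build the S3 destination prefix, omitting the subfolder segment
    when it is empty -- mirroring the shell's if [[ -z ... ]] branch."""
    root = f"{bucket}/{pkg_type}/{channel}"
    return f"{root}/" if not subfolder else f"{root}/{subfolder}/"
```

This avoids the double-slash path (`.../test//`) the old single-format-string version would produce with an empty subfolder.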
@@ -82,15 +87,17 @@ pip install -q awscli
 case "${PACKAGE_TYPE}" in
   conda)
     conda_upload
-    # Fetch platform (eg. win-64, linux-64, etc.) from index file
-    # Because there's no actual conda command to read this
-    subdir=$(\
-      tar -xOf ${PKG_DIR}/*.bz2 info/index.json \
-        | grep subdir \
-        | cut -d ':' -f2 \
-        | sed -e 's/[[:space:]]//' -e 's/"//g' -e 's/,//' \
-    )
-    BACKUP_DIR="conda/${subdir}"
+    for conda_archive in ${PKG_DIR}/*.tar.bz2; do
+      # Fetch platform (eg. win-64, linux-64, etc.) from index file because
+      # there's no actual conda command to read this
+      subdir=$(\
+        tar -xOf "${conda_archive}" info/index.json \
+          | grep subdir \
+          | cut -d ':' -f2 \
+          | sed -e 's/[[:space:]]//' -e 's/"//g' -e 's/,//' \
+      )
+      BACKUP_DIR="conda/${subdir}"
+    done
     ;;
   libtorch)
     s3_upload "zip" "libtorch"
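The grep/cut/sed pipeline above pulls the `subdir` field (e.g. `linux-64`, `win-64`) out of `info/index.json` inside each conda archive. The same extraction can be done robustly with the standard library instead of text munging (a sketch, not the script's actual implementation):

```python
import json
import tarfile


def conda_subdir(archive_path):
    """Read info/index.json from a .tar.bz2 conda package and return
    its "subdir" field (the platform, e.g. "linux-64")."""
    with tarfile.open(archive_path, "r:bz2") as tar:
        index = json.load(tar.extractfile("info/index.json"))
    return index["subdir"]
```

Parsing the JSON directly avoids the edge cases the sed expression papers over (quoting, trailing commas, whitespace).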
.github/ci_commit_pins/xla.txt (2 changes; vendored)

@@ -1 +1 @@
-e1ee592d9806216d7ac0bb711cae6307b0c5b68a
+r2.1
.github/merge_rules.yaml (1 change; vendored)

@@ -7,6 +7,7 @@
   - docs/source/onnx.rst
+  - docs/source/onnx*
   - docs/source/scripts/onnx/**
   - docs/source/_static/img/onnx/**
   - scripts/onnx/**
   - test/onnx/**
   - tools/onnx/**
@@ -25,3 +25,4 @@ sympy==1.11.1
 pytest-cpp==2.3.0
 rockset==1.0.3
 z3-solver==4.12.2.0
+tensorboard==2.13.0
.github/scripts/build_triton_wheel.py (17 changes; vendored)

@@ -60,12 +60,18 @@ def build_triton(
     build_conda: bool = False,
     build_rocm: bool = False,
     py_version: Optional[str] = None,
+    release: bool = False,
 ) -> Path:
     env = os.environ.copy()
     if "MAX_JOBS" not in env:
         max_jobs = os.cpu_count() or 1
         env["MAX_JOBS"] = str(max_jobs)

+    if not release:
+        # Nightly binaries include the triton commit hash, i.e. 2.1.0+e6216047b8
+        # while release build should only include the version, i.e. 2.1.0
+        version = f"{version}+{commit_hash[:10]}"
+
     with TemporaryDirectory() as tmpdir:
         triton_basedir = Path(tmpdir) / "triton"
         triton_pythondir = triton_basedir / "python"
@@ -80,7 +86,7 @@ def build_triton(
     if build_conda:
         with open(triton_basedir / "meta.yaml", "w") as meta:
             print(
-                f"package:\n  name: torchtriton\n  version: {version}+{commit_hash[:10]}\n",
+                f"package:\n  name: torchtriton\n  version: {version}\n",
                 file=meta,
             )
             print("source:\n  path: .\n", file=meta)
@@ -103,7 +109,7 @@ def build_triton(
         patch_init_py(
             triton_pythondir / "triton" / "__init__.py",
-            version=f"{version}+{commit_hash[:10]}",
+            version=f"{version}",
         )
         if py_version is None:
             py_version = f"{sys.version_info.major}.{sys.version_info.minor}"
@@ -129,11 +135,11 @@ def build_triton(
         patch_setup_py(
             triton_pythondir / "setup.py",
             name=triton_pkg_name,
-            version=f"{version}+{commit_hash[:10]}",
+            version=f"{version}",
         )
         patch_init_py(
             triton_pythondir / "triton" / "__init__.py",
-            version=f"{version}+{commit_hash[:10]}",
+            version=f"{version}",
         )

     if build_rocm:
@@ -157,12 +163,14 @@ def main() -> None:
     from argparse import ArgumentParser

     parser = ArgumentParser("Build Triton binaries")
+    parser.add_argument("--release", action="store_true")
     parser.add_argument("--build-conda", action="store_true")
     parser.add_argument("--build-rocm", action="store_true")
     parser.add_argument("--py-version", type=str)
     parser.add_argument("--commit-hash", type=str)
     parser.add_argument("--triton-version", type=str, default=read_triton_version())
     args = parser.parse_args()

     build_triton(
         build_rocm=args.build_rocm,
         commit_hash=args.commit_hash
@@ -171,6 +179,7 @@ def main() -> None:
         version=args.triton_version,
         build_conda=args.build_conda,
         py_version=args.py_version,
+        release=args.release,
     )
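The build_triton_wheel.py change above appends the pinned commit hash to the wheel version only for nightly builds. The version-selection rule can be isolated as a small helper (the function name is mine; the commit hash below is illustrative):

```python
def triton_wheel_version(version, commit_hash, release=False):
    """Nightly binaries carry the pinned commit as a local version segment,
    e.g. "2.1.0+e6216047b8"; release builds use the bare version, "2.1.0"."""
    if release:
        return version
    return f"{version}+{commit_hash[:10]}"
```

Keeping the `+<hash>` suffix as a PEP 440 local version segment lets nightlies sort within the same release while staying distinguishable per commit.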
@@ -248,7 +248,8 @@ def generate_wheels_matrix(
                     "nvidia-cusolver-cu12==11.4.5.107; platform_system == 'Linux' and platform_machine == 'x86_64' | "
                     "nvidia-cusparse-cu12==12.1.0.106; platform_system == 'Linux' and platform_machine == 'x86_64' | "
                     "nvidia-nccl-cu12==2.18.1; platform_system == 'Linux' and platform_machine == 'x86_64' | "
-                    "nvidia-nvtx-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64'",
+                    "nvidia-nvtx-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64' | "
+                    "triton==2.1.0; platform_system == 'Linux' and platform_machine == 'x86_64'",
                     "build_name": f"{package_type}-py{python_version}-{gpu_arch_type}{gpu_arch_version}-with-pypi-cudnn".replace(  # noqa: B950
                         ".", "_"
                     ),
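The extra-requirements string in generate_wheels_matrix above is a " | "-separated list of PEP 508 requirements, each carrying the same Linux/x86_64 environment marker. A sketch of assembling such a string (the helper name is mine, not from the script):

```python
def join_extra_requirements(reqs, marker):
    """Attach the same environment marker to each requirement and join
    entries with " | ", as in the wheels-matrix dependency strings."""
    return " | ".join(f"{req}; {marker}" for req in reqs)


MARKER = "platform_system == 'Linux' and platform_machine == 'x86_64'"
deps = join_extra_requirements(
    ["nvidia-nvtx-cu12==12.1.105", "triton==2.1.0"], MARKER
)
```

Splitting on `" | "` recovers the individual requirements, which is what makes appending `triton==2.1.0` a one-entry change.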
.github/templates/common.yml.j2 (2 changes; vendored)

@@ -8,7 +8,7 @@
 # NOTE: If testing pytorch/builder changes you can change this variable to change what pytorch/builder reference
 # the binary builds will check out
 {%- set builder_repo = "pytorch/builder" -%}
-{%- set builder_branch = "main" -%}
+{%- set builder_branch = "release/2.1" -%}

 {%- macro concurrency(build_environment) -%}
 concurrency:
@@ -97,13 +97,13 @@ jobs:
         with:
           name: !{{ config["build_name"] }}
           path: "${{ runner.temp }}/artifacts/"
-      !{{ common.checkout(deep_clone=False, directory="pytorch") }}
-      !{{ common.checkout(deep_clone=False, directory="builder", repository=common.builder_repo, branch=common.builder_branch) }}
+      !{{ common.checkout(deep_clone=False, directory="pytorch", checkout_pr_head=False) }}
+      !{{ common.checkout(deep_clone=False, directory="builder", repository=common.builder_repo, branch=common.builder_branch, checkout_pr_head=False) }}
      - name: ROCm set GPU_FLAG
        run: |
          echo "GPU_FLAG=--device=/dev/mem --device=/dev/kfd --device=/dev/dri --group-add video --group-add daemon" >> "${GITHUB_ENV}"
      - name: Pull Docker image
-       uses: pytorch/test-infra/.github/actions/pull-docker-image@main
+       uses: pytorch/test-infra/.github/actions/pull-docker-image@release/2.1
        with:
          docker-image: !{{ config["container_image"] }}
      - name: Test Pytorch binary
@@ -74,8 +74,8 @@ jobs:
          /bin/bash "${RUNNER_TEMP}/conda.sh" -b -p "${RUNNER_TEMP}/anaconda"
          echo "${RUNNER_TEMP}/anaconda/bin" >> "${GITHUB_PATH}"
          echo "DEVELOPER_DIR=/Applications/Xcode_13.3.1.app/Contents/Developer" >> "${GITHUB_ENV}"
-      !{{ common.checkout(deep_clone=False, directory="pytorch") }}
-      !{{ common.checkout(deep_clone=False, directory="builder", repository=common.builder_repo, branch=common.builder_branch) }}
+      !{{ common.checkout(deep_clone=False, directory="pytorch", checkout_pr_head=False) }}
+      !{{ common.checkout(deep_clone=False, directory="builder", repository=common.builder_repo, branch=common.builder_branch, checkout_pr_head=False) }}
      - name: Install sccache (only for non-forked PRs, and pushes to trunk)
        uses: nick-fields/retry@v2.8.2
        if: ${{ github.event_name == 'push' || github.event.pull_request.head.repo.full_name == github.repository }}
.github/templates/upload.yml.j2 (2 changes; vendored)

@@ -67,6 +67,6 @@
      github-token: ${{ secrets.GITHUB_TOKEN }}
      aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
      aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
-     conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
+     conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
    uses: ./.github/workflows/_binary-upload.yml
{%- endmacro %}
@@ -62,8 +62,8 @@ jobs:
    steps:
      !{{ common.setup_ec2_windows() }}
      !{{ set_runner_specific_vars() }}
-     !{{ common.checkout(deep_clone=False, directory="pytorch") }}
-     !{{ common.checkout(deep_clone=False, directory="builder", repository=common.builder_repo, branch=common.builder_branch) }}
+     !{{ common.checkout(deep_clone=False, directory="pytorch", checkout_pr_head=False) }}
+     !{{ common.checkout(deep_clone=False, directory="builder", repository=common.builder_repo, branch=common.builder_branch, checkout_pr_head=False) }}
      - name: Populate binary env
        shell: bash
        run: |
@@ -102,8 +102,8 @@ jobs:
        with:
          name: !{{ config["build_name"] }}
          path: "${{ env.PYTORCH_FINAL_PACKAGE_DIR }}"
-     !{{ common.checkout(deep_clone=False, directory="pytorch") }}
-     !{{ common.checkout(deep_clone=False, directory="builder", repository=common.builder_repo, branch=common.builder_branch) }}
+     !{{ common.checkout(deep_clone=False, directory="pytorch", checkout_pr_head=False) }}
+     !{{ common.checkout(deep_clone=False, directory="builder", repository=common.builder_repo, branch=common.builder_branch, checkout_pr_head=False) }}
      - name: Populate binary env
        shell: bash
        run: |
.github/workflows/_android-build-test.yml (12 changes; vendored)

@@ -36,7 +36,7 @@ jobs:
      keep-going: ${{ steps.filter.outputs.keep-going }}
    steps:
      - name: Checkout PyTorch
-       uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
+       uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.1
        with:
          fetch-depth: 1
          submodules: false
@@ -58,25 +58,25 @@ jobs:
    runs-on: ${{ matrix.runner }}
    steps:
      - name: Setup SSH (Click me for login details)
-       uses: pytorch/test-infra/.github/actions/setup-ssh@main
+       uses: pytorch/test-infra/.github/actions/setup-ssh@release/2.1
        with:
          github-secret: ${{ secrets.GITHUB_TOKEN }}

      # [see note: pytorch repo ref]
      - name: Checkout PyTorch
-       uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
+       uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.1

      - name: Setup Linux
        uses: ./.github/actions/setup-linux

      - name: Calculate docker image
        id: calculate-docker-image
-       uses: pytorch/test-infra/.github/actions/calculate-docker-image@main
+       uses: pytorch/test-infra/.github/actions/calculate-docker-image@release/2.1
        with:
          docker-image-name: ${{ inputs.docker-image-name }}

      - name: Pull docker image
-       uses: pytorch/test-infra/.github/actions/pull-docker-image@main
+       uses: pytorch/test-infra/.github/actions/pull-docker-image@release/2.1
        with:
          docker-image: ${{ steps.calculate-docker-image.outputs.docker-image }}

@@ -140,5 +140,5 @@ jobs:
        if: always()

      - name: Teardown Linux
-       uses: pytorch/test-infra/.github/actions/teardown-linux@main
+       uses: pytorch/test-infra/.github/actions/teardown-linux@release/2.1
        if: always()
.github/workflows/_android-full-build-test.yml (12 changes; vendored)

@@ -36,7 +36,7 @@ jobs:
      keep-going: ${{ steps.filter.outputs.keep-going }}
    steps:
      - name: Checkout PyTorch
-       uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
+       uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.1
        with:
          fetch-depth: 1
          submodules: false
@@ -58,25 +58,25 @@ jobs:
    runs-on: ${{ matrix.runner }}
    steps:
      - name: Setup SSH (Click me for login details)
-       uses: pytorch/test-infra/.github/actions/setup-ssh@main
+       uses: pytorch/test-infra/.github/actions/setup-ssh@release/2.1
        with:
          github-secret: ${{ secrets.GITHUB_TOKEN }}

      # [see note: pytorch repo ref]
      - name: Checkout PyTorch
-       uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
+       uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.1

      - name: Setup Linux
        uses: ./.github/actions/setup-linux

      - name: Calculate docker image
        id: calculate-docker-image
-       uses: pytorch/test-infra/.github/actions/calculate-docker-image@main
+       uses: pytorch/test-infra/.github/actions/calculate-docker-image@release/2.1
        with:
          docker-image-name: ${{ inputs.docker-image-name }}

      - name: Pull docker image
-       uses: pytorch/test-infra/.github/actions/pull-docker-image@main
+       uses: pytorch/test-infra/.github/actions/pull-docker-image@release/2.1
        with:
          docker-image: ${{ steps.calculate-docker-image.outputs.docker-image }}

@@ -185,5 +185,5 @@ jobs:
        if: always()

      - name: Teardown Linux
-       uses: pytorch/test-infra/.github/actions/teardown-linux@main
+       uses: pytorch/test-infra/.github/actions/teardown-linux@release/2.1
        if: always()
.github/workflows/_bazel-build-test.yml (14 changes; vendored)

@@ -41,7 +41,7 @@ jobs:
      reenabled-issues: ${{ steps.filter.outputs.reenabled-issues }}
    steps:
      - name: Checkout PyTorch
-       uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
+       uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.1
        with:
          fetch-depth: 1
          submodules: false
@@ -63,30 +63,30 @@ jobs:
    runs-on: ${{ matrix.runner }}
    steps:
      - name: Setup SSH (Click me for login details)
-       uses: pytorch/test-infra/.github/actions/setup-ssh@main
+       uses: pytorch/test-infra/.github/actions/setup-ssh@release/2.1
        with:
          github-secret: ${{ secrets.GITHUB_TOKEN }}

      # [see note: pytorch repo ref]
      - name: Checkout PyTorch
-       uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
+       uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.1

      - name: Setup Linux
        uses: ./.github/actions/setup-linux

      - name: Calculate docker image
        id: calculate-docker-image
-       uses: pytorch/test-infra/.github/actions/calculate-docker-image@main
+       uses: pytorch/test-infra/.github/actions/calculate-docker-image@release/2.1
        with:
          docker-image-name: ${{ inputs.docker-image-name }}

      - name: Pull docker image
-       uses: pytorch/test-infra/.github/actions/pull-docker-image@main
+       uses: pytorch/test-infra/.github/actions/pull-docker-image@release/2.1
        with:
          docker-image: ${{ steps.calculate-docker-image.outputs.docker-image }}

      - name: Install nvidia driver, nvidia-docker runtime, set GPU_FLAG
-       uses: pytorch/test-infra/.github/actions/setup-nvidia@main
+       uses: pytorch/test-infra/.github/actions/setup-nvidia@release/2.1
        if: ${{ inputs.cuda-version != 'cpu' }}

      - name: Output disk space left
@@ -197,5 +197,5 @@ jobs:
          file-suffix: bazel-${{ github.job }}_${{ steps.get-job-id.outputs.job-id }}

      - name: Teardown Linux
-       uses: pytorch/test-infra/.github/actions/teardown-linux@main
+       uses: pytorch/test-infra/.github/actions/teardown-linux@release/2.1
        if: always()
.github/workflows/_binary-build-linux.yml (15 changes; vendored)

@@ -139,12 +139,12 @@ jobs:
        run: env

      - name: "[FB EMPLOYEES] Enable SSH (Click me for login details)"
-       uses: pytorch/test-infra/.github/actions/setup-ssh@main
+       uses: pytorch/test-infra/.github/actions/setup-ssh@release/2.1
        with:
          github-secret: ${{ secrets.github-token }}

      - name: Checkout PyTorch
-       uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
+       uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.1
        with:
          no-sudo: ${{ inputs.build_environment == 'linux-aarch64-binary-manywheel' }}

@@ -159,10 +159,12 @@ jobs:
      - name: Clean workspace
        shell: bash
        run: |
          set -eux
+
          rm -rf "${GITHUB_WORKSPACE}"
          mkdir "${GITHUB_WORKSPACE}"
+
-         if [[ inputs.build_environment == 'linux-aarch64-binary-manywheel' ]]; then
+         if [[ ${{ inputs.build_environment }} == 'linux-aarch64-binary-manywheel' ]]; then
            rm -rf "${RUNNER_TEMP}/artifacts"
            mkdir "${RUNNER_TEMP}/artifacts"
          fi
@@ -170,7 +172,6 @@ jobs:
      - name: Checkout PyTorch to pytorch dir
        uses: malfet/checkout@silent-checkout
        with:
-         ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
          submodules: recursive
          path: pytorch
          quiet-checkout: true
@@ -184,7 +185,7 @@ jobs:
      - name: Checkout pytorch/builder to builder dir
        uses: malfet/checkout@silent-checkout
        with:
-         ref: main
+         ref: release/2.1
          submodules: recursive
          repository: pytorch/builder
          path: builder
@@ -210,7 +211,7 @@ jobs:

      - name: Pull Docker image
        if: ${{ steps.filter.outputs.is-test-matrix-empty == 'False' }}
-       uses: pytorch/test-infra/.github/actions/pull-docker-image@main
+       uses: pytorch/test-infra/.github/actions/pull-docker-image@release/2.1
        with:
          docker-image: ${{ inputs.DOCKER_IMAGE }}

@@ -267,7 +268,7 @@ jobs:

      - name: Teardown Linux
        if: always()
-       uses: pytorch/test-infra/.github/actions/teardown-linux@main
+       uses: pytorch/test-infra/.github/actions/teardown-linux@release/2.1

      - name: Chown workspace
        if: always()
.github/workflows/_binary-test-linux.yml (13 changes; vendored)

@@ -127,13 +127,13 @@ jobs:
          } >> "${GITHUB_ENV}"

      - name: "[FB EMPLOYEES] Enable SSH (Click me for login details)"
-       uses: pytorch/test-infra/.github/actions/setup-ssh@main
+       uses: pytorch/test-infra/.github/actions/setup-ssh@release/2.1
        with:
          github-secret: ${{ secrets.github-token }}

      # Setup the environment
      - name: Checkout PyTorch
-       uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
+       uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.1
        with:
          no-sudo: ${{ inputs.build_environment == 'linux-aarch64-binary-manywheel' }}

@@ -154,7 +154,6 @@ jobs:
      - name: Checkout PyTorch to pytorch dir
        uses: malfet/checkout@silent-checkout
        with:
-         ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
          submodules: recursive
          path: pytorch

@@ -167,7 +166,7 @@ jobs:
      - name: Checkout pytorch/builder to builder dir
        uses: malfet/checkout@silent-checkout
        with:
-         ref: main
+         ref: release/2.1
          submodules: recursive
          repository: pytorch/builder
          path: builder
@@ -198,12 +197,12 @@ jobs:
          path: "${{ runner.temp }}/artifacts/"

      - name: Install nvidia driver, nvidia-docker runtime, set GPU_FLAG
-       uses: pytorch/test-infra/.github/actions/setup-nvidia@main
+       uses: pytorch/test-infra/.github/actions/setup-nvidia@release/2.1
        if: ${{ inputs.GPU_ARCH_TYPE == 'cuda' && steps.filter.outputs.is-test-matrix-empty == 'False' }}

      - name: Pull Docker image
        if: ${{ steps.filter.outputs.is-test-matrix-empty == 'False' }}
-       uses: pytorch/test-infra/.github/actions/pull-docker-image@main
+       uses: pytorch/test-infra/.github/actions/pull-docker-image@release/2.1
        with:
          docker-image: ${{ inputs.DOCKER_IMAGE }}

@@ -213,7 +212,7 @@ jobs:

      - name: Teardown Linux
        if: always()
-       uses: pytorch/test-infra/.github/actions/teardown-linux@main
+       uses: pytorch/test-infra/.github/actions/teardown-linux@release/2.1

      - name: Chown workspace
        if: always()
.github/workflows/_binary-upload.yml (4 changes; vendored)

@@ -97,7 +97,7 @@ jobs:
      SHA1: ${{ github.event.pull_request.head.sha || github.sha }}
    steps:
      - name: Checkout PyTorch
-       uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
+       uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.1
        with:
          no-sudo: true

@@ -121,7 +121,7 @@ jobs:
        shell: bash -e -l {0}
        run: |
          # reference ends with an RC suffix
-         if [[ ${GITHUB_REF_NAME} = *-rc[0-9]* ]]; then
+         if [[ "${GITHUB_REF_NAME}" = *-rc[0-9]* ]]; then
            echo "UPLOAD_CHANNEL=test" >> "$GITHUB_ENV"
          fi
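The check above routes refs with an RC suffix (matching the bash pattern `*-rc[0-9]*`, e.g. `v2.1.0-rc3`) to the `test` upload channel. The equivalent decision in Python, for illustration (the function name is mine):

```python
import re


def upload_channel(ref_name, default="nightly"):
    """Return "test" for refs containing an RC suffix like "-rc3",
    mirroring the bash glob *-rc[0-9]*; otherwise keep the default."""
    return "test" if re.search(r"-rc[0-9]", ref_name) else default
```

Quoting `"${GITHUB_REF_NAME}"` in the shell version matters on the left side of `=` (empty/space-containing values); the glob on the right side stays unquoted so it is still matched as a pattern.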
.github/workflows/_buck-build-test.yml (6 changes; vendored)

@@ -22,7 +22,7 @@ jobs:
      keep-going: ${{ steps.filter.outputs.keep-going }}
    steps:
      - name: Checkout PyTorch
-       uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
+       uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.1
        with:
          fetch-depth: 1
          submodules: false
@@ -43,7 +43,7 @@ jobs:
    runs-on: ${{ matrix.runner }}
    steps:
      - name: Checkout PyTorch
-       uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
+       uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.1

      - name: Set up JDK 8
        uses: actions/setup-java@v3
@@ -52,7 +52,7 @@ jobs:
          distribution: 'temurin'

      - name: Setup miniconda
-       uses: pytorch/test-infra/.github/actions/setup-miniconda@main
+       uses: pytorch/test-infra/.github/actions/setup-miniconda@release/2.1
        with:
          python-version: 3.8
          environment-file: .github/requirements/conda-env-${{ runner.os }}-${{ runner.arch }}
.github/workflows/_docs.yml (10 changes; vendored)

@@ -66,7 +66,7 @@ jobs:
      name: build-docs-${{ matrix.docs_type }}-${{ inputs.push }}
    steps:
      - name: Setup SSH (Click me for login details)
-       uses: pytorch/test-infra/.github/actions/setup-ssh@main
+       uses: pytorch/test-infra/.github/actions/setup-ssh@release/2.1
        with:
          github-secret: ${{ secrets.GITHUB_TOKEN }}
          instructions: |
@@ -77,19 +77,19 @@ jobs:

      # [see note: pytorch repo ref]
      - name: Checkout PyTorch
-       uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
+       uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.1

      - name: Setup Linux
        uses: ./.github/actions/setup-linux

      - name: Calculate docker image
        id: calculate-docker-image
-       uses: pytorch/test-infra/.github/actions/calculate-docker-image@main
+       uses: pytorch/test-infra/.github/actions/calculate-docker-image@release/2.1
        with:
          docker-image-name: ${{ inputs.docker-image }}

      - name: Pull docker image
-       uses: pytorch/test-infra/.github/actions/pull-docker-image@main
+       uses: pytorch/test-infra/.github/actions/pull-docker-image@release/2.1
        with:
          docker-image: ${{ steps.calculate-docker-image.outputs.docker-image }}

@@ -187,5 +187,5 @@ jobs:
          s3-prefix: pytorch/pytorch/${{ github.event.pull_request.number }}/functorchdocs

      - name: Teardown Linux
-       uses: pytorch/test-infra/.github/actions/teardown-linux@main
+       uses: pytorch/test-infra/.github/actions/teardown-linux@release/2.1
        if: always()
371  .github/workflows/_ios-build-test.yml  vendored

@@ -7,14 +7,6 @@ on:
         required: true
         type: string
         description: Top-level label for what's being built/tested.
-      ios-platform:
-        required: true
-        type: string
-        description: Which iOS platform to build for.
-      ios-arch:
-        required: true
-        type: string
-        description: Which iOS arch to build for.
       sync-tag:
         required: false
         type: string
@@ -31,8 +23,6 @@ on:
 env:
   GIT_DEFAULT_BRANCH: ${{ github.event.repository.default_branch }}
   BUILD_ENVIRONMENT: ${{ inputs.build-environment }}
-  IOS_PLATFORM: ${{ inputs.ios-platform }}
-  IOS_ARCH: ${{ inputs.ios-arch }}

 jobs:
   filter:
@@ -43,7 +33,7 @@ jobs:
       keep-going: ${{ steps.filter.outputs.keep-going }}
     steps:
       - name: Checkout PyTorch
-        uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
+        uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.1
         with:
           fetch-depth: 1
           submodules: false
@@ -63,33 +53,30 @@ jobs:
       matrix: ${{ fromJSON(needs.filter.outputs.test-matrix) }}
       fail-fast: false
     runs-on: ${{ matrix.runner }}
+    env:
+      IOS_PLATFORM: ${{ matrix.ios_platform }}
+      IOS_ARCH: ${{ matrix.ios_arch }}
+      BUILD_LITE_INTERPRETER: ${{ matrix.use_lite_interpreter }}
+      USE_PYTORCH_METAL: ${{ matrix.use_metal }}
+      USE_COREML_DELEGATE: ${{ matrix.use_coreml }}
+      CUSTOM_OP_LIST: ${{ matrix.use_custom_op_list }}
+      # TODO: Bump it to 2.2.0 after cherry pick this or figure out a better way
+      # to get this version instead of hard coding it here
+      PYTORCH_VERSION: 2.1.0
     timeout-minutes: 240
     steps:
       # [see note: pytorch repo ref]
       - name: Checkout PyTorch
-        uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
+        uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.1

       - name: Populate CI build options
         shell: bash
         run: |
-          # Most builds use the lite interpreter, if certain builds shouldn't
-          # build the lite interpreter this env variable should get over-written
-          # in the following case statement
-          echo "BUILD_LITE_INTERPRETER=1" >> "${GITHUB_ENV}"
           set -ex

-          case ${BUILD_ENVIRONMENT} in
-            *metal*)
-              echo "USE_PYTORCH_METAL=1" >> "${GITHUB_ENV}"
-              ;;
-            *full_jit*)
-              echo "BUILD_LITE_INTERPRETER=0" >> "${GITHUB_ENV}"
-              ;;
-            *custom*)
-              echo "SELECTED_OP_LIST=${GITHUB_WORKSPACE}/ios/TestApp/custom_build/mobilenetv2.yaml" >> "${GITHUB_ENV}"
-              ;;
-            *coreml*)
-              echo "USE_COREML_DELEGATE=1" >> "${GITHUB_ENV}"
-              ;;
-          esac
+          if [ -n "${CUSTOM_OP_LIST:-}" ]; then
+            echo "SELECTED_OP_LIST=${GITHUB_WORKSPACE}/ios/TestApp/custom_build/${CUSTOM_OP_LIST}" >> "${GITHUB_ENV}"
+          fi

       - name: Install brew dependencies
         uses: nick-fields/retry@v2.8.2
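The "Populate CI build options" step above selects feature flags by substring-matching the build-environment name. A hypothetical, standalone sketch of that selection pattern (plain shell, not the repo's actual script; the variable values are made up for illustration):

```shell
# Hypothetical sketch of the flag-selection pattern used by the
# "Populate CI build options" step: substring matches on the build
# environment name decide which feature flags are enabled.
BUILD_ENVIRONMENT="ios-build-test-coreml"
BUILD_LITE_INTERPRETER=1
USE_COREML_DELEGATE=0
case "${BUILD_ENVIRONMENT}" in
  *full_jit*) BUILD_LITE_INTERPRETER=0 ;;
  *coreml*)   USE_COREML_DELEGATE=1 ;;
esac
# In the workflow these would be appended to "${GITHUB_ENV}" instead.
echo "BUILD_LITE_INTERPRETER=${BUILD_LITE_INTERPRETER} USE_COREML_DELEGATE=${USE_COREML_DELEGATE}"
```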
@@ -102,7 +89,7 @@ jobs:
           brew install libtool

       - name: Setup miniconda for iOS
-        uses: pytorch/test-infra/.github/actions/setup-miniconda@main
+        uses: pytorch/test-infra/.github/actions/setup-miniconda@release/2.1
         with:
           python-version: "3.9"
           environment-file: .github/requirements/conda-env-iOS
@@ -116,54 +103,67 @@ jobs:
           retry_wait_seconds: 90
           command: |
             set -x
-            cd ios/TestApp
-            # install fastlane
+
+            pushd ios/TestApp
+            # Install fastlane
             sudo gem install bundler && bundle install
             bundle update fastlane
+            popd

-      - name: Build PyTorch Mobile Runtime
+      - name: Build PyTorch mobile runtime
         shell: bash
         run: |
           set -eux
           # shellcheck disable=SC1091
           export TCLLIBPATH="/usr/local/lib"
           python -VV
           ${CONDA_RUN} scripts/build_ios.sh

       - name: Build TestApp
-        if: inputs.ios-platform == 'SIMULATOR'
+        if: matrix.ios_platform == 'SIMULATOR'
         timeout-minutes: 15
         run: |
-          # run the ruby build script
+          # Run the ruby build script
           if ! [ -x "$(command -v xcodebuild)" ]; then
             echo 'Error: xcodebuild is not installed.'
             exit 1
           fi
           ruby scripts/xcode_build.rb -i build_ios/install -x ios/TestApp/TestApp.xcodeproj -p "${IOS_PLATFORM}"

-      - name: Run Simulator Tests
-        if: inputs.ios-platform == 'SIMULATOR'
+      - name: Run simulator tests
+        if: matrix.ios_platform == 'SIMULATOR'
         shell: bash
         run: |
           set -eux
           # shellcheck disable=SC1091
-          # use the pytorch nightly build to generate models
-          ${CONDA_RUN} pip3 install --pre torch torchvision torchaudio -f https://download.pytorch.org/whl/nightly/cpu/torch_nightly.html
-          # generate models for differnet backends
-          cd "${GITHUB_WORKSPACE}/ios/TestApp/benchmark"
+          # Use the pytorch nightly build to generate models
+          ${CONDA_RUN} pip install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cpu
+
+          # Generate models for differnet backends
+          pushd "${GITHUB_WORKSPACE}/ios/TestApp/benchmark"
           mkdir -p ../models
+          # NB: Both of the following scripts only export models with lite interpreter
           if [ "${USE_COREML_DELEGATE}" == 1 ]; then
             ${CONDA_RUN} python coreml_backend.py
           else
-            cd "${GITHUB_WORKSPACE}"
+            pushd "${GITHUB_WORKSPACE}"
             ${CONDA_RUN} python test/mobile/model_test/gen_test_model.py ios-test
+            popd
           fi
-          cd "${GITHUB_WORKSPACE}/ios/TestApp/benchmark"
+
           if [ "${BUILD_LITE_INTERPRETER}" == 1 ]; then
             echo "Setting up the TestApp for LiteInterpreter"
             ruby setup.rb --lite 1
           else
+            # Generate some models for JIT without lite interpreter
+            ${CONDA_RUN} python trace_model.py
+
             echo "Setting up the TestApp for Full JIT"
             ruby setup.rb
           fi
-          cd "${GITHUB_WORKSPACE}/ios/TestApp"
-          # instruments -s -devices
+          popd
+
+          pushd "${GITHUB_WORKSPACE}/ios/TestApp"
+          # Instruments -s -devices
           if [ "${BUILD_LITE_INTERPRETER}" == 1 ]; then
             if [ "${USE_COREML_DELEGATE}" == 1 ]; then
               bundle exec fastlane scan --only_testing TestAppTests/TestAppTests/testCoreML
@@ -173,9 +173,282 @@ jobs:
           else
             bundle exec fastlane scan --only_testing TestAppTests/TestAppTests/testFullJIT
           fi
+          popd

-      - name: Dump Simulator Tests On a Failure
-        if: failure() && inputs.ios-platform == 'SIMULATOR'
+      - name: Dump simulator tests on failure
+        if: failure() && matrix.ios_platform == 'SIMULATOR'
         run: |
           echo "Simulator Tests Logs:"
           cat /Users/runner/Library/Logs/scan/*.log
+
+      - name: Prepare the build artifacts for upload
+        shell: bash
+        run: |
+          set -eux
+
+          # The structure of the folder is as follows:
+          #
+          # RUNNER_TEMP/
+          # └── IOS_ARCH/
+          #     ├── LICENSE
+          #     ├── install
+          #     │   ├── include
+          #     │   │   └── headers
+          #     │   └── lib
+          #     │       ├── libXNNPACK.a
+          #     │       ├── libc10.a
+          #     │       ├── libclog.a
+          #     │       ├── libcpuinfo.a
+          #     │       ├── libeigen_blas.a
+          #     │       ├── libpthreadpool.a
+          #     │       ├── libpytorch_qnnpack.a
+          #     │       ├── libtorch.a
+          #     │       └── libtorch_cpu.a
+          #     ├── src
+          #     │   └── LibTorch-Lite.h
+          #     └── version.txt
+          SETUP_DIR="${RUNNER_TEMP}/${IOS_ARCH}"
+          mkdir -p "${SETUP_DIR}/src"
+
+          cp -R "${GITHUB_WORKSPACE}/build_ios/install" "${SETUP_DIR}"
+          # Copy the umbrella header and license
+          if [ "${BUILD_LITE_INTERPRETER}" == 1 ]; then
+            cp "${GITHUB_WORKSPACE}/ios/LibTorch-Lite.h" "${SETUP_DIR}/src"
+          else
+            cp "${GITHUB_WORKSPACE}/ios/LibTorch.h" "${SETUP_DIR}/src"
+          fi
+
+          # Copy license and version
+          cp "${GITHUB_WORKSPACE}/LICENSE" "${SETUP_DIR}"
+          echo "${PYTORCH_VERSION}" > "${SETUP_DIR}"/version.txt
+
+          # Save the podspec for the upload job later
+          if [ "${BUILD_LITE_INTERPRETER}" == "1" ]; then
+            DATE=$(date -u +%Y%m%d)
+            cp "${GITHUB_WORKSPACE}"/ios/LibTorch-Lite-Nightly.podspec.template "${SETUP_DIR}"/LibTorch-Lite-Nightly.podspec
+            sed -i '' -e "s/IOS_NIGHTLY_BUILD_VERSION/${PYTORCH_VERSION}.${DATE}/g" "${SETUP_DIR}"/LibTorch-Lite-Nightly.podspec
+
+            cp "${GITHUB_WORKSPACE}"/ios/LibTorch-Lite.podspec.template "${SETUP_DIR}"/LibTorch-Lite.podspec
+            sed -i '' -e "s/IOS_BUILD_VERSION/${PYTORCH_VERSION}/g" "${SETUP_DIR}"/LibTorch-Lite.podspec
+          else
+            # NB: There is no nightly build without lite interpreter atm
+            cp "${GITHUB_WORKSPACE}"/ios/LibTorch.podspec.template "${SETUP_DIR}"/LibTorch.podspec
+            sed -i '' -e "s/IOS_BUILD_VERSION/${PYTORCH_VERSION}/g" "${SETUP_DIR}"/LibTorch.podspec
+          fi
+
+          pushd "${SETUP_DIR}"
+          # NB: It's important to zip all the files before uploading because the GHA will upload
+          # all files sequentially which is both slow and has too many requests. More info is at
+          # https://github.com/actions/upload-artifact#too-many-uploads-resulting-in-429-responses
+          zip -r "${IOS_ARCH}.zip" install src version.txt LICENSE ./*.podspec
+          popd
+      - uses: actions/upload-artifact@v3
+        with:
+          name: pytorch-ios-build-artifacts-${{ matrix.ios_arch }}
+          if-no-files-found: error
+          path: ${{ runner.temp }}/${{ matrix.ios_arch }}/${{ matrix.ios_arch }}.zip
+
+  upload-ios-artifacts:
+    # NB: this job run on GitHub MacOS ephemeral runner so that it can use lipo
+    # to create the fat iOS binaries for both x86_64 and arm64
+    runs-on: macos-12
+    needs: build
+    # NB: Only upload release build, if we need it, we could also turn on nightly here
+    environment: ${{ (github.event_name == 'push' && (github.event.ref == 'refs/heads/nightly' || startsWith(github.event.ref, 'refs/tags/v'))) && 'ios-upload' || '' }}
+    steps:
+      - uses: actions/checkout@v3
+
+      # For awscli S3 upload
+      - uses: actions/setup-python@v4
+        with:
+          python-version: '3.10'
+          cache: pip
+
+      # For cocoapods pod upload
+      - uses: ruby/setup-ruby@v1
+        with:
+          ruby-version: '3.2'
+          bundler-cache: true
+
+      - name: Download arm64 artifacts
+        uses: actions/download-artifact@v3
+        with:
+          name: pytorch-ios-build-artifacts-arm64
+
+      - name: Download x86_64 artifacts
+        uses: actions/download-artifact@v3
+        with:
+          name: pytorch-ios-build-artifacts-x86_64
+
+      - name: Unzip arm64 and x86_64 artifacts
+        shell: bash
+        run: |
+          set -eux
+
+          for ARCH in "arm64" "x86_64"; do
+            TMP_DIR="${RUNNER_TEMP}/${ARCH}"
+            mkdir -p "${TMP_DIR}"
+
+            cp "${ARCH}.zip" "${TMP_DIR}"
+
+            pushd "${TMP_DIR}"
+            unzip -o "${ARCH}.zip"
+            popd
+          done
+
+      - name: Prepare the artifact
+        env:
+          IS_NIGHTLY: ${{ github.event.ref == 'refs/heads/nightly' }}
+        shell: bash
+        working-directory: ${{ runner.temp }}/arm64
+        run: |
+          set -eux
+
+          DEST_DIR="${RUNNER_TEMP}"/ios
+          echo "DEST_DIR=${DEST_DIR}" >> "$GITHUB_ENV"
+
+          # Prepare all the sub directories
+          mkdir -p "${DEST_DIR}"/install/lib
+
+          # Copy header and share files, arm64 or x86_64 both work
+          cp -R install/include "${DEST_DIR}"/install
+          cp -R install/share "${DEST_DIR}"/install
+          # The last dash is important to copy only files under src
+          cp -R src "${DEST_DIR}"
+          cp LICENSE "${DEST_DIR}"
+
+          if [ "${IS_NIGHTLY}" == true ]; then
+            PYTORCH_VERSION=$(cat version.txt)
+            DATE=$(date -u +%Y%m%d)
+            echo "${PYTORCH_VERSION}.${DATE}" > "${DEST_DIR}"/version.txt
+          else
+            cp version.txt "${DEST_DIR}"
+          fi
+          PYTORCH_VERSION=$(cat "${DEST_DIR}"/version.txt)
+          echo "PYTORCH_VERSION=${PYTORCH_VERSION}" >> "$GITHUB_ENV"
+
+          pushd install/lib
+          # shellcheck disable=SC2207
+          LIBRARIES=($(ls ./*.a))
+          popd
+
+          for LIB in "${LIBRARIES[@]}"; do
+            FROM_LIBS=("${RUNNER_TEMP}"/arm64/install/lib/"${LIB}" "${RUNNER_TEMP}"/x86_64/install/lib/"${LIB}")
+            # Create a fat binary for both arm64 and x86_64
+            lipo -create "${FROM_LIBS[@]}" -o "${DEST_DIR}"/install/lib/"${LIB}"
+            # Print the info
+            lipo -i "${DEST_DIR}"/install/lib/"${LIB}"
+          done
+
+          BUILD_LITE_INTERPRETER=1
+          if [ -f "${RUNNER_TEMP}"/arm64/LibTorch.podspec ]; then
+            # If LibTorch.podspec is used instead of LibTorch-Lite.podspec, the artifact is built
+            # without lite interpreter
+            BUILD_LITE_INTERPRETER=0
+          fi
+          echo "BUILD_LITE_INTERPRETER=${BUILD_LITE_INTERPRETER}" >> "$GITHUB_ENV"
+      - name: Prepare the podspec
+        env:
+          IS_NIGHTLY: ${{ github.event.ref == 'refs/heads/nightly' }}
+        shell: bash
+        working-directory: ${{ env.DEST_DIR }}
+        run: |
+          set -eux
+
+          ARTIFACT_NAME=libtorch
+          SPEC_NAME=LibTorch
+
+          if [ "${BUILD_LITE_INTERPRETER}" == "1" ]; then
+            ARTIFACT_NAME="${ARTIFACT_NAME}_lite_ios"
+            SPEC_NAME="${SPEC_NAME}-Lite"
+          else
+            ARTIFACT_NAME="${ARTIFACT_NAME}_ios"
+          fi
+
+          if [ "${IS_NIGHTLY}" == true ]; then
+            ARTIFACT_NAME="${ARTIFACT_NAME}_nightly_${PYTORCH_VERSION}.zip"
+            SPEC_NAME="${SPEC_NAME}-Nightly"
+          else
+            ARTIFACT_NAME="${ARTIFACT_NAME}_${PYTORCH_VERSION}.zip"
+          fi
+
+          SPEC_NAME_WITH_VERSION="${SPEC_NAME}-${PYTORCH_VERSION}.podspec"
+          SPEC_NAME="${SPEC_NAME}.podspec"
+
+          # Also copy the spec file
+          cp "${RUNNER_TEMP}"/arm64/"${SPEC_NAME}" "${SPEC_NAME_WITH_VERSION}"
+
+          # NB: It's important to zip all the files before uploading because the GHA will upload
+          # all files sequentially which is both slow and has too many requests. More info is at
+          # https://github.com/actions/upload-artifact#too-many-uploads-resulting-in-429-responses
+          zip -r "${ARTIFACT_NAME}" install src version.txt LICENSE
+
+          {
+            echo "ARTIFACT_NAME=${ARTIFACT_NAME}"
+            echo "SPEC_NAME_WITH_VERSION=${SPEC_NAME_WITH_VERSION}"
+            echo "SPEC_NAME=${SPEC_NAME}"
+          } >> "$GITHUB_ENV"
+
+      - uses: actions/upload-artifact@v3
+        with:
+          name: pytorch-ios-artifacts
+          if-no-files-found: error
+          path: ${{ env.DEST_DIR }}/${{ env.ARTIFACT_NAME }}
+
+      - uses: actions/upload-artifact@v3
+        with:
+          name: pytorch-ios-podspec
+          if-no-files-found: error
+          path: ${{ env.DEST_DIR }}/${{ env.SPEC_NAME_WITH_VERSION }}
+
+      - name: Set DRY_RUN
+        if: ${{ github.event_name == 'push' && (github.event.ref == 'refs/heads/nightly' || (startsWith(github.event.ref, 'refs/tags/v'))) }}
+        shell: bash
+        run: |
+          echo "DRY_RUN=disabled" >> "$GITHUB_ENV"
+
+      - name: Upload the artifact to S3
+        env:
+          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
+          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
+          IS_NIGHTLY: ${{ github.event.ref == 'refs/heads/nightly' }}
+        shell: bash
+        working-directory: ${{ env.DEST_DIR }}
+        run: |
+          set -eux
+
+          pip install -q awscli==1.29.40
+
+          DRY_RUN=${DRY_RUN:-enabled}
+          AWS_S3_CP="aws s3 cp --dryrun"
+          if [ "${DRY_RUN}" == "disabled" ]; then
+            AWS_S3_CP="aws s3 cp"
+          fi
+
+          if [ "${IS_NIGHTLY}" == true ]; then
+            BUCKET_NAME="ossci-ios-build"
+          else
+            BUCKET_NAME="ossci-ios"
+          fi
+
+          ${AWS_S3_CP} "${ARTIFACT_NAME}" "s3://${BUCKET_NAME}/" --acl public-read
+          ${AWS_S3_CP} "${SPEC_NAME_WITH_VERSION}" "s3://${BUCKET_NAME}/" --acl public-read
+
+      - name: Upload the artifact to cocoapods (nightly only)
+        env:
+          # We need to set this secret to upload to cocoapods. However, we might want
+          # to NOT set this for PROD release so that we can upload the artifacts manually
+          COCOAPODS_TRUNK_TOKEN: ${{ secrets.COCOAPODS_TRUNK_TOKEN || '' }}
+        if: ${{ github.event_name == 'push' && github.event.ref == 'refs/heads/nightly' && env.COCOAPODS_TRUNK_TOKEN != '' }}
+        shell: bash
+        working-directory: ${{ runner.temp }}/arm64
+        run: |
+          set -eux
+
+          gem install cocoapods
+
+          pod trunk me
+          # Upload the spec to cocoapods
+          pod trunk push --verbose --allow-warnings --use-libraries --skip-import-validation "${SPEC_NAME}"
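The "Upload the artifact to S3" step above guards real uploads behind a DRY_RUN variable that defaults to enabled. A minimal, self-contained sketch of that guard (variable names taken from the diff; the echo stands in for the actual `aws s3 cp` invocation):

```shell
# Sketch of the DRY_RUN guard from the S3 upload step: unless DRY_RUN is
# explicitly "disabled" (set only by the "Set DRY_RUN" step for nightly/tag
# pushes), the aws command keeps its --dryrun flag and nothing is uploaded.
DRY_RUN=${DRY_RUN:-enabled}
AWS_S3_CP="aws s3 cp --dryrun"
if [ "${DRY_RUN}" = "disabled" ]; then
  AWS_S3_CP="aws s3 cp"
fi
# Prints "aws s3 cp --dryrun" when DRY_RUN is unset or empty.
echo "${AWS_S3_CP}"
```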
10  .github/workflows/_linux-build.yml  vendored

@@ -73,7 +73,7 @@ jobs:
       test-matrix: ${{ steps.filter.outputs.test-matrix }}
     steps:
       - name: Setup SSH (Click me for login details)
-        uses: pytorch/test-infra/.github/actions/setup-ssh@main
+        uses: pytorch/test-infra/.github/actions/setup-ssh@release/2.1
         with:
           github-secret: ${{ secrets.GITHUB_TOKEN }}

@@ -82,19 +82,19 @@ jobs:
       # checkout because when we run this action we don't *have* a local
       # checkout. In other cases you should prefer a local checkout.
       - name: Checkout PyTorch
-        uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
+        uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.1

       - name: Setup Linux
         uses: ./.github/actions/setup-linux

       - name: Calculate docker image
         id: calculate-docker-image
-        uses: pytorch/test-infra/.github/actions/calculate-docker-image@main
+        uses: pytorch/test-infra/.github/actions/calculate-docker-image@release/2.1
         with:
           docker-image-name: ${{ inputs.docker-image-name }}

       - name: Pull docker image
-        uses: pytorch/test-infra/.github/actions/pull-docker-image@main
+        uses: pytorch/test-infra/.github/actions/pull-docker-image@release/2.1
         with:
           docker-image: ${{ steps.calculate-docker-image.outputs.docker-image }}

@@ -192,5 +192,5 @@ jobs:
           path: sccache-stats-*.json

       - name: Teardown Linux
-        uses: pytorch/test-infra/.github/actions/teardown-linux@main
+        uses: pytorch/test-infra/.github/actions/teardown-linux@release/2.1
         if: always()
12  .github/workflows/_linux-test.yml  vendored

@@ -57,7 +57,7 @@ jobs:
     timeout-minutes: ${{ inputs.timeout-minutes }}
     steps:
       - name: Setup SSH (Click me for login details)
-        uses: pytorch/test-infra/.github/actions/setup-ssh@main
+        uses: pytorch/test-infra/.github/actions/setup-ssh@release/2.1
         if: ${{ !contains(matrix.runner, 'gcp.a100') }}
         with:
           github-secret: ${{ secrets.GITHUB_TOKEN }}
@@ -66,25 +66,25 @@ jobs:
             docker exec -it $(docker container ps --format '{{.ID}}') bash

       - name: Checkout PyTorch
-        uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
+        uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.1

       - name: Setup Linux
         uses: ./.github/actions/setup-linux

       - name: Calculate docker image
         id: calculate-docker-image
-        uses: pytorch/test-infra/.github/actions/calculate-docker-image@main
+        uses: pytorch/test-infra/.github/actions/calculate-docker-image@release/2.1
         with:
           docker-image-name: ${{ inputs.docker-image }}

       - name: Pull docker image
-        uses: pytorch/test-infra/.github/actions/pull-docker-image@main
+        uses: pytorch/test-infra/.github/actions/pull-docker-image@release/2.1
         with:
           docker-image: ${{ steps.calculate-docker-image.outputs.docker-image }}

       - name: Install nvidia driver, nvidia-docker runtime, set GPU_FLAG
         id: install-nvidia-driver
-        uses: pytorch/test-infra/.github/actions/setup-nvidia@main
+        uses: pytorch/test-infra/.github/actions/setup-nvidia@release/2.1
         if: contains(inputs.build-environment, 'cuda') && !contains(matrix.config, 'nogpu')

       - name: Lock NVIDIA A100 40GB Frequency
@@ -292,7 +292,7 @@ jobs:
           path: ./**/core.[1-9]*

       - name: Teardown Linux
-        uses: pytorch/test-infra/.github/actions/teardown-linux@main
+        uses: pytorch/test-infra/.github/actions/teardown-linux@release/2.1
         if: always()

       # NB: We are currently having an intermittent GPU-related issue on G5 runners with
10  .github/workflows/_mac-build.yml  vendored

@@ -71,11 +71,11 @@ jobs:
       test-matrix: ${{ steps.filter.outputs.test-matrix }}
     steps:
       - name: Clean up disk space before running MacOS workflow
-        uses: pytorch/test-infra/.github/actions/check-disk-space@main
+        uses: pytorch/test-infra/.github/actions/check-disk-space@release/2.1

       # [see note: pytorch repo ref]
       - name: Checkout PyTorch
-        uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
+        uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.1

       - name: Set xcode version
         env:
@@ -87,7 +87,7 @@ jobs:

       - name: Setup miniconda
         if: inputs.environment-file == ''
-        uses: pytorch/test-infra/.github/actions/setup-miniconda@main
+        uses: pytorch/test-infra/.github/actions/setup-miniconda@release/2.1
         with:
           python-version: ${{ inputs.python-version }}
           environment-file: .github/requirements/conda-env-${{ runner.os }}-${{ runner.arch }}
@@ -97,7 +97,7 @@ jobs:
       # environment even though the arch is x86-64
       - name: Setup miniconda using the provided environment file
         if: inputs.environment-file != ''
-        uses: pytorch/test-infra/.github/actions/setup-miniconda@main
+        uses: pytorch/test-infra/.github/actions/setup-miniconda@release/2.1
         with:
           python-version: ${{ inputs.python-version }}
           environment-file: ${{ inputs.environment-file }}
@@ -206,4 +206,4 @@ jobs:
       - name: Clean up disk space
         if: always()
         continue-on-error: true
-        uses: pytorch/test-infra/.github/actions/check-disk-space@main
+        uses: pytorch/test-infra/.github/actions/check-disk-space@release/2.1
4  .github/workflows/_mac-test-mps.yml  vendored

@@ -41,7 +41,7 @@ jobs:
       reenabled-issues: ${{ steps.filter.outputs.reenabled-issues }}
     steps:
       - name: Checkout PyTorch
-        uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
+        uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.1
         with:
           fetch-depth: 1
           submodules: false
@@ -85,7 +85,7 @@ jobs:
           use-gha: true

       - name: Setup miniconda
-        uses: pytorch/test-infra/.github/actions/setup-miniconda@main
+        uses: pytorch/test-infra/.github/actions/setup-miniconda@release/2.1
         with:
           python-version: ${{ inputs.python-version }}
           environment-file: .github/requirements/conda-env-${{ runner.os }}-${{ runner.arch }}
8  .github/workflows/_mac-test.yml  vendored

@@ -69,11 +69,11 @@ jobs:
           done

       - name: Clean up disk space before running MacOS workflow
-        uses: pytorch/test-infra/.github/actions/check-disk-space@main
+        uses: pytorch/test-infra/.github/actions/check-disk-space@release/2.1

       # [see note: pytorch repo ref]
       - name: Checkout PyTorch
-        uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
+        uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.1

       - name: Download build artifacts
         uses: ./.github/actions/download-build-artifacts
@@ -82,7 +82,7 @@ jobs:
           use-gha: true

       - name: Setup miniconda
-        uses: pytorch/test-infra/.github/actions/setup-miniconda@main
+        uses: pytorch/test-infra/.github/actions/setup-miniconda@release/2.1
         with:
           python-version: ${{ inputs.python-version }}
           environment-file: .github/requirements/conda-env-${{ runner.os }}-${{ runner.arch }}
@@ -205,4 +205,4 @@ jobs:
       - name: Clean up disk space
         if: always()
         continue-on-error: true
-        uses: pytorch/test-infra/.github/actions/check-disk-space@main
+        uses: pytorch/test-infra/.github/actions/check-disk-space@release/2.1
6  .github/workflows/_rocm-test.yml  vendored

@@ -48,7 +48,7 @@ jobs:
     steps:
       # [see note: pytorch repo ref]
       - name: Checkout PyTorch
-        uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
+        uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.1
         with:
           no-sudo: true

@@ -57,12 +57,12 @@ jobs:

       - name: Calculate docker image
         id: calculate-docker-image
-        uses: pytorch/test-infra/.github/actions/calculate-docker-image@main
+        uses: pytorch/test-infra/.github/actions/calculate-docker-image@release/2.1
         with:
           docker-image-name: ${{ inputs.docker-image }}

       - name: Pull docker image
-        uses: pytorch/test-infra/.github/actions/pull-docker-image@main
+        uses: pytorch/test-infra/.github/actions/pull-docker-image@release/2.1
         with:
           docker-image: ${{ steps.calculate-docker-image.outputs.docker-image }}
6  .github/workflows/_run_android_tests.yml  vendored

@@ -22,7 +22,7 @@ jobs:
       keep-going: ${{ steps.filter.outputs.keep-going }}
     steps:
       - name: Checkout PyTorch
-        uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
+        uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.1
         with:
           fetch-depth: 1
           submodules: false
@@ -45,10 +45,10 @@ jobs:
     steps:
       # [see note: pytorch repo ref]
       - name: Checkout PyTorch
-        uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
+        uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.1

       - name: Setup miniconda
-        uses: pytorch/test-infra/.github/actions/setup-miniconda@main
+        uses: pytorch/test-infra/.github/actions/setup-miniconda@release/2.1
         with:
           python-version: 3.8
           environment-file: .github/requirements/conda-env-${{ runner.os }}-${{ runner.arch }}
6  .github/workflows/_win-build.yml  vendored

@@ -60,10 +60,10 @@ jobs:
           git config --global core.fsmonitor false

       - name: Clean up leftover processes on non-ephemeral Windows runner
-        uses: pytorch/test-infra/.github/actions/cleanup-runner@main
+        uses: pytorch/test-infra/.github/actions/cleanup-runner@release/2.1

       - name: Setup SSH (Click me for login details)
-        uses: pytorch/test-infra/.github/actions/setup-ssh@main
+        uses: pytorch/test-infra/.github/actions/setup-ssh@release/2.1
         with:
           github-secret: ${{ secrets.GITHUB_TOKEN }}
           instructions: |
@@ -78,7 +78,7 @@ jobs:

       # [see note: pytorch repo ref]
       - name: Checkout PyTorch
-        uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
+        uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.1
         with:
           no-sudo: true
6  .github/workflows/_win-test.yml  vendored

@@ -48,10 +48,10 @@ jobs:
           git config --global core.fsmonitor false

       - name: Clean up leftover processes on non-ephemeral Windows runner
-        uses: pytorch/test-infra/.github/actions/cleanup-runner@main
+        uses: pytorch/test-infra/.github/actions/cleanup-runner@release/2.1

       - name: Setup SSH (Click me for login details)
-        uses: pytorch/test-infra/.github/actions/setup-ssh@main
+        uses: pytorch/test-infra/.github/actions/setup-ssh@release/2.1
         with:
           github-secret: ${{ secrets.GITHUB_TOKEN }}
           instructions: |
@@ -67,7 +67,7 @@ jobs:

       # [see note: pytorch repo ref]
       - name: Checkout PyTorch
-        uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
+        uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.1
         with:
           no-sudo: true
70  .github/workflows/build-ios-binaries.yml  vendored  Normal file

@@ -0,0 +1,70 @@
+name: Build iOS binaries
+
+on:
+  push:
+    branches:
+      - nightly
+    tags:
+      # NOTE: Binary build pipelines should only get triggered on release candidate builds
+      # Release candidate tags look like: v1.11.0-rc1
+      - v[0-9]+.[0-9]+.[0-9]+-rc[0-9]+
+    paths:
+      - .github/workflows/build-ios-binaries.yml
+      - .github/workflows/_ios-build-test.yml
+  pull_request:
+    paths:
+      - .github/workflows/build-ios-binaries.yml
+      - .github/workflows/_ios-build-test.yml
+  # NB: We can use this workflow dispatch to test and build iOS binaries manually
+  workflow_dispatch:
+    inputs:
+      use_lite_interpreter:
+        description: "Use PyTorch lite interpreter?"
+        type: string
+        default: 1
+      use_coreml:
+        description: "Use Apple Core ML?"
+        type: string
+        default: 1
+      use_custom_op_list:
+        description: "Specify the custom ops list to include in the binaries"
+        type: string
+        default: ""
+
+concurrency:
+  group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.sha }}-${{ github.event_name == 'workflow_dispatch' }}
+  cancel-in-progress: true
+
+jobs:
+  # TODO: Figure out how to migrate this job to M1 runner
+  ios-build-test:
+    name: ios-build-test
+    uses: ./.github/workflows/_ios-build-test.yml
+    with:
+      build-environment: ios-build-test
+      sync-tag: ios-build-test
+      test-matrix: |
+        { include: [
+          { config: "default",
+            shard: 1,
+            num_shards: 1,
+            runner: "macos-12",
+            ios_platform: "SIMULATOR",
+            ios_arch: "x86_64",
+            use_lite_interpreter: ${{ inputs.use_lite_interpreter || 1 }},
+            use_metal: 0,
+            use_coreml: ${{ inputs.use_coreml || 1 }},
+            use_custom_op_list: ${{ inputs.use_custom_op_list || '' }}
+          },
+          { config: "default",
+            shard: 1,
+            num_shards: 1,
+            runner: "macos-12",
+            ios_platform: "OS",
+            ios_arch: "arm64",
+            use_lite_interpreter: ${{ inputs.use_lite_interpreter || 1 }},
+            use_metal: 1,
+            use_coreml: ${{ inputs.use_coreml || 1 }},
+            use_custom_op_list: ${{ inputs.use_custom_op_list || '' }}
+          }
+        ]}
192
.github/workflows/build-triton-wheel.yml
vendored
@@ -3,7 +3,11 @@ name: Build Triton wheels
on:
  push:
    branches:
      - main
      - release/2.1
    tags:
      # NOTE: Binary build pipelines should only get triggered on release candidate builds
      # Release candidate tags look like: v1.11.0-rc1
      - v[0-9]+.[0-9]+.[0-9]+-rc[0-9]+
    paths:
      - .github/workflows/build-triton-wheel.yml
      - .github/scripts/build_triton_wheel.py
@@ -43,12 +47,12 @@ jobs:
      BUILD_DEVICE: ${{ matrix.device }}
    steps:
      - name: Setup SSH (Click me for login details)
        uses: pytorch/test-infra/.github/actions/setup-ssh@main
        uses: pytorch/test-infra/.github/actions/setup-ssh@release/2.1
        with:
          github-secret: ${{ secrets.GITHUB_TOKEN }}

      - name: Checkout PyTorch
        uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
        uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.1
        with:
          submodules: false

@@ -56,11 +60,13 @@ jobs:
        uses: ./.github/actions/setup-linux

      - name: Pull Docker image
        uses: pytorch/test-infra/.github/actions/pull-docker-image@main
        uses: pytorch/test-infra/.github/actions/pull-docker-image@release/2.1
        with:
          docker-image: ${{ env.DOCKER_IMAGE }}

      - name: Build Triton wheel
        env:
          IS_RELEASE_TAG: ${{ startsWith(github.event.ref, 'refs/tags/v') }}
        run: |
          set -x
          mkdir -p "${RUNNER_TEMP}/artifacts/"
@@ -98,64 +104,75 @@ jobs:
            BUILD_ROCM="--build-rocm"
          fi

          RELEASE=""
          if [[ "${IS_RELEASE_TAG}" == true ]]; then
            RELEASE="--release"
          fi

          docker exec -t "${container_name}" yum install -y zlib-devel zip
          docker exec -t "${container_name}" "${PYTHON_EXECUTABLE}" -m pip install -U setuptools==67.4.0
          docker exec -t "${container_name}" "${PYTHON_EXECUTABLE}" /pytorch/.github/scripts/build_triton_wheel.py $BUILD_ROCM
          docker exec -t "${container_name}" "${PYTHON_EXECUTABLE}" /pytorch/.github/scripts/build_triton_wheel.py $BUILD_ROCM $RELEASE
          docker exec -t "${container_name}" chown -R 1000.1000 /artifacts

      - uses: actions/upload-artifact@v3
        with:
          name: "pytorch-triton-wheel-${{ matrix.py_vers }}"
          # NB: Use the same name here and all wheels can be downloaded by referring to the same artifact
          name: pytorch-triton-wheel
          if-no-files-found: error
          path:
            ${{ runner.temp }}/artifacts/*
          path: ${{ runner.temp }}/artifacts/*

      - name: Teardown Linux
        uses: pytorch/test-infra/.github/actions/teardown-linux@main
        uses: pytorch/test-infra/.github/actions/teardown-linux@release/2.1
        if: always()

  upload-wheel:
    runs-on: linux.20_04.4x
    runs-on: ubuntu-22.04
    needs: build-wheel
    container:
      image: continuumio/miniconda3:4.12.0
    env:
      GITHUB_TOKEN: ${{ secrets.github-token }}
    environment: ${{ (github.event_name == 'push' && (github.event.ref == 'refs/heads/nightly' || startsWith(github.event.ref, 'refs/tags/v'))) && 'conda-aws-upload' || '' }}
    steps:
      - name: Download Build Artifacts (3.8)
      - uses: actions/checkout@v3

      - name: Download Build Artifacts
        uses: actions/download-artifact@v3
        with:
          name: "pytorch-triton-wheel-3.8"
          path: "${{ runner.temp }}/artifacts/"
      - name: Download Build Artifacts (3.9)
        uses: actions/download-artifact@v3
        with:
          name: "pytorch-triton-wheel-3.9"
          path: "${{ runner.temp }}/artifacts/"
      - name: Download Build Artifacts (3.10)
        uses: actions/download-artifact@v3
        with:
          name: "pytorch-triton-wheel-3.10"
          path: "${{ runner.temp }}/artifacts/"
      - name: Download Build Artifacts (3.11)
        uses: actions/download-artifact@v3
        with:
          name: "pytorch-triton-wheel-3.11"
          path: "${{ runner.temp }}/artifacts/"
      - name: Upload binaries
        if: ${{ github.event_name == 'push' && github.event.ref == 'refs/heads/main' }}
        env:
          PKG_DIR: "${{ runner.temp }}/artifacts"
          # When running these on pull_request events these should be blank
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_S3_UPDATE_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_S3_UPDATE_SECRET_ACCESS_KEY }}
          UPLOAD_BUCKET: "s3://pytorch"
          name: pytorch-triton-wheel
          path: ${{ runner.temp }}/artifacts/

      - name: Set DRY_RUN (only for tagged pushes)
        if: ${{ github.event_name == 'push' && (github.event.ref == 'refs/heads/nightly' || (startsWith(github.event.ref, 'refs/tags/v'))) }}
        shell: bash
        run: |
          set -ex
          pip install -q awscli
          s3_dir="${UPLOAD_BUCKET}/whl/nightly/"
          for pkg in "${PKG_DIR}/"*.whl; do
            aws s3 cp --no-progress --acl public-read "${pkg}" "${s3_dir}"
          done
          echo "DRY_RUN=disabled" >> "$GITHUB_ENV"

      - name: Set UPLOAD_CHANNEL (only for tagged pushes)
        if: ${{ github.event_name == 'push' && startsWith(github.event.ref, 'refs/tags/v') }}
        shell: bash
        run: |
          set -ex

          # reference ends with an RC suffix
          if [[ "${GITHUB_REF_NAME}" = *-rc[0-9]* ]]; then
            echo "UPLOAD_CHANNEL=test" >> "$GITHUB_ENV"
          fi

      # NB: This step is gated by DRY_RUN, which is enabled everywhere except nightly and release branches
      - name: Upload binaries
        env:
          PACKAGE_TYPE: wheel
          # The UPLOAD_SUBFOLDER needs to be empty here so that triton wheels are uploaded
          # to nightly or test
          UPLOAD_SUBFOLDER: ""
          PKG_DIR: ${{ runner.temp }}/artifacts
          # When running these on pull_request events these should be blank
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
        shell: bash
        run: |
          set -ex
          bash .circleci/scripts/binary_upload.sh

  build-conda:
    name: "Build Triton Conda"
    runs-on: [self-hosted, linux.2xlarge]
@@ -164,19 +181,17 @@ jobs:
      matrix:
        py_vers: [ "3.8", "3.9", "3.10", "3.11" ]
    timeout-minutes: 40
    environment: ${{ (github.event_name == 'push' && github.event.ref == 'refs/heads/main') && 'conda-aws-upload' || '' }}
    env:
      DOCKER_IMAGE: pytorch/conda-builder:cpu
      PY_VERS: ${{ matrix.py_vers }}
      ANACONDA_API_TOKEN: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
    steps:
      - name: Setup SSH (Click me for login details)
        uses: pytorch/test-infra/.github/actions/setup-ssh@main
        uses: pytorch/test-infra/.github/actions/setup-ssh@release/2.1
        with:
          github-secret: ${{ secrets.GITHUB_TOKEN }}

      - name: Checkout PyTorch
        uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
        uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.1
        with:
          submodules: false

@@ -184,11 +199,13 @@ jobs:
        uses: ./.github/actions/setup-linux

      - name: Pull Docker image
        uses: pytorch/test-infra/.github/actions/pull-docker-image@main
        uses: pytorch/test-infra/.github/actions/pull-docker-image@release/2.1
        with:
          docker-image: ${{ env.DOCKER_IMAGE }}

      - name: Build Triton conda package
        env:
          IS_RELEASE_TAG: ${{ startsWith(github.event.ref, 'refs/tags/v') }}
        run: |
          set -x
          mkdir -p "${RUNNER_TEMP}/artifacts/"
@@ -198,31 +215,76 @@ jobs:
            -v "${GITHUB_WORKSPACE}:/pytorch" \
            -v "${RUNNER_TEMP}/artifacts:/artifacts" \
            -w /artifacts/ \
            -e ANACONDA_API_TOKEN \
            "${DOCKER_IMAGE}" \
          )

          RELEASE=""
          if [[ "${IS_RELEASE_TAG}" == true ]]; then
            RELEASE="--release"
          fi

          docker exec -t "${container_name}" yum install -y llvm11 llvm11-devel llvm11-static llvm11-libs zlib-devel
          docker exec -t "${container_name}" python /pytorch/.github/scripts/build_triton_wheel.py --build-conda --py-version="${PY_VERS}"

      - name: Upload artifacts to Anaconda
        if: ${{ github.event_name == 'push' && github.event.ref == 'refs/heads/main' }}
        run: |
          container_name=$(docker container ps --format '{{.ID}}')
          docker exec -t "${container_name}" sh -c "anaconda upload /artifacts/torch*.tar.bz2 -u pytorch-nightly --label main --no-progress --force"

      - name: Chown artifacts
        run: |
          container_name=$(docker container ps --format '{{.ID}}')
          docker exec -t "${container_name}" python /pytorch/.github/scripts/build_triton_wheel.py --build-conda --py-version="${PY_VERS}" $RELEASE
          docker exec -t "${container_name}" chown -R 1000.1000 /artifacts

      - uses: actions/upload-artifact@v3
        with:
          name: "pytorch-triton-conda-${{ matrix.py_vers }}"
          # NB: Use the same name here and all wheels can be downloaded by referring to the same artifact
          name: pytorch-triton-conda
          if-no-files-found: error
          path:
            ${{ runner.temp }}/artifacts/*
          path: ${{ runner.temp }}/artifacts/*

      - name: Teardown Linux
        uses: pytorch/test-infra/.github/actions/teardown-linux@main
        uses: pytorch/test-infra/.github/actions/teardown-linux@release/2.1
        if: always()

  upload-conda:
    runs-on: ubuntu-22.04
    needs: build-conda
    container:
      image: continuumio/miniconda3:4.12.0
    environment: ${{ (github.event_name == 'push' && (github.event.ref == 'refs/heads/nightly' || startsWith(github.event.ref, 'refs/tags/v'))) && 'conda-aws-upload' || '' }}
    steps:
      - uses: actions/checkout@v3

      - name: Download Build Artifacts
        uses: actions/download-artifact@v3
        with:
          name: pytorch-triton-conda
          path: ${{ runner.temp }}/artifacts/

      - name: Set DRY_RUN (only for tagged pushes)
        if: ${{ github.event_name == 'push' && (github.event.ref == 'refs/heads/nightly' || (startsWith(github.event.ref, 'refs/tags/v'))) }}
        shell: bash
        run: |
          echo "DRY_RUN=disabled" >> "$GITHUB_ENV"

      - name: Set UPLOAD_CHANNEL (only for tagged pushes)
        if: ${{ github.event_name == 'push' && startsWith(github.event.ref, 'refs/tags/v') }}
        shell: bash
        run: |
          set -ex

          # reference ends with an RC suffix
          if [[ "${GITHUB_REF_NAME}" = *-rc[0-9]* ]]; then
            echo "UPLOAD_CHANNEL=test" >> "$GITHUB_ENV"
          fi

      # NB: This step is gated by DRY_RUN, which is enabled everywhere except nightly and release branches
      - name: Upload binaries to Anaconda
        env:
          PACKAGE_TYPE: conda
          PKG_DIR: ${{ runner.temp }}/artifacts
          # When running these on pull_request events these should be blank
          CONDA_PYTORCHBOT_TOKEN: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
          CONDA_PYTORCHBOT_TOKEN_TEST: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
        shell: bash
        run: |
          set -ex

          if [[ "${UPLOAD_CHANNEL}" = "nightly" ]]; then
            export ANACONDA_API_TOKEN="${CONDA_PYTORCHBOT_TOKEN}"
          else
            export ANACONDA_API_TOKEN="${CONDA_PYTORCHBOT_TOKEN_TEST}"
          fi
          bash .circleci/scripts/binary_upload.sh
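The "Set UPLOAD_CHANNEL" step added above routes release-candidate tags to the test channel. A minimal bash sketch of that check, pulled out into a function for illustration (the function name `channel_for_ref` is mine; the workflow itself appends `UPLOAD_CHANNEL=test` to `$GITHUB_ENV` rather than echoing):

```shell
#!/usr/bin/env bash
# Sketch of the RC-tag check used by the "Set UPLOAD_CHANNEL" step.
channel_for_ref() {
  local ref_name="$1"
  # reference ends with an RC suffix, e.g. v2.1.0-rc3
  if [[ "${ref_name}" = *-rc[0-9]* ]]; then
    echo "test"
  else
    # default channel when the tag is not a release candidate
    echo "nightly"
  fi
}

channel_for_ref "v2.1.0-rc1"   # prints: test
channel_for_ref "v2.1.0"       # prints: nightly
```

This is why final release tags (no `-rcN` suffix) fall through to the default channel while every RC build lands in test.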
2
.github/workflows/check-labels.yml
vendored
@@ -29,7 +29,7 @@ jobs:
    runs-on: linux.20_04.4x
    steps:
      - name: Checkout PyTorch
        uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
        uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.1
        with:
          submodules: false
          fetch-depth: 1
@@ -10,7 +10,7 @@ jobs:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout PyTorch
        uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
        uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.1

      - name: Run close_nonexistent_disable_issues.py
        env:
8
.github/workflows/docker-builds.yml
vendored
@@ -61,21 +61,21 @@ jobs:
      # [see note: pytorch repo ref]
      # deep clone (fetch-depth 0) required for git merge-base
      - name: Checkout PyTorch
        uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
        uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.1

      - name: Setup Linux
        uses: ./.github/actions/setup-linux

      - name: Build docker image
        id: build-docker-image
        uses: pytorch/test-infra/.github/actions/calculate-docker-image@main
        uses: pytorch/test-infra/.github/actions/calculate-docker-image@release/2.1
        with:
          docker-image-name: ${{ matrix.docker-image-name }}
          always-rebuild: true
          push: true

      - name: Pull docker image
        uses: pytorch/test-infra/.github/actions/pull-docker-image@main
        uses: pytorch/test-infra/.github/actions/pull-docker-image@release/2.1
        with:
          docker-image: ${{ steps.build-docker-image.outputs.docker-image }}

@@ -105,5 +105,5 @@ jobs:
        if: always()

      - name: Teardown Linux
        uses: pytorch/test-infra/.github/actions/teardown-linux@main
        uses: pytorch/test-infra/.github/actions/teardown-linux@release/2.1
        if: always()
10
.github/workflows/docker-release.yml
vendored
@@ -47,7 +47,7 @@ jobs:
      BUILD_PLATFORMS: ${{ matrix.platform }}
    steps:
      - name: Setup SSH (Click me for login details)
        uses: pytorch/test-infra/.github/actions/setup-ssh@main
        uses: pytorch/test-infra/.github/actions/setup-ssh@release/2.1
        with:
          github-secret: ${{ secrets.GITHUB_TOKEN }}
      # [see note: pytorch repo ref]
@@ -82,12 +82,10 @@ jobs:
          echo "${RUNNER_TEMP}/bin" >> "${GITHUB_PATH}"
          # Generate PyTorch version to use
          echo "PYTORCH_VERSION=$(python3 .github/scripts/generate_pytorch_version.py)" >> "${GITHUB_ENV}"
      - name: Setup nightly specific variables
        if: ${{ github.event.ref == 'refs/heads/nightly' || startsWith(github.event.ref, 'refs/tags/ciflow/nightly/') }}
      - name: Setup release specific variables
        run: |
          {
            echo "DOCKER_IMAGE=pytorch-nightly";
            echo "INSTALL_CHANNEL=pytorch-nightly";
            echo "INSTALL_CHANNEL=pytorch-test";
            echo "TRITON_VERSION=$(cut -f 1 .ci/docker/triton_version.txt)+$(cut -c -10 .ci/docker/ci_commit_pins/triton.txt)";
          } >> "${GITHUB_ENV}"
      - name: Run docker build / push
@@ -109,5 +107,5 @@ jobs:
            ghcr.io/pytorch/pytorch-nightly:latest
          docker push ghcr.io/pytorch/pytorch-nightly:latest
      - name: Teardown Linux
        uses: pytorch/test-infra/.github/actions/teardown-linux@main
        uses: pytorch/test-infra/.github/actions/teardown-linux@release/2.1
        if: always()
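The "Setup release specific variables" step above derives `TRITON_VERSION` from two pinned files. A standalone sketch of that expression; the file contents written below (`2.1.0` and the commit hash) are stand-in values for illustration, not read from the repo:

```shell
#!/usr/bin/env bash
# Reproduce the TRITON_VERSION expression from docker-release.yml:
#   $(cut -f 1 .ci/docker/triton_version.txt)+$(cut -c -10 .ci/docker/ci_commit_pins/triton.txt)
# i.e. "<triton version>+<first 10 chars of the pinned triton commit>".
set -e
workdir="$(mktemp -d)"
mkdir -p "${workdir}/.ci/docker/ci_commit_pins"
# Stand-in pin contents; the real values live in the repository.
printf '2.1.0\n' > "${workdir}/.ci/docker/triton_version.txt"
printf '34f8189eae57a23cc15b4b4f032fe25757e0db8e\n' > "${workdir}/.ci/docker/ci_commit_pins/triton.txt"

cd "${workdir}"
TRITON_VERSION="$(cut -f 1 .ci/docker/triton_version.txt)+$(cut -c -10 .ci/docker/ci_commit_pins/triton.txt)"
echo "${TRITON_VERSION}"   # prints: 2.1.0+34f8189eae
```

Truncating the pin to ten characters keeps the docker tag short while still uniquely identifying the pinned Triton commit.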
8
.github/workflows/generated-linux-aarch64-binary-manywheel-nightly.yml
generated
vendored
@@ -93,7 +93,7 @@ jobs:
      github-token: ${{ secrets.GITHUB_TOKEN }}
      aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
      aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
      conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
      conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
    uses: ./.github/workflows/_binary-upload.yml

  manywheel-py3_9-cpu-aarch64-build:
@@ -153,7 +153,7 @@ jobs:
      github-token: ${{ secrets.GITHUB_TOKEN }}
      aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
      aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
      conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
      conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
    uses: ./.github/workflows/_binary-upload.yml

  manywheel-py3_10-cpu-aarch64-build:
@@ -213,7 +213,7 @@ jobs:
      github-token: ${{ secrets.GITHUB_TOKEN }}
      aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
      aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
      conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
      conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
    uses: ./.github/workflows/_binary-upload.yml

  manywheel-py3_11-cpu-aarch64-build:
@@ -273,5 +273,5 @@ jobs:
      github-token: ${{ secrets.GITHUB_TOKEN }}
      aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
      aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
      conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
      conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
    uses: ./.github/workflows/_binary-upload.yml
24
.github/workflows/generated-linux-binary-conda-nightly.yml
generated
vendored
@@ -90,7 +90,7 @@ jobs:
      github-token: ${{ secrets.GITHUB_TOKEN }}
      aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
      aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
      conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
      conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
    uses: ./.github/workflows/_binary-upload.yml

  conda-py3_8-cuda11_8-build:
@@ -150,7 +150,7 @@ jobs:
      github-token: ${{ secrets.GITHUB_TOKEN }}
      aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
      aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
      conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
      conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
    uses: ./.github/workflows/_binary-upload.yml

  conda-py3_8-cuda12_1-build:
@@ -210,7 +210,7 @@ jobs:
      github-token: ${{ secrets.GITHUB_TOKEN }}
      aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
      aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
      conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
      conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
    uses: ./.github/workflows/_binary-upload.yml

  conda-py3_9-cpu-build:
@@ -267,7 +267,7 @@ jobs:
      github-token: ${{ secrets.GITHUB_TOKEN }}
      aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
      aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
      conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
      conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
    uses: ./.github/workflows/_binary-upload.yml

  conda-py3_9-cuda11_8-build:
@@ -327,7 +327,7 @@ jobs:
      github-token: ${{ secrets.GITHUB_TOKEN }}
      aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
      aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
      conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
      conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
    uses: ./.github/workflows/_binary-upload.yml

  conda-py3_9-cuda12_1-build:
@@ -387,7 +387,7 @@ jobs:
      github-token: ${{ secrets.GITHUB_TOKEN }}
      aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
      aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
      conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
      conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
    uses: ./.github/workflows/_binary-upload.yml

  conda-py3_10-cpu-build:
@@ -444,7 +444,7 @@ jobs:
      github-token: ${{ secrets.GITHUB_TOKEN }}
      aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
      aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
      conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
      conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
    uses: ./.github/workflows/_binary-upload.yml

  conda-py3_10-cuda11_8-build:
@@ -504,7 +504,7 @@ jobs:
      github-token: ${{ secrets.GITHUB_TOKEN }}
      aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
      aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
      conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
      conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
    uses: ./.github/workflows/_binary-upload.yml

  conda-py3_10-cuda12_1-build:
@@ -564,7 +564,7 @@ jobs:
      github-token: ${{ secrets.GITHUB_TOKEN }}
      aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
      aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
      conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
      conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
    uses: ./.github/workflows/_binary-upload.yml

  conda-py3_11-cpu-build:
@@ -621,7 +621,7 @@ jobs:
      github-token: ${{ secrets.GITHUB_TOKEN }}
      aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
      aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
      conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
      conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
    uses: ./.github/workflows/_binary-upload.yml

  conda-py3_11-cuda11_8-build:
@@ -681,7 +681,7 @@ jobs:
      github-token: ${{ secrets.GITHUB_TOKEN }}
      aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
      aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
      conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
      conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
    uses: ./.github/workflows/_binary-upload.yml

  conda-py3_11-cuda12_1-build:
@@ -741,5 +741,5 @@ jobs:
      github-token: ${{ secrets.GITHUB_TOKEN }}
      aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
      aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
      conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
      conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
    uses: ./.github/workflows/_binary-upload.yml
52
.github/workflows/generated-linux-binary-libtorch-cxx11-abi-nightly.yml
generated
vendored
@@ -93,7 +93,7 @@ jobs:
      github-token: ${{ secrets.GITHUB_TOKEN }}
      aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
      aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
      conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
      conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
    uses: ./.github/workflows/_binary-upload.yml

  libtorch-cpu-shared-without-deps-cxx11-abi-build:
@@ -153,7 +153,7 @@ jobs:
      github-token: ${{ secrets.GITHUB_TOKEN }}
      aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
      aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
      conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
      conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
    uses: ./.github/workflows/_binary-upload.yml

  libtorch-cpu-static-with-deps-cxx11-abi-build:
@@ -213,7 +213,7 @@ jobs:
      github-token: ${{ secrets.GITHUB_TOKEN }}
      aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
      aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
      conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
      conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
    uses: ./.github/workflows/_binary-upload.yml

  libtorch-cpu-static-without-deps-cxx11-abi-build:
@@ -273,7 +273,7 @@ jobs:
      github-token: ${{ secrets.GITHUB_TOKEN }}
      aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
      aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
      conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
      conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
    uses: ./.github/workflows/_binary-upload.yml

  libtorch-cuda11_8-shared-with-deps-cxx11-abi-build:
@@ -336,7 +336,7 @@ jobs:
      github-token: ${{ secrets.GITHUB_TOKEN }}
      aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
      aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
      conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
      conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
    uses: ./.github/workflows/_binary-upload.yml

  libtorch-cuda11_8-shared-without-deps-cxx11-abi-build:
@@ -399,7 +399,7 @@ jobs:
      github-token: ${{ secrets.GITHUB_TOKEN }}
      aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
      aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
      conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
      conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
    uses: ./.github/workflows/_binary-upload.yml

  libtorch-cuda11_8-static-with-deps-cxx11-abi-build:
@@ -462,7 +462,7 @@ jobs:
      github-token: ${{ secrets.GITHUB_TOKEN }}
      aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
      aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
      conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
      conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
    uses: ./.github/workflows/_binary-upload.yml

  libtorch-cuda11_8-static-without-deps-cxx11-abi-build:
@@ -525,7 +525,7 @@ jobs:
      github-token: ${{ secrets.GITHUB_TOKEN }}
      aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
      aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
      conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
      conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
    uses: ./.github/workflows/_binary-upload.yml

  libtorch-cuda12_1-shared-with-deps-cxx11-abi-build:
@@ -588,7 +588,7 @@ jobs:
      github-token: ${{ secrets.GITHUB_TOKEN }}
      aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
      aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
      conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
      conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
    uses: ./.github/workflows/_binary-upload.yml

  libtorch-cuda12_1-shared-without-deps-cxx11-abi-build:
@@ -651,7 +651,7 @@ jobs:
      github-token: ${{ secrets.GITHUB_TOKEN }}
      aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
      aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
      conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
      conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
    uses: ./.github/workflows/_binary-upload.yml

  libtorch-cuda12_1-static-with-deps-cxx11-abi-build:
@@ -714,7 +714,7 @@ jobs:
      github-token: ${{ secrets.GITHUB_TOKEN }}
      aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
      aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
      conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
      conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
    uses: ./.github/workflows/_binary-upload.yml

  libtorch-cuda12_1-static-without-deps-cxx11-abi-build:
@@ -777,7 +777,7 @@ jobs:
      github-token: ${{ secrets.GITHUB_TOKEN }}
      aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
      aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
      conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
      conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
    uses: ./.github/workflows/_binary-upload.yml

  libtorch-rocm5_5-shared-with-deps-cxx11-abi-build:
@@ -828,7 +828,6 @@ jobs:
      - name: Checkout PyTorch
        uses: malfet/checkout@silent-checkout
        with:
          ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
          submodules: recursive
          path: pytorch
          quiet-checkout: true
@@ -840,7 +839,7 @@ jobs:
      - name: Checkout pytorch/builder
        uses: malfet/checkout@silent-checkout
        with:
          ref: main
          ref: release/2.1
          submodules: recursive
          repository: pytorch/builder
          path: builder
@@ -854,7 +853,7 @@ jobs:
        run: |
          echo "GPU_FLAG=--device=/dev/mem --device=/dev/kfd --device=/dev/dri --group-add video --group-add daemon" >> "${GITHUB_ENV}"
      - name: Pull Docker image
        uses: pytorch/test-infra/.github/actions/pull-docker-image@main
        uses: pytorch/test-infra/.github/actions/pull-docker-image@release/2.1
        with:
          docker-image: pytorch/libtorch-cxx11-builder:rocm5.5
      - name: Test Pytorch binary
@@ -881,7 +880,7 @@ jobs:
      github-token: ${{ secrets.GITHUB_TOKEN }}
      aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
      aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
      conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
      conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
    uses: ./.github/workflows/_binary-upload.yml

  libtorch-rocm5_5-static-with-deps-cxx11-abi-build:
@@ -932,7 +931,6 @@ jobs:
      - name: Checkout PyTorch
        uses: malfet/checkout@silent-checkout
        with:
          ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
          submodules: recursive
          path: pytorch
          quiet-checkout: true
@@ -944,7 +942,7 @@ jobs:
      - name: Checkout pytorch/builder
        uses: malfet/checkout@silent-checkout
        with:
          ref: main
          ref: release/2.1
          submodules: recursive
          repository: pytorch/builder
          path: builder
@@ -958,7 +956,7 @@ jobs:
        run: |
          echo "GPU_FLAG=--device=/dev/mem --device=/dev/kfd --device=/dev/dri --group-add video --group-add daemon" >> "${GITHUB_ENV}"
      - name: Pull Docker image
        uses: pytorch/test-infra/.github/actions/pull-docker-image@main
        uses: pytorch/test-infra/.github/actions/pull-docker-image@release/2.1
        with:
          docker-image: pytorch/libtorch-cxx11-builder:rocm5.5
      - name: Test Pytorch binary
@@ -985,7 +983,7 @@ jobs:
      github-token: ${{ secrets.GITHUB_TOKEN }}
      aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
      aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
      conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
      conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
    uses: ./.github/workflows/_binary-upload.yml

  libtorch-rocm5_6-shared-with-deps-cxx11-abi-build:
@@ -1036,7 +1034,6 @@ jobs:
      - name: Checkout PyTorch
        uses: malfet/checkout@silent-checkout
        with:
          ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
          submodules: recursive
          path: pytorch
          quiet-checkout: true
@@ -1048,7 +1045,7 @@ jobs:
      - name: Checkout pytorch/builder
        uses: malfet/checkout@silent-checkout
        with:
          ref: main
ref: release/2.1
|
||||
submodules: recursive
|
||||
repository: pytorch/builder
|
||||
path: builder
|
||||
@ -1062,7 +1059,7 @@ jobs:
|
||||
run: |
|
||||
echo "GPU_FLAG=--device=/dev/mem --device=/dev/kfd --device=/dev/dri --group-add video --group-add daemon" >> "${GITHUB_ENV}"
|
||||
- name: Pull Docker image
|
||||
uses: pytorch/test-infra/.github/actions/pull-docker-image@main
|
||||
uses: pytorch/test-infra/.github/actions/pull-docker-image@release/2.1
|
||||
with:
|
||||
docker-image: pytorch/libtorch-cxx11-builder:rocm5.6
|
||||
- name: Test Pytorch binary
|
||||
@ -1089,7 +1086,7 @@ jobs:
|
||||
github-token: ${{ secrets.GITHUB_TOKEN }}
|
||||
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
|
||||
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
|
||||
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
|
||||
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
|
||||
uses: ./.github/workflows/_binary-upload.yml
|
||||
|
||||
libtorch-rocm5_6-static-with-deps-cxx11-abi-build:
|
||||
@ -1140,7 +1137,6 @@ jobs:
|
||||
- name: Checkout PyTorch
|
||||
uses: malfet/checkout@silent-checkout
|
||||
with:
|
||||
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
|
||||
submodules: recursive
|
||||
path: pytorch
|
||||
quiet-checkout: true
|
||||
@ -1152,7 +1148,7 @@ jobs:
|
||||
- name: Checkout pytorch/builder
|
||||
uses: malfet/checkout@silent-checkout
|
||||
with:
|
||||
ref: main
|
||||
ref: release/2.1
|
||||
submodules: recursive
|
||||
repository: pytorch/builder
|
||||
path: builder
|
||||
@ -1166,7 +1162,7 @@ jobs:
|
||||
run: |
|
||||
echo "GPU_FLAG=--device=/dev/mem --device=/dev/kfd --device=/dev/dri --group-add video --group-add daemon" >> "${GITHUB_ENV}"
|
||||
- name: Pull Docker image
|
||||
uses: pytorch/test-infra/.github/actions/pull-docker-image@main
|
||||
uses: pytorch/test-infra/.github/actions/pull-docker-image@release/2.1
|
||||
with:
|
||||
docker-image: pytorch/libtorch-cxx11-builder:rocm5.6
|
||||
- name: Test Pytorch binary
|
||||
@ -1193,5 +1189,5 @@ jobs:
|
||||
github-token: ${{ secrets.GITHUB_TOKEN }}
|
||||
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
|
||||
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
|
||||
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
|
||||
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
|
||||
uses: ./.github/workflows/_binary-upload.yml
|
||||
|
.github/workflows/generated-linux-binary-libtorch-pre-cxx11-nightly.yml (52 changes; generated, vendored)
@@ -93,7 +93,7 @@ jobs:
       github-token: ${{ secrets.GITHUB_TOKEN }}
       aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
       aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
-      conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
+      conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
     uses: ./.github/workflows/_binary-upload.yml

   libtorch-cpu-shared-without-deps-pre-cxx11-build:
@@ -153,7 +153,7 @@ jobs:
       github-token: ${{ secrets.GITHUB_TOKEN }}
       aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
       aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
-      conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
+      conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
     uses: ./.github/workflows/_binary-upload.yml

   libtorch-cpu-static-with-deps-pre-cxx11-build:
@@ -213,7 +213,7 @@ jobs:
       github-token: ${{ secrets.GITHUB_TOKEN }}
       aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
       aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
-      conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
+      conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
     uses: ./.github/workflows/_binary-upload.yml

   libtorch-cpu-static-without-deps-pre-cxx11-build:
@@ -273,7 +273,7 @@ jobs:
       github-token: ${{ secrets.GITHUB_TOKEN }}
       aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
       aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
-      conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
+      conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
     uses: ./.github/workflows/_binary-upload.yml

   libtorch-cuda11_8-shared-with-deps-pre-cxx11-build:
@@ -336,7 +336,7 @@ jobs:
       github-token: ${{ secrets.GITHUB_TOKEN }}
       aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
       aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
-      conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
+      conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
     uses: ./.github/workflows/_binary-upload.yml

   libtorch-cuda11_8-shared-without-deps-pre-cxx11-build:
@@ -399,7 +399,7 @@ jobs:
       github-token: ${{ secrets.GITHUB_TOKEN }}
       aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
       aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
-      conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
+      conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
     uses: ./.github/workflows/_binary-upload.yml

   libtorch-cuda11_8-static-with-deps-pre-cxx11-build:
@@ -462,7 +462,7 @@ jobs:
       github-token: ${{ secrets.GITHUB_TOKEN }}
       aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
       aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
-      conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
+      conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
     uses: ./.github/workflows/_binary-upload.yml

   libtorch-cuda11_8-static-without-deps-pre-cxx11-build:
@@ -525,7 +525,7 @@ jobs:
       github-token: ${{ secrets.GITHUB_TOKEN }}
       aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
       aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
-      conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
+      conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
     uses: ./.github/workflows/_binary-upload.yml

   libtorch-cuda12_1-shared-with-deps-pre-cxx11-build:
@@ -588,7 +588,7 @@ jobs:
       github-token: ${{ secrets.GITHUB_TOKEN }}
       aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
       aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
-      conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
+      conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
     uses: ./.github/workflows/_binary-upload.yml

   libtorch-cuda12_1-shared-without-deps-pre-cxx11-build:
@@ -651,7 +651,7 @@ jobs:
       github-token: ${{ secrets.GITHUB_TOKEN }}
       aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
       aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
-      conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
+      conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
     uses: ./.github/workflows/_binary-upload.yml

   libtorch-cuda12_1-static-with-deps-pre-cxx11-build:
@@ -714,7 +714,7 @@ jobs:
       github-token: ${{ secrets.GITHUB_TOKEN }}
       aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
       aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
-      conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
+      conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
     uses: ./.github/workflows/_binary-upload.yml

   libtorch-cuda12_1-static-without-deps-pre-cxx11-build:
@@ -777,7 +777,7 @@ jobs:
       github-token: ${{ secrets.GITHUB_TOKEN }}
       aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
       aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
-      conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
+      conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
     uses: ./.github/workflows/_binary-upload.yml

   libtorch-rocm5_5-shared-with-deps-pre-cxx11-build:
@@ -828,7 +828,6 @@ jobs:
       - name: Checkout PyTorch
         uses: malfet/checkout@silent-checkout
         with:
-          ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
           submodules: recursive
           path: pytorch
           quiet-checkout: true
@@ -840,7 +839,7 @@ jobs:
       - name: Checkout pytorch/builder
         uses: malfet/checkout@silent-checkout
         with:
-          ref: main
+          ref: release/2.1
           submodules: recursive
           repository: pytorch/builder
           path: builder
@@ -854,7 +853,7 @@ jobs:
         run: |
           echo "GPU_FLAG=--device=/dev/mem --device=/dev/kfd --device=/dev/dri --group-add video --group-add daemon" >> "${GITHUB_ENV}"
       - name: Pull Docker image
-        uses: pytorch/test-infra/.github/actions/pull-docker-image@main
+        uses: pytorch/test-infra/.github/actions/pull-docker-image@release/2.1
         with:
           docker-image: pytorch/manylinux-builder:rocm5.5
       - name: Test Pytorch binary
@@ -881,7 +880,7 @@ jobs:
       github-token: ${{ secrets.GITHUB_TOKEN }}
       aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
       aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
-      conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
+      conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
     uses: ./.github/workflows/_binary-upload.yml

   libtorch-rocm5_5-static-with-deps-pre-cxx11-build:
@@ -932,7 +931,6 @@ jobs:
       - name: Checkout PyTorch
         uses: malfet/checkout@silent-checkout
         with:
-          ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
           submodules: recursive
           path: pytorch
           quiet-checkout: true
@@ -944,7 +942,7 @@ jobs:
       - name: Checkout pytorch/builder
         uses: malfet/checkout@silent-checkout
         with:
-          ref: main
+          ref: release/2.1
           submodules: recursive
           repository: pytorch/builder
           path: builder
@@ -958,7 +956,7 @@ jobs:
         run: |
           echo "GPU_FLAG=--device=/dev/mem --device=/dev/kfd --device=/dev/dri --group-add video --group-add daemon" >> "${GITHUB_ENV}"
       - name: Pull Docker image
-        uses: pytorch/test-infra/.github/actions/pull-docker-image@main
+        uses: pytorch/test-infra/.github/actions/pull-docker-image@release/2.1
         with:
           docker-image: pytorch/manylinux-builder:rocm5.5
       - name: Test Pytorch binary
@@ -985,7 +983,7 @@ jobs:
       github-token: ${{ secrets.GITHUB_TOKEN }}
       aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
       aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
-      conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
+      conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
     uses: ./.github/workflows/_binary-upload.yml

   libtorch-rocm5_6-shared-with-deps-pre-cxx11-build:
@@ -1036,7 +1034,6 @@ jobs:
       - name: Checkout PyTorch
         uses: malfet/checkout@silent-checkout
         with:
-          ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
           submodules: recursive
           path: pytorch
           quiet-checkout: true
@@ -1048,7 +1045,7 @@ jobs:
       - name: Checkout pytorch/builder
         uses: malfet/checkout@silent-checkout
         with:
-          ref: main
+          ref: release/2.1
           submodules: recursive
           repository: pytorch/builder
           path: builder
@@ -1062,7 +1059,7 @@ jobs:
         run: |
           echo "GPU_FLAG=--device=/dev/mem --device=/dev/kfd --device=/dev/dri --group-add video --group-add daemon" >> "${GITHUB_ENV}"
       - name: Pull Docker image
-        uses: pytorch/test-infra/.github/actions/pull-docker-image@main
+        uses: pytorch/test-infra/.github/actions/pull-docker-image@release/2.1
         with:
           docker-image: pytorch/manylinux-builder:rocm5.6
       - name: Test Pytorch binary
@@ -1089,7 +1086,7 @@ jobs:
       github-token: ${{ secrets.GITHUB_TOKEN }}
       aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
       aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
-      conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
+      conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
     uses: ./.github/workflows/_binary-upload.yml

   libtorch-rocm5_6-static-with-deps-pre-cxx11-build:
@@ -1140,7 +1137,6 @@ jobs:
       - name: Checkout PyTorch
         uses: malfet/checkout@silent-checkout
         with:
-          ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
           submodules: recursive
           path: pytorch
           quiet-checkout: true
@@ -1152,7 +1148,7 @@ jobs:
       - name: Checkout pytorch/builder
         uses: malfet/checkout@silent-checkout
         with:
-          ref: main
+          ref: release/2.1
           submodules: recursive
           repository: pytorch/builder
           path: builder
@@ -1166,7 +1162,7 @@ jobs:
         run: |
           echo "GPU_FLAG=--device=/dev/mem --device=/dev/kfd --device=/dev/dri --group-add video --group-add daemon" >> "${GITHUB_ENV}"
       - name: Pull Docker image
-        uses: pytorch/test-infra/.github/actions/pull-docker-image@main
+        uses: pytorch/test-infra/.github/actions/pull-docker-image@release/2.1
         with:
           docker-image: pytorch/manylinux-builder:rocm5.6
       - name: Test Pytorch binary
@@ -1193,5 +1189,5 @@ jobs:
       github-token: ${{ secrets.GITHUB_TOKEN }}
       aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
       aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
-      conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
+      conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
     uses: ./.github/workflows/_binary-upload.yml
.github/workflows/generated-linux-binary-manywheel-main.yml (2 changes; generated, vendored)
@@ -86,7 +86,7 @@ jobs:
       DESIRED_PYTHON: "3.8"
       build_name: manywheel-py3_8-cuda12_1-with-pypi-cudnn
       build_environment: linux-binary-manywheel
-      PYTORCH_EXTRA_INSTALL_REQUIREMENTS: nvidia-cuda-nvrtc-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cuda-runtime-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cuda-cupti-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cudnn-cu12==8.9.2.26; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cublas-cu12==12.1.3.1; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cufft-cu12==11.0.2.54; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-curand-cu12==10.3.2.106; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusolver-cu12==11.4.5.107; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusparse-cu12==12.1.0.106; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nccl-cu12==2.18.1; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nvtx-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64'
+      PYTORCH_EXTRA_INSTALL_REQUIREMENTS: nvidia-cuda-nvrtc-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cuda-runtime-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cuda-cupti-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cudnn-cu12==8.9.2.26; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cublas-cu12==12.1.3.1; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cufft-cu12==11.0.2.54; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-curand-cu12==10.3.2.106; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusolver-cu12==11.4.5.107; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusparse-cu12==12.1.0.106; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nccl-cu12==2.18.1; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nvtx-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64' | triton==2.1.0; platform_system == 'Linux' and platform_machine == 'x86_64'
     secrets:
       github-token: ${{ secrets.GITHUB_TOKEN }}
   manywheel-py3_8-cuda12_1-with-pypi-cudnn-test:  # Testing
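Each entry in `PYTORCH_EXTRA_INSTALL_REQUIREMENTS` is a PEP 508 requirement with an environment marker, and the entries are joined with `|`. As a rough illustration (not PyTorch's actual installer code; it assumes the third-party `packaging` library), the string can be split on the separator and each marker evaluated against a target platform:

```python
# Sketch: parse '|'-separated PEP 508 requirements like those in
# PYTORCH_EXTRA_INSTALL_REQUIREMENTS and evaluate their markers.
# Assumes the 'packaging' library (a setuptools/pip dependency).
from packaging.requirements import Requirement

extra = (
    "nvidia-nccl-cu12==2.18.1; platform_system == 'Linux' and platform_machine == 'x86_64'"
    " | triton==2.1.0; platform_system == 'Linux' and platform_machine == 'x86_64'"
)

# Explicit environments make the result machine-independent.
linux_x86 = {"platform_system": "Linux", "platform_machine": "x86_64"}
macos_arm = {"platform_system": "Darwin", "platform_machine": "arm64"}

for raw in extra.split(" | "):
    req = Requirement(raw)
    # Marker.evaluate() merges the given dict over the running
    # interpreter's default environment before evaluating.
    print(req.name, str(req.specifier),
          req.marker.evaluate(linux_x86),
          req.marker.evaluate(macos_arm))
```

On a manylinux x86_64 wheel install both markers evaluate true, so `triton==2.1.0` is pulled in alongside the CUDA wheels; on other platforms the dependencies are skipped.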
.github/workflows/generated-linux-binary-manywheel-nightly.yml (104 changes; generated, vendored)
@ -90,7 +90,7 @@ jobs:
|
||||
github-token: ${{ secrets.GITHUB_TOKEN }}
|
||||
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
|
||||
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
|
||||
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
|
||||
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
|
||||
uses: ./.github/workflows/_binary-upload.yml
|
||||
|
||||
manywheel-py3_8-cpu-cxx11-abi-build:
|
||||
@ -150,7 +150,7 @@ jobs:
|
||||
github-token: ${{ secrets.GITHUB_TOKEN }}
|
||||
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
|
||||
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
|
||||
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
|
||||
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
|
||||
uses: ./.github/workflows/_binary-upload.yml
|
||||
|
||||
manywheel-py3_8-cuda11_8-build:
|
||||
@ -210,7 +210,7 @@ jobs:
|
||||
github-token: ${{ secrets.GITHUB_TOKEN }}
|
||||
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
|
||||
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
|
||||
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
|
||||
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
|
||||
uses: ./.github/workflows/_binary-upload.yml
|
||||
|
||||
manywheel-py3_8-cuda12_1-with-pypi-cudnn-build:
|
||||
@ -229,7 +229,7 @@ jobs:
|
||||
DESIRED_PYTHON: "3.8"
|
||||
build_name: manywheel-py3_8-cuda12_1-with-pypi-cudnn
|
||||
build_environment: linux-binary-manywheel
|
||||
PYTORCH_EXTRA_INSTALL_REQUIREMENTS: nvidia-cuda-nvrtc-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cuda-runtime-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cuda-cupti-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cudnn-cu12==8.9.2.26; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cublas-cu12==12.1.3.1; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cufft-cu12==11.0.2.54; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-curand-cu12==10.3.2.106; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusolver-cu12==11.4.5.107; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusparse-cu12==12.1.0.106; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nccl-cu12==2.18.1; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nvtx-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64'
|
||||
PYTORCH_EXTRA_INSTALL_REQUIREMENTS: nvidia-cuda-nvrtc-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cuda-runtime-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cuda-cupti-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cudnn-cu12==8.9.2.26; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cublas-cu12==12.1.3.1; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cufft-cu12==11.0.2.54; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-curand-cu12==10.3.2.106; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusolver-cu12==11.4.5.107; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusparse-cu12==12.1.0.106; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nccl-cu12==2.18.1; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nvtx-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64' | triton==2.1.0; platform_system == 'Linux' and platform_machine == 'x86_64'
|
||||
secrets:
|
||||
github-token: ${{ secrets.GITHUB_TOKEN }}
|
||||
manywheel-py3_8-cuda12_1-with-pypi-cudnn-test: # Testing
|
||||
@ -271,7 +271,7 @@ jobs:
|
||||
github-token: ${{ secrets.GITHUB_TOKEN }}
|
||||
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
|
||||
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
|
||||
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
|
||||
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
|
||||
uses: ./.github/workflows/_binary-upload.yml
|
||||
|
||||
manywheel-py3_8-cuda12_1-build:
|
||||
@ -331,7 +331,7 @@ jobs:
|
||||
github-token: ${{ secrets.GITHUB_TOKEN }}
|
||||
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
|
||||
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
|
||||
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
|
||||
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
|
||||
uses: ./.github/workflows/_binary-upload.yml
|
||||
|
||||
manywheel-py3_8-rocm5_5-build:
|
||||
@ -380,7 +380,6 @@ jobs:
|
||||
- name: Checkout PyTorch
|
||||
uses: malfet/checkout@silent-checkout
|
||||
with:
|
||||
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
|
||||
submodules: recursive
|
||||
path: pytorch
|
||||
quiet-checkout: true
|
||||
@ -392,7 +391,7 @@ jobs:
|
||||
- name: Checkout pytorch/builder
|
||||
uses: malfet/checkout@silent-checkout
|
||||
with:
|
||||
ref: main
|
||||
ref: release/2.1
|
||||
submodules: recursive
|
||||
repository: pytorch/builder
|
||||
path: builder
|
||||
@ -406,7 +405,7 @@ jobs:
|
||||
run: |
|
||||
echo "GPU_FLAG=--device=/dev/mem --device=/dev/kfd --device=/dev/dri --group-add video --group-add daemon" >> "${GITHUB_ENV}"
|
||||
- name: Pull Docker image
|
||||
uses: pytorch/test-infra/.github/actions/pull-docker-image@main
|
||||
uses: pytorch/test-infra/.github/actions/pull-docker-image@release/2.1
|
||||
with:
|
||||
docker-image: pytorch/manylinux-builder:rocm5.5
|
||||
- name: Test Pytorch binary
|
||||
@ -432,7 +431,7 @@ jobs:
|
||||
github-token: ${{ secrets.GITHUB_TOKEN }}
|
||||
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
|
||||
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
|
||||
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
|
||||
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
|
||||
uses: ./.github/workflows/_binary-upload.yml
|
||||
|
||||
manywheel-py3_8-rocm5_6-build:
|
||||
@ -481,7 +480,6 @@ jobs:
|
||||
- name: Checkout PyTorch
|
||||
uses: malfet/checkout@silent-checkout
|
||||
with:
|
||||
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
|
||||
submodules: recursive
|
||||
path: pytorch
|
||||
quiet-checkout: true
|
||||
@ -493,7 +491,7 @@ jobs:
|
||||
- name: Checkout pytorch/builder
|
||||
uses: malfet/checkout@silent-checkout
|
||||
with:
|
||||
ref: main
|
||||
ref: release/2.1
|
||||
submodules: recursive
|
||||
repository: pytorch/builder
|
||||
path: builder
|
||||
@ -507,7 +505,7 @@ jobs:
|
||||
run: |
|
||||
echo "GPU_FLAG=--device=/dev/mem --device=/dev/kfd --device=/dev/dri --group-add video --group-add daemon" >> "${GITHUB_ENV}"
|
||||
- name: Pull Docker image
|
||||
uses: pytorch/test-infra/.github/actions/pull-docker-image@main
|
||||
uses: pytorch/test-infra/.github/actions/pull-docker-image@release/2.1
|
||||
with:
|
||||
docker-image: pytorch/manylinux-builder:rocm5.6
|
||||
- name: Test Pytorch binary
|
||||
@ -533,7 +531,7 @@ jobs:
|
||||
github-token: ${{ secrets.GITHUB_TOKEN }}
|
||||
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
|
||||
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
|
||||
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
|
||||
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
|
||||
uses: ./.github/workflows/_binary-upload.yml
|
||||
|
||||
manywheel-py3_9-cpu-build:
|
||||
@ -590,7 +588,7 @@ jobs:
|
||||
github-token: ${{ secrets.GITHUB_TOKEN }}
|
||||
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
|
||||
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
|
||||
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
|
||||
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
|
||||
uses: ./.github/workflows/_binary-upload.yml
|
||||
|
||||
manywheel-py3_9-cpu-cxx11-abi-build:
|
||||
@ -650,7 +648,7 @@ jobs:
|
||||
github-token: ${{ secrets.GITHUB_TOKEN }}
|
||||
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
|
||||
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
|
||||
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
|
||||
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
|
||||
uses: ./.github/workflows/_binary-upload.yml
|
||||
|
||||
manywheel-py3_9-cuda11_8-build:
|
||||
@ -710,7 +708,7 @@ jobs:
|
||||
github-token: ${{ secrets.GITHUB_TOKEN }}
|
||||
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
|
||||
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
|
||||
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
|
||||
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
|
||||
uses: ./.github/workflows/_binary-upload.yml
|
||||
|
||||
manywheel-py3_9-cuda12_1-with-pypi-cudnn-build:
|
||||
@ -729,7 +727,7 @@ jobs:
|
||||
DESIRED_PYTHON: "3.9"
|
||||
build_name: manywheel-py3_9-cuda12_1-with-pypi-cudnn
|
||||
build_environment: linux-binary-manywheel
|
||||
PYTORCH_EXTRA_INSTALL_REQUIREMENTS: nvidia-cuda-nvrtc-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cuda-runtime-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cuda-cupti-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cudnn-cu12==8.9.2.26; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cublas-cu12==12.1.3.1; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cufft-cu12==11.0.2.54; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-curand-cu12==10.3.2.106; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusolver-cu12==11.4.5.107; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusparse-cu12==12.1.0.106; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nccl-cu12==2.18.1; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nvtx-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64'
PYTORCH_EXTRA_INSTALL_REQUIREMENTS: nvidia-cuda-nvrtc-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cuda-runtime-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cuda-cupti-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cudnn-cu12==8.9.2.26; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cublas-cu12==12.1.3.1; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cufft-cu12==11.0.2.54; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-curand-cu12==10.3.2.106; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusolver-cu12==11.4.5.107; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusparse-cu12==12.1.0.106; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nccl-cu12==2.18.1; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nvtx-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64' | triton==2.1.0; platform_system == 'Linux' and platform_machine == 'x86_64'
secrets:
github-token: ${{ secrets.GITHUB_TOKEN }}
manywheel-py3_9-cuda12_1-with-pypi-cudnn-test:  # Testing
@@ -771,7 +769,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml

||||
manywheel-py3_9-cuda12_1-build:
@@ -831,7 +829,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml

||||
manywheel-py3_9-rocm5_5-build:
@@ -880,7 +878,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@@ -892,7 +889,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@@ -906,7 +903,7 @@ jobs:
run: |
echo "GPU_FLAG=--device=/dev/mem --device=/dev/kfd --device=/dev/dri --group-add video --group-add daemon" >> "${GITHUB_ENV}"
- name: Pull Docker image
uses: pytorch/test-infra/.github/actions/pull-docker-image@main
uses: pytorch/test-infra/.github/actions/pull-docker-image@release/2.1
with:
docker-image: pytorch/manylinux-builder:rocm5.5
- name: Test Pytorch binary
@@ -932,7 +929,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml

||||
manywheel-py3_9-rocm5_6-build:
@@ -981,7 +978,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@@ -993,7 +989,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@@ -1007,7 +1003,7 @@ jobs:
run: |
echo "GPU_FLAG=--device=/dev/mem --device=/dev/kfd --device=/dev/dri --group-add video --group-add daemon" >> "${GITHUB_ENV}"
- name: Pull Docker image
uses: pytorch/test-infra/.github/actions/pull-docker-image@main
uses: pytorch/test-infra/.github/actions/pull-docker-image@release/2.1
with:
docker-image: pytorch/manylinux-builder:rocm5.6
- name: Test Pytorch binary
@@ -1033,7 +1029,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml

||||
manywheel-py3_10-cpu-build:
@@ -1090,7 +1086,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml

manywheel-py3_10-cpu-cxx11-abi-build:
@@ -1150,7 +1146,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml

manywheel-py3_10-cuda11_8-build:
@@ -1210,7 +1206,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml

||||
manywheel-py3_10-cuda12_1-with-pypi-cudnn-build:
@@ -1229,7 +1225,7 @@ jobs:
DESIRED_PYTHON: "3.10"
build_name: manywheel-py3_10-cuda12_1-with-pypi-cudnn
build_environment: linux-binary-manywheel
PYTORCH_EXTRA_INSTALL_REQUIREMENTS: nvidia-cuda-nvrtc-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cuda-runtime-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cuda-cupti-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cudnn-cu12==8.9.2.26; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cublas-cu12==12.1.3.1; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cufft-cu12==11.0.2.54; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-curand-cu12==10.3.2.106; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusolver-cu12==11.4.5.107; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusparse-cu12==12.1.0.106; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nccl-cu12==2.18.1; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nvtx-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64'
PYTORCH_EXTRA_INSTALL_REQUIREMENTS: nvidia-cuda-nvrtc-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cuda-runtime-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cuda-cupti-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cudnn-cu12==8.9.2.26; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cublas-cu12==12.1.3.1; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cufft-cu12==11.0.2.54; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-curand-cu12==10.3.2.106; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusolver-cu12==11.4.5.107; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusparse-cu12==12.1.0.106; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nccl-cu12==2.18.1; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nvtx-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64' | triton==2.1.0; platform_system == 'Linux' and platform_machine == 'x86_64'
secrets:
github-token: ${{ secrets.GITHUB_TOKEN }}
manywheel-py3_10-cuda12_1-with-pypi-cudnn-test:  # Testing
@@ -1271,7 +1267,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml

||||
manywheel-py3_10-cuda12_1-build:
@@ -1331,7 +1327,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml

||||
manywheel-py3_10-rocm5_5-build:
@@ -1380,7 +1376,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@@ -1392,7 +1387,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@@ -1406,7 +1401,7 @@ jobs:
run: |
echo "GPU_FLAG=--device=/dev/mem --device=/dev/kfd --device=/dev/dri --group-add video --group-add daemon" >> "${GITHUB_ENV}"
- name: Pull Docker image
uses: pytorch/test-infra/.github/actions/pull-docker-image@main
uses: pytorch/test-infra/.github/actions/pull-docker-image@release/2.1
with:
docker-image: pytorch/manylinux-builder:rocm5.5
- name: Test Pytorch binary
@@ -1432,7 +1427,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml

||||
manywheel-py3_10-rocm5_6-build:
@@ -1481,7 +1476,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@@ -1493,7 +1487,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@@ -1507,7 +1501,7 @@ jobs:
run: |
echo "GPU_FLAG=--device=/dev/mem --device=/dev/kfd --device=/dev/dri --group-add video --group-add daemon" >> "${GITHUB_ENV}"
- name: Pull Docker image
uses: pytorch/test-infra/.github/actions/pull-docker-image@main
uses: pytorch/test-infra/.github/actions/pull-docker-image@release/2.1
with:
docker-image: pytorch/manylinux-builder:rocm5.6
- name: Test Pytorch binary
@@ -1533,7 +1527,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml

||||
manywheel-py3_11-cpu-build:
@@ -1590,7 +1584,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml

manywheel-py3_11-cpu-cxx11-abi-build:
@@ -1650,7 +1644,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml

manywheel-py3_11-cuda11_8-build:
@@ -1710,7 +1704,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml

||||
manywheel-py3_11-cuda12_1-with-pypi-cudnn-build:
@@ -1729,7 +1723,7 @@ jobs:
DESIRED_PYTHON: "3.11"
build_name: manywheel-py3_11-cuda12_1-with-pypi-cudnn
build_environment: linux-binary-manywheel
PYTORCH_EXTRA_INSTALL_REQUIREMENTS: nvidia-cuda-nvrtc-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cuda-runtime-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cuda-cupti-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cudnn-cu12==8.9.2.26; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cublas-cu12==12.1.3.1; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cufft-cu12==11.0.2.54; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-curand-cu12==10.3.2.106; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusolver-cu12==11.4.5.107; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusparse-cu12==12.1.0.106; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nccl-cu12==2.18.1; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nvtx-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64'
PYTORCH_EXTRA_INSTALL_REQUIREMENTS: nvidia-cuda-nvrtc-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cuda-runtime-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cuda-cupti-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cudnn-cu12==8.9.2.26; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cublas-cu12==12.1.3.1; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cufft-cu12==11.0.2.54; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-curand-cu12==10.3.2.106; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusolver-cu12==11.4.5.107; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusparse-cu12==12.1.0.106; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nccl-cu12==2.18.1; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nvtx-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64' | triton==2.1.0; platform_system == 'Linux' and platform_machine == 'x86_64'
secrets:
github-token: ${{ secrets.GITHUB_TOKEN }}
manywheel-py3_11-cuda12_1-with-pypi-cudnn-test:  # Testing
@@ -1771,7 +1765,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml

||||
manywheel-py3_11-cuda12_1-build:
@@ -1831,7 +1825,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml

||||
manywheel-py3_11-rocm5_5-build:
@@ -1880,7 +1874,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@@ -1892,7 +1885,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@@ -1906,7 +1899,7 @@ jobs:
run: |
echo "GPU_FLAG=--device=/dev/mem --device=/dev/kfd --device=/dev/dri --group-add video --group-add daemon" >> "${GITHUB_ENV}"
- name: Pull Docker image
uses: pytorch/test-infra/.github/actions/pull-docker-image@main
uses: pytorch/test-infra/.github/actions/pull-docker-image@release/2.1
with:
docker-image: pytorch/manylinux-builder:rocm5.5
- name: Test Pytorch binary
@@ -1932,7 +1925,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml

||||
manywheel-py3_11-rocm5_6-build:
@@ -1981,7 +1974,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@@ -1993,7 +1985,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@@ -2007,7 +1999,7 @@ jobs:
run: |
echo "GPU_FLAG=--device=/dev/mem --device=/dev/kfd --device=/dev/dri --group-add video --group-add daemon" >> "${GITHUB_ENV}"
- name: Pull Docker image
uses: pytorch/test-infra/.github/actions/pull-docker-image@main
uses: pytorch/test-infra/.github/actions/pull-docker-image@release/2.1
with:
docker-image: pytorch/manylinux-builder:rocm5.6
- name: Test Pytorch binary
@@ -2033,5 +2025,5 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml

20 .github/workflows/generated-macos-arm64-binary-conda-nightly.yml generated vendored
@@ -75,7 +75,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@@ -87,7 +86,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@@ -144,7 +143,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
conda-py3_9-cpu-build:
if: ${{ github.repository_owner == 'pytorch' }}
@@ -187,7 +186,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@@ -199,7 +197,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@@ -256,7 +254,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
conda-py3_10-cpu-build:
if: ${{ github.repository_owner == 'pytorch' }}
@@ -299,7 +297,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@@ -311,7 +308,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@@ -368,7 +365,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
conda-py3_11-cpu-build:
if: ${{ github.repository_owner == 'pytorch' }}
@@ -411,7 +408,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@@ -423,7 +419,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@@ -480,5 +476,5 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml

20 .github/workflows/generated-macos-arm64-binary-wheel-nightly.yml generated vendored
@@ -75,7 +75,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@@ -87,7 +86,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@@ -144,7 +143,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
wheel-py3_9-cpu-build:
if: ${{ github.repository_owner == 'pytorch' }}
@@ -187,7 +186,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@@ -199,7 +197,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@@ -256,7 +254,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
wheel-py3_10-cpu-build:
if: ${{ github.repository_owner == 'pytorch' }}
@@ -299,7 +297,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@@ -311,7 +308,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@@ -368,7 +365,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
wheel-py3_11-cpu-build:
if: ${{ github.repository_owner == 'pytorch' }}
@@ -411,7 +408,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@@ -423,7 +419,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@@ -480,5 +476,5 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml

20 .github/workflows/generated-macos-binary-conda-nightly.yml generated vendored
@@ -73,7 +73,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@@ -85,7 +84,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@@ -142,7 +141,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
conda-py3_9-cpu-build:
if: ${{ github.repository_owner == 'pytorch' }}
@@ -185,7 +184,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@@ -197,7 +195,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@@ -254,7 +252,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
conda-py3_10-cpu-build:
if: ${{ github.repository_owner == 'pytorch' }}
@ -297,7 +295,6 @@ jobs:
|
||||
- name: Checkout PyTorch
|
||||
uses: malfet/checkout@silent-checkout
|
||||
with:
|
||||
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
|
||||
submodules: recursive
|
||||
path: pytorch
|
||||
quiet-checkout: true
|
||||
@ -309,7 +306,7 @@ jobs:
|
||||
- name: Checkout pytorch/builder
|
||||
uses: malfet/checkout@silent-checkout
|
||||
with:
|
||||
ref: main
|
||||
ref: release/2.1
|
||||
submodules: recursive
|
||||
repository: pytorch/builder
|
||||
path: builder
|
||||
@ -366,7 +363,7 @@ jobs:
|
||||
github-token: ${{ secrets.GITHUB_TOKEN }}
|
||||
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
|
||||
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
|
||||
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
|
||||
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
|
||||
uses: ./.github/workflows/_binary-upload.yml
|
||||
conda-py3_11-cpu-build:
|
||||
if: ${{ github.repository_owner == 'pytorch' }}
|
||||
@ -409,7 +406,6 @@ jobs:
|
||||
- name: Checkout PyTorch
|
||||
uses: malfet/checkout@silent-checkout
|
||||
with:
|
||||
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
|
||||
submodules: recursive
|
||||
path: pytorch
|
||||
quiet-checkout: true
|
||||
@ -421,7 +417,7 @@ jobs:
|
||||
- name: Checkout pytorch/builder
|
||||
uses: malfet/checkout@silent-checkout
|
||||
with:
|
||||
ref: main
|
||||
ref: release/2.1
|
||||
submodules: recursive
|
||||
repository: pytorch/builder
|
||||
path: builder
|
||||
@ -478,5 +474,5 @@ jobs:
|
||||
github-token: ${{ secrets.GITHUB_TOKEN }}
|
||||
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
|
||||
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
|
||||
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
|
||||
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
|
||||
uses: ./.github/workflows/_binary-upload.yml
|
||||
|
.github/workflows/generated-macos-binary-libtorch-cxx11-abi-nightly.yml (generated, vendored): 20 lines changed
@@ -77,7 +77,6 @@ jobs:
       - name: Checkout PyTorch
         uses: malfet/checkout@silent-checkout
         with:
           ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
           submodules: recursive
           path: pytorch
           quiet-checkout: true
@@ -89,7 +88,7 @@ jobs:
       - name: Checkout pytorch/builder
         uses: malfet/checkout@silent-checkout
         with:
-          ref: main
+          ref: release/2.1
           submodules: recursive
           repository: pytorch/builder
           path: builder
@@ -147,7 +146,7 @@ jobs:
       github-token: ${{ secrets.GITHUB_TOKEN }}
       aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
       aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
-      conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
+      conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
     uses: ./.github/workflows/_binary-upload.yml
   libtorch-cpu-shared-without-deps-cxx11-abi-build:
     if: ${{ github.repository_owner == 'pytorch' }}
@@ -194,7 +193,6 @@ jobs:
       - name: Checkout PyTorch
         uses: malfet/checkout@silent-checkout
         with:
           ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
           submodules: recursive
           path: pytorch
           quiet-checkout: true
@@ -206,7 +204,7 @@ jobs:
       - name: Checkout pytorch/builder
         uses: malfet/checkout@silent-checkout
         with:
-          ref: main
+          ref: release/2.1
           submodules: recursive
           repository: pytorch/builder
           path: builder
@@ -264,7 +262,7 @@ jobs:
       github-token: ${{ secrets.GITHUB_TOKEN }}
       aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
       aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
-      conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
+      conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
     uses: ./.github/workflows/_binary-upload.yml
   libtorch-cpu-static-with-deps-cxx11-abi-build:
     if: ${{ github.repository_owner == 'pytorch' }}
@@ -311,7 +309,6 @@ jobs:
       - name: Checkout PyTorch
         uses: malfet/checkout@silent-checkout
         with:
           ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
           submodules: recursive
           path: pytorch
           quiet-checkout: true
@@ -323,7 +320,7 @@ jobs:
       - name: Checkout pytorch/builder
         uses: malfet/checkout@silent-checkout
         with:
-          ref: main
+          ref: release/2.1
           submodules: recursive
           repository: pytorch/builder
           path: builder
@@ -381,7 +378,7 @@ jobs:
       github-token: ${{ secrets.GITHUB_TOKEN }}
       aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
       aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
-      conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
+      conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
     uses: ./.github/workflows/_binary-upload.yml
   libtorch-cpu-static-without-deps-cxx11-abi-build:
     if: ${{ github.repository_owner == 'pytorch' }}
@@ -428,7 +425,6 @@ jobs:
       - name: Checkout PyTorch
         uses: malfet/checkout@silent-checkout
         with:
           ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
           submodules: recursive
           path: pytorch
           quiet-checkout: true
@@ -440,7 +436,7 @@ jobs:
       - name: Checkout pytorch/builder
         uses: malfet/checkout@silent-checkout
         with:
-          ref: main
+          ref: release/2.1
           submodules: recursive
           repository: pytorch/builder
           path: builder
@@ -498,5 +494,5 @@ jobs:
       github-token: ${{ secrets.GITHUB_TOKEN }}
       aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
       aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
-      conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
+      conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
     uses: ./.github/workflows/_binary-upload.yml
.github/workflows/generated-macos-binary-wheel-nightly.yml (generated, vendored): 20 lines changed
@@ -73,7 +73,6 @@ jobs:
       - name: Checkout PyTorch
         uses: malfet/checkout@silent-checkout
         with:
           ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
           submodules: recursive
           path: pytorch
           quiet-checkout: true
@@ -85,7 +84,7 @@ jobs:
       - name: Checkout pytorch/builder
         uses: malfet/checkout@silent-checkout
         with:
-          ref: main
+          ref: release/2.1
           submodules: recursive
           repository: pytorch/builder
           path: builder
@@ -142,7 +141,7 @@ jobs:
       github-token: ${{ secrets.GITHUB_TOKEN }}
       aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
       aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
-      conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
+      conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
     uses: ./.github/workflows/_binary-upload.yml
   wheel-py3_9-cpu-build:
     if: ${{ github.repository_owner == 'pytorch' }}
@@ -185,7 +184,6 @@ jobs:
       - name: Checkout PyTorch
         uses: malfet/checkout@silent-checkout
         with:
           ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
           submodules: recursive
           path: pytorch
           quiet-checkout: true
@@ -197,7 +195,7 @@ jobs:
       - name: Checkout pytorch/builder
         uses: malfet/checkout@silent-checkout
         with:
-          ref: main
+          ref: release/2.1
           submodules: recursive
           repository: pytorch/builder
           path: builder
@@ -254,7 +252,7 @@ jobs:
       github-token: ${{ secrets.GITHUB_TOKEN }}
       aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
       aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
-      conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
+      conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
     uses: ./.github/workflows/_binary-upload.yml
   wheel-py3_10-cpu-build:
     if: ${{ github.repository_owner == 'pytorch' }}
@@ -297,7 +295,6 @@ jobs:
       - name: Checkout PyTorch
         uses: malfet/checkout@silent-checkout
         with:
           ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
           submodules: recursive
           path: pytorch
           quiet-checkout: true
@@ -309,7 +306,7 @@ jobs:
       - name: Checkout pytorch/builder
         uses: malfet/checkout@silent-checkout
         with:
-          ref: main
+          ref: release/2.1
           submodules: recursive
           repository: pytorch/builder
           path: builder
@@ -366,7 +363,7 @@ jobs:
       github-token: ${{ secrets.GITHUB_TOKEN }}
       aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
       aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
-      conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
+      conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
     uses: ./.github/workflows/_binary-upload.yml
   wheel-py3_11-cpu-build:
     if: ${{ github.repository_owner == 'pytorch' }}
@@ -409,7 +406,6 @@ jobs:
       - name: Checkout PyTorch
         uses: malfet/checkout@silent-checkout
         with:
           ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
           submodules: recursive
           path: pytorch
           quiet-checkout: true
@@ -421,7 +417,7 @@ jobs:
       - name: Checkout pytorch/builder
         uses: malfet/checkout@silent-checkout
         with:
-          ref: main
+          ref: release/2.1
           submodules: recursive
           repository: pytorch/builder
           path: builder
@@ -478,5 +474,5 @@ jobs:
       github-token: ${{ secrets.GITHUB_TOKEN }}
       aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
       aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
-      conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
+      conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
     uses: ./.github/workflows/_binary-upload.yml
.github/workflows/generated-windows-binary-conda-nightly.yml (generated, vendored): 96 lines changed
@@ -92,7 +92,6 @@ jobs:
       - name: Checkout PyTorch
         uses: malfet/checkout@silent-checkout
         with:
           ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
           submodules: recursive
           path: pytorch
           quiet-checkout: true
@@ -104,7 +103,7 @@ jobs:
       - name: Checkout pytorch/builder
         uses: malfet/checkout@silent-checkout
         with:
-          ref: main
+          ref: release/2.1
           submodules: recursive
           repository: pytorch/builder
           path: builder
@@ -208,7 +207,6 @@ jobs:
       - name: Checkout PyTorch
         uses: malfet/checkout@silent-checkout
         with:
           ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
           submodules: recursive
           path: pytorch
           quiet-checkout: true
@@ -220,7 +218,7 @@ jobs:
       - name: Checkout pytorch/builder
         uses: malfet/checkout@silent-checkout
         with:
-          ref: main
+          ref: release/2.1
           submodules: recursive
           repository: pytorch/builder
           path: builder
@@ -268,7 +266,7 @@ jobs:
       github-token: ${{ secrets.GITHUB_TOKEN }}
       aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
       aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
-      conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
+      conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
     uses: ./.github/workflows/_binary-upload.yml
   conda-py3_8-cuda11_8-build:
     if: ${{ github.repository_owner == 'pytorch' }}
@@ -331,7 +329,6 @@ jobs:
       - name: Checkout PyTorch
         uses: malfet/checkout@silent-checkout
         with:
           ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
           submodules: recursive
           path: pytorch
           quiet-checkout: true
@@ -343,7 +340,7 @@ jobs:
       - name: Checkout pytorch/builder
         uses: malfet/checkout@silent-checkout
         with:
-          ref: main
+          ref: release/2.1
           submodules: recursive
           repository: pytorch/builder
           path: builder
@@ -448,7 +445,6 @@ jobs:
       - name: Checkout PyTorch
         uses: malfet/checkout@silent-checkout
         with:
           ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
           submodules: recursive
           path: pytorch
           quiet-checkout: true
@@ -460,7 +456,7 @@ jobs:
       - name: Checkout pytorch/builder
         uses: malfet/checkout@silent-checkout
         with:
-          ref: main
+          ref: release/2.1
           submodules: recursive
           repository: pytorch/builder
           path: builder
@@ -509,7 +505,7 @@ jobs:
       github-token: ${{ secrets.GITHUB_TOKEN }}
       aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
       aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
-      conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
+      conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
     uses: ./.github/workflows/_binary-upload.yml
   conda-py3_8-cuda12_1-build:
     if: ${{ github.repository_owner == 'pytorch' }}
@@ -572,7 +568,6 @@ jobs:
       - name: Checkout PyTorch
         uses: malfet/checkout@silent-checkout
         with:
           ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
           submodules: recursive
           path: pytorch
           quiet-checkout: true
@@ -584,7 +579,7 @@ jobs:
       - name: Checkout pytorch/builder
         uses: malfet/checkout@silent-checkout
         with:
-          ref: main
+          ref: release/2.1
           submodules: recursive
           repository: pytorch/builder
           path: builder
@@ -689,7 +684,6 @@ jobs:
       - name: Checkout PyTorch
         uses: malfet/checkout@silent-checkout
         with:
           ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
           submodules: recursive
           path: pytorch
           quiet-checkout: true
@@ -701,7 +695,7 @@ jobs:
       - name: Checkout pytorch/builder
         uses: malfet/checkout@silent-checkout
         with:
-          ref: main
+          ref: release/2.1
           submodules: recursive
           repository: pytorch/builder
           path: builder
@@ -750,7 +744,7 @@ jobs:
       github-token: ${{ secrets.GITHUB_TOKEN }}
       aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
       aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
-      conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
+      conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
     uses: ./.github/workflows/_binary-upload.yml
   conda-py3_9-cpu-build:
     if: ${{ github.repository_owner == 'pytorch' }}
@@ -812,7 +806,6 @@ jobs:
       - name: Checkout PyTorch
         uses: malfet/checkout@silent-checkout
         with:
           ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
           submodules: recursive
           path: pytorch
           quiet-checkout: true
@@ -824,7 +817,7 @@ jobs:
       - name: Checkout pytorch/builder
         uses: malfet/checkout@silent-checkout
         with:
-          ref: main
+          ref: release/2.1
           submodules: recursive
           repository: pytorch/builder
           path: builder
@@ -928,7 +921,6 @@ jobs:
       - name: Checkout PyTorch
         uses: malfet/checkout@silent-checkout
         with:
           ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
           submodules: recursive
           path: pytorch
           quiet-checkout: true
@@ -940,7 +932,7 @@ jobs:
       - name: Checkout pytorch/builder
         uses: malfet/checkout@silent-checkout
         with:
-          ref: main
+          ref: release/2.1
           submodules: recursive
           repository: pytorch/builder
           path: builder
@@ -988,7 +980,7 @@ jobs:
       github-token: ${{ secrets.GITHUB_TOKEN }}
       aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
       aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
-      conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
+      conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
     uses: ./.github/workflows/_binary-upload.yml
   conda-py3_9-cuda11_8-build:
     if: ${{ github.repository_owner == 'pytorch' }}
@@ -1051,7 +1043,6 @@ jobs:
       - name: Checkout PyTorch
         uses: malfet/checkout@silent-checkout
         with:
           ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
           submodules: recursive
           path: pytorch
           quiet-checkout: true
@@ -1063,7 +1054,7 @@ jobs:
       - name: Checkout pytorch/builder
         uses: malfet/checkout@silent-checkout
         with:
-          ref: main
+          ref: release/2.1
           submodules: recursive
           repository: pytorch/builder
           path: builder
@@ -1168,7 +1159,6 @@ jobs:
       - name: Checkout PyTorch
         uses: malfet/checkout@silent-checkout
         with:
           ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
           submodules: recursive
           path: pytorch
           quiet-checkout: true
@@ -1180,7 +1170,7 @@ jobs:
       - name: Checkout pytorch/builder
         uses: malfet/checkout@silent-checkout
         with:
-          ref: main
+          ref: release/2.1
           submodules: recursive
           repository: pytorch/builder
           path: builder
@@ -1229,7 +1219,7 @@ jobs:
       github-token: ${{ secrets.GITHUB_TOKEN }}
       aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
       aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
-      conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
+      conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
     uses: ./.github/workflows/_binary-upload.yml
   conda-py3_9-cuda12_1-build:
     if: ${{ github.repository_owner == 'pytorch' }}
@@ -1292,7 +1282,6 @@ jobs:
       - name: Checkout PyTorch
         uses: malfet/checkout@silent-checkout
         with:
           ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
           submodules: recursive
           path: pytorch
           quiet-checkout: true
@@ -1304,7 +1293,7 @@ jobs:
       - name: Checkout pytorch/builder
         uses: malfet/checkout@silent-checkout
         with:
-          ref: main
+          ref: release/2.1
           submodules: recursive
           repository: pytorch/builder
           path: builder
@@ -1409,7 +1398,6 @@ jobs:
       - name: Checkout PyTorch
         uses: malfet/checkout@silent-checkout
         with:
           ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
           submodules: recursive
           path: pytorch
           quiet-checkout: true
@@ -1421,7 +1409,7 @@ jobs:
       - name: Checkout pytorch/builder
         uses: malfet/checkout@silent-checkout
         with:
-          ref: main
+          ref: release/2.1
           submodules: recursive
           repository: pytorch/builder
           path: builder
@@ -1470,7 +1458,7 @@ jobs:
       github-token: ${{ secrets.GITHUB_TOKEN }}
       aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
       aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
-      conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
+      conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
     uses: ./.github/workflows/_binary-upload.yml
   conda-py3_10-cpu-build:
     if: ${{ github.repository_owner == 'pytorch' }}
@@ -1532,7 +1520,6 @@ jobs:
       - name: Checkout PyTorch
         uses: malfet/checkout@silent-checkout
         with:
           ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
           submodules: recursive
           path: pytorch
           quiet-checkout: true
@@ -1544,7 +1531,7 @@ jobs:
       - name: Checkout pytorch/builder
         uses: malfet/checkout@silent-checkout
         with:
-          ref: main
+          ref: release/2.1
           submodules: recursive
           repository: pytorch/builder
           path: builder
@@ -1648,7 +1635,6 @@ jobs:
       - name: Checkout PyTorch
         uses: malfet/checkout@silent-checkout
         with:
           ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
           submodules: recursive
           path: pytorch
           quiet-checkout: true
@@ -1660,7 +1646,7 @@ jobs:
       - name: Checkout pytorch/builder
         uses: malfet/checkout@silent-checkout
         with:
-          ref: main
+          ref: release/2.1
           submodules: recursive
           repository: pytorch/builder
           path: builder
@@ -1708,7 +1694,7 @@ jobs:
       github-token: ${{ secrets.GITHUB_TOKEN }}
       aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
       aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
-      conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
+      conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
     uses: ./.github/workflows/_binary-upload.yml
   conda-py3_10-cuda11_8-build:
     if: ${{ github.repository_owner == 'pytorch' }}
@@ -1771,7 +1757,6 @@ jobs:
       - name: Checkout PyTorch
         uses: malfet/checkout@silent-checkout
         with:
           ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
           submodules: recursive
           path: pytorch
           quiet-checkout: true
@@ -1783,7 +1768,7 @@ jobs:
       - name: Checkout pytorch/builder
         uses: malfet/checkout@silent-checkout
         with:
-          ref: main
+          ref: release/2.1
           submodules: recursive
           repository: pytorch/builder
           path: builder
@@ -1888,7 +1873,6 @@ jobs:
       - name: Checkout PyTorch
         uses: malfet/checkout@silent-checkout
         with:
           ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
           submodules: recursive
           path: pytorch
           quiet-checkout: true
@@ -1900,7 +1884,7 @@ jobs:
       - name: Checkout pytorch/builder
         uses: malfet/checkout@silent-checkout
         with:
-          ref: main
+          ref: release/2.1
           submodules: recursive
           repository: pytorch/builder
           path: builder
@@ -1949,7 +1933,7 @@ jobs:
       github-token: ${{ secrets.GITHUB_TOKEN }}
       aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
       aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
-      conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
+      conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
     uses: ./.github/workflows/_binary-upload.yml
   conda-py3_10-cuda12_1-build:
     if: ${{ github.repository_owner == 'pytorch' }}
@@ -2012,7 +1996,6 @@ jobs:
       - name: Checkout PyTorch
         uses: malfet/checkout@silent-checkout
         with:
           ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
           submodules: recursive
           path: pytorch
           quiet-checkout: true
@@ -2024,7 +2007,7 @@ jobs:
       - name: Checkout pytorch/builder
         uses: malfet/checkout@silent-checkout
         with:
-          ref: main
+          ref: release/2.1
           submodules: recursive
           repository: pytorch/builder
           path: builder
@@ -2129,7 +2112,6 @@ jobs:
       - name: Checkout PyTorch
         uses: malfet/checkout@silent-checkout
         with:
           ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
           submodules: recursive
           path: pytorch
           quiet-checkout: true
@@ -2141,7 +2123,7 @@ jobs:
       - name: Checkout pytorch/builder
         uses: malfet/checkout@silent-checkout
         with:
-          ref: main
+          ref: release/2.1
           submodules: recursive
           repository: pytorch/builder
           path: builder
@@ -2190,7 +2172,7 @@ jobs:
       github-token: ${{ secrets.GITHUB_TOKEN }}
       aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
      aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
-      conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
+      conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
     uses: ./.github/workflows/_binary-upload.yml
   conda-py3_11-cpu-build:
     if: ${{ github.repository_owner == 'pytorch' }}
@@ -2252,7 +2234,6 @@ jobs:
       - name: Checkout PyTorch
         uses: malfet/checkout@silent-checkout
         with:
           ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
           submodules: recursive
           path: pytorch
           quiet-checkout: true
@@ -2264,7 +2245,7 @@ jobs:
       - name: Checkout pytorch/builder
         uses: malfet/checkout@silent-checkout
         with:
-          ref: main
+          ref: release/2.1
           submodules: recursive
           repository: pytorch/builder
           path: builder
@@ -2368,7 +2349,6 @@ jobs:
       - name: Checkout PyTorch
         uses: malfet/checkout@silent-checkout
         with:
           ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
           submodules: recursive
           path: pytorch
           quiet-checkout: true
@@ -2380,7 +2360,7 @@ jobs:
       - name: Checkout pytorch/builder
         uses: malfet/checkout@silent-checkout
         with:
-          ref: main
+          ref: release/2.1
           submodules: recursive
           repository: pytorch/builder
           path: builder
@@ -2428,7 +2408,7 @@ jobs:
       github-token: ${{ secrets.GITHUB_TOKEN }}
       aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
       aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
-      conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
+      conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
     uses: ./.github/workflows/_binary-upload.yml
   conda-py3_11-cuda11_8-build:
     if: ${{ github.repository_owner == 'pytorch' }}
@@ -2491,7 +2471,6 @@ jobs:
       - name: Checkout PyTorch
         uses: malfet/checkout@silent-checkout
         with:
           ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
           submodules: recursive
           path: pytorch
           quiet-checkout: true
@@ -2503,7 +2482,7 @@ jobs:
       - name: Checkout pytorch/builder
         uses: malfet/checkout@silent-checkout
         with:
-          ref: main
+          ref: release/2.1
           submodules: recursive
           repository: pytorch/builder
           path: builder
@@ -2608,7 +2587,6 @@ jobs:
       - name: Checkout PyTorch
         uses: malfet/checkout@silent-checkout
         with:
           ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
           submodules: recursive
           path: pytorch
           quiet-checkout: true
@@ -2620,7 +2598,7 @@ jobs:
       - name: Checkout pytorch/builder
         uses: malfet/checkout@silent-checkout
         with:
-          ref: main
+          ref: release/2.1
           submodules: recursive
           repository: pytorch/builder
           path: builder
@@ -2669,7 +2647,7 @@ jobs:
       github-token: ${{ secrets.GITHUB_TOKEN }}
       aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
       aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
-      conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
+      conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
     uses: ./.github/workflows/_binary-upload.yml
   conda-py3_11-cuda12_1-build:
     if: ${{ github.repository_owner == 'pytorch' }}
@@ -2732,7 +2710,6 @@ jobs:
       - name: Checkout PyTorch
         uses: malfet/checkout@silent-checkout
         with:
           ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
           submodules: recursive
           path: pytorch
           quiet-checkout: true
@@ -2744,7 +2721,7 @@ jobs:
       - name: Checkout pytorch/builder
         uses: malfet/checkout@silent-checkout
         with:
-          ref: main
+          ref: release/2.1
           submodules: recursive
           repository: pytorch/builder
           path: builder
@@ -2849,7 +2826,6 @@ jobs:
       - name: Checkout PyTorch
         uses: malfet/checkout@silent-checkout
         with:
           ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
           submodules: recursive
           path: pytorch
           quiet-checkout: true
@@ -2861,7 +2837,7 @@ jobs:
       - name: Checkout pytorch/builder
         uses: malfet/checkout@silent-checkout
         with:
-          ref: main
+          ref: release/2.1
           submodules: recursive
           repository: pytorch/builder
           path: builder
@@ -2910,5 +2886,5 @@ jobs:
       github-token: ${{ secrets.GITHUB_TOKEN }}
       aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
       aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
-      conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
+      conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
     uses: ./.github/workflows/_binary-upload.yml
.github/workflows/generated-windows-binary-libtorch-debug-main.yml  (generated, vendored; 6 changed lines)
@@ -89,7 +89,6 @@ jobs:
       - name: Checkout PyTorch
         uses: malfet/checkout@silent-checkout
         with:
-          ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
           submodules: recursive
           path: pytorch
           quiet-checkout: true
@@ -101,7 +100,7 @@ jobs:
       - name: Checkout pytorch/builder
         uses: malfet/checkout@silent-checkout
         with:
-          ref: main
+          ref: release/2.1
           submodules: recursive
           repository: pytorch/builder
           path: builder
@@ -209,7 +208,6 @@ jobs:
       - name: Checkout PyTorch
         uses: malfet/checkout@silent-checkout
        with:
-          ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
           submodules: recursive
           path: pytorch
           quiet-checkout: true
@@ -221,7 +219,7 @@ jobs:
       - name: Checkout pytorch/builder
         uses: malfet/checkout@silent-checkout
         with:
-          ref: main
+          ref: release/2.1
           submodules: recursive
           repository: pytorch/builder
           path: builder
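The diffs in this comparison apply the same mechanical release-cut rewrite over and over: pin the pytorch/builder checkout to the release branch, switch conda uploads to the test token, and drop the PR-head `ref:` override from the PyTorch checkout. A minimal sketch of such a rewrite is below; the constant, function names, and glob path are illustrative assumptions, not PyTorch's actual release tooling.

```python
import re
from pathlib import Path

# Hypothetical branch name for this release cut (matches the diffs shown here).
RELEASE_BRANCH = "release/2.1"

def apply_release_changes(text: str) -> str:
    """Rewrite one workflow file's text for a release branch (sketch)."""
    # pytorch/builder checkout: track the release branch instead of main.
    text = text.replace("ref: main", f"ref: {RELEASE_BRANCH}")
    # Upload jobs: use the staging (test) conda token.
    # Safe to re-run: "...TOKEN }}" is not a substring of "...TOKEN_TEST }}".
    text = text.replace(
        "secrets.CONDA_PYTORCHBOT_TOKEN }}",
        "secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}",
    )
    # PyTorch checkout: drop the PR-head ref override line entirely.
    text = re.sub(
        r"[ \t]*ref: \$\{\{ github\.event_name == 'pull_request'.*\n",
        "",
        text,
    )
    return text

def rewrite_workflows(root: Path) -> None:
    # Assumed layout: generated workflows live under .github/workflows/.
    for wf in root.glob(".github/workflows/generated-*.yml"):
        wf.write_text(apply_release_changes(wf.read_text()))
```

Because each replacement is idempotent, running the rewrite twice leaves the files unchanged, which makes it safe to re-apply after regenerating the workflows.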
.github/workflows/generated-windows-binary-libtorch-debug-nightly.yml  (generated, vendored; 96 changed lines)

The same three edits repeat for every libtorch Windows debug job
(cpu / cuda11_8 / cuda12_1 × shared / static × with-deps / without-deps,
each with build, test, and upload stages):

@@ (every "Checkout PyTorch" step) @@
       - name: Checkout PyTorch
         uses: malfet/checkout@silent-checkout
         with:
-          ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
           submodules: recursive
           path: pytorch
           quiet-checkout: true
@@ (every "Checkout pytorch/builder" step) @@
       - name: Checkout pytorch/builder
         uses: malfet/checkout@silent-checkout
         with:
-          ref: main
+          ref: release/2.1
           submodules: recursive
           repository: pytorch/builder
           path: builder
@@ (every upload job) @@
           github-token: ${{ secrets.GITHUB_TOKEN }}
           aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
           aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
-          conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
+          conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
     uses: ./.github/workflows/_binary-upload.yml
.github/workflows/generated-windows-binary-libtorch-release-main.yml  (generated, vendored; 6 changed lines)
@@ -89,7 +89,6 @@ jobs:
       - name: Checkout PyTorch
         uses: malfet/checkout@silent-checkout
         with:
-          ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
           submodules: recursive
           path: pytorch
           quiet-checkout: true
@@ -101,7 +100,7 @@ jobs:
       - name: Checkout pytorch/builder
         uses: malfet/checkout@silent-checkout
         with:
-          ref: main
+          ref: release/2.1
           submodules: recursive
           repository: pytorch/builder
           path: builder
@@ -209,7 +208,6 @@ jobs:
       - name: Checkout PyTorch
         uses: malfet/checkout@silent-checkout
         with:
-          ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
           submodules: recursive
           path: pytorch
           quiet-checkout: true
@@ -221,7 +219,7 @@ jobs:
       - name: Checkout pytorch/builder
         uses: malfet/checkout@silent-checkout
         with:
-          ref: main
+          ref: release/2.1
           submodules: recursive
           repository: pytorch/builder
           path: builder
.github/workflows/generated-windows-binary-libtorch-release-nightly.yml  (generated, vendored; 96 changed lines)

The same three edits repeat for every libtorch Windows release job
(cpu / cuda11_8 / cuda12_1 × shared / static × with-deps / without-deps,
each with build, test, and upload stages):

@@ (every "Checkout PyTorch" step) @@
       - name: Checkout PyTorch
         uses: malfet/checkout@silent-checkout
         with:
-          ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
           submodules: recursive
           path: pytorch
           quiet-checkout: true
@@ (every "Checkout pytorch/builder" step) @@
       - name: Checkout pytorch/builder
         uses: malfet/checkout@silent-checkout
         with:
-          ref: main
+          ref: release/2.1
           submodules: recursive
           repository: pytorch/builder
           path: builder
@@ (every upload job) @@
           github-token: ${{ secrets.GITHUB_TOKEN }}
           aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
           aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
-          conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
+          conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
     uses: ./.github/workflows/_binary-upload.yml
||||
repository: pytorch/builder
|
||||
path: builder
|
||||
@ -1724,7 +1711,6 @@ jobs:
|
||||
- name: Checkout PyTorch
|
||||
uses: malfet/checkout@silent-checkout
|
||||
with:
|
||||
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
|
||||
submodules: recursive
|
||||
path: pytorch
|
||||
quiet-checkout: true
|
||||
@ -1736,7 +1722,7 @@ jobs:
|
||||
- name: Checkout pytorch/builder
|
||||
uses: malfet/checkout@silent-checkout
|
||||
with:
|
||||
ref: main
|
||||
ref: release/2.1
|
||||
submodules: recursive
|
||||
repository: pytorch/builder
|
||||
path: builder
|
||||
@ -1789,7 +1775,7 @@ jobs:
|
||||
github-token: ${{ secrets.GITHUB_TOKEN }}
|
||||
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
|
||||
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
|
||||
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
|
||||
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
|
||||
uses: ./.github/workflows/_binary-upload.yml
|
||||
libtorch-cuda11_8-static-without-deps-release-build:
|
||||
if: ${{ github.repository_owner == 'pytorch' }}
|
||||
@ -1856,7 +1842,6 @@ jobs:
|
||||
- name: Checkout PyTorch
|
||||
uses: malfet/checkout@silent-checkout
|
||||
with:
|
||||
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
|
||||
submodules: recursive
|
||||
path: pytorch
|
||||
quiet-checkout: true
|
||||
@ -1868,7 +1853,7 @@ jobs:
|
||||
- name: Checkout pytorch/builder
|
||||
uses: malfet/checkout@silent-checkout
|
||||
with:
|
||||
ref: main
|
||||
ref: release/2.1
|
||||
submodules: recursive
|
||||
repository: pytorch/builder
|
||||
path: builder
|
||||
@ -1977,7 +1962,6 @@ jobs:
|
||||
- name: Checkout PyTorch
|
||||
uses: malfet/checkout@silent-checkout
|
||||
with:
|
||||
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
|
||||
submodules: recursive
|
||||
path: pytorch
|
||||
quiet-checkout: true
|
||||
@ -1989,7 +1973,7 @@ jobs:
|
||||
- name: Checkout pytorch/builder
|
||||
uses: malfet/checkout@silent-checkout
|
||||
with:
|
||||
ref: main
|
||||
ref: release/2.1
|
||||
submodules: recursive
|
||||
repository: pytorch/builder
|
||||
path: builder
|
||||
@ -2042,7 +2026,7 @@ jobs:
|
||||
github-token: ${{ secrets.GITHUB_TOKEN }}
|
||||
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
|
||||
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
|
||||
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
|
||||
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
|
||||
uses: ./.github/workflows/_binary-upload.yml
|
||||
libtorch-cuda12_1-shared-with-deps-release-build:
|
||||
if: ${{ github.repository_owner == 'pytorch' }}
|
||||
@ -2109,7 +2093,6 @@ jobs:
|
||||
- name: Checkout PyTorch
|
||||
uses: malfet/checkout@silent-checkout
|
||||
with:
|
||||
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
|
||||
submodules: recursive
|
||||
path: pytorch
|
||||
quiet-checkout: true
|
||||
@ -2121,7 +2104,7 @@ jobs:
|
||||
- name: Checkout pytorch/builder
|
||||
uses: malfet/checkout@silent-checkout
|
||||
with:
|
||||
ref: main
|
||||
ref: release/2.1
|
||||
submodules: recursive
|
||||
repository: pytorch/builder
|
||||
path: builder
|
||||
@ -2230,7 +2213,6 @@ jobs:
|
||||
- name: Checkout PyTorch
|
||||
uses: malfet/checkout@silent-checkout
|
||||
with:
|
||||
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
|
||||
submodules: recursive
|
||||
path: pytorch
|
||||
quiet-checkout: true
|
||||
@ -2242,7 +2224,7 @@ jobs:
|
||||
- name: Checkout pytorch/builder
|
||||
uses: malfet/checkout@silent-checkout
|
||||
with:
|
||||
ref: main
|
||||
ref: release/2.1
|
||||
submodules: recursive
|
||||
repository: pytorch/builder
|
||||
path: builder
|
||||
@ -2295,7 +2277,7 @@ jobs:
|
||||
github-token: ${{ secrets.GITHUB_TOKEN }}
|
||||
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
|
||||
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
|
||||
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
|
||||
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
|
||||
uses: ./.github/workflows/_binary-upload.yml
|
||||
libtorch-cuda12_1-shared-without-deps-release-build:
|
||||
if: ${{ github.repository_owner == 'pytorch' }}
|
||||
@ -2362,7 +2344,6 @@ jobs:
|
||||
- name: Checkout PyTorch
|
||||
uses: malfet/checkout@silent-checkout
|
||||
with:
|
||||
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
|
||||
submodules: recursive
|
||||
path: pytorch
|
||||
quiet-checkout: true
|
||||
@ -2374,7 +2355,7 @@ jobs:
|
||||
- name: Checkout pytorch/builder
|
||||
uses: malfet/checkout@silent-checkout
|
||||
with:
|
||||
ref: main
|
||||
ref: release/2.1
|
||||
submodules: recursive
|
||||
repository: pytorch/builder
|
||||
path: builder
|
||||
@ -2483,7 +2464,6 @@ jobs:
|
||||
- name: Checkout PyTorch
|
||||
uses: malfet/checkout@silent-checkout
|
||||
with:
|
||||
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
|
||||
submodules: recursive
|
||||
path: pytorch
|
||||
quiet-checkout: true
|
||||
@ -2495,7 +2475,7 @@ jobs:
|
||||
- name: Checkout pytorch/builder
|
||||
uses: malfet/checkout@silent-checkout
|
||||
with:
|
||||
ref: main
|
||||
ref: release/2.1
|
||||
submodules: recursive
|
||||
repository: pytorch/builder
|
||||
path: builder
|
||||
@ -2548,7 +2528,7 @@ jobs:
|
||||
github-token: ${{ secrets.GITHUB_TOKEN }}
|
||||
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
|
||||
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
|
||||
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
|
||||
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
|
||||
uses: ./.github/workflows/_binary-upload.yml
|
||||
libtorch-cuda12_1-static-with-deps-release-build:
|
||||
if: ${{ github.repository_owner == 'pytorch' }}
|
||||
@ -2615,7 +2595,6 @@ jobs:
|
||||
- name: Checkout PyTorch
|
||||
uses: malfet/checkout@silent-checkout
|
||||
with:
|
||||
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
|
||||
submodules: recursive
|
||||
path: pytorch
|
||||
quiet-checkout: true
|
||||
@ -2627,7 +2606,7 @@ jobs:
|
||||
- name: Checkout pytorch/builder
|
||||
uses: malfet/checkout@silent-checkout
|
||||
with:
|
||||
ref: main
|
||||
ref: release/2.1
|
||||
submodules: recursive
|
||||
repository: pytorch/builder
|
||||
path: builder
|
||||
@ -2736,7 +2715,6 @@ jobs:
|
||||
- name: Checkout PyTorch
|
||||
uses: malfet/checkout@silent-checkout
|
||||
with:
|
||||
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
|
||||
submodules: recursive
|
||||
path: pytorch
|
||||
quiet-checkout: true
|
||||
@ -2748,7 +2726,7 @@ jobs:
|
||||
- name: Checkout pytorch/builder
|
||||
uses: malfet/checkout@silent-checkout
|
||||
with:
|
||||
ref: main
|
||||
ref: release/2.1
|
||||
submodules: recursive
|
||||
repository: pytorch/builder
|
||||
path: builder
|
||||
@ -2801,7 +2779,7 @@ jobs:
|
||||
github-token: ${{ secrets.GITHUB_TOKEN }}
|
||||
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
|
||||
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
|
||||
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
|
||||
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
|
||||
uses: ./.github/workflows/_binary-upload.yml
|
||||
libtorch-cuda12_1-static-without-deps-release-build:
|
||||
if: ${{ github.repository_owner == 'pytorch' }}
|
||||
@ -2868,7 +2846,6 @@ jobs:
|
||||
- name: Checkout PyTorch
|
||||
uses: malfet/checkout@silent-checkout
|
||||
with:
|
||||
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
|
||||
submodules: recursive
|
||||
path: pytorch
|
||||
quiet-checkout: true
|
||||
@ -2880,7 +2857,7 @@ jobs:
|
||||
- name: Checkout pytorch/builder
|
||||
uses: malfet/checkout@silent-checkout
|
||||
with:
|
||||
ref: main
|
||||
ref: release/2.1
|
||||
submodules: recursive
|
||||
repository: pytorch/builder
|
||||
path: builder
|
||||
@ -2989,7 +2966,6 @@ jobs:
|
||||
- name: Checkout PyTorch
|
||||
uses: malfet/checkout@silent-checkout
|
||||
with:
|
||||
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
|
||||
submodules: recursive
|
||||
path: pytorch
|
||||
quiet-checkout: true
|
||||
@ -3001,7 +2977,7 @@ jobs:
|
||||
- name: Checkout pytorch/builder
|
||||
uses: malfet/checkout@silent-checkout
|
||||
with:
|
||||
ref: main
|
||||
ref: release/2.1
|
||||
submodules: recursive
|
||||
repository: pytorch/builder
|
||||
path: builder
|
||||
@ -3054,5 +3030,5 @@ jobs:
|
||||
github-token: ${{ secrets.GITHUB_TOKEN }}
|
||||
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
|
||||
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
|
||||
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
|
||||
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
|
||||
uses: ./.github/workflows/_binary-upload.yml
|
||||
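The upload hunks above all make the same two changes: the builder checkout is pinned from `main` to `release/2.1`, and the upload job switches from `CONDA_PYTORCHBOT_TOKEN` to `CONDA_PYTORCHBOT_TOKEN_TEST`. A minimal sketch of how such an upload job reads after the change; the job name and the `secrets:` nesting here are illustrative assumptions, not copied from the generated file:

```yaml
# Hypothetical upload job after the release cut (names assumed):
jobs:
  libtorch-cuda11_8-shared-with-deps-release-upload:
    uses: ./.github/workflows/_binary-upload.yml
    secrets:
      github-token: ${{ secrets.GITHUB_TOKEN }}
      # Release candidates upload with the test token, not the prod one:
      conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
```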
.github/workflows/generated-windows-binary-wheel-nightly.yml (generated, vendored): 96 changes
@@ -92,7 +92,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@@ -104,7 +103,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@@ -208,7 +207,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@@ -220,7 +218,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@@ -268,7 +266,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
wheel-py3_8-cuda11_8-build:
if: ${{ github.repository_owner == 'pytorch' }}
@@ -331,7 +329,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@@ -343,7 +340,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@@ -448,7 +445,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@@ -460,7 +456,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@@ -509,7 +505,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
wheel-py3_8-cuda12_1-build:
if: ${{ github.repository_owner == 'pytorch' }}
@@ -572,7 +568,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@@ -584,7 +579,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@@ -689,7 +684,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@@ -701,7 +695,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@@ -750,7 +744,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
wheel-py3_9-cpu-build:
if: ${{ github.repository_owner == 'pytorch' }}
@@ -812,7 +806,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@@ -824,7 +817,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@@ -928,7 +921,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@@ -940,7 +932,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@@ -988,7 +980,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
wheel-py3_9-cuda11_8-build:
if: ${{ github.repository_owner == 'pytorch' }}
@@ -1051,7 +1043,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@@ -1063,7 +1054,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@@ -1168,7 +1159,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@@ -1180,7 +1170,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@@ -1229,7 +1219,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
wheel-py3_9-cuda12_1-build:
if: ${{ github.repository_owner == 'pytorch' }}
@@ -1292,7 +1282,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@@ -1304,7 +1293,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@@ -1409,7 +1398,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@@ -1421,7 +1409,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@@ -1470,7 +1458,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
wheel-py3_10-cpu-build:
if: ${{ github.repository_owner == 'pytorch' }}
@@ -1532,7 +1520,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@@ -1544,7 +1531,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@@ -1648,7 +1635,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@@ -1660,7 +1646,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@@ -1708,7 +1694,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
wheel-py3_10-cuda11_8-build:
if: ${{ github.repository_owner == 'pytorch' }}
@@ -1771,7 +1757,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@@ -1783,7 +1768,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@@ -1888,7 +1873,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@@ -1900,7 +1884,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@@ -1949,7 +1933,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
wheel-py3_10-cuda12_1-build:
if: ${{ github.repository_owner == 'pytorch' }}
@@ -2012,7 +1996,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@@ -2024,7 +2007,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@@ -2129,7 +2112,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@@ -2141,7 +2123,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@@ -2190,7 +2172,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
wheel-py3_11-cpu-build:
if: ${{ github.repository_owner == 'pytorch' }}
@@ -2252,7 +2234,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@@ -2264,7 +2245,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@@ -2368,7 +2349,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@@ -2380,7 +2360,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@@ -2428,7 +2408,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
wheel-py3_11-cuda11_8-build:
if: ${{ github.repository_owner == 'pytorch' }}
@@ -2491,7 +2471,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@@ -2503,7 +2482,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@@ -2608,7 +2587,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@@ -2620,7 +2598,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@@ -2669,7 +2647,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
wheel-py3_11-cuda12_1-build:
if: ${{ github.repository_owner == 'pytorch' }}
@@ -2732,7 +2710,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@@ -2744,7 +2721,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@@ -2849,7 +2826,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@@ -2861,7 +2837,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@@ -2910,5 +2886,5 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
|
2 .github/workflows/lint-bc.yml vendored
@@ -26,7 +26,7 @@ jobs:
    runs-on: ubuntu-latest
    steps:
      - name: Run BC Lint Action
-       uses: pytorch/test-infra/.github/actions/bc-lint@main
+       uses: pytorch/test-infra/.github/actions/bc-lint@release/2.1
        with:
          repo: ${{ github.event.pull_request.head.repo.full_name }}
          base_sha: ${{ github.event.pull_request.base.sha }}
16 .github/workflows/lint.yml vendored
@@ -15,7 +15,7 @@ on:
# When any other step fails, it's job will be retried once by retryBot.
jobs:
  lintrunner:
-   uses: pytorch/test-infra/.github/workflows/linux_job.yml@main
+   uses: pytorch/test-infra/.github/workflows/linux_job.yml@release/2.1
    with:
      runner: linux.2xlarge
      docker-image: pytorch-linux-focal-linter
@@ -62,7 +62,7 @@ jobs:
      exit $RC

  quick-checks:
-   uses: pytorch/test-infra/.github/workflows/linux_job.yml@main
+   uses: pytorch/test-infra/.github/workflows/linux_job.yml@release/2.1
    with:
      runner: linux.2xlarge
      docker-image: pytorch-linux-focal-linter
@@ -103,7 +103,7 @@ jobs:
    if: github.event_name == 'pull_request' && !contains(github.event.pull_request.labels.*.name, 'skip-pr-sanity-checks')
    steps:
      - name: Checkout PyTorch
-       uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
+       uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.1
        with:
          submodules: false
          fetch-depth: -1
@@ -116,7 +116,7 @@ jobs:
      bash .github/scripts/pr-sanity-check.sh

  workflow-checks:
-   uses: pytorch/test-infra/.github/workflows/linux_job.yml@main
+   uses: pytorch/test-infra/.github/workflows/linux_job.yml@release/2.1
    with:
      runner: linux.2xlarge
      docker-image: pytorch-linux-focal-linter
@@ -151,7 +151,7 @@ jobs:
      exit $RC

  toc:
-   uses: pytorch/test-infra/.github/workflows/linux_job.yml@main
+   uses: pytorch/test-infra/.github/workflows/linux_job.yml@release/2.1
    with:
      runner: linux.2xlarge
      docker-image: pytorch-linux-focal-linter
@@ -189,7 +189,7 @@ jobs:
  test-tools:
    name: Test tools
    if: ${{ github.repository == 'pytorch/pytorch' }}
-   uses: pytorch/test-infra/.github/workflows/linux_job.yml@main
+   uses: pytorch/test-infra/.github/workflows/linux_job.yml@release/2.1
    with:
      runner: linux.2xlarge
      docker-image: pytorch-linux-focal-linter
@@ -210,7 +210,7 @@ jobs:
    runs-on: linux.20_04.4x
    steps:
      - name: Checkout PyTorch
-       uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
+       uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.1
        with:
          submodules: false
          fetch-depth: 1
@@ -240,7 +240,7 @@ jobs:
      # [see note: pytorch repo ref]
      # deep clone (fetch-depth 0) required, to allow us to use git log
      - name: Checkout PyTorch
-       uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
+       uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.1
        with:
          submodules: false
          fetch-depth: 1
@@ -21,7 +21,7 @@ jobs:
    environment: upload-stats
    steps:
      - name: Checkout PyTorch
-       uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
+       uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.1
        with:
          fetch-depth: 1
          submodules: false
46 .github/workflows/periodic.yml vendored
@@ -112,30 +112,38 @@ jobs:
      cuda-version: "11.8"
      test-matrix: ${{ needs.win-vs2019-cuda11_8-py3-build.outputs.test-matrix }}

-  ios-12-5-1-x86-64-coreml:
-    name: ios-12-5-1-x86-64-coreml
+  # TODO: Figure out how to migrate this job to M1 runner
+  ios-build-test:
+    name: ios-build-test
     if: github.event_name != 'schedule' || github.event.schedule == '45 0,8,16 * * 1-5' || github.event.schedule == '45 4 * * 0,6'
     uses: ./.github/workflows/_ios-build-test.yml
     with:
-      build-environment: ios-12-5-1-x86-64-coreml
-      ios-platform: SIMULATOR
-      ios-arch: x86_64
+      build-environment: ios-build-test
+      sync-tag: ios-build-test
       test-matrix: |
         { include: [
-          { config: "default", shard: 1, num_shards: 1, runner: "macos-12" },
-        ]}
-
-  ios-12-5-1-arm64-custom-ops:
-    name: ios-12-5-1-arm64-custom-ops
-    if: github.event_name != 'schedule' || github.event.schedule == '45 0,8,16 * * 1-5' || github.event.schedule == '45 4 * * 0,6'
-    uses: ./.github/workflows/_ios-build-test.yml
-    with:
-      build-environment: ios-12-5-1-arm64-custom-ops
-      ios-platform: OS
-      ios-arch: arm64
-      test-matrix: |
-        { include: [
-          { config: "default", shard: 1, num_shards: 1, runner: "macos-12" },
+          { config: "default",
+            shard: 1,
+            num_shards: 1,
+            runner: "macos-12",
+            ios_platform: "SIMULATOR",
+            ios_arch: "x86_64",
+            use_lite_interpreter: 1,
+            use_metal: 0,
+            use_coreml: 1,
+            use_custom_op_list: ""
+          },
+          { config: "default",
+            shard: 1,
+            num_shards: 1,
+            runner: "macos-12",
+            ios_platform: "OS",
+            ios_arch: "arm64",
+            use_lite_interpreter: 1,
+            use_metal: 1,
+            use_coreml: 1,
+            use_custom_op_list: "mobilenetv2.yaml"
+          }
         ]}

  buck-build-test:
2 .github/workflows/update_pytorch_labels.yml vendored
@@ -14,7 +14,7 @@ jobs:
    if: ${{ github.repository == 'pytorch/pytorch' }}
    steps:
      - name: Checkout PyTorch
-       uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
+       uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.1
        with:
          fetch-depth: 1
          submodules: false
2 .github/workflows/upload-alerts.yml vendored
@@ -44,7 +44,7 @@ jobs:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
-       uses: pytorch/test-infra/.github/actions/upload-alerts@main
+       uses: pytorch/test-infra/.github/actions/upload-alerts@release/2.1
        with:
          alerts: '${{ steps.alert_creation_step.outputs.script-output }}'
          organization: "pytorch"
2 .github/workflows/upload-test-stats.yml vendored
@@ -37,7 +37,7 @@ jobs:
        run: echo "${TRIGGERING_WORKFLOW}"

      - name: Checkout PyTorch
-       uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
+       uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.1

      - uses: actions/setup-python@v4
        with:
@@ -29,7 +29,7 @@ jobs:
    name: Upload dynamo performance stats for ${{ github.event.workflow_run.id }}, attempt ${{ github.event.workflow_run.run_attempt }}
    steps:
      - name: Checkout PyTorch
-       uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
+       uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.1
        with:
          submodules: false
          fetch-depth: 1
@@ -73,8 +73,8 @@ ARG TARGETPLATFORM

 # On arm64 we can only install wheel packages.
 RUN case ${TARGETPLATFORM} in \
-        "linux/arm64") pip install --extra-index-url https://download.pytorch.org/whl/cpu/ torch torchvision torchaudio torchtext ;; \
-        *) /opt/conda/bin/conda install -c "${INSTALL_CHANNEL}" -c "${CUDA_CHANNEL}" -y "python=${PYTHON_VERSION}" pytorch torchvision torchaudio torchtext "pytorch-cuda=$(echo $CUDA_VERSION | cut -d'.' -f 1-2)" ;; \
+        "linux/arm64") pip install --extra-index-url https://download.pytorch.org/whl/cpu/ torch torchvision torchaudio ;; \
+        *) /opt/conda/bin/conda install -c "${INSTALL_CHANNEL}" -c "${CUDA_CHANNEL}" -y "python=${PYTHON_VERSION}" pytorch torchvision torchaudio "pytorch-cuda=$(echo $CUDA_VERSION | cut -d'.' -f 1-2)" ;; \
     esac && \
     /opt/conda/bin/conda clean -ya
 RUN /opt/conda/bin/pip install torchelastic
@@ -125,6 +125,7 @@ file(GLOB native_ao_sparse_h
   "native/ao_sparse/quantized/cpu/*.h")
 file(GLOB native_quantized_h "native/quantized/*.h" "native/quantized/cpu/*.h" "native/quantized/cudnn/*.h")
 file(GLOB native_cpu_h "native/cpu/*.h")
+file(GLOB native_utils_h "native/utils/*.h")

 file(GLOB native_cuda_cu "native/cuda/*.cu")
 file(GLOB native_cuda_cpp "native/cuda/*.cpp")
@@ -540,7 +541,7 @@ install(FILES "${CMAKE_CURRENT_BINARY_DIR}/cmake-exports/ATenConfig.cmake"

 set(INSTALL_HEADERS ${base_h} ${ATen_CORE_HEADERS})
 if(NOT INTERN_BUILD_MOBILE)
-  list(APPEND INSTALL_HEADERS ${native_h} ${native_cpu_h} ${native_ao_sparse_h} ${native_quantized_h} ${cuda_h} ${native_cuda_h} ${native_hip_h} ${cudnn_h} ${hip_h} ${mps_h} ${native_mps_h} ${miopen_h})
+  list(APPEND INSTALL_HEADERS ${native_h} ${native_cpu_h} ${native_ao_sparse_h} ${native_quantized_h} ${cuda_h} ${native_cuda_h} ${native_hip_h} ${cudnn_h} ${hip_h} ${mps_h} ${native_mps_h} ${native_utils_h} ${miopen_h})
   # Metal
   if(USE_PYTORCH_METAL_EXPORT)
     # Add files needed from exporting metal models(optimized_for_mobile)
@@ -371,6 +371,22 @@ inline void deprecated_AT_DISPATCH_ALL_TYPES_AND_HALF_AND_COMPLEX() {}
       AT_DISPATCH_CASE_FLOATING_AND_COMPLEX_TYPES_AND3(                \
           SCALARTYPE1, SCALARTYPE2, SCALARTYPE3, __VA_ARGS__))

+#define AT_DISPATCH_CASE_FLOATING_AND_COMPLEX_TYPES_AND4(              \
+    SCALARTYPE1, SCALARTYPE2, SCALARTYPE3, SCALARTYPE4, ...)           \
+  AT_DISPATCH_CASE_FLOATING_AND_COMPLEX_TYPES(__VA_ARGS__)             \
+  AT_DISPATCH_CASE(SCALARTYPE1, __VA_ARGS__)                           \
+  AT_DISPATCH_CASE(SCALARTYPE2, __VA_ARGS__)                           \
+  AT_DISPATCH_CASE(SCALARTYPE3, __VA_ARGS__)                           \
+  AT_DISPATCH_CASE(SCALARTYPE4, __VA_ARGS__)
+
+#define AT_DISPATCH_FLOATING_AND_COMPLEX_TYPES_AND4(                   \
+    SCALARTYPE1, SCALARTYPE2, SCALARTYPE3, SCALARTYPE4, TYPE, NAME, ...) \
+  AT_DISPATCH_SWITCH(                                                  \
+      TYPE,                                                            \
+      NAME,                                                            \
+      AT_DISPATCH_CASE_FLOATING_AND_COMPLEX_TYPES_AND4(                \
+          SCALARTYPE1, SCALARTYPE2, SCALARTYPE3, SCALARTYPE4, __VA_ARGS__))

 #define AT_DISPATCH_CASE_INTEGRAL_TYPES(...)          \
   AT_DISPATCH_CASE(at::ScalarType::Byte, __VA_ARGS__) \
   AT_DISPATCH_CASE(at::ScalarType::Char, __VA_ARGS__) \
@@ -389,8 +389,9 @@ static inline bool mkldnn_conv_use_channels_last(const at::Tensor& input, const
       (input_memory_format  == at::MemoryFormat::ChannelsLast) ||
       (weight_memory_format == at::MemoryFormat::ChannelsLast);

-  // TODO: add channels last 3d support
-  bool can_use_mkldnn_channels_last_3d = false;
+  bool can_use_mkldnn_channels_last_3d =
+      (input_memory_format  == at::MemoryFormat::ChannelsLast3d) ||
+      (weight_memory_format == at::MemoryFormat::ChannelsLast3d);

   return can_use_mkldnn_channels_last_2d || can_use_mkldnn_channels_last_3d;
 }
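For context on the layouts this hunk enables: a small Python sketch (shapes are illustrative, and it assumes a PyTorch build with channels-last support) showing how 4-d tensors carry the `channels_last` (NHWC) layout and 5-d tensors the `channels_last_3d` (NDHWC) layout that the function above now also detects:

```python
import torch

# 4-d activations: NHWC layout, already handled before this change
x2d = torch.randn(1, 3, 4, 4).to(memory_format=torch.channels_last)

# 5-d activations: NDHWC layout, the case this change starts detecting
x3d = torch.randn(1, 3, 2, 4, 4).to(memory_format=torch.channels_last_3d)

print(x2d.is_contiguous(memory_format=torch.channels_last))      # layout check
print(x3d.is_contiguous(memory_format=torch.channels_last_3d))
```

Inputs suggested this way let the mkldnn convolution path pick the channels-last kernels instead of falling back to contiguous (NCHW/NCDHW) ones.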
@@ -508,9 +508,6 @@ struct ConvParams {
     if (transposed && is_output_padding_big()) {
       return false;
     }
-    if (transposed && groups > 1 && at::symint::size<T>(input, 1) == groups) {
-      return false;
-    }
     if (input.device().is_cpu() && input.scalar_type() == kBFloat16 && mkldnn_bf16_device_check()) {
       return true;
     }
@@ -253,7 +253,9 @@ static Tensor & copy_impl(Tensor & self, const Tensor & src, bool non_blocking)
       self.storage_offset() == src.storage_offset() &&
       self.strides().equals(src.strides()) &&
       self.sizes().equals(src.sizes()) &&
-      self.scalar_type() == src.scalar_type()
+      self.scalar_type() == src.scalar_type() &&
+      self.is_conj() == src.is_conj() &&
+      self.is_neg() == src.is_neg()
   );
   if (is_same_data) {
     return self;
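The hunk above tightens the `copy_impl` same-data fast path to also compare the conjugate and negation bits. A short Python sketch (assuming a recent PyTorch build) of why that matters: a lazily conjugated view shares storage, strides, and dtype with its base, so without the bit check a copy between them could be skipped even though their values differ:

```python
import torch

base = torch.tensor([1 + 2j, 3 - 4j])
conj_view = base.conj()      # lazy conjugation: same storage, conj bit set
print(conj_view.is_conj())   # the bit, not the data, records the conjugation

out = torch.empty_like(base)
out.copy_(conj_view)         # must materialize the conjugate, not alias base
```

After the copy, `out` holds the materialized conjugate values and carries no conj bit of its own.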
@@ -727,7 +727,7 @@ Tensor _mkldnn_convolution_transpose(

   if (bias.defined()) {
     const ideep::tensor b = itensor_from_tensor(bias);
-    ideep::convolution_transpose_forward::compute(
+    ideep::convolution_transpose_forward::compute_v3(
         x,
         w,
         b,
@@ -738,9 +738,10 @@ Tensor _mkldnn_convolution_transpose(
         padding_r(padding_expanded, output_padding_expanded),
         dilation.vec(),
         groups,
+        use_channels_last,
         op_attr);
   } else {
-    ideep::convolution_transpose_forward::compute(
+    ideep::convolution_transpose_forward::compute_v3(
         x,
         w,
         output_sizes,
@@ -750,6 +751,7 @@ Tensor _mkldnn_convolution_transpose(
         padding_r(padding_expanded, output_padding_expanded),
         dilation.vec(),
         groups,
+        use_channels_last,
         op_attr);
   }
   if (input.is_mkldnn()) {
@@ -988,7 +990,7 @@ Tensor mkldnn_convolution_transpose_backward_input(
     grad_input.resize_(input_size, memory_format);
     grad_x = itensor_from_tensor(grad_input);
   }
-  ideep::convolution_transpose_backward_data::compute(
+  ideep::convolution_transpose_backward_data::compute_v3(
       grad_y,
       w,
       input_size.vec(),
@@ -997,7 +999,8 @@ Tensor mkldnn_convolution_transpose_backward_input(
       padding.vec(),
       padding_r(padding, output_padding),
       dilation.vec(),
-      groups);
+      groups,
+      is_channels_last);

   if (grad_output.is_mkldnn()) {
     return MKLDNNTensor(grad_x, grad_output.options());
@@ -1024,7 +1027,7 @@ std::tuple<Tensor,Tensor> mkldnn_convolution_transpose_backward_weights(

   ideep::tensor grad_w, grad_b;
   if (bias_defined) {
-    ideep::convolution_transpose_backward_weights::compute(
+    ideep::convolution_transpose_backward_weights::compute_v3(
         x,
         grad_y,
         weight_size.vec(),
@@ -1034,9 +1037,10 @@ std::tuple<Tensor,Tensor> mkldnn_convolution_transpose_backward_weights(
         padding.vec(),
         padding_r(padding, output_padding),
         dilation.vec(),
-        groups);
+        groups,
+        is_channels_last);
   } else {
-    ideep::convolution_transpose_backward_weights::compute(
+    ideep::convolution_transpose_backward_weights::compute_v3(
         x,
         grad_y,
         weight_size.vec(),
@@ -1045,7 +1049,8 @@ std::tuple<Tensor,Tensor> mkldnn_convolution_transpose_backward_weights(
         padding.vec(),
         padding_r(padding, output_padding),
         dilation.vec(),
-        groups);
+        groups,
+        is_channels_last);
   }

   if (!is_channels_last) {
@@ -1061,18 +1066,21 @@ std::tuple<Tensor,Tensor> mkldnn_convolution_transpose_backward_weights(
 }

 std::tuple<Tensor, Tensor, Tensor> mkldnn_convolution_transpose_backward(
-    const Tensor& input, const Tensor& grad_output_t, const Tensor& weight,
+    const Tensor& input_t, const Tensor& grad_output_t, const Tensor& weight_t,
     IntArrayRef padding, IntArrayRef output_padding, IntArrayRef stride, IntArrayRef dilation, int64_t groups,
     std::array<bool,3> output_mask)
 {
-  bool is_channels_last = mkldnn_conv_use_channels_last(input, weight);
-  auto memory_format = mkldnn_convolution_memory_format(input.ndimension(), is_channels_last);
+  bool is_channels_last = mkldnn_conv_use_channels_last(input_t, weight_t);
+  auto memory_format = mkldnn_convolution_memory_format(input_t.ndimension(), is_channels_last);
   Tensor grad_output = grad_output_t.is_mkldnn() ? grad_output_t : grad_output_t.contiguous(memory_format);
+  auto input = input_t.is_mkldnn() ? input_t : input_t.contiguous(memory_format);
+  auto weight = weight_t.is_mkldnn() ? weight_t : weight_t.contiguous(memory_format);
   int64_t dim = input.ndimension() - 2;
   const auto padding_expanded = expand_param_if_needed(padding, "padding", dim);
   const auto stride_expanded = expand_param_if_needed(stride, "stride", dim);
   const auto dilation_expanded = expand_param_if_needed(dilation, "dilation", dim);
   const auto output_padding_expanded = expand_param_if_needed(output_padding, "output_padding", dim);

   Tensor grad_input, grad_weight, grad_bias;
   if (output_mask[0]) {
     grad_input = mkldnn_convolution_transpose_backward_input(
@@ -293,7 +293,8 @@ at::Tensor& mps_copy_(at::Tensor& dst, const at::Tensor& src, bool non_blocking)
     dst.resize_as_(src);
   }

-  TORCH_CHECK(dst.dim() >= src.dim());
+  TORCH_CHECK(
+      dst.dim() >= src.dim(), "Destination ", dst.sym_sizes(), " doesn't match the broadcast shape ", src.sym_sizes());
   if (dst.dim() > src.dim()) {
     needs_broadcasting = true;
   } else {
@@ -16,15 +16,14 @@ namespace at::native {
 Scalar _local_scalar_dense_mps(const Tensor& self) {
   Scalar r;

-  auto output = at::empty_like(self, TensorOptions(kCPU));
-  mps::mps_copy_(output, self, false);
   AT_DISPATCH_ALL_TYPES_AND_COMPLEX_AND3(at::ScalarType::Half,
                                          at::ScalarType::Bool,
                                          at::ScalarType::BFloat16,
                                          self.scalar_type(),
                                          "_local_scalar_dense_mps",
                                          [&] {
+                                           Tensor output = at::empty({1}, TensorOptions(at::CPU(self.scalar_type())));
+
+                                           mps::mps_copy_(output, self, false);
                                            scalar_t value = *output.data_ptr<scalar_t>();
                                            r = Scalar(value);
                                          });
@@ -53,9 +53,9 @@ set(CMAKE_RANLIB ranlib CACHE FILEPATH "" FORCE)
 set(PKG_CONFIG_EXECUTABLE pkg-config CACHE FILEPATH "" FORCE)

 # Setup iOS platform unless specified manually with IOS_PLATFORM
-if(NOT DEFINED IOS_PLATFORM)
+if(NOT IOS_PLATFORM)
   set(IOS_PLATFORM "OS")
-endif(NOT DEFINED IOS_PLATFORM)
+endif(NOT IOS_PLATFORM)
 set(IOS_PLATFORM ${IOS_PLATFORM} CACHE STRING "Type of iOS Platform")

 # Check the platform selection and setup for developer root
@@ -118,9 +118,9 @@ set(CMAKE_FIND_LIBRARY_SUFFIXES ".dylib" ".so" ".a")
 # (where install_name_tool was hardcoded) and where CMAKE_INSTALL_NAME_TOOL isn't in the cache
 # and still cmake didn't fail in CMakeFindBinUtils.cmake (because it isn't rerun)
 # hardcode CMAKE_INSTALL_NAME_TOOL here to install_name_tool, so it behaves as it did before, Alex
-if(NOT DEFINED CMAKE_INSTALL_NAME_TOOL)
+if(NOT CMAKE_INSTALL_NAME_TOOL)
   find_program(CMAKE_INSTALL_NAME_TOOL install_name_tool)
-endif(NOT DEFINED CMAKE_INSTALL_NAME_TOOL)
+endif(NOT CMAKE_INSTALL_NAME_TOOL)

 # Setup iOS deployment target
 set(IOS_DEPLOYMENT_TARGET ${IOS_DEPLOYMENT_TARGET} CACHE STRING "Minimum iOS version")
@@ -130,17 +130,17 @@ set(IOS_DEPLOYMENT_TARGET ${IOS_DEPLOYMENT_TARGET} CACHE STRING "Minimum iOS ver
 exec_program(/usr/bin/xcode-select ARGS -print-path OUTPUT_VARIABLE CMAKE_XCODE_DEVELOPER_DIR)
 set(XCODE_POST_43_ROOT "${CMAKE_XCODE_DEVELOPER_DIR}/Platforms/${IOS_PLATFORM_LOCATION}/Developer")
 set(XCODE_PRE_43_ROOT "/Developer/Platforms/${IOS_PLATFORM_LOCATION}/Developer")
-if(NOT DEFINED CMAKE_IOS_DEVELOPER_ROOT)
+if(NOT CMAKE_IOS_DEVELOPER_ROOT)
   if(EXISTS ${XCODE_POST_43_ROOT})
     set(CMAKE_IOS_DEVELOPER_ROOT ${XCODE_POST_43_ROOT})
   elseif(EXISTS ${XCODE_PRE_43_ROOT})
     set(CMAKE_IOS_DEVELOPER_ROOT ${XCODE_PRE_43_ROOT})
   endif(EXISTS ${XCODE_POST_43_ROOT})
-endif(NOT DEFINED CMAKE_IOS_DEVELOPER_ROOT)
+endif(NOT CMAKE_IOS_DEVELOPER_ROOT)
 set(CMAKE_IOS_DEVELOPER_ROOT ${CMAKE_IOS_DEVELOPER_ROOT} CACHE PATH "Location of iOS Platform")

 # Find and use the most recent iOS sdk unless specified manually with CMAKE_IOS_SDK_ROOT
-if(NOT DEFINED CMAKE_IOS_SDK_ROOT)
+if(NOT CMAKE_IOS_SDK_ROOT)
   file(GLOB _CMAKE_IOS_SDKS "${CMAKE_IOS_DEVELOPER_ROOT}/SDKs/*")
   if(_CMAKE_IOS_SDKS)
     list(SORT _CMAKE_IOS_SDKS)
@@ -150,7 +150,7 @@ if(NOT DEFINED CMAKE_IOS_SDK_ROOT)
     message(FATAL_ERROR "No iOS SDK's found in default search path ${CMAKE_IOS_DEVELOPER_ROOT}. Manually set CMAKE_IOS_SDK_ROOT or install the iOS SDK.")
   endif(_CMAKE_IOS_SDKS)
   message(STATUS "Toolchain using default iOS SDK: ${CMAKE_IOS_SDK_ROOT}")
-endif(NOT DEFINED CMAKE_IOS_SDK_ROOT)
+endif(NOT CMAKE_IOS_SDK_ROOT)
 set(CMAKE_IOS_SDK_ROOT ${CMAKE_IOS_SDK_ROOT} CACHE PATH "Location of the selected iOS SDK")

 # Set the sysroot default to the most recent SDK
@@ -18,8 +18,8 @@ figures:
	@$(PYCMD) source/scripts/build_quantization_configs.py

 onnx:
-	@$(PYCMD) source/scripts/onnx/build_onnx_supported_aten_op_csv_table.py
-	@$(PYCMD) source/scripts/onnx/build_onnx_diagnostics_rules_md.py $(SOURCEDIR)/generated/onnx_diagnostics_rules
+	@$(PYCMD) source/scripts/onnx/build_onnx_torchscript_supported_aten_op_csv_table.py
+	@$(PYCMD) source/scripts/onnx/build_onnx_dynamo_diagnostics_rules_md.py $(SOURCEDIR)/generated/onnx_dynamo_diagnostics_rules

 opset:
	@$(PYCMD) source/scripts/build_opsets.py
BIN docs/source/_static/img/onnx/onnx_dynamo_mlp_model.png (new file, 36 KiB; binary file not shown)
BIN (new image file, 11 KiB; binary file not shown)
BIN (new image file, 5.7 KiB; binary file not shown)
@@ -179,6 +179,7 @@ Tensor autograd functions
     torch.Tensor.detach
     torch.Tensor.detach_
     torch.Tensor.register_hook
+    torch.Tensor.register_post_accumulate_grad_hook
     torch.Tensor.retain_grad

 :hidden:`Function`
@@ -1,10 +1,551 @@
.. _torch.export:

torch.export
=====================

-.. TODO: Add torch.export() tutorial here.
-
.. warning::
-    This feature is a prototype and may have compatibility breaking changes in the future.
+    This feature is a prototype under active development and there WILL BE
+    BREAKING CHANGES in the future.


Overview
--------

:func:`torch.export.export` takes an arbitrary Python callable (a
:class:`torch.nn.Module`, a function or a method) and produces a traced graph
representing only the Tensor computation of the function in an Ahead-of-Time
(AOT) fashion, which can subsequently be executed with different outputs or
serialized.

::

    import torch
    from torch.export import export

    def f(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
        a = torch.sin(x)
        b = torch.cos(y)
        return a + b

    example_args = (torch.randn(10, 10), torch.randn(10, 10))

    exported_program: torch.export.ExportedProgram = export(
        f, args=example_args
    )
    print(exported_program)

.. code-block::

    ExportedProgram:
        class GraphModule(torch.nn.Module):
            def forward(self, arg0_1: f32[10, 10], arg1_1: f32[10, 10]):
                # code: a = torch.sin(x)
                sin: f32[10, 10] = torch.ops.aten.sin.default(arg0_1);

                # code: b = torch.cos(y)
                cos: f32[10, 10] = torch.ops.aten.cos.default(arg1_1);

                # code: return a + b
                add: f32[10, 10] = torch.ops.aten.add.Tensor(sin, cos);
                return (add,)

    Graph signature: ExportGraphSignature(
        parameters=[],
        buffers=[],
        user_inputs=['arg0_1', 'arg1_1'],
        user_outputs=['add'],
        inputs_to_parameters={},
        inputs_to_buffers={},
        buffers_to_mutate={},
        backward_signature=None,
        assertion_dep_token=None,
    )
    Range constraints: {}
    Equality constraints: []
|
||||
``torch.export`` produces a clean intermediate representation (IR) with the
|
||||
following invariants. More specifications about the IR can be found here (coming
|
||||
soon!).
|
||||
|
||||
* **Soundness**: It is guaranteed to be a sound representation of the original
|
||||
program, and maintains the same calling conventions of the original program.
|
||||
|
||||
* **Normalized**: There are no Python semantics within the graph. Submodules
|
||||
from the original programs are inlined to form one fully flattened
|
||||
computational graph.
|
||||
|
||||
* **Defined Operator Set**: The graph produced contains only a small defined
|
||||
:ref:`Core ATen IR <torch.compiler_ir>` opset and registered custom
|
||||
operators.
|
||||
|
||||
* **Graph properties**: The graph is purely functional, meaning it does not
|
||||
contain operations with side effects such as mutations or aliasing. It does
|
||||
not mutate any intermediate values, parameters, or buffers.
|
||||
|
||||
* **Metadata**: The graph contains metadata captured during tracing, such as a
|
||||
stacktrace from user's code.
|
||||
|
||||
Under the hood, ``torch.export`` leverages the following latest technologies:
|
||||
|
||||
* **TorchDynamo (torch._dynamo)** is an internal API that uses a CPython feature
|
||||
called the Frame Evaluation API to safely trace PyTorch graphs. This
|
||||
provides a massively improved graph capturing experience, with much fewer
|
||||
rewrites needed in order to fully trace the PyTorch code.
|
||||
|
||||
* **AOT Autograd** provides a functionalized PyTorch graph and ensures the graph
|
||||
is decomposed/lowered to the small defined Core ATen operator set.
|
||||
|
||||
* **Torch FX (torch.fx)** is the underlying representation of the graph,
|
||||
allowing flexible Python-based transformations.
|
||||
|
||||
|
||||
Existing frameworks
|
||||
^^^^^^^^^^^^^^^^^^^
|
||||
|
||||
:func:`torch.compile` also utilizes the same PT2 stack as ``torch.export``, but
|
||||
is slightly different:
|
||||
|
||||
* **JIT vs. AOT**: :func:`torch.compile` is a JIT compiler whereas
|
||||
which is not intended to be used to produce compiled artifacts outside of
|
||||
deployment.
|
||||
|
||||
* **Partial vs. Full Graph Capture**: When :func:`torch.compile` runs into an
|
||||
untraceable part of a model, it will "graph break" and fall back to running
|
||||
the program in the eager Python runtime. In comparison, ``torch.export`` aims
|
||||
to get a full graph representation of a PyTorch model, so it will error out
|
||||
when something untraceable is reached. Since ``torch.export`` produces a full
|
||||
graph disjoint from any Python features or runtime, this graph can then be
|
||||
saved, loaded, and run in different environments and languages.
|
||||
|
||||
* **Usability tradeoff**: Since :func:`torch.compile` is able to fallback to the
|
||||
Python runtime whenever it reaches something untraceable, it is a lot more
|
||||
flexible. ``torch.export`` will instead require users to provide more
|
||||
information or rewrite their code to make it traceable.
|
||||
|
||||
Compared to :func:`torch.fx.symbolic_trace`, ``torch.export`` traces using
|
||||
TorchDynamo which operates at the Python bytecode level, giving it the ability
|
||||
to trace arbitrary Python constructs not limited by what Python operator
|
||||
overloading supports. Additionally, ``torch.export`` keeps fine-grained track of
|
||||
tensor metadata, so that conditionals on things like tensor shapes do not
|
||||
fail tracing. In general, ``torch.export`` is expected to work on more user
|
||||
programs, and produce lower-level graphs (at the ``torch.ops.aten`` operator
|
||||
level). Note that users can still use :func:`torch.fx.symbolic_trace` as a
|
||||
preprocessing step before ``torch.export``.
|
||||
|
||||
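As a rough illustration of the preprocessing idea above (a hedged sketch assuming a recent PyTorch install; the module is made up for the example), :func:`torch.fx.symbolic_trace` captures a module at the Python operator-overloading level before it would be handed to ``torch.export``:

```python
import torch
from torch.fx import symbolic_trace

class AddMul(torch.nn.Module):
    def forward(self, x, y):
        # captured via Python operator overloading, no bytecode analysis
        return (x + y) * 2

gm = symbolic_trace(AddMul())   # a torch.fx.GraphModule
out = gm(torch.ones(3), torch.ones(3))
```

The resulting ``GraphModule`` is itself an ``nn.Module`` and can be passed on to ``torch.export.export`` like any other callable.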
Compared to :func:`torch.jit.script`, ``torch.export`` does not capture Python
|
||||
control flow or data structures, but it supports more Python language features
|
||||
than TorchScript (as it is easier to have comprehensive coverage over Python
|
||||
bytecodes). The resulting graphs are simpler and only have straight line control
|
||||
flow (except for explicit control flow operators).
|
||||
|
||||
Compared to :func:`torch.jit.trace`, ``torch.export`` is sound: it is able to
|
||||
trace code that performs integer computation on sizes and records all of the
|
||||
side-conditions necessary to show that a particular trace is valid for other
|
||||
inputs.
|
||||
|
||||
|
||||
Exporting a PyTorch Model
|
||||
-------------------------
|
||||
|
||||
An Example
|
||||
^^^^^^^^^^
|
||||
|
||||
The main entrypoint is through :func:`torch.export.export`, which takes a
|
||||
callable (:class:`torch.nn.Module`, function, or method) and sample inputs, and
|
||||
captures the computation graph into an :class:`torch.export.ExportedProgram`. An
|
||||
example:
|
||||
|
||||
::
|
||||
|
||||
import torch
|
||||
from torch.export import export
|
||||
|
||||
# Simple module for demonstration
|
||||
class M(torch.nn.Module):
|
||||
def __init__(self) -> None:
|
||||
super().__init__()
|
||||
self.conv = torch.nn.Conv2d(
|
||||
in_channels=3, out_channels=16, kernel_size=3, padding=1
|
||||
)
|
||||
self.relu = torch.nn.ReLU()
|
||||
self.maxpool = torch.nn.MaxPool2d(kernel_size=3)
|
||||
|
||||
def forward(self, x: torch.Tensor, *, constant=None) -> torch.Tensor:
|
||||
a = self.conv(x)
|
||||
a.add_(constant)
|
||||
return self.maxpool(self.relu(a))
|
||||
|
||||
example_args = (torch.randn(1, 3, 256, 256),)
|
||||
example_kwargs = {"constant": torch.ones(1, 16, 256, 256)}
|
||||
|
||||
exported_program: torch.export.ExportedProgram = export(
|
||||
M(), args=example_args, kwargs=example_kwargs
|
||||
)
|
||||
print(exported_program)
|
||||
|
||||
.. code-block::
|
||||
|
||||
ExportedProgram:
|
||||
class GraphModule(torch.nn.Module):
|
||||
def forward(self, arg0_1: f32[16, 3, 3, 3], arg1_1: f32[16], arg2_1: f32[1, 3, 256, 256], arg3_1: f32[1, 16, 256, 256]):
|
||||
|
||||
# code: a = self.conv(x)
|
||||
convolution: f32[1, 16, 256, 256] = torch.ops.aten.convolution.default(
|
||||
arg2_1, arg0_1, arg1_1, [1, 1], [1, 1], [1, 1], False, [0, 0], 1
|
||||
);
|
||||
|
||||
# code: a.add_(constant)
|
||||
add: f32[1, 16, 256, 256] = torch.ops.aten.add.Tensor(convolution, arg3_1);
|
||||
|
||||
# code: return self.maxpool(self.relu(a))
|
||||
relu: f32[1, 16, 256, 256] = torch.ops.aten.relu.default(add);
|
||||
max_pool2d_with_indices = torch.ops.aten.max_pool2d_with_indices.default(
|
||||
relu, [3, 3], [3, 3]
|
||||
);
|
||||
getitem: f32[1, 16, 85, 85] = max_pool2d_with_indices[0];
|
||||
return (getitem,)
|
||||
|
||||
Graph signature: ExportGraphSignature(
|
||||
parameters=['L__self___conv.weight', 'L__self___conv.bias'],
|
||||
buffers=[],
|
||||
user_inputs=['arg2_1', 'arg3_1'],
|
||||
user_outputs=['getitem'],
|
||||
inputs_to_parameters={
|
||||
'arg0_1': 'L__self___conv.weight',
|
||||
'arg1_1': 'L__self___conv.bias',
|
||||
},
|
||||
inputs_to_buffers={},
|
||||
buffers_to_mutate={},
|
||||
backward_signature=None,
|
||||
assertion_dep_token=None,
|
||||
)
|
||||
Range constraints: {}
|
||||
Equality constraints: []
|
||||
|
||||
Inspecting the ``ExportedProgram``, we can note the following:

* The :class:`torch.fx.Graph` contains the computation graph of the original
  program, along with records of the original code for easy debugging.

* The graph contains only ``torch.ops.aten`` operators found in the
  :ref:`Core ATen IR <torch.compiler_ir>` opset and custom operators, and is
  fully functional, without any inplace operators such as ``torch.add_``.

* The parameters (weight and bias to conv) are lifted as inputs to the graph,
  resulting in no ``get_attr`` nodes in the graph, which previously existed in
  the result of :func:`torch.fx.symbolic_trace`.

* The :class:`torch.export.ExportGraphSignature` models the input and output
  signature, along with specifying which inputs are parameters.

* The resulting shape and dtype of the tensors produced by each node in the
  graph are noted. For example, the ``convolution`` node will result in a
  tensor of dtype ``torch.float32`` and shape (1, 16, 256, 256).

Expressing Dynamism
^^^^^^^^^^^^^^^^^^^

By default ``torch.export`` will trace the program assuming all input shapes are
**static**, and specialize the exported program to those dimensions. However,
some dimensions, such as a batch dimension, can be dynamic and vary from run to
run. Such dimensions must be marked dynamic using the
:func:`torch.export.dynamic_dim` API, and passed to
:func:`torch.export.export` through the ``constraints`` argument. An example:

::

    import torch
    from torch.export import export, dynamic_dim

    class M(torch.nn.Module):
        def __init__(self):
            super().__init__()

            self.branch1 = torch.nn.Sequential(
                torch.nn.Linear(64, 32), torch.nn.ReLU()
            )
            self.branch2 = torch.nn.Sequential(
                torch.nn.Linear(128, 64), torch.nn.ReLU()
            )
            self.buffer = torch.ones(32)

        def forward(self, x1, x2):
            out1 = self.branch1(x1)
            out2 = self.branch2(x2)
            return (out1 + self.buffer, out2)

    example_args = (torch.randn(32, 64), torch.randn(32, 128))
    constraints = [
        # First dimension of each input is a dynamic batch size
        dynamic_dim(example_args[0], 0),
        dynamic_dim(example_args[1], 0),
        # The dynamic batch sizes of the two inputs are equal
        dynamic_dim(example_args[0], 0) == dynamic_dim(example_args[1], 0),
    ]

    exported_program: torch.export.ExportedProgram = export(
        M(), args=example_args, constraints=constraints
    )
    print(exported_program)

.. code-block::

    ExportedProgram:
        class GraphModule(torch.nn.Module):
            def forward(self, arg0_1: f32[32, 64], arg1_1: f32[32], arg2_1: f32[64, 128], arg3_1: f32[64], arg4_1: f32[32], arg5_1: f32[s0, 64], arg6_1: f32[s0, 128]):

                # code: out1 = self.branch1(x1)
                permute: f32[64, 32] = torch.ops.aten.permute.default(arg0_1, [1, 0]);
                addmm: f32[s0, 32] = torch.ops.aten.addmm.default(arg1_1, arg5_1, permute);
                relu: f32[s0, 32] = torch.ops.aten.relu.default(addmm);

                # code: out2 = self.branch2(x2)
                permute_1: f32[128, 64] = torch.ops.aten.permute.default(arg2_1, [1, 0]);
                addmm_1: f32[s0, 64] = torch.ops.aten.addmm.default(arg3_1, arg6_1, permute_1);
                relu_1: f32[s0, 64] = torch.ops.aten.relu.default(addmm_1); addmm_1 = None

                # code: return (out1 + self.buffer, out2)
                add: f32[s0, 32] = torch.ops.aten.add.Tensor(relu, arg4_1);
                return (add, relu_1)

    Graph signature: ExportGraphSignature(
        parameters=[
            'branch1.0.weight',
            'branch1.0.bias',
            'branch2.0.weight',
            'branch2.0.bias',
        ],
        buffers=['L__self___buffer'],
        user_inputs=['arg5_1', 'arg6_1'],
        user_outputs=['add', 'relu_1'],
        inputs_to_parameters={
            'arg0_1': 'branch1.0.weight',
            'arg1_1': 'branch1.0.bias',
            'arg2_1': 'branch2.0.weight',
            'arg3_1': 'branch2.0.bias',
        },
        inputs_to_buffers={'arg4_1': 'L__self___buffer'},
        buffers_to_mutate={},
        backward_signature=None,
        assertion_dep_token=None,
    )
    Range constraints: {s0: RangeConstraint(min_val=2, max_val=9223372036854775806)}
    Equality constraints: [(InputDim(input_name='arg5_1', dim=0), InputDim(input_name='arg6_1', dim=0))]

Some additional things to note:

* Through the :func:`torch.export.dynamic_dim` API, we specified the first
  dimension of each input to be dynamic. Looking at the inputs ``arg5_1`` and
  ``arg6_1``, they have a symbolic shape of (s0, 64) and (s0, 128), instead of
  the (32, 64) and (32, 128) shaped tensors that we passed in as example inputs.
  ``s0`` is a symbol representing that this dimension can take a range of
  values.

* ``exported_program.range_constraints`` describes the ranges of each symbol
  appearing in the graph. In this case, we see that ``s0`` has the range
  [2, inf]. For technical reasons that are difficult to explain here, symbolic
  dimensions are assumed to be neither 0 nor 1. This is not a bug, and does not
  necessarily mean that the exported program will not work for dimensions 0 or
  1. See
  `The 0/1 Specialization Problem <https://docs.google.com/document/d/16VPOa3d-Liikf48teAOmxLc92rgvJdfosIy-yoT38Io/edit?fbclid=IwAR3HNwmmexcitV0pbZm_x1a4ykdXZ9th_eJWK-3hBtVgKnrkmemz6Pm5jRQ#heading=h.ez923tomjvyk>`_
  for an in-depth discussion of this topic.

* ``exported_program.equality_constraints`` describes which dimensions are
  required to be equal. Since we specified in the constraints that the first
  dimension of each argument is equivalent
  (``dynamic_dim(example_args[0], 0) == dynamic_dim(example_args[1], 0)``),
  we see in the equality constraints the tuple specifying that ``arg5_1``
  dimension 0 and ``arg6_1`` dimension 0 are equal.


Serialization
^^^^^^^^^^^^^

To save the ``ExportedProgram``, users can use the :func:`torch.export.save` and
:func:`torch.export.load` APIs. A convention is to save the ``ExportedProgram``
using a ``.pt2`` file extension.

An example:

::

    import torch
    import io

    class MyModule(torch.nn.Module):
        def forward(self, x):
            return x + 10

    exported_program = torch.export.export(MyModule(), (torch.randn(5),))

    torch.export.save(exported_program, 'exported_program.pt2')
    saved_exported_program = torch.export.load('exported_program.pt2')


Specialization
^^^^^^^^^^^^^^

Input shapes
~~~~~~~~~~~~

As mentioned before, by default, ``torch.export`` will trace the program
specializing on the input tensors' shapes, unless a dimension is specified as
dynamic via the :func:`torch.export.dynamic_dim` API. This means that if there
exists shape-dependent control flow, ``torch.export`` will specialize on the
branch that is being taken with the given sample inputs. For example:

::

    import torch
    from torch.export import export

    def fn(x):
        if x.shape[0] > 5:
            return x + 1
        else:
            return x - 1

    example_inputs = (torch.rand(10, 2),)
    exported_program = export(fn, example_inputs)
    print(exported_program)

.. code-block::

    ExportedProgram:
        class GraphModule(torch.nn.Module):
            def forward(self, arg0_1: f32[10, 2]):
                add: f32[10, 2] = torch.ops.aten.add.Tensor(arg0_1, 1);
                return (add,)

The conditional (``x.shape[0] > 5``) does not appear in the
``ExportedProgram`` because the example inputs have the static
shape of (10, 2). Since ``torch.export`` specializes on the inputs' static
shapes, the else branch (``x - 1``) will never be reached. To preserve the
dynamic branching behavior based on the shape of a tensor in the traced graph,
:func:`torch.export.dynamic_dim` will need to be used to specify the dimension
of the input tensor (``x.shape[0]``) to be dynamic, and the source code will
need to be :ref:`rewritten <Data/Shape-Dependent Control Flow>`.

Non-tensor inputs
~~~~~~~~~~~~~~~~~

``torch.export`` also specializes the traced graph based on the values of inputs
that are not ``torch.Tensor``, such as ``int``, ``float``, ``bool``, and ``str``.
However, we will likely change this in the near future to not specialize on
inputs of primitive types.

For example:

::

    import torch
    from torch.export import export

    def fn(x: torch.Tensor, const: int, times: int):
        for i in range(times):
            x = x + const
        return x

    example_inputs = (torch.rand(2, 2), 1, 3)
    exported_program = export(fn, example_inputs)
    print(exported_program)

.. code-block::

    ExportedProgram:
        class GraphModule(torch.nn.Module):
            def forward(self, arg0_1: f32[2, 2], arg1_1, arg2_1):
                add: f32[2, 2] = torch.ops.aten.add.Tensor(arg0_1, 1);
                add_1: f32[2, 2] = torch.ops.aten.add.Tensor(add, 1);
                add_2: f32[2, 2] = torch.ops.aten.add.Tensor(add_1, 1);
                return (add_2,)

Because integers are specialized, the ``torch.ops.aten.add.Tensor`` operations
are all computed with the inlined constant ``1``, rather than ``arg1_1``.
Additionally, the ``times`` iterator used in the ``for`` loop is also "inlined"
into the graph through the 3 repeated ``torch.ops.aten.add.Tensor`` calls, and
the input ``arg2_1`` is never used.


Limitations of torch.export
---------------------------

Graph Breaks
^^^^^^^^^^^^

As ``torch.export`` is a one-shot process for capturing a computation graph from
a PyTorch program, it might ultimately run into untraceable parts of programs, as
it is nearly impossible to support tracing all PyTorch and Python features. In
the case of ``torch.compile``, an unsupported operation will cause a "graph
break" and the unsupported operation will be run with default Python evaluation.
In contrast, ``torch.export`` will require users to provide additional
information or rewrite parts of their code to make it traceable. As the
tracing is based on TorchDynamo, which evaluates at the Python
bytecode level, there will be significantly fewer rewrites required compared to
previous tracing frameworks.

When a graph break is encountered, :ref:`ExportDB <torch.export_db>` is a great
resource for learning about the kinds of programs that are supported and
unsupported, along with ways to rewrite programs to make them traceable.

.. _Data/Shape-Dependent Control Flow:

Data/Shape-Dependent Control Flow
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Graph breaks can also be encountered on data-dependent control flow (``if
x.shape[0] > 2``) when shapes are not being specialized, as a tracing compiler
cannot possibly handle such cases without generating code for a combinatorially
exploding number of paths. In such cases, users will need to rewrite their code
using special control flow operators (coming soon!).

Data-Dependent Accesses
^^^^^^^^^^^^^^^^^^^^^^^

Data-dependent behavior, such as using the value inside of a tensor to construct
another tensor, or using the value of a tensor to slice into another tensor, is
also something the tracer cannot fully determine. Users will need to rewrite
their code using the inline constraint APIs
:func:`torch.export.constrain_as_size` and
:func:`torch.export.constrain_as_value`.

Missing Meta Kernels for Operators
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

When tracing, a META implementation (or "meta kernel") is required for all
operators. This is used to reason about the input/output shapes for the
operator.

Note that the official API for registering custom meta kernels for custom ops is
currently under development. While the final API is being refined, you can
refer to the documentation `here <https://docs.google.com/document/d/1GgvOe7C8_NVOMLOCwDaYV1mXXyHMXY7ExoewHqooxrs/edit#heading=h.64r4npvq0w0>`_.

In the unfortunate case where your model uses an ATen operator that does not
have a meta kernel implementation yet, please file an issue.


Read More
---------

.. toctree::
   :caption: Additional Links for Export Users
   :maxdepth: 1

   torch.compiler_transformations
   torch.compiler_ir
   generated/exportdb/index

.. toctree::
   :caption: Deep Dive for PyTorch Developers
   :maxdepth: 1

   torch.compiler_deepdive
   torch.compiler_dynamic_shapes
   torch.compiler_fake_tensor


API Reference
-------------

.. automodule:: torch.export
.. autofunction:: export
@ -24,10 +565,3 @@ torch.export
.. autoclass:: ArgumentSpec
.. autoclass:: ModuleCallSignature
.. autoclass:: ModuleCallEntry


.. toctree::
   :glob:
   :maxdepth: 1

   generated/exportdb/index
@ -94,7 +94,6 @@ Features described in this documentation are classified by release status:
   profiler
   nn.init
   onnx
   onnx_diagnostics
   optim
   complex_numbers
   ddp_comm_hooks
@ -185,6 +185,7 @@ If you don't see an operation listed here, but it would help your use case, plea
    :meth:`Tensor.reciprocal_`,None
    :meth:`Tensor.refine_names`,See documentation
    :meth:`Tensor.register_hook`,None
    :meth:`Tensor.register_post_accumulate_grad_hook`,None
    :meth:`Tensor.rename`,See documentation
    :meth:`Tensor.rename_`,See documentation
    :attr:`Tensor.requires_grad`,None
@ -1,745 +1,64 @@
torch.onnx
==========

.. contents:: :local:

.. automodule:: torch.onnx
Overview
--------

`Open Neural Network eXchange (ONNX) <https://onnx.ai/>`_ is an open standard
format for representing machine learning models. The torch.onnx module can export
PyTorch models to ONNX. The model can then be consumed by any of the many
`runtimes that support ONNX <https://onnx.ai/supported-tools.html#deployModel>`_.
format for representing machine learning models. The ``torch.onnx`` module captures the computation graph from a
native PyTorch :class:`torch.nn.Module` model and converts it into an
`ONNX graph <https://github.com/onnx/onnx/blob/main/docs/IR.md>`_.

Example: AlexNet from PyTorch to ONNX
-------------------------------------
The exported model can be consumed by any of the many
`runtimes that support ONNX <https://onnx.ai/supported-tools.html#deployModel>`_, including
Microsoft's `ONNX Runtime <https://www.onnxruntime.ai>`_.

Here is a simple script which exports a pretrained AlexNet to an ONNX file named ``alexnet.onnx``.
The call to ``torch.onnx.export`` runs the model once to trace its execution and then exports the
traced model to the specified file::
**There are two flavors of ONNX exporter API that you can use, as listed below:**

    import torch
    import torchvision
TorchDynamo-based ONNX Exporter
-------------------------------

    dummy_input = torch.randn(10, 3, 224, 224, device="cuda")
    model = torchvision.models.alexnet(pretrained=True).cuda()
*The TorchDynamo-based ONNX exporter is the newest (and Beta) exporter for PyTorch 2.0 and newer*

    # Providing input and output names sets the display names for values
    # within the model's graph. Setting these does not change the semantics
    # of the graph; it is only for readability.
    #
    # The inputs to the network consist of the flat list of inputs (i.e.
    # the values you would pass to the forward() method) followed by the
    # flat list of parameters. You can partially specify names, i.e. provide
    # a list here shorter than the number of inputs to the model, and we will
    # only set that subset of names, starting from the beginning.
    input_names = [ "actual_input_1" ] + [ "learned_%d" % i for i in range(16) ]
    output_names = [ "output1" ]
The TorchDynamo engine is leveraged to hook into Python's frame evaluation API and dynamically rewrite its
bytecode into an FX Graph. The resulting FX Graph is then polished before it is finally translated into an
ONNX graph.

    torch.onnx.export(model, dummy_input, "alexnet.onnx", verbose=True, input_names=input_names, output_names=output_names)
The main advantage of this approach is that the `FX graph <https://pytorch.org/docs/stable/fx.html>`_ is captured using
bytecode analysis that preserves the dynamic nature of the model instead of using traditional static tracing techniques.

The resulting ``alexnet.onnx`` file contains a binary `protocol buffer <https://developers.google.com/protocol-buffers/>`_
which contains both the network structure and parameters of the model you exported
(in this case, AlexNet). The argument ``verbose=True`` causes the
exporter to print out a human-readable representation of the model::
:doc:`Learn more about the TorchDynamo-based ONNX Exporter <onnx_dynamo>`

    # These are the inputs and parameters to the network, which have taken on
    # the names we specified earlier.
    graph(%actual_input_1 : Float(10, 3, 224, 224)
          %learned_0 : Float(64, 3, 11, 11)
          %learned_1 : Float(64)
          %learned_2 : Float(192, 64, 5, 5)
          %learned_3 : Float(192)
          # ---- omitted for brevity ----
          %learned_14 : Float(1000, 4096)
          %learned_15 : Float(1000)) {
      # Every statement consists of some output tensors (and their types),
      # the operator to be run (with its attributes, e.g., kernels, strides,
      # etc.), its input tensors (%actual_input_1, %learned_0, %learned_1)
      %17 : Float(10, 64, 55, 55) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[11, 11], pads=[2, 2, 2, 2], strides=[4, 4]](%actual_input_1, %learned_0, %learned_1), scope: AlexNet/Sequential[features]/Conv2d[0]
      %18 : Float(10, 64, 55, 55) = onnx::Relu(%17), scope: AlexNet/Sequential[features]/ReLU[1]
      %19 : Float(10, 64, 27, 27) = onnx::MaxPool[kernel_shape=[3, 3], pads=[0, 0, 0, 0], strides=[2, 2]](%18), scope: AlexNet/Sequential[features]/MaxPool2d[2]
      # ---- omitted for brevity ----
      %29 : Float(10, 256, 6, 6) = onnx::MaxPool[kernel_shape=[3, 3], pads=[0, 0, 0, 0], strides=[2, 2]](%28), scope: AlexNet/Sequential[features]/MaxPool2d[12]
      # Dynamic means that the shape is not known. This may be because of a
      # limitation of our implementation (which we would like to fix in a
      # future release) or shapes which are truly dynamic.
      %30 : Dynamic = onnx::Shape(%29), scope: AlexNet
      %31 : Dynamic = onnx::Slice[axes=[0], ends=[1], starts=[0]](%30), scope: AlexNet
      %32 : Long() = onnx::Squeeze[axes=[0]](%31), scope: AlexNet
      %33 : Long() = onnx::Constant[value={9216}](), scope: AlexNet
      # ---- omitted for brevity ----
      %output1 : Float(10, 1000) = onnx::Gemm[alpha=1, beta=1, broadcast=1, transB=1](%45, %learned_14, %learned_15), scope: AlexNet/Sequential[classifier]/Linear[6]
      return (%output1);
    }
TorchScript-based ONNX Exporter
-------------------------------

You can also verify the output using the `ONNX <https://github.com/onnx/onnx/>`_ library,
which you can install using ``pip``::
*The TorchScript-based ONNX exporter is available since PyTorch 1.2.0*

    pip install onnx
`TorchScript <https://pytorch.org/docs/stable/jit.html>`_ is leveraged to trace (through :func:`torch.jit.trace`)
the model and capture a static computation graph.
Then, you can run::
As a consequence, the resulting graph has a couple of limitations:

    import onnx
* It does not record any control-flow, like if-statements or loops;
* Does not handle nuances between ``training`` and ``eval`` mode;
* Does not truly handle dynamic inputs

    # Load the ONNX model
    model = onnx.load("alexnet.onnx")
As an attempt to address the static tracing limitations, the exporter also supports TorchScript scripting
(through :func:`torch.jit.script`), which adds support for data-dependent control-flow, for example. However, TorchScript
itself is a subset of the Python language, so not all features in Python are supported, such as in-place operations.

    # Check that the model is well formed
    onnx.checker.check_model(model)
:doc:`Learn more about the TorchScript-based ONNX Exporter <onnx_torchscript>`

    # Print a human readable representation of the graph
    print(onnx.helper.printable_graph(model.graph))

You can also run the exported model with one of the many
`runtimes that support ONNX <https://onnx.ai/supported-tools.html#deployModel>`_.
For example, after installing `ONNX Runtime <https://www.onnxruntime.ai>`_, you can
load and run the model::

    import onnxruntime as ort
    import numpy as np

    ort_session = ort.InferenceSession("alexnet.onnx")

    outputs = ort_session.run(
        None,
        {"actual_input_1": np.random.randn(10, 3, 224, 224).astype(np.float32)},
    )
    print(outputs[0])

Here is a more involved `tutorial on exporting a model and running it with ONNX Runtime <https://pytorch.org/tutorials/advanced/super_resolution_with_onnxruntime.html>`_.

.. _tracing-vs-scripting:

Tracing vs Scripting
--------------------

Internally, :func:`torch.onnx.export()` requires a :class:`torch.jit.ScriptModule` rather than
a :class:`torch.nn.Module`. If the passed-in model is not already a ``ScriptModule``,
``export()`` will use *tracing* to convert it to one:

.. TODO(justinchuby): Add a word on recommending tracing over scripting for most use cases.

* **Tracing**: If ``torch.onnx.export()`` is called with a Module that is not already a
  ``ScriptModule``, it first does the equivalent of :func:`torch.jit.trace`, which executes the model
  once with the given ``args`` and records all operations that happen during that execution. This
  means that if your model is dynamic, e.g., changes behavior depending on input data, the exported
  model will *not* capture this dynamic behavior.
  We recommend examining the exported model and making sure the operators look
  reasonable. Tracing will unroll loops and if statements, exporting a static graph that is exactly
  the same as the traced run. If you want to export your model with dynamic control flow, you will
  need to use *scripting*.

* **Scripting**: Compiling a model via scripting preserves dynamic control flow and is valid for inputs
  of different sizes. To use scripting:

  * Use :func:`torch.jit.script` to produce a ``ScriptModule``.
  * Call ``torch.onnx.export()`` with the ``ScriptModule`` as the model. The ``args`` are still required,
    but they will be used internally only to produce example outputs, so that the types and shapes of the
    outputs can be captured. No tracing will be performed.

See `Introduction to TorchScript <https://pytorch.org/tutorials/beginner/Intro_to_TorchScript_tutorial.html>`_
and `TorchScript <jit.html>`_ for more details, including how to compose tracing and scripting to suit the
particular requirements of different models.


Avoiding Pitfalls
-----------------

Avoid NumPy and built-in Python types
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

PyTorch models can be written using NumPy or Python types and functions, but
during :ref:`tracing<tracing-vs-scripting>`, any variables of NumPy or Python
types (rather than torch.Tensor) are converted to constants, which will produce
the wrong result if those values should change depending on the inputs.

For example, rather than using numpy functions on numpy.ndarrays: ::

    # Bad! Will be replaced with constants during tracing.
    x, y = np.random.rand(1, 2), np.random.rand(1, 2)
    np.concatenate((x, y), axis=1)

Use torch operators on torch.Tensors: ::

    # Good! Tensor operations will be captured during tracing.
    x, y = torch.randn(1, 2), torch.randn(1, 2)
    torch.cat((x, y), dim=1)


And rather than use :func:`torch.Tensor.item` (which converts a Tensor to a Python
built-in number): ::

    # Bad! y.item() will be replaced with a constant during tracing.
    def forward(self, x, y):
        return x.reshape(y.item(), -1)

Use torch's support for implicit casting of single-element tensors: ::

    # Good! y will be preserved as a variable during tracing.
    def forward(self, x, y):
        return x.reshape(y, -1)

Avoid Tensor.data
^^^^^^^^^^^^^^^^^

Using the Tensor.data field can produce an incorrect trace and therefore an incorrect ONNX graph.
Use :func:`torch.Tensor.detach` instead. (Work is ongoing to
`remove Tensor.data entirely <https://github.com/pytorch/pytorch/issues/30987>`_.)

Avoid in-place operations when using tensor.shape in tracing mode
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

In tracing mode, shapes obtained from ``tensor.shape`` are traced as tensors,
and share the same memory. This might cause a mismatch in the final output
values. As a workaround, avoid the use of inplace operations in these scenarios.
For example, in the model::

    class Model(torch.nn.Module):
        def forward(self, states):
            batch_size, seq_length = states.shape[:2]
            real_seq_length = seq_length
            real_seq_length += 2
            return real_seq_length + seq_length

``real_seq_length`` and ``seq_length`` share the same memory in tracing mode.
This could be avoided by rewriting the inplace operation::

    real_seq_length = real_seq_length + 2

Limitations
-----------

Types
^^^^^

* Only :class:`torch.Tensors`, numeric types that can be trivially converted to torch.Tensors (e.g. float, int),
  and tuples and lists of those types are supported as model inputs or outputs. Dict and str inputs and
  outputs are accepted in :ref:`tracing<tracing-vs-scripting>` mode, but:

  * Any computation that depends on the value of a dict or a str input **will be replaced with the
    constant value** seen during the one traced execution.
  * Any output that is a dict will be silently replaced with a **flattened sequence of its values
    (keys will be removed)**. E.g. ``{"foo": 1, "bar": 2}`` becomes ``(1, 2)``.
  * Any output that is a str will be silently removed.

* Certain operations involving tuples and lists are not supported in
  :ref:`scripting<tracing-vs-scripting>` mode due to limited support in ONNX for nested sequences.
  In particular, appending a tuple to a list is not supported. In tracing mode, the nested sequences
  will be flattened automatically during the tracing.

Differences in Operator Implementations
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Due to differences in implementations of operators, running the exported model on different runtimes
may produce different results from each other or from PyTorch. Normally these differences are
numerically small, so this should only be a concern if your application is sensitive to these
small differences.

.. _tensor-indexing:

Unsupported Tensor Indexing Patterns
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Tensor indexing patterns that cannot be exported are listed below.
If you are experiencing issues exporting a model that does not include any of
the unsupported patterns below, please double check that you are exporting with
the latest ``opset_version``.

Reads / Gets
~~~~~~~~~~~~

When indexing into a tensor for reading, the following patterns are not supported: ::

    # Tensor indices that include negative values.
    data[torch.tensor([[1, 2], [2, -3]]), torch.tensor([-2, 3])]
    # Workarounds: use positive index values.

Writes / Sets
~~~~~~~~~~~~~

When indexing into a Tensor for writing, the following patterns are not supported: ::

    # Multiple tensor indices if any has rank >= 2
    data[torch.tensor([[1, 2], [2, 3]]), torch.tensor([2, 3])] = new_data
    # Workarounds: use single tensor index with rank >= 2,
    # or multiple consecutive tensor indices with rank == 1.

    # Multiple tensor indices that are not consecutive
    data[torch.tensor([2, 3]), :, torch.tensor([1, 2])] = new_data
    # Workarounds: transpose `data` such that tensor indices are consecutive.

    # Tensor indices that include negative values.
    data[torch.tensor([1, -2]), torch.tensor([-2, 3])] = new_data
    # Workarounds: use positive index values.

    # Implicit broadcasting required for new_data.
    data[torch.tensor([[0, 2], [1, 1]]), 1:3] = new_data
    # Workarounds: expand new_data explicitly.
    # Example:
    #   data shape:     [3, 4, 5]
    #   new_data shape: [5]
    #   expected new_data shape after broadcasting: [2, 2, 2, 5]
Adding support for operators
----------------------------

When exporting a model that includes unsupported operators, you'll see an error message like:

.. code-block:: text

    RuntimeError: ONNX export failed: Couldn't export operator foo

When that happens, there are a few things you can do:

#. Change the model to not use that operator.
#. Create a symbolic function to convert the operator and register it as a custom symbolic function.
#. Contribute to PyTorch to add the same symbolic function to :mod:`torch.onnx` itself.

If you decided to implement a symbolic function (we hope you will contribute it back to PyTorch!), here is how you can get started:

ONNX exporter internals
^^^^^^^^^^^^^^^^^^^^^^^

A "symbolic function" is a function that decomposes a PyTorch operator into a
composition of a series of ONNX operators.

During export, each node (which contains a PyTorch operator) in the TorchScript
graph is visited by the exporter in topological order.
Upon visiting a node, the exporter looks for a registered symbolic function for
that operator. Symbolic functions are implemented in Python. A symbolic function for
an op named ``foo`` would look something like::

    def foo(
        g,
        input_0: torch._C.Value,
        input_1: torch._C.Value) -> Union[None, torch._C.Value, List[torch._C.Value]]:
        """
        Adds the ONNX operations representing this PyTorch function by updating the
        graph g with `g.op()` calls.

        Args:
            g (Graph): graph to write the ONNX representation into.
            input_0 (Value): value representing the variables which contain
                the first input for this operator.
            input_1 (Value): value representing the variables which contain
                the second input for this operator.

        Returns:
            A Value or List of Values specifying the ONNX nodes that compute something
            equivalent to the original PyTorch operator with the given inputs.

            None if it cannot be converted to ONNX.
        """
        ...

The ``torch._C`` types are Python wrappers around the types defined in C++ in
`ir.h <https://github.com/pytorch/pytorch/blob/main/torch/csrc/jit/ir/ir.h>`_.
|
||||
|
||||
The process for adding a symbolic function depends on the type of operator.

.. _adding-support-aten:

ATen operators
^^^^^^^^^^^^^^

`ATen <https://pytorch.org/cppdocs/#aten>`_ is PyTorch's built-in tensor library.
If the operator is an ATen operator (it shows up in the TorchScript graph with the prefix
``aten::``), make sure it is not supported already.

List of supported operators
~~~~~~~~~~~~~~~~~~~~~~~~~~~

Visit the auto generated :doc:`list of supported TorchScript operators <../onnx_supported_aten_ops>`
for details on which operators are supported in each ``opset_version``.

Adding support for an aten or quantized operator
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

If the operator is not in the list above:

* Define the symbolic function in ``torch/onnx/symbolic_opset<version>.py``, for example
  `torch/onnx/symbolic_opset9.py <https://github.com/pytorch/pytorch/blob/main/torch/onnx/symbolic_opset9.py>`_.
  Make sure the function has the same name as the ATen function, which may be declared in
  ``torch/_C/_VariableFunctions.pyi`` or ``torch/nn/functional.pyi`` (these files are generated at
  build time, so will not appear in your checkout until you build PyTorch).
* By default, the first arg is the ONNX graph.
  Other arg names must EXACTLY match the names in the ``.pyi`` file,
  because dispatch is done with keyword arguments.
* In the symbolic function, if the operator is in the
  `ONNX standard operator set <https://github.com/onnx/onnx/blob/master/docs/Operators.md>`_,
  we only need to create a node to represent the ONNX operator in the graph.
  If not, we can compose several standard operators that have
  semantics equivalent to the ATen operator.

Here is an example of handling a missing symbolic function for the ``ELU`` operator.

If we run the following code::

    print(
        torch.jit.trace(
            torch.nn.ELU(),  # module
            torch.ones(1)    # example input
        ).graph
    )

We see something like::

    graph(%self : __torch__.torch.nn.modules.activation.___torch_mangle_0.ELU,
          %input : Float(1, strides=[1], requires_grad=0, device=cpu)):
      %4 : float = prim::Constant[value=1.]()
      %5 : int = prim::Constant[value=1]()
      %6 : int = prim::Constant[value=1]()
      %7 : Float(1, strides=[1], requires_grad=0, device=cpu) = aten::elu(%input, %4, %5, %6)
      return (%7)

Since we see ``aten::elu`` in the graph, we know this is an ATen operator.

We check the `ONNX operator list <https://github.com/onnx/onnx/blob/master/docs/Operators.md>`_,
and confirm that ``Elu`` is standardized in ONNX.

We find a signature for ``elu`` in ``torch/nn/functional.pyi``::

    def elu(input: Tensor, alpha: float = ..., inplace: bool = ...) -> Tensor: ...

We add the following lines to ``symbolic_opset9.py``::

    def elu(g, input: torch.Value, alpha: torch.Value, inplace: bool = False):
        return g.op("Elu", input, alpha_f=alpha)

Now PyTorch is able to export models containing the ``aten::elu`` operator!

See the ``torch/onnx/symbolic_opset*.py`` files for more examples.

torch.autograd.Functions
^^^^^^^^^^^^^^^^^^^^^^^^

If the operator is a sub-class of :class:`torch.autograd.Function`, there are three ways
to export it.

Static Symbolic Method
~~~~~~~~~~~~~~~~~~~~~~

You can add a static method named ``symbolic`` to your function class. It should return
ONNX operators that represent the function's behavior in ONNX. For example::

    class MyRelu(torch.autograd.Function):
        @staticmethod
        def forward(ctx, input: torch.Tensor) -> torch.Tensor:
            ctx.save_for_backward(input)
            return input.clamp(min=0)

        @staticmethod
        def symbolic(g: torch.Graph, input: torch.Value) -> torch.Value:
            return g.op("Clip", input, g.op("Constant", value_t=torch.tensor(0, dtype=torch.float)))

.. FIXME(justinchuby): PythonOps are too complicated and the example below
..   uses private methods we do not expose. We are looking to
..   improve the experience. Since SymbolicContext is deprecated, we think
..   defining a symbolic staticmethod is a better way to go for now.

.. PythonOp Symbolic
.. ~~~~~~~~~~~~~~~~~

.. Alternatively, you can register a custom symbolic function.
.. This gives the symbolic function access to more info through the
.. ``torch.onnx.SymbolicContext`` object, which gets passed in as the first
.. argument (before the ``Graph`` object).

.. All autograd ``Function``\ s appear in the TorchScript graph as ``prim::PythonOp`` nodes.
.. In order to differentiate between different ``Function`` subclasses, the
.. symbolic function should use the ``name`` kwarg which gets set to the name of the class.

.. Custom symbolic functions should add type and shape information by calling ``setType(...)``
.. on Value objects before returning them (implemented in C++ by
.. ``torch::jit::Value::setType``). This is not required, but it can help the exporter's
.. shape and type inference for down-stream nodes. For a non-trivial example of ``setType``, see
.. ``test_aten_embedding_2`` in
.. `test_operators.py <https://github.com/pytorch/pytorch/blob/main/test/onnx/test_operators.py>`_.

.. The example below shows how you can access ``requires_grad`` via the ``Node`` object:

..     class MyClip(torch.autograd.Function):
..         @staticmethod
..         def forward(ctx, input, min):
..             ctx.save_for_backward(input)
..             return input.clamp(min=min)

..     class MyRelu(torch.autograd.Function):
..         @staticmethod
..         def forward(ctx, input):
..             ctx.save_for_backward(input)
..             return input.clamp(min=0)

..     def symbolic_python_op(g: "GraphContext", *args, **kwargs):
..         n = ctx.cur_node
..         print("original node: ", n)
..         for i, out in enumerate(n.outputs()):
..             print("original output {}: {}, requires grad: {}".format(i, out, out.requiresGrad()))
..         import torch.onnx.symbolic_helper as sym_helper
..         for i, arg in enumerate(args):
..             requires_grad = arg.requiresGrad() if sym_helper._is_value(arg) else False
..             print("arg {}: {}, requires grad: {}".format(i, arg, requires_grad))

..         name = kwargs["name"]
..         ret = None
..         if name == "MyClip":
..             ret = g.op("Clip", args[0], args[1])
..         elif name == "MyRelu":
..             ret = g.op("Relu", args[0])
..         else:
..             # Logs a warning and returns None
..             return _unimplemented("prim::PythonOp", "unknown node kind: " + name)
..         # Copy type and shape from original node.
..         ret.setType(n.type())
..         return ret

..     from torch.onnx import register_custom_op_symbolic
..     register_custom_op_symbolic("prim::PythonOp", symbolic_python_op, 1)

Inline Autograd Function
~~~~~~~~~~~~~~~~~~~~~~~~

If no static ``symbolic`` method is provided for a :class:`torch.autograd.Function`, and
no custom symbolic function is registered for ``prim::PythonOp``,
:func:`torch.onnx.export` tries to inline the graph that corresponds to that :class:`torch.autograd.Function`,
breaking the function down into the individual operators that were used within it.
The export should be successful as long as these individual operators are supported. For example::

    class MyLogExp(torch.autograd.Function):
        @staticmethod
        def forward(ctx, input: torch.Tensor) -> torch.Tensor:
            ctx.save_for_backward(input)
            h = input.exp()
            return h.log().log()

There is no static symbolic method present for this model, yet it is exported as follows::

    graph(%input : Float(1, strides=[1], requires_grad=0, device=cpu)):
      %1 : float = onnx::Exp[](%input)
      %2 : float = onnx::Log[](%1)
      %3 : float = onnx::Log[](%2)
      return (%3)

If you need to avoid inlining of :class:`torch.autograd.Function`, export the model with
``operator_export_type`` set to ``ONNX_FALLTHROUGH`` or ``ONNX_ATEN_FALLBACK``.

Custom operators
^^^^^^^^^^^^^^^^

You can export your model with custom operators that combine many standard ONNX ops,
or that are driven by a self-defined C++ backend.

ONNX-script functions
~~~~~~~~~~~~~~~~~~~~~

If an operator is not a standard ONNX op, but can be composed of multiple existing ONNX ops, you can utilize
`ONNX-script <https://github.com/microsoft/onnx-script>`_ to create an external ONNX function to support the operator.
You can export it by following this example::

    import onnxscript
    # There are three opset versions that need to be aligned.
    # This is (1) the opset version of the ONNX function.
    from onnxscript.onnx_opset import opset15 as op
    opset_version = 15

    x = torch.randn(1, 2, 3, 4, requires_grad=True)
    model = torch.nn.SELU()

    custom_opset = onnxscript.values.Opset(domain="onnx-script", version=1)

    @onnxscript.script(custom_opset)
    def Selu(X):
        alpha = 1.67326  # auto wrapped as Constants
        gamma = 1.0507
        alphaX = op.CastLike(alpha, X)
        gammaX = op.CastLike(gamma, X)
        neg = gammaX * (alphaX * op.Exp(X) - alphaX)
        pos = gammaX * X
        zero = op.CastLike(0, X)
        return op.Where(X <= zero, neg, pos)

    # setType API provides shape/type to ONNX shape/type inference
    def custom_selu(g: jit_utils.GraphContext, X):
        return g.onnxscript_op(Selu, X).setType(X.type())

    # Register custom symbolic function.
    # This is (2) the opset version used at registration.
    torch.onnx.register_custom_op_symbolic(
        symbolic_name="aten::selu",
        symbolic_fn=custom_selu,
        opset_version=opset_version,
    )

    # This is (3) the opset version used by the exporter.
    torch.onnx.export(
        model,
        x,
        "model.onnx",
        opset_version=opset_version,
        # only needed if you want to specify an opset version > 1.
        custom_opsets={"onnx-script": 2}
    )

The example above exports it as a custom operator in the "onnx-script" opset.
When exporting a custom operator, you can specify the custom domain version using the
``custom_opsets`` dictionary at export. If not specified, the custom opset version defaults to 1.

NOTE: Be careful to align the three opset versions flagged in the comments above, and make sure
the matching versions are consumed at the export step.
This example of writing an onnx-script function is a beta feature; onnx-script is under active
development, so please follow the latest `ONNX-script <https://github.com/microsoft/onnx-script>`_ documentation.

C++ Operators
~~~~~~~~~~~~~

If a model uses a custom operator implemented in C++ as described in
`Extending TorchScript with Custom C++ Operators <https://pytorch.org/tutorials/advanced/torch_script_custom_ops.html>`_,
you can export it by following this example::

    from torch.onnx import symbolic_helper


    # Define custom symbolic function
    @symbolic_helper.parse_args("v", "v", "f", "i")
    def symbolic_foo_forward(g, input1, input2, attr1, attr2):
        return g.op("custom_domain::Foo", input1, input2, attr1_f=attr1, attr2_i=attr2)


    # Register custom symbolic function
    torch.onnx.register_custom_op_symbolic("custom_ops::foo_forward", symbolic_foo_forward, 9)


    class FooModel(torch.nn.Module):
        def __init__(self, attr1, attr2):
            super().__init__()
            self.attr1 = attr1
            self.attr2 = attr2

        def forward(self, input1, input2):
            # Calling custom op
            return torch.ops.custom_ops.foo_forward(input1, input2, self.attr1, self.attr2)


    model = FooModel(attr1, attr2)
    torch.onnx.export(
        model,
        (example_input1, example_input1),
        "model.onnx",
        # only needed if you want to specify an opset version > 1.
        custom_opsets={"custom_domain": 2}
    )

The example above exports it as a custom operator in the "custom_domain" opset.
When exporting a custom operator, you can specify the custom domain version using the
``custom_opsets`` dictionary at export. If not specified, the custom opset version defaults to 1.

The runtime that consumes the model needs to support the custom op. See
`Caffe2 custom ops <https://caffe2.ai/docs/custom-operators.html>`_,
`ONNX Runtime custom ops <https://onnxruntime.ai/docs/reference/operators/add-custom-op.html>`_,
or your runtime of choice's documentation.

Discovering all unconvertible ATen ops at once
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

When export fails due to an unconvertible ATen op, there may in fact be more
than one such op but the error message only mentions the first. To discover
all of the unconvertible ops in one go you can::

    # prepare model, args, opset_version
    ...

    torch_script_graph, unconvertible_ops = torch.onnx.utils.unconvertible_ops(
        model, args, opset_version=opset_version
    )

    print(set(unconvertible_ops))

The set is approximated because some ops may be removed during the conversion
process and don't need to be converted. Some other ops may have partial support
that will fail conversion with particular inputs, but this should give you a
general idea of what ops are not supported. Please feel free to open GitHub Issues
for op support requests.

Frequently Asked Questions
--------------------------

Q: I have exported my LSTM model, but its input size seems to be fixed?

  The tracer records the shapes of the example inputs. If the model should accept
  inputs of dynamic shapes, set ``dynamic_axes`` when calling :func:`torch.onnx.export`.

Q: How to export models containing loops?

  See `Tracing vs Scripting`_.

Q: How to export models with primitive type inputs (e.g. int, float)?

  Support for primitive numeric type inputs was added in PyTorch 1.9.
  However, the exporter does not support models with str inputs.

Q: Does ONNX support implicit scalar datatype casting?

  The ONNX standard does not, but the exporter will try to handle that part.
  Scalars are exported as constant tensors.
  The exporter will figure out the right data type for scalars. In rare cases when it is unable
  to do so, you will need to manually specify the datatype with e.g. ``dtype=torch.float32``.
  If you see any errors, please `create a GitHub issue <https://github.com/pytorch/pytorch/issues>`_.

Q: Are lists of Tensors exportable to ONNX?

  Yes, for ``opset_version`` >= 11, since ONNX introduced the Sequence type in opset 11.

Contributing / Developing
-------------------------

The ONNX exporter is a community project and we welcome contributions. We follow the
`PyTorch guidelines for contributions <https://github.com/pytorch/pytorch/blob/main/CONTRIBUTING.md>`_, but you might
also be interested in reading our `development wiki <https://github.com/pytorch/pytorch/wiki/PyTorch-ONNX-exporter>`_.

Functions
---------

.. autofunction:: export
.. autofunction:: export_to_pretty_string
.. autofunction:: register_custom_op_symbolic
.. autofunction:: unregister_custom_op_symbolic
.. autofunction:: select_model_mode_for_export
.. autofunction:: is_in_onnx_export
.. autofunction:: enable_log
.. autofunction:: disable_log
.. autofunction:: torch.onnx.verification.find_mismatch

Classes
-------

.. autosummary::
    :toctree: generated
    :nosignatures:
    :template: classtemplate.rst

    JitScalarType
    torch.onnx.verification.GraphInfo
    torch.onnx.verification.VerificationOptions

Preview: torch.onnx TorchDynamo Exporter
----------------------------------------

.. warning::
  The ONNX exporter for TorchDynamo is under active development and is
  subject to rapid change.

.. autofunction:: torch.onnx.dynamo_export
.. autofunction:: torch.onnx.enable_fake_mode
.. autofunction:: torch.onnx.is_onnxrt_backend_supported

.. autosummary::
    :toctree: generated
    :nosignatures:
    :template: classtemplate.rst

    torch.onnx.DiagnosticOptions
    torch.onnx.ExportOptions
    torch.onnx.ExportOutput
    torch.onnx.ExportOutputSerializer
    torch.onnx.OnnxExporterError
    torch.onnx.OnnxRegistry

.. toctree::
    :hidden:

    onnx_dynamo
    onnx_dynamo_onnxruntime_backend
    onnx_torchscript

torch.onnx diagnostics
======================

.. contents:: :local:
.. automodule:: torch.onnx._internal.diagnostics
.. currentmodule:: torch.onnx._internal.diagnostics

Overview
--------

NOTE: This feature is under development and is subject to change.

The goal is to improve the diagnostics to help users debug and improve their model export to ONNX.

- The diagnostics are emitted in machine parsable `Static Analysis Results Interchange Format (SARIF) <https://docs.oasis-open.org/sarif/sarif/v2.1.0/sarif-v2.1.0.html>`__.
- A new, clearer, structured way to add diagnostic rules and keep track of them.
- Serve as a foundation for future improvements that consume the diagnostics.

Diagnostic Rules
----------------

.. toctree::
   :glob:

   generated/onnx_diagnostics_rules/*

docs/source/onnx_dynamo.rst

TorchDynamo-based ONNX Exporter
===============================

.. automodule:: torch.onnx
  :noindex:

.. contents:: :local:
    :depth: 3

.. warning::
  The ONNX exporter for TorchDynamo is a rapidly evolving beta technology.

Overview
--------

The ONNX exporter leverages the TorchDynamo engine to hook into Python's frame evaluation API
and dynamically rewrite its bytecode into an FX Graph.
The resulting FX Graph is then polished before it is finally translated into an ONNX graph.

The main advantage of this approach is that the `FX graph <https://pytorch.org/docs/stable/fx.html>`_ is captured using
bytecode analysis that preserves the dynamic nature of the model instead of using traditional static tracing techniques.

The exporter is designed to be modular and extensible. It is composed of the following components:

- **ONNX Exporter**: :class:`Exporter` main class that orchestrates the export process.
- **ONNX Export Options**: :class:`ExportOptions` has a set of options that control the export process.
- **ONNX Registry**: :class:`OnnxRegistry` is the registry of ONNX operators and functions.
- **FX Graph Extractor**: :class:`FXGraphExtractor` extracts the FX graph from the PyTorch model.
- **Fake Mode**: :class:`ONNXFakeContext` is a context manager that enables fake mode for large scale models.
- **ONNX Export Output**: :class:`ExportOutput` is the output of the exporter that contains the exported ONNX graph and diagnostics.
- **ONNX Export Output Serializer**: :class:`ExportOutputSerializer` serializes the exported model to a file.
- **ONNX Diagnostic Options**: :class:`DiagnosticOptions` has a set of options that control the diagnostics emitted by the exporter.

Dependencies
------------

The ONNX exporter depends on extra Python packages:

  - `ONNX <https://onnx.ai>`_
  - `ONNX Script <https://onnxscript.ai>`_

They can be installed through `pip <https://pypi.org/project/pip/>`_:

.. code-block:: bash

  pip install --upgrade onnx onnxscript

A simple example
----------------

Below is a demonstration of the exporter API in action, using a simple Multilayer Perceptron (MLP) as an example:

.. code-block:: python

  import torch
  import torch.nn as nn

  class MLPModel(nn.Module):
      def __init__(self):
          super().__init__()
          self.fc0 = nn.Linear(8, 8, bias=True)
          self.fc1 = nn.Linear(8, 4, bias=True)
          self.fc2 = nn.Linear(4, 2, bias=True)
          self.fc3 = nn.Linear(2, 2, bias=True)

      def forward(self, tensor_x: torch.Tensor):
          tensor_x = self.fc0(tensor_x)
          tensor_x = torch.sigmoid(tensor_x)
          tensor_x = self.fc1(tensor_x)
          tensor_x = torch.sigmoid(tensor_x)
          tensor_x = self.fc2(tensor_x)
          tensor_x = torch.sigmoid(tensor_x)
          output = self.fc3(tensor_x)
          return output

  model = MLPModel()
  tensor_x = torch.rand((97, 8), dtype=torch.float32)
  export_output = torch.onnx.dynamo_export(model, tensor_x)

As the code above shows, all you need is to provide :func:`torch.onnx.dynamo_export` with an instance of the model and its input.
The exporter will then return an instance of :class:`torch.onnx.ExportOutput` that contains the exported ONNX graph along with extra information.

The in-memory model available through ``export_output.model_proto`` is an ``onnx.ModelProto`` object in compliance with the `ONNX IR spec <https://github.com/onnx/onnx/blob/main/docs/IR.md>`_.
The ONNX model may then be serialized into a `Protobuf file <https://protobuf.dev/>`_ using the :meth:`torch.onnx.ExportOutput.save` API.

.. code-block:: python

  export_output.save("mlp.onnx")

Inspecting the ONNX model using GUI
-----------------------------------

You can view the exported model using `Netron <https://netron.app/>`__.

.. image:: _static/img/onnx/onnx_dynamo_mlp_model.png
    :width: 40%
    :alt: MLP model as viewed using Netron

Note that each layer is represented in a rectangular box with a *f* icon in the top right corner.

.. image:: _static/img/onnx/onnx_dynamo_mlp_model_function_highlight.png
    :width: 40%
    :alt: ONNX function highlighted on MLP model

By expanding it, the function body is shown.

.. image:: _static/img/onnx/onnx_dynamo_mlp_model_function_body.png
    :width: 50%
    :alt: ONNX function body

The function body is a sequence of ONNX operators or other functions.

Diagnosing issues with SARIF
----------------------------

ONNX diagnostics goes beyond regular logs through the adoption of
`Static Analysis Results Interchange Format (aka SARIF) <https://docs.oasis-open.org/sarif/sarif/v2.1.0/sarif-v2.1.0.html>`__
to help users debug and improve their model using a GUI, such as
Visual Studio Code's `SARIF Viewer <https://marketplace.visualstudio.com/items?itemName=MS-SarifVSCode.sarif-viewer>`_.

The main advantages are:

- The diagnostics are emitted in machine parseable `Static Analysis Results Interchange Format (SARIF) <https://docs.oasis-open.org/sarif/sarif/v2.1.0/sarif-v2.1.0.html>`__.
- A new, clearer, structured way to add diagnostic rules and keep track of them.
- Serve as a foundation for future improvements that consume the diagnostics.

.. toctree::
    :maxdepth: 1
    :caption: ONNX Diagnostic SARIF Rules
    :glob:

    generated/onnx_dynamo_diagnostics_rules/*

API Reference
-------------

.. autofunction:: torch.onnx.dynamo_export

.. autoclass:: torch.onnx.ExportOptions
    :members:

.. autofunction:: torch.onnx.enable_fake_mode

.. autoclass:: torch.onnx.ExportOutput
    :members:

.. autoclass:: torch.onnx.ExportOutputSerializer
    :members:

.. autoclass:: torch.onnx.OnnxExporterError
    :members:

.. autoclass:: torch.onnx.OnnxRegistry
    :members:

.. autoclass:: torch.onnx.DiagnosticOptions
    :members:

docs/source/onnx_dynamo_onnxruntime_backend.rst

ONNX Backend for TorchDynamo
============================

For a quick overview of ``torch.compiler``, see :ref:`torch.compiler_overview`.

.. warning::
  The ONNX backend for torch.compile is a rapidly evolving beta technology.

.. autofunction:: torch.onnx.is_onnxrt_backend_supported

docs/source/onnx_torchscript.rst

TorchScript-based ONNX Exporter
===============================

.. note::
  To export an ONNX model using TorchDynamo instead of TorchScript, see :func:`torch.onnx.dynamo_export`.

.. contents:: :local:

Example: AlexNet from PyTorch to ONNX
-------------------------------------

Here is a simple script which exports a pretrained AlexNet to an ONNX file named ``alexnet.onnx``.
The call to ``torch.onnx.export`` runs the model once to trace its execution and then exports the
traced model to the specified file::

    import torch
    import torchvision

    dummy_input = torch.randn(10, 3, 224, 224, device="cuda")
    model = torchvision.models.alexnet(pretrained=True).cuda()

    # Providing input and output names sets the display names for values
    # within the model's graph. Setting these does not change the semantics
    # of the graph; it is only for readability.
    #
    # The inputs to the network consist of the flat list of inputs (i.e.
    # the values you would pass to the forward() method) followed by the
    # flat list of parameters. You can partially specify names, i.e. provide
    # a list here shorter than the number of inputs to the model, and we will
    # only set that subset of names, starting from the beginning.
    input_names = [ "actual_input_1" ] + [ "learned_%d" % i for i in range(16) ]
    output_names = [ "output1" ]

    torch.onnx.export(model, dummy_input, "alexnet.onnx", verbose=True, input_names=input_names, output_names=output_names)

The resulting ``alexnet.onnx`` file contains a binary `protocol buffer <https://developers.google.com/protocol-buffers/>`_
which contains both the network structure and parameters of the model you exported
(in this case, AlexNet). The argument ``verbose=True`` causes the
exporter to print out a human-readable representation of the model::

    # These are the inputs and parameters to the network, which have taken on
    # the names we specified earlier.
    graph(%actual_input_1 : Float(10, 3, 224, 224)
          %learned_0 : Float(64, 3, 11, 11)
          %learned_1 : Float(64)
          %learned_2 : Float(192, 64, 5, 5)
          %learned_3 : Float(192)
          # ---- omitted for brevity ----
          %learned_14 : Float(1000, 4096)
          %learned_15 : Float(1000)) {
      # Every statement consists of some output tensors (and their types),
      # the operator to be run (with its attributes, e.g., kernels, strides,
      # etc.), its input tensors (%actual_input_1, %learned_0, %learned_1)
      %17 : Float(10, 64, 55, 55) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[11, 11], pads=[2, 2, 2, 2], strides=[4, 4]](%actual_input_1, %learned_0, %learned_1), scope: AlexNet/Sequential[features]/Conv2d[0]
      %18 : Float(10, 64, 55, 55) = onnx::Relu(%17), scope: AlexNet/Sequential[features]/ReLU[1]
      %19 : Float(10, 64, 27, 27) = onnx::MaxPool[kernel_shape=[3, 3], pads=[0, 0, 0, 0], strides=[2, 2]](%18), scope: AlexNet/Sequential[features]/MaxPool2d[2]
      # ---- omitted for brevity ----
      %29 : Float(10, 256, 6, 6) = onnx::MaxPool[kernel_shape=[3, 3], pads=[0, 0, 0, 0], strides=[2, 2]](%28), scope: AlexNet/Sequential[features]/MaxPool2d[12]
      # Dynamic means that the shape is not known. This may be because of a
      # limitation of our implementation (which we would like to fix in a
      # future release) or shapes which are truly dynamic.
      %30 : Dynamic = onnx::Shape(%29), scope: AlexNet
      %31 : Dynamic = onnx::Slice[axes=[0], ends=[1], starts=[0]](%30), scope: AlexNet
      %32 : Long() = onnx::Squeeze[axes=[0]](%31), scope: AlexNet
      %33 : Long() = onnx::Constant[value={9216}](), scope: AlexNet
      # ---- omitted for brevity ----
      %output1 : Float(10, 1000) = onnx::Gemm[alpha=1, beta=1, broadcast=1, transB=1](%45, %learned_14, %learned_15), scope: AlexNet/Sequential[classifier]/Linear[6]
      return (%output1);
    }

You can also verify the output using the `ONNX <https://github.com/onnx/onnx/>`_ library,
which you can install using ``pip``::

    pip install onnx

Then, you can run::

    import onnx

    # Load the ONNX model
    model = onnx.load("alexnet.onnx")

    # Check that the model is well formed
    onnx.checker.check_model(model)

    # Print a human readable representation of the graph
    print(onnx.helper.printable_graph(model.graph))

You can also run the exported model with one of the many
`runtimes that support ONNX <https://onnx.ai/supported-tools.html#deployModel>`_.
For example after installing `ONNX Runtime <https://www.onnxruntime.ai>`_, you can
load and run the model::

    import onnxruntime as ort
    import numpy as np

    ort_session = ort.InferenceSession("alexnet.onnx")

    outputs = ort_session.run(
        None,
        {"actual_input_1": np.random.randn(10, 3, 224, 224).astype(np.float32)},
    )
    print(outputs[0])

Here is a more involved `tutorial on exporting a model and running it with ONNX Runtime <https://pytorch.org/tutorials/advanced/super_resolution_with_onnxruntime.html>`_.
|
||||
|
||||
.. _tracing-vs-scripting:

Tracing vs Scripting
--------------------

Internally, :func:`torch.onnx.export()` requires a :class:`torch.jit.ScriptModule` rather than
a :class:`torch.nn.Module`. If the passed-in model is not already a ``ScriptModule``,
``export()`` will use *tracing* to convert it to one:

.. TODO(justinchuby): Add a word on recommending tracing over scripting for most use cases.

* **Tracing**: If ``torch.onnx.export()`` is called with a Module that is not already a
  ``ScriptModule``, it first does the equivalent of :func:`torch.jit.trace`, which executes the model
  once with the given ``args`` and records all operations that happen during that execution. This
  means that if your model is dynamic, e.g., changes behavior depending on input data, the exported
  model will *not* capture this dynamic behavior.
  We recommend examining the exported model and making sure the operators look
  reasonable. Tracing will unroll loops and if statements, exporting a static graph that is exactly
  the same as the traced run. If you want to export your model with dynamic control flow, you will
  need to use *scripting*.

* **Scripting**: Compiling a model via scripting preserves dynamic control flow and is valid for inputs
  of different sizes. To use scripting:

  * Use :func:`torch.jit.script` to produce a ``ScriptModule``.
  * Call ``torch.onnx.export()`` with the ``ScriptModule`` as the model. The ``args`` are still required,
    but they will be used internally only to produce example outputs, so that the types and shapes of the
    outputs can be captured. No tracing will be performed.

See `Introduction to TorchScript <https://pytorch.org/tutorials/beginner/Intro_to_TorchScript_tutorial.html>`_
and `TorchScript <jit.html>`_ for more details, including how to compose tracing and scripting to suit the
particular requirements of different models.

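To make the distinction concrete, here is a small illustrative sketch (the module below is our own example, not from the original docs) showing how tracing bakes in a data-dependent branch while scripting preserves it:

```python
import torch

class DynamicModel(torch.nn.Module):
    def forward(self, x):
        # Data-dependent branch: tracing records only the path taken by the
        # example input; scripting compiles the conditional itself.
        if x.sum() > 0:
            return x * 2
        return x + 10

# Tracing with a positive example input bakes in the "x * 2" branch.
traced = torch.jit.trace(DynamicModel(), torch.ones(2))
# Scripting keeps the if statement in the compiled graph.
scripted = torch.jit.script(DynamicModel())

neg = -torch.ones(2)
print(traced(neg))    # replays the traced branch: x * 2
print(scripted(neg))  # re-evaluates the condition: x + 10
```

The traced module silently replays the branch chosen for the example input, while the scripted module re-evaluates the condition for every input, which is what an exported model with dynamic control flow needs.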
Avoiding Pitfalls
-----------------

Avoid NumPy and built-in Python types
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

PyTorch models can be written using NumPy or Python types and functions, but
during :ref:`tracing<tracing-vs-scripting>`, any variables of NumPy or Python
types (rather than torch.Tensor) are converted to constants, which will produce
the wrong result if those values should change depending on the inputs.

For example, rather than using numpy functions on numpy.ndarrays: ::

    # Bad! Will be replaced with constants during tracing.
    x, y = np.random.rand(1, 2), np.random.rand(1, 2)
    np.concatenate((x, y), axis=1)

Use torch operators on torch.Tensors: ::

    # Good! Tensor operations will be captured during tracing.
    x, y = torch.randn(1, 2), torch.randn(1, 2)
    torch.cat((x, y), dim=1)

And rather than use :func:`torch.Tensor.item` (which converts a Tensor to a Python
built-in number): ::

    # Bad! y.item() will be replaced with a constant during tracing.
    def forward(self, x, y):
        return x.reshape(y.item(), -1)

Use torch's support for implicit casting of single-element tensors: ::

    # Good! y will be preserved as a variable during tracing.
    def forward(self, x, y):
        return x.reshape(y, -1)

Avoid Tensor.data
^^^^^^^^^^^^^^^^^

Using the Tensor.data field can produce an incorrect trace and therefore an incorrect ONNX graph.
Use :func:`torch.Tensor.detach` instead. (Work is ongoing to
`remove Tensor.data entirely <https://github.com/pytorch/pytorch/issues/30987>`_).

Avoid in-place operations when using tensor.shape in tracing mode
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

In tracing mode, shapes obtained from ``tensor.shape`` are traced as tensors,
and share the same memory. This might cause a mismatch in the final output values.
As a workaround, avoid the use of in-place operations in these scenarios.
For example, in the model::

    class Model(torch.nn.Module):
        def forward(self, states):
            batch_size, seq_length = states.shape[:2]
            real_seq_length = seq_length
            real_seq_length += 2
            return real_seq_length + seq_length

``real_seq_length`` and ``seq_length`` share the same memory in tracing mode.
This can be avoided by rewriting the in-place operation as an out-of-place one::

    real_seq_length = real_seq_length + 2

Limitations
-----------

Types
^^^^^

* Only :class:`torch.Tensors`, numeric types that can be trivially converted to torch.Tensors (e.g. float, int),
  and tuples and lists of those types are supported as model inputs or outputs. Dict and str inputs and
  outputs are accepted in :ref:`tracing<tracing-vs-scripting>` mode, but:

  * Any computation that depends on the value of a dict or a str input **will be replaced with the
    constant value** seen during the one traced execution.
  * Any output that is a dict will be silently replaced with a **flattened sequence of its values
    (keys will be removed)**. E.g. ``{"foo": 1, "bar": 2}`` becomes ``(1, 2)``.
  * Any output that is a str will be silently removed.

* Certain operations involving tuples and lists are not supported in
  :ref:`scripting<tracing-vs-scripting>` mode due to limited support in ONNX for nested sequences.
  In particular, appending a tuple to a list is not supported. In tracing mode, the nested sequences
  will be flattened automatically during the tracing.

Differences in Operator Implementations
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Due to differences in implementations of operators, running the exported model on different runtimes
may produce different results from each other or from PyTorch. Normally these differences are
numerically small, so this should only be a concern if your application is sensitive to these
small differences.

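To see why exact equality is the wrong check when comparing runtimes, here is a small NumPy-only sketch (our own illustration) of how evaluation order alone perturbs float32 results:

```python
import numpy as np

x = np.random.RandomState(0).rand(10000).astype(np.float32)

# Two mathematically equivalent float32 reductions, evaluated in different
# orders, much as two different runtimes might.
a = x.sum(dtype=np.float32)
b = x.reshape(100, 100).sum(axis=0, dtype=np.float32).sum(dtype=np.float32)

# Exact equality may fail; compare with a tolerance instead.
np.testing.assert_allclose(a, b, rtol=1e-5)
```

The same tolerance-based comparison applies when checking an exported model's outputs against PyTorch's.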
.. _tensor-indexing:

Unsupported Tensor Indexing Patterns
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Tensor indexing patterns that cannot be exported are listed below.
If you are experiencing issues exporting a model that does not include any of
the unsupported patterns below, please double check that you are exporting with
the latest ``opset_version``.

Reads / Gets
~~~~~~~~~~~~

When indexing into a tensor for reading, the following patterns are not supported: ::

    # Tensor indices that include negative values.
    data[torch.tensor([[1, 2], [2, -3]]), torch.tensor([-2, 3])]
    # Workarounds: use positive index values.

Writes / Sets
~~~~~~~~~~~~~

When indexing into a Tensor for writing, the following patterns are not supported: ::

    # Multiple tensor indices if any has rank >= 2
    data[torch.tensor([[1, 2], [2, 3]]), torch.tensor([2, 3])] = new_data
    # Workarounds: use a single tensor index with rank >= 2,
    #              or multiple consecutive tensor indices with rank == 1.

    # Multiple tensor indices that are not consecutive
    data[torch.tensor([2, 3]), :, torch.tensor([1, 2])] = new_data
    # Workarounds: transpose `data` such that tensor indices are consecutive.

    # Tensor indices that include negative values.
    data[torch.tensor([1, -2]), torch.tensor([-2, 3])] = new_data
    # Workarounds: use positive index values.

    # Implicit broadcasting required for new_data.
    data[torch.tensor([[0, 2], [1, 1]]), 1:3] = new_data
    # Workarounds: expand new_data explicitly.
    # Example:
    #   data shape:     [3, 4, 5]
    #   new_data shape: [5]
    #   expected new_data shape after broadcasting: [2, 2, 2, 5]

Adding support for operators
----------------------------

When exporting a model that includes unsupported operators, you'll see an error message like:

.. code-block:: text

    RuntimeError: ONNX export failed: Couldn't export operator foo

When that happens, there are a few things you can do:

#. Change the model to not use that operator.
#. Create a symbolic function to convert the operator and register it as a custom symbolic function.
#. Contribute to PyTorch to add the same symbolic function to :mod:`torch.onnx` itself.

If you decide to implement a symbolic function (we hope you will contribute it back to PyTorch!), here is how you can get started:

ONNX exporter internals
^^^^^^^^^^^^^^^^^^^^^^^

A "symbolic function" is a function that decomposes a PyTorch operator into a
composition of a series of ONNX operators.

During export, each node (which contains a PyTorch operator) in the TorchScript
graph is visited by the exporter in topological order.
Upon visiting a node, the exporter looks for a registered symbolic function for
that operator. Symbolic functions are implemented in Python. A symbolic function for
an op named ``foo`` would look something like::

    def foo(
        g,
        input_0: torch._C.Value,
        input_1: torch._C.Value) -> Union[None, torch._C.Value, List[torch._C.Value]]:
        """
        Adds the ONNX operations representing this PyTorch function by updating the
        graph g with `g.op()` calls.

        Args:
            g (Graph): graph to write the ONNX representation into.
            input_0 (Value): value representing the variables which contain
                the first input for this operator.
            input_1 (Value): value representing the variables which contain
                the second input for this operator.

        Returns:
            A Value or List of Values specifying the ONNX nodes that compute something
            equivalent to the original PyTorch operator with the given inputs.

            None if it cannot be converted to ONNX.
        """
        ...

The ``torch._C`` types are Python wrappers around the types defined in C++ in
`ir.h <https://github.com/pytorch/pytorch/blob/main/torch/csrc/jit/ir/ir.h>`_.

The process for adding a symbolic function depends on the type of operator.

.. _adding-support-aten:

ATen operators
^^^^^^^^^^^^^^

`ATen <https://pytorch.org/cppdocs/#aten>`_ is PyTorch's built-in tensor library.
If the operator is an ATen operator (shows up in the TorchScript graph with the prefix
``aten::``), make sure it is not supported already.

List of supported operators
~~~~~~~~~~~~~~~~~~~~~~~~~~~

Visit the auto generated :doc:`list of supported TorchScript operators <../onnx_torchscript_supported_aten_ops>`
for details on which operators are supported in each ``opset_version``.

Adding support for an aten or quantized operator
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

If the operator is not in the list above:

* Define the symbolic function in ``torch/onnx/symbolic_opset<version>.py``, for example
  `torch/onnx/symbolic_opset9.py <https://github.com/pytorch/pytorch/blob/main/torch/onnx/symbolic_opset9.py>`_.
  Make sure the function has the same name as the ATen function, which may be declared in
  ``torch/_C/_VariableFunctions.pyi`` or ``torch/nn/functional.pyi`` (these files are generated at
  build time, so will not appear in your checkout until you build PyTorch).
* By default, the first arg is the ONNX graph.
  Other arg names must EXACTLY match the names in the ``.pyi`` file,
  because dispatch is done with keyword arguments.
* In the symbolic function, if the operator is in the
  `ONNX standard operator set <https://github.com/onnx/onnx/blob/master/docs/Operators.md>`_,
  we only need to create a node to represent the ONNX operator in the graph.
  If not, we can compose several standard operators that have the
  equivalent semantics to the ATen operator.

Here is an example of handling a missing symbolic function for the ``ELU`` operator.

If we run the following code::

    print(
        torch.jit.trace(
            torch.nn.ELU(),  # module
            torch.ones(1)    # example input
        ).graph
    )

We see something like::

    graph(%self : __torch__.torch.nn.modules.activation.___torch_mangle_0.ELU,
          %input : Float(1, strides=[1], requires_grad=0, device=cpu)):
      %4 : float = prim::Constant[value=1.]()
      %5 : int = prim::Constant[value=1]()
      %6 : int = prim::Constant[value=1]()
      %7 : Float(1, strides=[1], requires_grad=0, device=cpu) = aten::elu(%input, %4, %5, %6)
      return (%7)

Since we see ``aten::elu`` in the graph, we know this is an ATen operator.

We check the `ONNX operator list <https://github.com/onnx/onnx/blob/master/docs/Operators.md>`_,
and confirm that ``Elu`` is standardized in ONNX.

We find a signature for ``elu`` in ``torch/nn/functional.pyi``::

    def elu(input: Tensor, alpha: float = ..., inplace: bool = ...) -> Tensor: ...

We add the following lines to ``symbolic_opset9.py``::

    def elu(g, input: torch.Value, alpha: torch.Value, inplace: bool = False):
        return g.op("Elu", input, alpha_f=alpha)

Now PyTorch is able to export models containing the ``aten::elu`` operator!

See the ``torch/onnx/symbolic_opset*.py`` files for more examples.

torch.autograd.Functions
^^^^^^^^^^^^^^^^^^^^^^^^

If the operator is a sub-class of :class:`torch.autograd.Function`, there are three ways
to export it.

Static Symbolic Method
~~~~~~~~~~~~~~~~~~~~~~

You can add a static method named ``symbolic`` to your function class. It should return
ONNX operators that represent the function's behavior in ONNX. For example::

    class MyRelu(torch.autograd.Function):
        @staticmethod
        def forward(ctx, input: torch.Tensor) -> torch.Tensor:
            ctx.save_for_backward(input)
            return input.clamp(min=0)

        @staticmethod
        def symbolic(g: torch.Graph, input: torch.Value) -> torch.Value:
            return g.op("Clip", input, g.op("Constant", value_t=torch.tensor(0, dtype=torch.float)))

.. FIXME(justinchuby): PythonOps are too complicated and the example below
.. uses private methods we do not expose. We are looking to
.. improve the experience. Since SymbolicContext is deprecated, we think
.. defining a symbolic staticmethod is a better way to go for now.

.. PythonOp Symbolic
.. ~~~~~~~~~~~~~~~~~

.. Alternatively, you can register a custom symbolic function.
.. This gives the symbolic function access to more info through the
.. ``torch.onnx.SymbolicContext`` object, which gets passed in as the first
.. argument (before the ``Graph`` object).

.. All autograd ``Function``\ s appear in the TorchScript graph as ``prim::PythonOp`` nodes.
.. In order to differentiate between different ``Function`` subclasses, the
.. symbolic function should use the ``name`` kwarg which gets set to the name of the class.

.. Custom symbolic functions should add type and shape information by calling ``setType(...)``
.. on Value objects before returning them (implemented in C++ by
.. ``torch::jit::Value::setType``). This is not required, but it can help the exporter's
.. shape and type inference for down-stream nodes. For a non-trivial example of ``setType``, see
.. ``test_aten_embedding_2`` in
.. `test_operators.py <https://github.com/pytorch/pytorch/blob/main/test/onnx/test_operators.py>`_.

.. The example below shows how you can access ``requires_grad`` via the ``Node`` object:

..     class MyClip(torch.autograd.Function):
..         @staticmethod
..         def forward(ctx, input, min):
..             ctx.save_for_backward(input)
..             return input.clamp(min=min)

..     class MyRelu(torch.autograd.Function):
..         @staticmethod
..         def forward(ctx, input):
..             ctx.save_for_backward(input)
..             return input.clamp(min=0)

..     def symbolic_python_op(g: "GraphContext", *args, **kwargs):
..         n = ctx.cur_node
..         print("original node: ", n)
..         for i, out in enumerate(n.outputs()):
..             print("original output {}: {}, requires grad: {}".format(i, out, out.requiresGrad()))
..         import torch.onnx.symbolic_helper as sym_helper
..         for i, arg in enumerate(args):
..             requires_grad = arg.requiresGrad() if sym_helper._is_value(arg) else False
..             print("arg {}: {}, requires grad: {}".format(i, arg, requires_grad))

..         name = kwargs["name"]
..         ret = None
..         if name == "MyClip":
..             ret = g.op("Clip", args[0], args[1])
..         elif name == "MyRelu":
..             ret = g.op("Relu", args[0])
..         else:
..             # Logs a warning and returns None
..             return _unimplemented("prim::PythonOp", "unknown node kind: " + name)
..         # Copy type and shape from original node.
..         ret.setType(n.type())
..         return ret

..     from torch.onnx import register_custom_op_symbolic
..     register_custom_op_symbolic("prim::PythonOp", symbolic_python_op, 1)

Inline Autograd Function
~~~~~~~~~~~~~~~~~~~~~~~~

In cases where a static symbolic method is not provided for a :class:`torch.autograd.Function`,
and no custom symbolic function is registered for ``prim::PythonOp``,
:func:`torch.onnx.export` tries to inline the graph that corresponds to that :class:`torch.autograd.Function` such that
this function is broken down into individual operators that were used within the function.
The export should be successful as long as these individual operators are supported. For example::

    class MyLogExp(torch.autograd.Function):
        @staticmethod
        def forward(ctx, input: torch.Tensor) -> torch.Tensor:
            ctx.save_for_backward(input)
            h = input.exp()
            return h.log().log()

There is no static symbolic method present for this model, yet it is exported as follows::

    graph(%input : Float(1, strides=[1], requires_grad=0, device=cpu)):
      %1 : float = onnx::Exp[](%input)
      %2 : float = onnx::Log[](%1)
      %3 : float = onnx::Log[](%2)
      return (%3)

If you need to avoid inlining of :class:`torch.autograd.Function`, you should export models with
``operator_export_type`` set to ``ONNX_FALLTHROUGH`` or ``ONNX_ATEN_FALLBACK``.

Custom operators
^^^^^^^^^^^^^^^^

You can export your model with custom operators that include a combination of many standard ONNX ops,
or are driven by a self-defined C++ backend.

ONNX-script functions
~~~~~~~~~~~~~~~~~~~~~

If an operator is not a standard ONNX op, but can be composed of multiple existing ONNX ops, you can utilize
`ONNX-script <https://github.com/microsoft/onnx-script>`_ to create an external ONNX function to support the operator.
You can export it by following this example::

    import onnxscript
    # There are three opset versions that need to be aligned.
    # This is (1) the opset version in the ONNX function.
    from onnxscript.onnx_opset import opset15 as op
    opset_version = 15

    x = torch.randn(1, 2, 3, 4, requires_grad=True)
    model = torch.nn.SELU()

    custom_opset = onnxscript.values.Opset(domain="onnx-script", version=1)

    @onnxscript.script(custom_opset)
    def Selu(X):
        alpha = 1.67326  # auto wrapped as Constants
        gamma = 1.0507
        alphaX = op.CastLike(alpha, X)
        gammaX = op.CastLike(gamma, X)
        neg = gammaX * (alphaX * op.Exp(X) - alphaX)
        pos = gammaX * X
        zero = op.CastLike(0, X)
        return op.Where(X <= zero, neg, pos)

    # setType API provides shape/type to ONNX shape/type inference
    def custom_selu(g: jit_utils.GraphContext, X):
        return g.onnxscript_op(Selu, X).setType(X.type())

    # Register custom symbolic function
    # There are three opset versions that need to be aligned.
    # This is (2) the opset version in the registry.
    torch.onnx.register_custom_op_symbolic(
        symbolic_name="aten::selu",
        symbolic_fn=custom_selu,
        opset_version=opset_version,
    )

    # There are three opset versions that need to be aligned.
    # This is (3) the opset version in the exporter.
    torch.onnx.export(
        model,
        x,
        "model.onnx",
        opset_version=opset_version,
        # only needed if you want to specify an opset version > 1.
        custom_opsets={"onnx-script": 2}
    )

The example above exports it as a custom operator in the "onnx-script" opset.
When exporting a custom operator, you can specify the custom domain version using the
``custom_opsets`` dictionary at export. If not specified, the custom opset version defaults to 1.

NOTE: Be careful to align the three opset versions mentioned in the above example, and make sure
they are consumed consistently in the export step. The example of how to write an ONNX-script
function is preliminary, since ONNX-script is under active development. Please follow the latest
`ONNX-script <https://github.com/microsoft/onnx-script>`_ documentation.

C++ Operators
~~~~~~~~~~~~~

If a model uses a custom operator implemented in C++ as described in
`Extending TorchScript with Custom C++ Operators <https://pytorch.org/tutorials/advanced/torch_script_custom_ops.html>`_,
you can export it by following this example::

    from torch.onnx import symbolic_helper


    # Define custom symbolic function
    @symbolic_helper.parse_args("v", "v", "f", "i")
    def symbolic_foo_forward(g, input1, input2, attr1, attr2):
        return g.op("custom_domain::Foo", input1, input2, attr1_f=attr1, attr2_i=attr2)


    # Register custom symbolic function
    torch.onnx.register_custom_op_symbolic("custom_ops::foo_forward", symbolic_foo_forward, 9)


    class FooModel(torch.nn.Module):
        def __init__(self, attr1, attr2):
            super().__init__()
            self.attr1 = attr1
            self.attr2 = attr2

        def forward(self, input1, input2):
            # Calling custom op
            return torch.ops.custom_ops.foo_forward(input1, input2, self.attr1, self.attr2)


    model = FooModel(attr1, attr2)
    torch.onnx.export(
        model,
        (example_input1, example_input1),
        "model.onnx",
        # only needed if you want to specify an opset version > 1.
        custom_opsets={"custom_domain": 2}
    )

The example above exports it as a custom operator in the "custom_domain" opset.
When exporting a custom operator, you can specify the custom domain version using the
``custom_opsets`` dictionary at export. If not specified, the custom opset version defaults to 1.

The runtime that consumes the model needs to support the custom op. See
`Caffe2 custom ops <https://caffe2.ai/docs/custom-operators.html>`_,
`ONNX Runtime custom ops <https://onnxruntime.ai/docs/reference/operators/add-custom-op.html>`_,
or your runtime of choice's documentation.

Discovering all unconvertible ATen ops at once
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

When export fails due to an unconvertible ATen op, there may in fact be more
than one such op but the error message only mentions the first. To discover
all of the unconvertible ops in one go you can::

    # prepare model, args, opset_version
    ...

    torch_script_graph, unconvertible_ops = torch.onnx.utils.unconvertible_ops(
        model, args, opset_version=opset_version
    )

    print(set(unconvertible_ops))

The set is approximated because some ops may be removed during the conversion
process and don't need to be converted. Some other ops may have partial support
that will fail conversion with particular inputs, but this should give you a
general idea of what ops are not supported. Please feel free to open GitHub Issues
for op support requests.

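For instance, a runnable sketch with a stand-in model (the ``TinyModel`` module here is our own placeholder for "prepare model, args, opset_version"):

```python
import torch

class TinyModel(torch.nn.Module):
    def forward(self, x):
        return torch.nn.functional.relu(x)

# Returns the TorchScript graph and the list of unconvertible op names.
graph, unconvertible = torch.onnx.utils.unconvertible_ops(
    TinyModel(), (torch.randn(2, 3),), opset_version=13
)
print(set(unconvertible))  # should be empty here, since relu is supported
```

An empty set means every traced op has a symbolic function for the requested opset.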
Frequently Asked Questions
--------------------------
Q: I have exported my LSTM model, but its input size seems to be fixed?

  The tracer records the shapes of the example inputs. If the model should accept
  inputs of dynamic shapes, set ``dynamic_axes`` when calling :func:`torch.onnx.export`.

Q: How to export models containing loops?

  See `Tracing vs Scripting`_.

Q: How to export models with primitive type inputs (e.g. int, float)?

  Support for primitive numeric type inputs was added in PyTorch 1.9.
  However, the exporter does not support models with str inputs.

Q: Does ONNX support implicit scalar datatype casting?

  The ONNX standard does not, but the exporter will try to handle that part.
  Scalars are exported as constant tensors.
  The exporter will figure out the right data type for scalars. In rare cases when it is unable
  to do so, you will need to manually specify the datatype with e.g. ``dtype=torch.float32``.
  If you see any errors, please `create a GitHub issue <https://github.com/pytorch/pytorch/issues>`_.

Q: Are lists of Tensors exportable to ONNX?

  Yes, for ``opset_version`` >= 11, since ONNX introduced the Sequence type in opset 11.

Python API
----------

.. automodule:: torch.onnx

Functions
^^^^^^^^^

.. autofunction:: export
.. autofunction:: export_to_pretty_string
.. autofunction:: register_custom_op_symbolic
.. autofunction:: unregister_custom_op_symbolic
.. autofunction:: select_model_mode_for_export
.. autofunction:: is_in_onnx_export
.. autofunction:: enable_log
.. autofunction:: disable_log
.. autofunction:: torch.onnx.verification.find_mismatch

Classes
^^^^^^^

.. autosummary::
    :toctree: generated
    :nosignatures:
    :template: classtemplate.rst

    JitScalarType
    torch.onnx.verification.GraphInfo
    torch.onnx.verification.VerificationOptions
@@ -5,7 +5,7 @@ ONNX supported TorchScript operators
 
 .. This file is automatically generated during the documentation build
 .. by cross referencing ONNX operator symbolics with TorchScript operators via
-.. ``docs/source/scripts/build_onnx_supported_aten_op_csv_table.py``.
+.. ``docs/source/scripts/build_onnx_torchscript_supported_aten_op_csv_table.py``.
 .. Do not modify directly and instead `rebuild the docs <https://github.com/pytorch/pytorch#building-the-documentation>`_.
 
 This page lists the TorchScript operators that are supported/unsupported by ONNX export.
@@ -119,7 +119,9 @@ def generate_index_rst(example_cases, tag_to_modules, support_level_to_modules):
         blurb = file.read()
 
     # Generate contents of the .rst file
-    doc_contents = f"""ExportDB
+    doc_contents = f""".. _torch.export_db:
+
+ExportDB
 ========
 
 {blurb}
 
@@ -575,6 +575,7 @@ Tensor class reference
     Tensor.reciprocal_
     Tensor.record_stream
     Tensor.register_hook
+    Tensor.register_post_accumulate_grad_hook
     Tensor.remainder
     Tensor.remainder_
     Tensor.renorm
@@ -37,7 +37,7 @@ TorchDynamo requires a backend that converts the captured graphs into a fast
 machine code. Different backends can result in various optimization gains.
 The default backend is called TorchInductor, also known as *inductor*,
 TorchDynamo has a list of supported backends developed by our partners,
-which can be see by running ``torch.compile.list_backends()`` each of which
+which can be see by running ``torch.compiler.list_backends()`` each of which
 with its optional dependencies.
 
 Some of the most commonly used backends include:
@@ -54,6 +54,10 @@ Some of the most commonly used backends include:
      - Uses the TorchInductor backend. `Read more <https://dev-discuss.pytorch.org/t/torchinductor-a-pytorch-native-compiler-with-define-by-run-ir-and-symbolic-shapes/747>`__
    * - ``torch.compile(m, backend="cudagraphs")``
      - CUDA graphs with AOT Autograd. `Read more <https://github.com/pytorch/torchdynamo/pull/757>`__
+   * - ``torch.compile(m, backend="ipex")``
+     - Uses IPEX on CPU. `Read more <https://github.com/intel/intel-extension-for-pytorch>`__
+   * - ``torch.compile(m, backend="onnxrt")``
+     - Uses ONNX Runtime for training on CPU/GPU. :doc:`Read more <onnx_dynamo_onnxruntime_backend>`
 
 **Inference-only backends**
 
@@ -63,10 +67,8 @@ Some of the most commonly used backends include:
 
    * - Backend
      - Description
-   * - ``torch.compile(m, backend="onnxrt")``
-     - Uses ONNXRT for inference on CPU/GPU. `Read more <https://onnxruntime.ai/>`__
    * - ``torch.compile(m, backend="tensorrt")``
-     - Uses ONNXRT to run TensorRT for inference optimizations. `Read more <https://github.com/onnx/onnx-tensorrt>`__
+     - Uses ONNX Runtime to run TensorRT for inference optimizations. `Read more <https://github.com/onnx/onnx-tensorrt>`__
    * - ``torch.compile(m, backend="ipex")``
      - Uses IPEX for inference on CPU. `Read more <https://github.com/intel/intel-extension-for-pytorch>`__
    * - ``torch.compile(m, backend="tvm")``
Abridged public API
-------------------

The default dynamic behavior in PyTorch 2.1 is:

- PT2 assumes everything is static by default.
- If we recompile because a size changed, we will instead attempt to recompile
  that size as being dynamic (sizes that have changed are likely to change in
  the future). This generalization may fail (e.g., because user code does a
  conditional branch on the size in question, or because of missing dynamic
  shapes support in PT2). If you are trying to understand why PT2 has
  overspecialized some code, run with ``TORCH_LOGS=dynamic`` and look for
  "eval" entries that say when guards are added and why.
- If you know ahead of time something will be dynamic, you can skip the first
  recompile with ``torch._dynamo.mark_dynamic(tensor, dim)``.
- If you say ``torch.compile(dynamic=False)``, we will turn off automatic
  dynamic shapes on recompiles and always recompile for each distinct size.
  Conversely, if you say ``torch.compile(dynamic=True)``, we will try to make
  everything as dynamic as possible. This is mostly useful for small
  operators; if you try it on a big model it will (1) probably crash PT2 and
  (2) run slow for no good reason.
The Guard Model
---------------
- On tensor creation, PyTorch precomputes a lot of data about a tensor; for example, if you use ``empty_strided`` to create a tensor, we will eagerly sort the strides and determine if the tensor is non-overlapping and dense. Sorts produce a lot of guards. However, it is more common to produce a tensor directly with a higher-level API like ``empty``, which is guaranteed to produce a non-overlapping and dense tensor. We modified PyTorch to avoid needlessly recomputing these properties.
- Even if nontrivial compute is needed, sometimes a property is never actually queried at all. Making these precomputed properties lazy allows us to avoid guarding on an unbacked symbolic integer unless it is actually needed.
- The data in an integer tensor is generally not known to be non-negative. However, we provide an API, ``constrain_range``, whereby a user can specify that a size is bounded above and below by known limits.

In future versions of PT2 (beyond PT2.1), we will extend our reasoning system
to infer that an unbacked symbolic integer is size-like based on usage. For
example, if you pass the result of an ``.item()`` call to a factory function
like ``torch.empty``, we will automatically infer that the result is a size
(because if it were not, it would fail). This assumption will be validated
at runtime, raising an error if it is not fulfilled.
CUDA graphs with Triton are enabled by default in inductor but removing
them may alleviate some OOM issues: ``torch._inductor.config.triton.cudagraphs = False``.

Does ``torch.func`` work with ``torch.compile`` (for `grad` and `vmap` transforms)?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Applying a ``torch.func`` transform to a function handled with ``torch.compile``
--------------------------------------------------------------------------------

For example, you have the following code:

.. code-block:: python

    import torch

    @torch.compile
    def f(x):
        return torch.sin(x)

    def g(x):
        return torch.func.grad(f)(x)

    x = torch.randn(2, 3)
    g(x)
This code will not work. There is an `issue <https://github.com/pytorch/pytorch/issues/100320>`__
that you can track for this.

As a workaround, use ``torch.compile`` outside of the ``torch.func`` function:

.. note::
    This is an experimental feature and can be used by setting ``torch._dynamo.config.capture_func_transforms=True``.

.. code-block:: python

    import torch

    torch._dynamo.config.capture_func_transforms=True

    def f(x):
        return torch.sin(x)

    @torch.compile
    def g(x):
        return torch.func.grad(f)(x)

    x = torch.randn(2, 3)
    g(x)
Calling ``torch.func`` transform inside of a function handled with ``torch.compile``
------------------------------------------------------------------------------------

Compiling ``torch.func.grad`` with ``torch.compile``
----------------------------------------------------

.. code-block:: python

    import torch

    torch._dynamo.config.capture_func_transforms=True

    def wrapper_fn(x):
        return torch.func.grad(lambda x: x.sin().sum())(x)

    x = torch.randn(3, 3, 3)
    grad_x = torch.compile(wrapper_fn)(x)
Compiling ``torch.vmap`` with ``torch.compile``
-----------------------------------------------

.. code-block:: python

    import torch

    torch._dynamo.config.capture_func_transforms=True

    def my_fn(x):
        return torch.vmap(lambda x: x.sum(1))(x)

    x = torch.randn(3, 3, 3)
    output = torch.compile(my_fn)(x)
Limitations
-----------

There are currently a few cases which are not supported and lead to graph breaks
(that is, ``torch.compile`` falls back to eager-mode PyTorch on these). We are
working on improving the situation for the next release (PyTorch 2.2).
1. The inputs and outputs of the function being transformed over must be tensors.
   We do not yet support things like tuples of tensors.

   .. code-block:: python

       import torch

       torch._dynamo.config.capture_func_transforms=True

       def fn(x):
           x1, x2 = x
           return x1 + x2

       def my_fn(x):
           return torch.func.vmap(fn)(x)

       x1 = torch.randn(3, 3, 3)
       x2 = torch.randn(3, 3, 3)
       # Unsupported, falls back to eager-mode PyTorch
       output = torch.compile(my_fn)((x1, x2))
2. Keyword arguments are not supported.

   .. code-block:: python

       import torch

       torch._dynamo.config.capture_func_transforms=True

       def fn(x, y):
           return (x + y).sum()

       def my_fn(x, y):
           return torch.func.grad(fn)(x, y=y)

       x = torch.randn(3, 3)
       y = torch.randn(3, 3)
       # Unsupported, falls back to eager-mode PyTorch
       output = torch.compile(my_fn)(x, y)
3. Functions with observable side effects. For example, it is OK to mutate a
   list created in the function, but not OK to mutate a list created outside
   of the function.

   .. code-block:: python

       import torch

       torch._dynamo.config.capture_func_transforms=True

       some_list = []

       def f(x, y):
           some_list.append(1)
           return x + y

       def my_fn(x, y):
           return torch.func.vmap(f)(x, y)

       x = torch.ones(2, 3)
       y = torch.randn(2, 3)
       # Unsupported, falls back to eager-mode PyTorch
       output = torch.compile(my_fn)(x, y)
4. ``torch.vmap`` over a function that calls one or more operators in the
   following list.

   .. note::
       'stride', 'requires_grad', 'storage_offset', 'layout', 'data', 'is_coalesced', 'is_complex',
       'is_conj', 'is_contiguous', 'is_cpu', 'is_cuda', 'is_distributed', 'is_floating_point',
       'is_inference', 'is_ipu', 'is_leaf', 'is_meta', 'is_mkldnn', 'is_mps', 'is_neg', 'is_nested',
       'is_nonzero', 'is_ort', 'is_pinned', 'is_quantized', 'is_same_size', 'is_set_to', 'is_shared',
       'is_signed', 'is_sparse', 'is_sparse_csr', 'is_vulkan', 'is_xla', 'is_xpu'

   .. code-block:: python

       import torch

       torch._dynamo.config.capture_func_transforms=True

       def bad_fn(x):
           x.stride()
           return x

       def my_fn(x):
           return torch.func.vmap(bad_fn)(x)

       x = torch.randn(3, 3, 3)
       # Unsupported, falls back to eager-mode PyTorch
       output = torch.compile(my_fn)(x)
Compiling functions besides the ones which are supported (escape hatch)
-----------------------------------------------------------------------

For other transforms, as a workaround, use ``torch._dynamo.allow_in_graph``.

``allow_in_graph`` is an escape hatch. If your code does not work with
``torch.compile``, which introspects Python bytecode, but you believe it
will work via a symbolic tensor trace, then use ``allow_in_graph``.

One common pitfall is using ``allow_in_graph`` to annotate a function that
invokes an ``nn.Module``. This is because the outputs now depend on the
parameters of the ``nn.Module``. To get this to work, use
``torch.func.functional_call`` to extract the module state.
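A minimal sketch of that ``functional_call`` pattern, assuming a plain ``nn.Linear`` stands in for the module in question:

```python
import torch
from torch.func import functional_call, vmap

model = torch.nn.Linear(3, 2)
params = dict(model.named_parameters())

# Pass the module state explicitly, so the outputs depend only on the
# (params, x) inputs rather than on hidden module state.
def compute(params, x):
    return functional_call(model, params, (x,))

x = torch.randn(5, 3)
# Batch over x only; the same parameters are used for every sample.
out = vmap(compute, in_dims=(None, 0))(params, x)
```

Here ``in_dims=(None, 0)`` tells ``vmap`` not to batch over the parameter dict.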
Does NumPy work with ``torch.compile``?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Starting in 2.1, ``torch.compile`` understands native NumPy programs that
work on NumPy arrays, and mixed PyTorch-NumPy programs that convert from PyTorch
to NumPy and back via ``x.numpy()``, ``torch.from_numpy``, and related functions.
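A small sketch of such a mixed program (``backend="eager"`` is chosen only so the example runs without a compiler toolchain):

```python
import numpy as np
import torch

@torch.compile(backend="eager")
def mixed(x: torch.Tensor) -> torch.Tensor:
    a = x.numpy()             # traced as a view change, not a real conversion
    b = np.clip(a, 0.0, 1.0)  # native NumPy call, traced by torch.compile
    return torch.from_numpy(b)

out = mixed(torch.tensor([-0.5, 0.5, 1.5]))
```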
.. _nonsupported-numpy-feats:

Which NumPy features does ``torch.compile`` support?
----------------------------------------------------

NumPy within ``torch.compile`` follows the NumPy 2.0 pre-release.

Generally, ``torch.compile`` is able to trace through most NumPy constructions,
and when it cannot, it falls back to eager and lets NumPy execute that piece of
code. Even then, there are a few features where ``torch.compile`` semantics
slightly deviate from those of NumPy:
- NumPy scalars: We model them as 0-D arrays. That is, ``np.float32(3)`` returns
  a 0-D array under ``torch.compile``. To avoid a graph break, it is best to use this 0-D
  array. If this breaks your code, you can work around it by casting the NumPy scalar
  to the relevant Python scalar type ``bool/int/float``.

- Negative strides: ``np.flip`` and slicing with a negative step return a copy.

- Type promotion: NumPy's type promotion will change in NumPy 2.0. The new rules
  are described in `NEP 50 <https://numpy.org/neps/nep-0050-scalar-promotion.html>`__.
  ``torch.compile`` implements NEP 50 rather than the current soon-to-be-deprecated rules.

- ``{tril,triu}_indices_from/{tril,triu}_indices`` return arrays rather than a tuple of arrays.
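For instance, the casting workaround from the NumPy-scalars bullet above can be sketched as (``backend="eager"`` only keeps the example self-contained):

```python
import numpy as np
import torch

@torch.compile(backend="eager")
def scaled(x):
    s = np.float32(0.5)  # a 0-D array under torch.compile, a NumPy scalar in eager
    return x * float(s)  # casting to a Python float keeps both modes consistent

y = scaled(np.ones(3))
```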
There are other features for which we do not support tracing and we gracefully
fall back to NumPy for their execution:

- Non-numeric dtypes like datetimes, strings, chars, void, structured dtypes and recarrays.

- Long dtypes ``np.float128/np.complex256`` and some unsigned dtypes ``np.uint16/np.uint32/np.uint64``.

- ``ndarray`` subclasses.

- Masked arrays.

- Esoteric ufunc machinery like ``axes=[(n,k),(k,m)->(n,m)]`` and ufunc methods (e.g., ``np.add.reduce``).

- Sorting / ordering ``complex64/complex128`` arrays.

- NumPy ``np.poly1d`` and ``np.polynomial``.

- Positional ``out1, out2`` args in functions with 2 or more returns (``out=tuple`` does work).

- ``__array_function__``, ``__array_interface__`` and ``__array_wrap__``.

- The ``ndarray.ctypes`` attribute.
Can I execute NumPy code on CUDA via ``torch.compile``?
-------------------------------------------------------

Yes you can! To do so, simply execute your code within a ``torch.device("cuda")``
context. Consider the example

.. code-block:: python

    import torch
    import numpy as np

    @torch.compile
    def numpy_fn(X: np.ndarray, Y: np.ndarray) -> np.ndarray:
        return np.sum(X[:, :, None] * Y[:, None, :], axis=(-2, -1))

    X = np.random.randn(1024, 64)
    Y = np.random.randn(1024, 64)
    with torch.device("cuda"):
        Z = numpy_fn(X, Y)

In this example, ``numpy_fn`` will be executed in CUDA. For this to be
possible, ``torch.compile`` automatically moves ``X`` and ``Y`` from CPU
to CUDA, and then it moves the result ``Z`` from CUDA to CPU. If we are
executing this function several times in the same program run, we may want
to avoid all these rather expensive memory copies. To do so, we just need
to tweak our ``numpy_fn`` so that it accepts CUDA tensors and returns tensors:

.. code-block:: python

    @torch.compile
    def numpy_fn(X: torch.Tensor, Y: torch.Tensor) -> torch.Tensor:
        X, Y = X.numpy(), Y.numpy()
        Z = np.sum(X[:, :, None] * Y[:, None, :], axis=(-2, -1))
        return torch.from_numpy(Z)

    X = torch.randn(1024, 64, device="cuda")
    Y = torch.randn(1024, 64, device="cuda")
    with torch.device("cuda"):
        Z = numpy_fn(X, Y)

By doing this, we explicitly create the tensors in CUDA memory and keep
them there. In this case ``X.numpy()`` and ``from_numpy()`` are hints to the compiler
but no real data movement happens. Note that the original program would not run
in eager mode now. If you want to run it in eager mode, you would need to call
``.numpy(force=True)`` and do ``Z = Z.cuda()`` before returning
``Z``. Of course, doing this would execute the program in eager-mode NumPy, and
on CPU.
How do I debug NumPy code under ``torch.compile``?
--------------------------------------------------

Debugging JIT compiled code is challenging, given the complexity of modern
compilers and the daunting errors that they raise.
`The tutorial on how to diagnose runtime errors within torch.compile <https://pytorch.org/docs/main/torch.compiler_troubleshooting.html#diagnosing-runtime-errors>`__
contains a few tips and tricks on how to tackle this task.

If the above is not enough to pinpoint the origin of the issue, there are still
a few other NumPy-specific tools we can use. We can discern whether the bug
is entirely in the PyTorch code by disabling tracing through NumPy functions:

.. code-block:: python

    from torch._dynamo import config
    config.trace_numpy = False
If the bug lies in the traced NumPy code, we can execute the NumPy code eagerly (without ``torch.compile``)
using PyTorch as a backend via ``import torch._numpy as np``.
This should only be used for **debugging purposes** and is in no way a
replacement for the PyTorch API, as it is **much less performant** and, as a
private API, **may change without notice**. At any rate, ``torch._numpy`` is a
Python implementation of NumPy in terms of PyTorch, and it is used internally by ``torch.compile`` to
transform NumPy code into PyTorch code. It is rather easy to read and modify,
so if you find any bug in it, feel free to submit a PR fixing it or simply open
an issue.
If the program does work when importing ``torch._numpy as np``, chances are
that the bug is in TorchDynamo. If this is the case, please feel free to open an issue
with a `minimal reproducer <https://pytorch.org/docs/2.1/torch.compiler_troubleshooting.html>`__.
I ``torch.compile`` some NumPy code and I did not see any speed-up.
-------------------------------------------------------------------

The best place to start is the
`tutorial with general advice for how to debug these sorts of torch.compile issues <https://pytorch.org/docs/main/torch.compiler_faq.html#why-am-i-not-seeing-speedups>`__.

Some graph breaks may happen because of the use of unsupported features. See
:ref:`nonsupported-numpy-feats`. More generally, it is useful to keep in mind
that some widely used NumPy features do not play well with compilers. For
example, in-place modifications make reasoning difficult within the compiler and
often yield worse performance than their out-of-place counterparts. As such, it is
best to avoid them. The same goes for the use of the ``out=`` parameter. Instead,
prefer out-of-place ops and let ``torch.compile`` optimize the memory use. The same
goes for data-dependent ops like masked indexing through boolean masks, or
data-dependent control flow like ``if`` or ``while`` constructions.
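As an illustration of the in-place point, a sketch contrasting the two styles (function names are made up for the example; ``backend="eager"`` only keeps it self-contained):

```python
import numpy as np
import torch

# Prefer this: out-of-place ops let torch.compile plan memory freely.
@torch.compile(backend="eager")
def normalize_out_of_place(x):
    return (x - x.mean()) / (x.std() + 1e-6)

# Avoid this under torch.compile: in-place mutation of an input is harder
# for the compiler to reason about and often compiles to slower code.
def normalize_in_place(x):
    x -= x.mean()
    x /= (x.std() + 1e-6)
    return x

y = normalize_out_of_place(np.arange(8.0))
```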
Which API to use for fine grain tracing?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~