Compare commits

...

132 Commits

Author SHA1 Message Date
a545ebf334 [releng] Docker release, Change docker tag. Refactor Push nightly tags step. Move cuda and cudnn version to docker tag rather than name (#116098)
* [releng] Docker Official release: make sure the CUDA version is part of the image name (#116070)

Follow up on https://github.com/pytorch/pytorch/pull/115949

Change docker build image name:
``pytorch:2.1.2-devel`` -> ``2.1.2-cuda12.1-cudnn8-devel`` and ``2.1.2-cuda11.8-cudnn8-devel``

Ref: https://github.com/orgs/pytorch/packages/container/package/pytorch-nightly

Naming will be the same as in https://hub.docker.com/r/pytorch/pytorch/tags
Pull Request resolved: https://github.com/pytorch/pytorch/pull/116070
Approved by: https://github.com/huydhn, https://github.com/seemethere

* [releng] Docker release: Refactor Push nightly tags step. Move cuda and cudnn version to docker tag rather than name (#116097)

Follow up after : https://github.com/pytorch/pytorch/pull/116070

This PR does two things:

1. Refactors the "Push nightly tags" step; CUDA_VERSION no longer needs to be extracted. The new tag is in this format: ``${PYTORCH_VERSION}-cuda$(CUDA_VERSION_SHORT)-cudnn$(CUDNN_VERSION)-runtime``
2. Moves cuda$(CUDA_VERSION_SHORT)-cudnn$(CUDNN_VERSION) from the Docker image name to the tag

Pull Request resolved: https://github.com/pytorch/pytorch/pull/116097
Approved by: https://github.com/jeanschmidt

* [release only change] hardcode pytorch 2.1.2 version
2023-12-19 09:35:23 -05:00
590a9c2762 Use matrix generate script for docker release workflows (#115949) (#116063)
* Use matrix generate script for docker release workflows (#115949)

Enable builds for both supported CUDA versions in the Docker release, rather than building only one version.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/115949
Approved by: https://github.com/huydhn

* release 2.1.2 only changes, fix rebase

* lint
2023-12-18 17:59:34 -05:00
a8e7c98cb9 Revert "Require less alignment for attn bias (#114173) (#114837)"
This reverts commit 59656491f3b1da809312942872cce010337504b0.
2023-12-12 08:41:07 -08:00
448700d18e Fix NULL dereference in binary CPU ops (#115241)
* Fix NULL dereference in binary CPU ops (#115183)

Targeted fix for https://github.com/pytorch/pytorch/issues/113037

A more fundamental fix, in which those functions are not even called for
empty tensors, is coming later

Pull Request resolved: https://github.com/pytorch/pytorch/pull/115183
Approved by: https://github.com/drisspg, https://github.com/atalman, https://github.com/huydhn

* Fix build after conflict resolution

* Also include https://github.com/pytorch/pytorch/pull/113262 to pass the test

---------

Co-authored-by: Nikita Shulga <nshulga@meta.com>
2023-12-06 01:20:06 -08:00
59656491f3 Require less alignment for attn bias (#114173) (#114837)
Improved Fix for Attention Mask Alignment Issue (#112577)

This PR addresses Issue #112577 by refining the previously implemented fix, which was found to be incorrect and caused unneeded memory regressions. The update simplifies how attention-mask alignment is handled for memory-efficient attention.

Alignment Check and Padding: Initially, the alignment of the attention mask is checked. If misalignment is detected, padding is applied, followed by slicing. During this process, a warning is raised to alert users.

Should this be warn_once?

We only call expand once, on the aligned mask.

Reference
https://github.com/facebookresearch/xformers/blob/main/xformers/ops/fmha/cutlass.py#L115

@albanD, @mruberry, @jbschlosser, @walterddr, and @mikaylagawarecki.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/114173
Approved by: https://github.com/danthe3rd
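
For intuition, the pad-then-slice idea can be sketched in a few lines of plain PyTorch (a conceptual sketch only; the alignment value and helper name are made up, and the real logic lives in the SDPA preprocessing in C++):
```python
import torch
import torch.nn.functional as F

ALIGN = 8  # hypothetical alignment requirement, in elements

def align_attn_bias(bias: torch.Tensor) -> torch.Tensor:
    # Pad the last dim up to a multiple of ALIGN, then slice back so the
    # returned view keeps the original shape but sits in freshly aligned storage.
    last = bias.size(-1)
    pad = (-last) % ALIGN
    if pad == 0:
        return bias
    padded = F.pad(bias, (0, pad))   # allocates aligned storage
    return padded[..., :last]        # view with the original logical shape

mask = torch.randn(2, 4, 128, 127)   # last dim not a multiple of 8
print(align_attn_bias(mask).shape)   # torch.Size([2, 4, 128, 127])
```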
2023-12-05 14:50:58 -05:00
41210eaedc [MPS] Fix out-of-bounds fill to sliced tensor (#114958)
This fixes a regression introduced by https://github.com/pytorch/pytorch/pull/81951 that caused out-of-bounds access when a sliced tensor is filled with zeros

Remove bogus `TORCH_INTERNAL_ASSERT(length >= offset)` as [NSMakeRange](https://developer.apple.com/documentation/foundation/1417188-nsmakerange?language=objc) arguments are location and length rather than start and end offset.

In `fill_mps_tensor_`:
- Pass `value` argument to `MPSStream::fill`
- Pass `self.nbytes()` rather than `self.storage().nbytes()` as the length of the buffer to fill, as the latter always results in an out-of-bounds write if the offset within the storage is non-zero

Add regression test

Fixes https://github.com/pytorch/pytorch/issues/114692

Cherry pick of https://github.com/pytorch/pytorch/pull/114838 into release/2.1 branch

Co-authored-by: Nikita Shulga <nshulga@meta.com>
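
A minimal sketch of the affected operation (hedged illustration; requires a machine with an MPS device):
```python
import torch

if torch.backends.mps.is_available():
    base = torch.ones(10, device="mps")
    view = base[2:6]   # sliced tensor with a non-zero storage offset
    view.fill_(0)      # previously wrote past the end of the slice
    print(base.cpu())  # only elements 2..5 should be zeroed
```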
2023-12-01 10:58:57 -08:00
3183bcd417 Fix mkldnn_matmul error on AArch64 (#114851)
Fixes https://github.com/pytorch/pytorch/issues/110149

Cherry pick https://github.com/pytorch/pytorch/pull/110150. This is a bug fix against 2.1 release
2023-11-30 08:11:08 -08:00
b5a89bbc5f Fix broadcasting cosine_similarity (#114795)
* Fix broadcasting cosine_similarity (#109363)

Fixes https://github.com/pytorch/pytorch/issues/109333
Pull Request resolved: https://github.com/pytorch/pytorch/pull/109363
Approved by: https://github.com/peterbell10

* The PR incidentally fixes the test by switching from sizes to sym_sizes

test_make_fx_symbolic_exhaustive_masked_scatter_cpu_float32

---------

Co-authored-by: lezcano <lezcano-93@hotmail.com>
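
A small example of the broadcasting pattern this fix covers (shapes here are illustrative, not taken from the issue):
```python
import torch
import torch.nn.functional as F

x = torch.randn(2, 3)
y = torch.randn(1, 3)                      # broadcasts against x along dim 0
print(F.cosine_similarity(x, y, dim=-1))   # shape (2,): one similarity per row
```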
2023-11-30 00:23:40 -08:00
3f662b6255 Package pybind11/eigen/ (#113055) (#114756)
Which was added in the pybind11 2.11 release, see https://github.com/pybind/pybind11/tree/v2.11.0/include/pybind11/eigen

Fixes https://github.com/pytorch/pytorch/issues/112841

Cherry-pick of  https://github.com/pytorch/pytorch/pull/113055 into release/2.1 branch

Co-authored-by: Nikita Shulga <2453524+malfet@users.noreply.github.com>
2023-11-29 07:04:29 -08:00
614af50378 [release only] Pin disabled-test-condensed and slow-tests json (#114514)
* [release only] Pin disabled-test-condensed json

* pin slow tests json
2023-11-27 13:30:27 -05:00
b3b22d7390 [BE] Handle errors in set_num_threads (#114420)
and `set_num_interop_threads`

Before this change, calling `torch.set_num_threads(2**65)` resulted in a segmentation fault; afterwards it becomes a plain runtime error:
```
% python -c "import torch;torch.set_num_threads(2**65)"
Traceback (most recent call last):
  File "<string>", line 1, in <module>
RuntimeError: Overflow when unpacking long
```

Similar to https://github.com/pytorch/pytorch/pull/60073

Cherry pick of https://github.com/pytorch/pytorch/pull/113684 into release/2.1

(cherry picked from commit 78f3937ee84e71475942598f4b51dce7c8a70783)
2023-11-23 14:04:26 -05:00
7405d70c30 [MPS] Fix crashes during Conv backward pass (#114419)
By adding the weights tensor to the MPSGraph cache key.
Adds a regression test to validate that the collision no longer happens

Fixes https://github.com/pytorch/pytorch/issues/112998

Cherry pick of https://github.com/pytorch/pytorch/pull/113398 into release/2.1

(cherry picked from commit 265d6aac0b71b917d6e36c5dd65c22f61644b715)
2023-11-23 14:02:46 -05:00
d62c757533 [Caffe2] Handle cpuinfo_initialize() failure (#114418)
It can fail on ARM platforms if the `/sys` folder is not accessible.
In that case, call `std::thread::hardware_concurrency()`, which is
aligned with the thread_pool initialization logic of `c10::TaskThreadPoolBase::defaultNumThreads()`

Further addresses issue raised in https://github.com/pytorch/pytorch/issues/113568
This is a cherry-pick of https://github.com/pytorch/pytorch/pull/114011 into release/2.1 branch

(cherry picked from commit 310e3060b7e4d0c76149aadad4519c7abed8c2a7)
2023-11-23 14:01:16 -05:00
7833889a44 Fix chrome trace entry format (#113763) (#114416)
Fix regression introduced by https://github.com/pytorch/pytorch/pull/107519

`'"args": {{}}}}, '` was part of format string, when curly braces a duplicated to get them printed single time, but ruff change left the string format as is

Fixes https://github.com/pytorch/pytorch/issues/113756

Pull Request resolved: https://github.com/pytorch/pytorch/pull/113763
Approved by: https://github.com/Skylion007, https://github.com/aaronenyeshi

(cherry picked from commit e100ff42fd087d7a1696cb52c216507d45b8fb85)
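
For reference, a small illustration of how doubled braces behave in Python format strings (the template literal is taken from the commit message above; this is not the profiler code itself):
```python
# '{{' and '}}' survive .format() as literal '{' and '}'.
template = '"args": {{}}}}, '
print(template.format())  # prints: "args": {}},
```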
2023-11-23 13:57:43 -05:00
4c55dc5035 remove _shard_tensor() call (#111687)
Co-authored-by: Andrey Talman <atalman@fb.com>
2023-11-08 07:49:29 -05:00
f58669bc5f c10::DriverAPI Try opening libcuda.so.1 (#113096)
As `libcuda.so` is only installed in a dev environment (i.e. when the CUDA Toolkit is installed), while `libcuda.so.1` is part of the NVIDIA driver.
Also, this will keep it aligned with a5cb8f75a7/aten/src/ATen/cuda/detail/LazyNVRTC.cpp (L16)
Better errors in `c10::DriverAPI` on `dlopen`/`dlsym` failures
    
Cherry-pick of  following PR into release/2.1 branch
- Better errors in `c10::DriverAPI` on `dl` failure (#112995)
- `c10::DriverAPI` Try opening libcuda.so.1 (#112996)

(cherry picked from commit 3be0e1cd587ece8fa54a3a4da8ae68225b9cbb9b)
(cherry picked from commit d0a80f8af19625cbd0b3eb74a1970ac5b7c5439a)
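
The load-order idea, sketched at the Python level with ctypes (the actual change is in C++ inside `c10::DriverAPI`; this is only an illustration):
```python
import ctypes

lib = None
# Prefer the driver-installed soname; fall back to the dev symlink.
for name in ("libcuda.so.1", "libcuda.so"):
    try:
        lib = ctypes.CDLL(name)
        break
    except OSError:
        continue
print("loaded" if lib is not None else "no CUDA driver found")
```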
2023-11-07 11:47:24 -08:00
33106b706e [DCP] Add test for planner option for load_sharded_optimizer_state_dict (#112930)
Add test for a user submitted PR: https://github.com/pytorch/pytorch/pull/112259
Cherry-pick of https://github.com/pytorch/pytorch/pull/112891 into `release/2.1` branch
2023-11-07 11:38:50 -08:00
4b4c012a60 Enable planner to be used for loading sharded optimizer state dict (#112520)
Cherry-pick [#112259](https://github.com/pytorch/pytorch/pull/112259)

Requested by MosaicML

Comments from users:
> without this, we can't do training resumption because the model gets loaded without the optimizer

---------------------------------------------------------------------------------------------------------------------
This creates a more consistent interface for saving and loading sharded state dicts. A planner is able to be specified when saving a sharded optimizer state dict, but there is currently no planner support for loading one. This change does not affect the default behavior of the function.

Co-authored-by: Brian <23239305+b-chu@users.noreply.github.com>
2023-11-07 11:35:20 -08:00
47ac50248a [DCP][test] Make dim_0 size of params scale with world_size in torch/distributed/checkpoint/test_fsdp_optim_state.py (#112825) (#112894)
Make dim_0 size of params scale with world_size so it can be used to test the impact on performance when scaling up. More context of performance improvement is added in: https://github.com/pytorch/pytorch/pull/111687

For this cherry-pick pair, we remove the `_shard_tensor()` call in `load_sharded_optimizer_state_dict()` in optimizer.py, which is reported to scale poorly with the number of GPUs. The reason is that `_shard_tensor()` calls into `dist.all_gather_object()`, which is extremely expensive in communication when world_size becomes large.

main: https://github.com/pytorch/pytorch/pull/111096
cherry-pick: https://github.com/pytorch/pytorch/pull/111687

Pull Request resolved: https://github.com/pytorch/pytorch/pull/112825
Approved by: https://github.com/fegin
2023-11-06 16:14:14 -05:00
dc96ecb8ac Fix mem eff bias bug (#112673) (#112796)
This fixes #112577
Pull Request resolved: https://github.com/pytorch/pytorch/pull/112673
Approved by: https://github.com/cpuhrsch
2023-11-03 16:28:25 -07:00
18a2ed1db1 Mirror of Xformers Fix (#112267) (#112795)
# Summary
See https://github.com/fairinternal/xformers/pull/850 for more details
Pull Request resolved: https://github.com/pytorch/pytorch/pull/112267
Approved by: https://github.com/cpuhrsch
2023-11-03 16:18:35 -07:00
b2e1277247 Fix the meta func for mem_eff_backward (#110893) (#112792)
Fixes #110832

Pull Request resolved: https://github.com/pytorch/pytorch/pull/110893
Approved by: https://github.com/eellison
2023-11-03 16:17:04 -07:00
b249946c40 [Release-only] Pin Docker images to 2.1 for release (#112665)
* [Release-only] Pin Docker images for release (v2)

This is to include https://github.com/pytorch/builder/pull/1575

* Use -2.1 tag

* Pin some more

* Update workflow

* Pin everything to 2.1
2023-11-03 00:36:57 -07:00
ee79fc8a35 Revert "Fix bug: not creating empty tensor with correct sizes and device. (#106734)" (#112170) (#112790)
This reverts commit 528a2c0aa97d152b8004254040076b8ae605bf9f.

The PR is wrong, see #110941.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/112170
Approved by: https://github.com/albanD

Co-authored-by: rzou <zou3519@gmail.com>
2023-11-02 17:29:01 -07:00
084343ee12 Fix buffer overflow in torch.sort (#112784)
By updating the fbgemm submodule to the `pytorch/release/2.1` branch, which
contains the following two cherry-picks:
- 30f09a2646 (formatting)
-  70c6e83c29 (actual fix for the regression)

Add a regression test for it (though it can probably be limited to just CPU, as the reproducer only works if num_threads is 1)

Fixes https://github.com/pytorch/pytorch/issues/111189

Cherry-pick of https://github.com/pytorch/pytorch/pull/111672 into the release/2.1 branch, but with a more targeted fbgemm update

(cherry picked from commit 03da0694b7f414f8124d6a1e377e1a7484e0cfb6)
2023-11-02 17:05:31 -07:00
8a178f153e Fixed a memory leak in PyTorchFileReader. Added a test to prevent regressions. (#111814)
Fixes https://github.com/pytorch/pytorch/issues/111330.

This PR prevents PyTorchFileReader from leaking memory when initialized with an already opened file handle instead of a file name.

Cherry-pick of https://github.com/pytorch/pytorch/pull/111703 into release/2.1
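
A hedged sketch of the affected code path, i.e. constructing the reader from an already-open handle rather than a file name:
```python
import io

import torch

buf = io.BytesIO()
torch.save({"w": torch.zeros(3)}, buf)
buf.seek(0)
# Initializing PyTorchFileReader with an already-open file-like object is
# the path that previously leaked memory.
reader = torch._C.PyTorchFileReader(buf)
del reader
```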
2023-11-02 14:04:47 -07:00
2353915d69 Prevent OOB access in foreach_list variants (#112756)
By checking that list sizes are the same before computing forward gradients.

Before the change
```cpp
::std::vector<at::Tensor> _foreach_add_List(c10::DispatchKeySet ks, at::TensorList self, at::TensorList other, const at::Scalar & alpha) {
  auto self_ = unpack(self, "self", 0);
  auto other_ = unpack(other, "other", 1);
  [[maybe_unused]] auto _any_requires_grad = compute_requires_grad( self, other );

  std::vector<bool> _any_has_forward_grad_result(self.size());
  for (const auto& i : c10::irange(self.size())) {
    _any_has_forward_grad_result[i] = isFwGradDefined(self[i]) || isFwGradDefined(other[i]);
  }
  ...
```
after the change:
```cpp
::std::vector<at::Tensor> _foreach_add_List(c10::DispatchKeySet ks, at::TensorList self, at::TensorList other, const at::Scalar & alpha) {
    auto self_ = unpack(self, "self", 0);
    auto other_ = unpack(other, "other", 1);
    [[maybe_unused]] auto _any_requires_grad = compute_requires_grad( self, other );

    TORCH_CHECK(
        self.size() == other.size(),
          "Tensor lists must have the same number of tensors, got ",
        self.size(),
          " and ",
        other.size());
    std::vector<bool> _any_has_forward_grad_result(self.size());
    for (const auto& i : c10::irange(self.size())) {
      _any_has_forward_grad_result[i] = isFwGradDefined(self[i]) || isFwGradDefined(other[i]);
    }

```
Add regression test

Fixes https://github.com/pytorch/pytorch/issues/112305
Cherry-pick of  https://github.com/pytorch/pytorch/pull/112349 into `release/2.1` branch
Approved by: https://github.com/Chillee

(cherry picked from commit 80de49653a0d483eebf74c3aad1d4314a329aaee)
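
A hedged usage-level sketch of the failure mode that is now rejected (the exact error wording may differ):
```python
import torch

xs = [torch.randn(2, requires_grad=True) for _ in range(3)]
ys = [torch.randn(2) for _ in range(2)]  # mismatched list length
try:
    torch._foreach_add(xs, ys)
except RuntimeError as e:
    print(e)  # lists with different numbers of tensors are rejected
```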
2023-11-02 12:49:00 -07:00
c1bc460377 [MPS] Fix mps to cpu copy with storage offset (#112432)
Fix https://github.com/pytorch/pytorch/issues/108978

Cherry-pick of https://github.com/pytorch/pytorch/pull/109557 into release/2.1 branch 


(cherry picked from commit 00871189972e81a5fde230bc08137be14c59f178)

Co-authored-by: Li-Huai (Allan) Lin <qqaatw@gmail.com>
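
A minimal sketch of the affected copy (hedged illustration; requires an MPS device):
```python
import torch

if torch.backends.mps.is_available():
    t = torch.arange(10, device="mps")
    s = t[3:]       # view with a non-zero storage offset
    print(s.cpu())  # should print 3..9, not values starting at offset 0
```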
2023-10-30 13:46:50 -07:00
2dc37f4f70 [MPS] Skip virtualized devices (#111576) (#112265)
* check in (#111875)

check in impl

address comments, skip test on rocm

unused

* [MPS] Skip virtualized devices (#111576)

Skip devices that do not support `MTLGPUFamilyMac2`, for example the "Apple Paravirtual device", which started to appear in GitHub CI; see https://github.com/malfet/deleteme/actions/runs/6577012044/job/17867739464#step:3:18
```
Found device Apple Paravirtual device isLowPower false supports Metal false
```

As the first attempt to allocate memory on such a device will fail with:
```
RuntimeError: MPS backend out of memory (MPS allocated: 0 bytes, other allocations: 0 bytes, max allowed: 1.70 GB). Tried to allocate 0 bytes on private pool. Use PYTORCH_MPS_HIGH_WATERMARK_RATIO=0.0 to disable upper limit for memory allocations (may cause system failure).
```

Fixes https://github.com/pytorch/pytorch/issues/111449

Pull Request resolved: https://github.com/pytorch/pytorch/pull/111576
Approved by: https://github.com/atalman, https://github.com/clee2000, https://github.com/huydhn

* Revert "check in (#111875)"

This reverts commit 2f502cc97fd9dd407dea9e1332724b18c2eb447f.

---------

Co-authored-by: eqy <eddiey@nvidia.com>
Co-authored-by: Nikita Shulga <2453524+malfet@users.noreply.github.com>
2023-10-27 17:14:10 -07:00
f82d6e41a4 Update NCCL to 2.18.6 for upstream bugfix (#111677) 2023-10-27 13:24:02 -07:00
eqy
c79d2936d0 [NCCL][CUDA][CUDA Graphs] Flush enqueued work before starting a graph capture 2
2.1 release cherry-pick of #110665
2023-10-27 10:53:20 -07:00
ab5ea22c1d Revert "Do not materialize entire randperm in RandomSampler (#103339)" (#112187)
This reverts commit d80174e2db679365f8b58ff8583bdc4af5a8b74c.

Reverted https://github.com/pytorch/pytorch/pull/103339 on behalf of https://github.com/kit1980 due to Cause issues on MPS, and also fails without numpy ([comment](https://github.com/pytorch/pytorch/pull/103339#issuecomment-1781705172))

Co-authored-by: PyTorch MergeBot <pytorchmergebot@users.noreply.github.com>
2023-10-26 15:03:42 -07:00
5274580eb0 Ignore beartype if its version is 0.16.0 (#111861)
ghstack-source-id: 234c0d4424891f8836b84cba634512d4e2add51a
Pull Request resolved: https://github.com/pytorch/pytorch/pull/111859
2023-10-26 13:25:46 -04:00
cd5859373c Fix #110680 (requires_grad typo in decomp) (#110687) (#111955)
Fixes https://github.com/pytorch/pytorch/issues/110680
Pull Request resolved: https://github.com/pytorch/pytorch/pull/110687
Approved by: https://github.com/voznesenskym, https://github.com/lezcano
ghstack dependencies: #110501, #110504, #110591, #110668
2023-10-26 10:13:40 -07:00
7cc6081f87 [release/2.1.1][dynamo] Fix circular import with einops (#111835)
* [release/2.1.1][dynamo] Add specialized variable tracker for sys.modules

Original PR: #110990

`sys.modules` is currently treated as a constant dictionary and any reference to
it will result in guards on the full contents of `sys.modules`. This instead
adds a specialized variable tracker which tries to guard only on the modules
referenced by the code. e.g.

```
sys.modules["operator"].add(x, x)
```

will generate the guard
```
___dict_contains('operator', G['sys'].modules)
```

It does this with special support for `__contains__` `__getitem__` and `.get`
which are probably the most commonly used with `sys.modules`. For anything else
we just fall back to building the dict tracker as normal.

While accessing `sys.modules` may seem unusual, it actually comes up when
inlining the `warnings.catch_warnings` context manager which internally accesses
`sys.modules["warnings"]`.

* [release/2.1.1][dynamo] Register einops functions lazily (#110575)

Original PR #110575
Fixes #110549

This fixes a circular import between dynamo and einops. We work around the issue
by registering an initialization callback that is called the first time an object
from einops is seen in dynamo.

This guarantees that dynamo will only import `einops` after it's already fully
initialized and was already called in a function being traced.
2023-10-26 10:03:39 -07:00
af1590cdf4 Verify flatbuffer module fields are initialized (#112165)
Fixes #109793

Add validation of flatbuffer module fields to prevent a segfault

Cherry pick of  https://github.com/pytorch/pytorch/pull/109794 into release/2.1
Approved by: https://github.com/malfet

Co-authored-by: Daniil Kutz <kutz@ispras.ru>
2023-10-26 09:58:27 -07:00
3f59221062 Fix docker release build for release 2.1.1 (#112040) 2023-10-25 17:37:58 -04:00
ab5b9192ce [release only] Pin Docker images for release. (#111971)
* [release only] Pin all docker build and test images for the release

* lint
2023-10-25 15:49:11 -04:00
736ebd3313 Fix regression in torch.equal behavior for NaNs (#111699) (#111996)
`torch.equal(x, x)` should return False if `x` is a tensor of floats, one of which is NaN.
This renders some of the optimizations proposed in https://github.com/pytorch/pytorch/pull/100024 invalid, though as a result `torch.equal` becomes much slower for identical floating-point tensors.

Add regression test that calls torch.equal for tensor containing NaN

Fixes https://github.com/pytorch/pytorch/issues/111251

Pull Request resolved: https://github.com/pytorch/pytorch/pull/111699
Approved by: https://github.com/Skylion007, https://github.com/albanD

(cherry picked from commit 7709382b5010fbcc15bb0ae26240ae06aa4e973d)
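
A quick example of the restored semantics:
```python
import torch

x = torch.tensor([1.0, float("nan")])
print(torch.equal(x, x))  # False: NaN compares unequal, even to itself
```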
2023-10-25 15:33:29 -04:00
6ba919da27 Add continue-on-error if ssh step is failing (#111916) (#112026)
This is a debugging step and should not cause the whole workflow to fail. Hence adding continue-on-error, which prevents the job from failing when this step fails.
Failure:
https://github.com/pytorch/pytorch/actions/runs/6627941257/job/18003997514?pr=111821

Example:
```
Run seemethere/add-github-ssh-key@v1
  with:
    GITHUB_TOKEN: ***
    activate-with-label: true
    label: with-ssh
    remove-existing-keys: true
  env:
    ALPINE_IMAGE: 308535385114.dkr.ecr.us-east-1.amazonaws.com/tool/alpine
    ANACONDA_USER: pytorch
    AWS_DEFAULT_REGION: us-east-1
    BUILD_ENVIRONMENT: windows-binary-conda
    GITHUB_TOKEN: ***
    PR_NUMBER:
    SHA1: e561cd9d253d840834d8bbef4ec98ad868ba01e4
    SKIP_ALL_TESTS: 1
    PYTORCH_ROOT: C:\actions-runner\_work\pytorch\pytorch/pytorch
    BUILDER_ROOT: C:\actions-runner\_work\pytorch\pytorch/builder
    PACKAGE_TYPE: conda
    DESIRED_CUDA: cu118
    GPU_ARCH_VERSION: 11.8
    GPU_ARCH_TYPE: cuda
    DESIRED_PYTHON: 3.9
ciflow reference detected, attempting to extract PR number
Error: The request could not be processed because too many files changed
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/111916
Approved by: https://github.com/malfet
2023-10-25 09:09:48 -07:00
cc54a5072e fix TEST_ROCM definition to disable test_jit_cudnn_extension on rocm (#110385) (#111942)
Define TEST_ROCM before modifying TEST_CUDA. Otherwise TEST_ROCM will always be false and will not disable test_jit_cudnn_extension for ROCm.
Fixes https://github.com/pytorch/pytorch/issues/107182

Pull Request resolved: https://github.com/pytorch/pytorch/pull/110385
Approved by: https://github.com/jithunnair-amd, https://github.com/kit1980

Co-authored-by: Dmitry Nikolaev <dmitry.nikolaev@amd.com>
2023-10-24 13:20:14 -07:00
eqy
3788d86e3e [CUDA][cuFFT] Initialize CUDA context for cuFFT before execute is called (#111877)
update, add test

Do not run on ROCM

Update test/test_spectral_ops.py

Co-authored-by: Nikita Shulga <2453524+malfet@users.noreply.github.com>

Update test/test_spectral_ops.py

Co-authored-by: Nikita Shulga <2453524+malfet@users.noreply.github.com>

Update test/test_spectral_ops.py

Co-authored-by: Nikita Shulga <2453524+malfet@users.noreply.github.com>

Update test_spectral_ops.py

Update SpectralOps.cpp
2023-10-24 11:20:54 -07:00
1f0450eed2 add sharded tensor test with empty shard (#111679) 2023-10-23 16:35:44 -04:00
9570baa150 [ONNX] Fix aten::new_zeros due to TorchScript behavior change on Pytorch 2.1 Fix #110935 (#110956) (#111694)
Fixes #110597

Summary:

* Generic code: `torch._C.Value.node().mustBeNone()` is encapsulated into the high-level API `JitScalarType.from_value`; `_is_none` was also extended to allow either `None` or `torch._C.Value.node.mustBeNone()`, so users don't have to manually call into the TorchScript API when implementing operators
* Specific to `new_zeros` (and `*_like` and `new_*` ops): when checking `dtype`, we must always use `_is_none`, which applies the check proposed by #110935
Pull Request resolved: https://github.com/pytorch/pytorch/pull/110956
Approved by: https://github.com/justinchuby, https://github.com/BowenBao
2023-10-23 16:11:50 -04:00
b3b274ddcb Fix create source distribution step for release (#111697) (#111801)
This is fixing following failure in the release branch:
```
cp: cannot create directory '/tmp/pytorch-release/2.1': No such file or directory
```
Link: https://github.com/pytorch/pytorch/actions/runs/6591657669/job/17910724990

cp will report that error if the parent directory (pytorch-release in this case) does not exist.
This works on main since ``PT_RELEASE_NAME: pytorch-main``; however, for release it is ``PT_RELEASE_NAME: pytorch-release/2.1``

Test:
```
export tag_or_branch=release/2.1
tag_or_branch="${tag_or_branch//\//_}"
echo $tag_or_branch
release_2.1
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/111697
Approved by: https://github.com/huydhn, https://github.com/osalpekar
2023-10-23 10:39:33 -04:00
5bcfb1b9b4 [release only] Pin disabled and unstable jobs. keep CI green (#111675) 2023-10-20 14:59:55 -04:00
c496f9a40b Revert "Update fully_sharded_data_parallel to fix typing (#110545) (#111036)" (#111683)
This reverts commit ed87177528c01ed2c836e31a0ad7153e1f83c3a0.
2023-10-20 14:53:48 -04:00
39a66a66fe Fix concurrency limits for Create Release (#111597)
Also, don't run it on tags, but run on release branch and on `release` event.
Tweak linter to accept different concurrency limits for `create_release.yml`

Fixes https://github.com/pytorch/pytorch/issues/110569, as all past invocations of the workflow were cancelled by the concurrency limit due to the tag push and release happening at roughly the same time; see https://github.com/pytorch/pytorch/actions/workflows/create_release.yml?query=event%3Arelease

Cherry-pick of https://github.com/pytorch/pytorch/pull/110759 into release/2.1 branch

Co-authored-by: Nikita Shulga <2453524+malfet@users.noreply.github.com>
2023-10-19 13:54:29 -07:00
ed87177528 Update fully_sharded_data_parallel to fix typing (#110545) (#111036)
Fixes typing so that linter does not complain when using CustomPolicy.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/110545
Approved by: https://github.com/awgu, https://github.com/Skylion007

Co-authored-by: Mihir Patel <mihir.v.patel7@gmail.com>
2023-10-19 15:48:54 -04:00
c07240e5e4 Add pypi required metadata to all wheels except linux (#111578)
* Add pypi required metadata to all wheels except linux (#111042)

Will fix package after publishing https://github.com/pytorch/pytorch/issues/100974
Poetry install requires all wheels on PyPI to have the same metadata; hence Linux dependencies are included in all non-Linux wheels

Pull Request resolved: https://github.com/pytorch/pytorch/pull/111042
Approved by: https://github.com/malfet

* Regenerate workflows
2023-10-19 14:40:28 -04:00
bb96803a35 Improved the docs for torch.std, torch.var, torch.std_mean, torch.var_mean and torch.cov (#109326) (#110969)
Fixes #109186.

This PR updates the docs for
- `torch.var`
- `torch.var_mean`
- `torch.std`
- `torch.std_mean`
- `torch.cov`

to reflect the actual implementation behavior when `correction >= N`. The math for `torch.cov` should probably be double checked before merging.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/109326
Approved by: https://github.com/albanD
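
A small illustration of the case the updated docs describe (hedged; the exact warning text and returned value depend on the build):
```python
import torch

x = torch.tensor([1.0, 2.0, 3.0])
# correction >= N makes the divisor N - correction non-positive, so the
# result is not a meaningful variance; expect a warning and a non-finite value.
print(torch.var(x, correction=3))
```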
2023-10-19 10:34:59 -04:00
3002bf71e6 Update chunk_sharding_spec.py (#108915) (#111151)
Fixes #108869

Implements the first solution proposed in the issue.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/108915
Approved by: https://github.com/wanchaol, https://github.com/wz337

Co-authored-by: Brian <23239305+b-chu@users.noreply.github.com>
2023-10-19 10:28:53 -04:00
e7892b2e02 [FSDP] continue if param not exist in sharded load (#109116) (#111149)
If I add a param and then wrap with FSDP and load a state dict, don't hard
error here when strict=False.

Differential Revision: [D49170812](https://our.internmc.facebook.com/intern/diff/D49170812/)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/109116
Approved by: https://github.com/fegin

Co-authored-by: Rohan Varma <rvarm1@fb.com>
2023-10-19 10:28:30 -04:00
909fcf9b21 Try to use linux.arm64.2xlarge runners (#107672) (#111039)
Try to use linux.arm64.2xlarge runners.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/107672
Approved by: https://github.com/atalman

Co-authored-by: DanilBaibak <baibak@meta.com>
2023-10-12 13:39:42 -04:00
0bc598a604 Fix Android publish step with lite interpreter (#111071) (#111083)
This file needs to be added to the list like the others.  The publish command `BUILD_LITE_INTERPRETER=1 android/gradlew -p android publish` finishes successfully with this change, and the files are available on Nexus:

![Screenshot 2023-10-11 at 11 56 53](https://github.com/pytorch/pytorch/assets/475357/849d4aa7-79f6-47fa-a471-d452d7c1bdf6)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/111071
Approved by: https://github.com/atalman
2023-10-11 16:33:15 -07:00
dd7fb44d20 [Release-only] Set Android version to 2.1.0 (#111009) 2023-10-11 09:54:22 -07:00
6026c29db0 Add a workflow to release Android binaries (#110976) (#110655)
This adds 2 jobs to build PyTorch Android with and without the lite interpreter:

* Keep the list of currently supported ABIs: armeabi-v7a, arm64-v8a, x86, x86_64
* Pass all the tests on the emulator
* Run the test app on the emulator and on my Android phone (`arm64-v8a`) without any issue
![Screenshot_20231010-114453](https://github.com/pytorch/pytorch/assets/475357/57e12188-1675-44d2-a259-9f9577578590)
* Run on AWS https://us-west-2.console.aws.amazon.com/devicefarm/home#/mobile/projects/b531574a-fb82-40ae-b687-8f0b81341ae0/runs/5fce6818-628a-4099-9aab-23e91a212076
Pull Request resolved: https://github.com/pytorch/pytorch/pull/110976
Approved by: https://github.com/atalman
2023-10-11 09:53:31 -07:00
0f9ac00ac6 [Release-only] Pin test-infra checkout branch (#111041)
Lint jobs have passed
2023-10-11 09:51:07 -07:00
209f2fa8ff Move Docker official builds to Cuda 12.1.1 (#110703) (#110705)
Since our PyPI-released CUDA version is 12.1.1, move the Docker builds to 12.1.1. Related to: https://github.com/pytorch/pytorch/issues/110643
Pull Request resolved: https://github.com/pytorch/pytorch/pull/110703
Approved by: https://github.com/DanilBaibak
2023-10-06 10:27:01 -04:00
fa1db4310d [release only] Docker build for release - trigger manually from pytorch channel (#110566) 2023-10-04 18:54:58 -04:00
e6702486f6 [release only] Docker build for release - trigger manually from pytorch channel (#110556) 2023-10-04 17:47:39 -04:00
e68aa76642 [release only] Docker build for release - trigger manually from pytorch channel (#110553) 2023-10-04 16:52:14 -04:00
88cde0c37c [release only] Docker build for release - trigger manually from pytorch channel (#110547)
* [release only] Docker build for release - trigger manually from pytorch channel

* remove_typo
2023-10-04 16:33:45 -04:00
e4c42a93bc [release only] Docker build for release - trigger manually from pytorch channel (#110309)
* Use release channel for docker release

* fix input
2023-10-04 15:33:54 -04:00
7bcf7da3a2 Add tensorboard to pip requirements (#109349) (#109823)
https://github.com/pytorch/pytorch/pull/108351/files is failing on Mac and Windows because we don't have the dependency.
It is available on Linux because it is included in .ci/docker/requirements-docs.txt

Adding skips to make it green.

Here are some outputs for future debugging
https://github.com/pytorch/pytorch/actions/runs/6192933622/job/16813841625
https://ossci-raw-job-status.s3.amazonaws.com/log/16813841625
```

2023-09-15T02:09:43.2397460Z =================================== FAILURES ===================================
2023-09-15T02:09:43.2397650Z ______________________ TestTensorBoardSummary.test_audio _______________________
2023-09-15T02:09:43.2397830Z Traceback (most recent call last):
2023-09-15T02:09:43.2398090Z   File "/Users/ec2-user/runner/_work/pytorch/pytorch/test/test_tensorboard.py", line 417, in test_audio
2023-09-15T02:09:43.2398390Z     self.assertTrue(compare_proto(summary.audio('dummy', tensor_N(shape=(42,))), self))
2023-09-15T02:09:43.2398720Z   File "/Users/ec2-user/runner/_work/_temp/conda_environment_6192933622/lib/python3.9/unittest/case.py", line 688, in assertTrue
2023-09-15T02:09:43.2399100Z ##[endgroup]
2023-09-15T02:09:43.2399240Z     raise self.failureException(msg)
2023-09-15T02:09:43.2399400Z AssertionError: False is not true
2023-09-15T02:09:43.2399490Z
2023-09-15T02:09:43.2399590Z To execute this test, run the following from the base repo dir:
2023-09-15T02:09:43.2399820Z      python test/test_tensorboard.py -k test_audio
2023-09-15T02:09:43.2399930Z
```

https://github.com/pytorch/pytorch/actions/runs/6192933622/job/16814065258
https://ossci-raw-job-status.s3.amazonaws.com/log/16814065258
```

2023-09-15T02:38:44.6284979Z ================================== FAILURES ===================================
2023-09-15T02:38:44.6285295Z ______________________ TestTensorBoardNumpy.test_scalar _______________________
2023-09-15T02:38:44.6285556Z Traceback (most recent call last):
2023-09-15T02:38:44.6285915Z   File "C:\actions-runner\_work\pytorch\pytorch\test\test_tensorboard.py", line 794, in test_scalar
2023-09-15T02:38:44.6286325Z     res = make_np(np.float128(1.00008 + 9))
2023-09-15T02:38:44.6286705Z   File "C:\Jenkins\Miniconda3\lib\site-packages\numpy\__init__.py", line 315, in __getattr__
2023-09-15T02:38:44.6287700Z     raise AttributeError("module {!r} has no attribute "
2023-09-15T02:38:44.6288060Z AttributeError: module 'numpy' has no attribute 'float128'
2023-09-15T02:38:44.6288241Z
2023-09-15T02:38:44.6288390Z To execute this test, run the following from the base repo dir:
2023-09-15T02:38:44.6288679Z      python test\test_tensorboard.py -k test_scalar
2023-09-15T02:38:44.6288846Z
```

https://github.com/pytorch/pytorch/actions/runs/6193449301/job/16815113985
https://ossci-raw-job-status.s3.amazonaws.com/log/16815113985
```
2023-09-15T03:25:53.7797550Z =================================== FAILURES ===================================
2023-09-15T03:25:53.7797790Z __________________ TestTensorBoardSummary.test_histogram_auto __________________
2023-09-15T03:25:53.7798000Z Traceback (most recent call last):
2023-09-15T03:25:53.7798310Z   File "/Users/ec2-user/runner/_work/pytorch/pytorch/test/test_tensorboard.py", line 426, in test_histogram_auto
2023-09-15T03:25:53.7798690Z     self.assertTrue(compare_proto(summary.histogram('dummy', tensor_N(shape=(1024,)), bins='auto', max_bins=5), self))
2023-09-15T03:25:53.7799090Z   File "/Users/ec2-user/runner/_work/_temp/conda_environment_6193449301/lib/python3.9/unittest/case.py", line 688, in assertTrue
2023-09-15T03:25:53.7799430Z     raise self.failureException(msg)
2023-09-15T03:25:53.7799610Z AssertionError: False is not true
2023-09-15T03:25:53.7799720Z
2023-09-15T03:25:53.7799840Z To execute this test, run the following from the base repo dir:
2023-09-15T03:25:53.7800170Z      python test/test_tensorboard.py -k test_histogram_auto
2023-09-15T03:25:53.7800310Z
2023-09-15T03:25:53.7800430Z This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
2023-09-15T03:25:53.7800870Z - generated xml file: /Users/ec2-user/runner/_work/pytorch/pytorch/test/test-reports/python-pytest/test_tensorboard/test_tensorboard-aef95b5e2d69c061.xml -
2023-09-15T03:25:53.7801200Z =========================== short test summary info ============================
```

https://github.com/pytorch/pytorch/actions/runs/6193576371/job/16815396352
https://ossci-raw-job-status.s3.amazonaws.com/log/16815396352
```
2023-09-15T03:47:02.9430070Z _________________ TestTensorBoardSummary.test_histogram_doane __________________
2023-09-15T03:47:02.9430250Z Traceback (most recent call last):
2023-09-15T03:47:02.9430520Z   File "/Users/ec2-user/runner/_work/pytorch/pytorch/test/test_tensorboard.py", line 433, in test_histogram_doane
2023-09-15T03:47:02.9430850Z     self.assertTrue(compare_proto(summary.histogram('dummy', tensor_N(shape=(1024,)), bins='doane', max_bins=5), self))
2023-09-15T03:47:02.9431180Z   File "/Users/ec2-user/runner/_work/_temp/conda_environment_6193576371/lib/python3.9/unittest/case.py", line 688, in assertTrue
2023-09-15T03:47:02.9431390Z     raise self.failureException(msg)
2023-09-15T03:47:02.9431550Z AssertionError: False is not true
2023-09-15T03:47:02.9431640Z
2023-09-15T03:47:02.9431730Z To execute this test, run the following from the base repo dir:
2023-09-15T03:47:02.9432000Z      python test/test_tensorboard.py -k test_histogram_doane
2023-09-15T03:47:02.9432120Z
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/109349
Approved by: https://github.com/huydhn

(cherry picked from commit 1cc0921eb62089392595264610cfa863d42fe9bc)

Co-authored-by: Catherine Lee <csl@fb.com>
2023-09-21 16:25:09 -06:00
1841d54370 [CI] Add torch.compile works without numpy test (#109624) (#109818)
Fixes https://github.com/pytorch/pytorch/issues/109387

Pull Request resolved: https://github.com/pytorch/pytorch/pull/109624
Approved by: https://github.com/albanD

Co-authored-by: Nikita Shulga <nshulga@meta.com>
2023-09-21 15:27:52 -06:00
fca42334be Fix the parameter error in test_device_mesh.py (#108758) (#109826)
Fix the parameter error in test_device_mesh.py

Pull Request resolved: https://github.com/pytorch/pytorch/pull/108758
Approved by: https://github.com/awgu

(cherry picked from commit 03bf745e1d6a050a9e322d41c2b7e75db8cdbedc)

Co-authored-by: humingxue <humingxue1@huawei.com>
2023-09-21 15:13:09 -06:00
539a971161 [Release-2.1]Add finfo properties for float8 dtypes (#109808)
Add float8 finfo checks to `test_type_info.py`
Fixes https://github.com/pytorch/pytorch/issues/109737
Cherry-pick of https://github.com/pytorch/pytorch/pull/109744 into release/2.1 branch
Approved by: https://github.com/drisspg

(cherry picked from commit cddd0db241a3b8df930284fd29523da9d28b1f2c)
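
A quick usage example of the added support (requires a build that ships the float8 dtypes):
```python
import torch

info = torch.finfo(torch.float8_e4m3fn)
print(info.max, info.min, info.eps)
```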
2023-09-21 11:51:09 -07:00
9287a0cf59 [Release/2.1][JIT] Fix typed enum handling in 3.11 (#109807)
In Python 3.11+, typed enums (such as `enum.IntEnum`) retain `__new__`, `__str__`, and other methods of the base class via the `__init_subclass__()` method (see https://docs.python.org/3/whatsnew/3.11.html#enum ), i.e. the following code
```python
import sys
import inspect
from enum import Enum

class IntColor(int, Enum):
    RED = 1
    GREEN = 2

class Color(Enum):
    RED = 1
    GREEN = 2

def get_methods(cls):
    def predicate(m):
        if not inspect.isfunction(m) and not inspect.ismethod(m):
            return False
        return m.__name__ in cls.__dict__
    return inspect.getmembers(cls, predicate=predicate)

if __name__ == "__main__":
    print(sys.version)
    print(f"IntColor methods {get_methods(IntColor)}")
    print(f"Color methods {get_methods(Color)}")
```

returns an empty list for both cases on older Python, but on Python 3.11+ it returns a list containing enum constructors and other methods:
```shell
% conda run -n py310 python bar.py
3.10.12 | packaged by conda-forge | (main, Jun 23 2023, 22:41:52) [Clang 15.0.7 ]
IntColor methods []
Color methods []
% conda run -n py311 python bar.py
3.11.0 | packaged by conda-forge | (main, Oct 25 2022, 06:21:25) [Clang 14.0.4 ]
IntColor methods [('__format__', <function Enum.__format__ at 0x105006ac0>), ('__new__', <function Enum.__new__ at 0x105006660>), ('__repr__', <function Enum.__repr__ at 0x1050068e0>)]
Color methods []
```

This change allows typed enums to be scriptable on 3.11 by explicitly marking several `enum.Enum` methods to be dropped by jit script, and adds a test that typed enums are jit-scriptable.

Fixes https://github.com/pytorch/pytorch/issues/108933

Cherry-pick of https://github.com/pytorch/pytorch/pull/109717 into release/2.1 branch.
Approved by: https://github.com/atalman, https://github.com/davidberard98

(cherry picked from commit 55685d57c004f250118fcccc4e99ae883e037e2d)
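
A minimal sketch of the user-visible effect, i.e. that a typed enum can now be scripted on 3.11 (hedged; this is not the test added by the PR):
```python
import torch
from enum import IntEnum

class Color(IntEnum):
    RED = 1
    GREEN = 2

@torch.jit.script
def is_red(c: Color) -> bool:
    return c == Color.RED

print(is_red(Color.RED))  # True
```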
2023-09-21 11:49:54 -07:00
c464075d5d [release only] Docker build - Setup release specific variables (#109809) 2023-09-21 12:24:45 -06:00
1b4161c686 [Release/2.1] [Docs] Fix compiler.list_backends invocation (#109800)
`s/torch.compile.list_backends/torch.compiler.list_backends/`

Fixes https://github.com/pytorch/pytorch/issues/109451

Cherry-pick of  https://github.com/pytorch/pytorch/pull/109568 into release/2.1 branch
Approved by: https://github.com/msaroufim, https://github.com/svekars

(cherry picked from commit af867c2d140c2ab071447219738e59df4ac927b9)
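
The corrected invocation:
```python
import torch

print(torch.compiler.list_backends())  # e.g. ['cudagraphs', 'inductor', 'onnxrt', ...]
```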
2023-09-21 10:54:37 -07:00
28220534de [Release/2.1] [Docs] Fix typo in torch.unflatten (#109801)
Fixes https://github.com/pytorch/pytorch/issues/109559

Cherry-pick of https://github.com/pytorch/pytorch/pull/109588 into release/2.1 branch
Approved by: https://github.com/lezcano

(cherry picked from commit 2f53bca0fc84d1829cc571d76c58ea411f4fc288)
2023-09-21 10:48:05 -07:00
da9639c752 Remove torchtext from Build Official Docker images (#109799) (#109803)
Fixes nightly official Docker image build.
Failures: https://hud.pytorch.org/hud/pytorch/pytorch/nightly/1?per_page=50&name_filter=Build%20Official

Remove `torchtext` installation from `Dockerfile` for arm64. This fixes the arm64 build of the PyTorch Docker image.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/109799
Approved by: https://github.com/seemethere
2023-09-21 11:31:22 -06:00
e534243ec2 Add docs for torch.compile(numpy) (#109789)
ghstack-source-id: 3e29b38d0bc574ab5f35eee34ebb37fa6238de7e
Pull Request resolved: https://github.com/pytorch/pytorch/pull/109710
2023-09-21 10:31:36 -06:00
01fa8c140a Update dynamic shapes documentation (#109787)
Signed-off-by: Edward Z. Yang <ezyangmeta.com>

ghstack-source-id: 6da57e6a83233b9404734279df3883aeeb23feb7
Pull Request resolved: https://github.com/pytorch/pytorch/pull/109764
2023-09-21 09:16:04 -06:00
5aae979614 [release-2.1] Make numpy dependency optional for torch.compile (#109608)
Cherry-pick of 4ee179c952, a9bf1031d4 and fb58a72d96 into the release/2.1 branch

Test plan: `python3 -c "import torch;torch.compile(lambda x:print(x))('Hello World')"`

Fixes #109387 to the release branch

* Fix `ConstantVariable` init method if NumPy is missing

By adding `np is not None` check before `isinstance(value, np.number)`

Partially addresses https://github.com/pytorch/pytorch/issues/109387

* [BE] Do not use `numpy` in `torch._inductor.codegen.cpp` (#109324)

`s/numpy.iinfo(numpy.int32)/torch.iinfo(torch.int32)/` as those two are interchangeable

Partially addresses https://github.com/pytorch/pytorch/issues/109387

Pull Request resolved: https://github.com/pytorch/pytorch/pull/109324
Approved by: https://github.com/albanD

* Use `torch.cumsum` instead of numpy one (#109400)

`s/list(numpy.cumsum(foo))/torch.cumsum(torch.tensor(foo), 0).tolist()/`

Test plan: ` python3 ../test/inductor/test_split_cat_fx_passes.py -v`

Partially addresses https://github.com/pytorch/pytorch/issues/109387

Pull Request resolved: https://github.com/pytorch/pytorch/pull/109400
Approved by: https://github.com/ezyang

---------

Co-authored-by: Nikita Shulga <nshulga@meta.com>
2023-09-19 10:59:18 -07:00
ced78cc2a7 [fx][split] Copy node metadata for placeholders (#107981) (#109297)
- Follow-up to #107248 which copies metadata for placeholder nodes in the top-level FX graph
- Currently, top-level placeholders do not have their metadata copied over, causing loss of `TensorMetadata` in some `torch.compile` backends

Fixes https://github.com/pytorch/TensorRT/issues/2258
Pull Request resolved: https://github.com/pytorch/pytorch/pull/107981
Approved by: https://github.com/angelayi

Co-authored-by: gs-olive <113141689+gs-olive@users.noreply.github.com>
2023-09-14 14:03:48 -04:00
d8db5808ce Fix CUDA-12 wheel loading on AmazonLinux (#109291)
Or any other distro that has different purelib and platlib paths. The regression was introduced when the small-wheel base dependency was migrated from CUDA-11 to CUDA-12.

Not sure why, but the minor version is no longer shipped with the following CUDA-12 packages:
 - nvidia_cuda_nvrtc_cu12-12.1.105
 - nvidia-cuda-cupti-cu12-12.1.105
 - nvidia-cuda-cupti-cu12-12.1.105

But those were present in CUDA-11 release, i.e:
``` shell
bash-5.2# curl -OL 922c5996aa/nvidia_cuda_nvrtc_cu11-11.7.99-2-py3-none-manylinux1_x86_64.whl; unzip -t nvidia_cuda_nvrtc_cu11-11.7.99-2-py3-none-manylinux1_x86_64.whl |grep \.so
    testing: nvidia/cuda_nvrtc/lib/libnvrtc-builtins.so.11.7   OK
    testing: nvidia/cuda_nvrtc/lib/libnvrtc.so.11.2   OK
bash-5.2# curl -OL c64c03f49d/nvidia_cuda_nvrtc_cu12-12.1.105-py3-none-manylinux1_x86_64.whl; unzip -t nvidia_cuda_nvrtc_cu12-12.1.105-py3-none-manylinux1_x86_64.whl|grep \.so
    testing: nvidia/cuda_nvrtc/lib/libnvrtc-builtins.so.12.1   OK
    testing: nvidia/cuda_nvrtc/lib/libnvrtc.so.12   OK
```

Fixes https://github.com/pytorch/pytorch/issues/109221

This is a cherry-pick of  https://github.com/pytorch/pytorch/pull/109244 into release/2.1 branch
2023-09-14 07:16:17 -07:00
889811ab5b [ONNX] bump submodule to onnx==1.14.1 (#108895) (#109114)
Bump the pip and submodule ONNX dependencies to official stable 1.14.1; there were no code changes between 1.14.1rc2 and 1.14.1.

Also bump ORT to run tests against ort-nightly==1.16.0.dev20230908001.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/108895
Approved by: https://github.com/justinchuby, https://github.com/thiagocrepaldi

Co-authored-by: Aaron Bockover <abock@microsoft.com>
2023-09-12 12:09:59 -04:00
1191449343 Prerequisite of ATen/native/utils header for C++ extension (#109013) (#109106)
# Motivation
Without this PR, if we would like to include a header file like ```#include <ATen/native/ForeachUtils.h>``` in a C++ extension, it raises an error ```/home/xxx/torch/include/ATen/native/ForeachUtils.h:7:10: fatal error: 'ATen/native/utils/ParamsHash.h' file not found```. We should fix it.

# Solution
Add the ATen/native/utils header file in the build.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/109013
Approved by: https://github.com/ezyang

Co-authored-by: Yu, Guangye <guangye.yu@intel.com>
2023-09-12 11:37:29 -04:00
6d9fad8474 [ONNX] Bump onnx submodule to 1.14.1; ONNX Runtime 1.16 (#106984) (#109045)
Bump dependencies:

- ort-nightly 1.16.0.dev20230824005
- onnx 1.14.1rc2
- onnxscript 0.1.0.dev20230825
Pull Request resolved: https://github.com/pytorch/pytorch/pull/106984
Approved by: https://github.com/BowenBao, https://github.com/thiagocrepaldi

Co-authored-by: Aaron Bockover <abock@microsoft.com>
2023-09-12 07:38:07 -04:00
ed62318bea [export] Fix export arg type declaration (#109060) (#109064)
Summary: It's an arbitrary-length tuple of anything. Tuple[Any] means 1 element.

Test Plan: ci

Differential Revision: D49161625

Pull Request resolved: https://github.com/pytorch/pytorch/pull/109060
Approved by: https://github.com/angelayi

Co-authored-by: Jacob Szwejbka <jakeszwe@fb.com>
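
The typing distinction in question, for reference:
```python
from typing import Any, Tuple

one_element: Tuple[Any] = ("only",)          # exactly one item
any_length: Tuple[Any, ...] = (1, "a", 2.0)  # arbitrary-length tuple
```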
2023-09-11 17:57:36 -07:00
ee67c4dd6a Refactor ios-build-test workflow to support binary release (#108322) (#109069)
This refactors the logic from CircleCI iOS [build](https://github.com/pytorch/pytorch/blob/main/.circleci/config.yml#L1323-L1344) and [upload](https://github.com/pytorch/pytorch/blob/main/.circleci/config.yml#L1369-L1377) jobs to GHA.

* Nightly artifacts will be available again on `ossci-ios-build` S3 bucket, for example `libtorch_lite_ios_nightly_2.1.0.20230517.zip`.  The last one there was s3://ossci-ios-build/libtorch_lite_ios_nightly_2.1.0.20230517.zip from May 17th
  * [LibTorch-Lite-Nightly](https://github.com/CocoaPods/Specs/blob/master/Specs/c/3/1/LibTorch-Lite-Nightly/1.14.0.20221109/LibTorch-Lite-Nightly.podspec.json) on cocoapods
* Release artifacts will be on `ossci-ios` S3 bucket, for example `s3://ossci-ios/libtorch_lite_ios_1.13.0.zip` from Nov 3rd 2022
  * [LibTorch-Lite](https://github.com/CocoaPods/Specs/blob/master/Specs/c/c/3/LibTorch-Lite/1.13.0.1/LibTorch-Lite.podspec.json) on cocoapods
  * [LibTorch](https://github.com/CocoaPods/Specs/blob/master/Specs/1/3/c/LibTorch/1.13.0.1/LibTorch.podspec.json) on cocoapods

I will clean up Circle CI code in another PR.

### Testing

Generate new release artifacts for testing from the main branch.  Simulator tests have all passed.

* With lite interpreter https://github.com/pytorch/pytorch/actions/runs/6093860118
  * https://ossci-ios.s3.amazonaws.com/libtorch_lite_ios_2.1.0.zip
  * https://ossci-ios.s3.amazonaws.com/LibTorch-Lite-2.1.0.podspec

* LibTorch binary can be built without lite interpreter https://github.com/pytorch/pytorch/actions/runs/6103616035 and uses TorchScript, but it has been long dead from my understanding.  The binary can still be built and tested though.
  * https://ossci-ios.s3.amazonaws.com/libtorch_ios_2.1.0.zip
  * https://ossci-ios.s3.amazonaws.com/LibTorch-2.1.0.podspec

### Next step for release

* Once the PR is committed, I plan to use workflow dispatch to build the binaries manually on the `release/2.1` branch.  Once they look good, we can publish them on CocoaPods.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/108322
Approved by: https://github.com/atalman
2023-09-11 17:10:22 -07:00
5529b81631 Add torch_lazy_enable_device_data_cache to disable lazy device data cache (#109051)
* Add logic to enable and disable the lazy device tensor cache without modifying it

* Remove as yet unused compilation cache enable/disable global

* Lint fixes
2023-09-11 18:25:05 -04:00
7e23b4907d [quant][pt2] Fix and rename move_model_to_eval (#108891) (#109027)
Summary:
This commit fixes two silent correctness problems with
the current implementation of `move_model_to_eval`:

(1) Previously the user had to manually call `eliminate_dead_code`
before calling `move_model_to_eval`, otherwise the dropout pattern
won't actually get eliminated. This is because subgraph rewriter
complains the match is not self-contained, and so silently does
not do the replacement.

(2) We wish to error when the user calls `model.train()` or
`model.eval()` on an exported model. This error is raised
correctly immediately after export today, but no longer raised
after the user calls prepare or convert.

We fix (1) by moving the `eliminate_dead_code` call into
`move_model_to_eval`, and fix (2) by ensuring the respective
errors are thrown after prepare and convert as well.

Additionally, this commit renames `move_model_to_eval` to
`move_exported_model_to_eval` to be more explicit.

bypass-github-export-checks

Test Plan:
python test/test_quantization.py TestQuantizePT2E.test_disallow_eval_train
python test/test_quantization.py TestQuantizePT2E.test_move_exported_model_to_eval

Imported from OSS

Differential Revision: D49097293

Pull Request resolved: https://github.com/pytorch/pytorch/pull/108891
Approved by: https://github.com/jerryzh168
2023-09-11 18:14:49 -04:00
71c9d5c3a6 Refactor torch.onnx documentation (#109026)
* Refactor torch.onnx documentation (#108379)

* Distinguish both TorchScript-based exporter (`torch.onnx.export`) and the TorchDynamo-based exporter (`torch.onnx.dynamo_export`) exporters
* Merge ONNX diagnostics page with the exporter page
* Add initial version of a quick overview on the new exporter
* Updates `torch.compiler.html` with the right page for the ONNX Runtime backend for `torch.compile`
* Renamed doc files to clearly identify files belonging to the legacy and newer onnx exporters

Fixes #108274

https://docs-preview.pytorch.org/pytorch/pytorch/108379/index.html
Pull Request resolved: https://github.com/pytorch/pytorch/pull/108379
Approved by: https://github.com/justinchuby, https://github.com/wschin, https://github.com/malfet

* Follow-up #108379 (#108905)

Fixes #108379

Pull Request resolved: https://github.com/pytorch/pytorch/pull/108905
Approved by: https://github.com/abock
2023-09-11 14:27:26 -07:00
91e414957b fix documentation typo (#109054) 2023-09-11 17:04:48 -04:00
ce3ed7f293 [docs] Properly link register_post_accumulate_grad_hook docs (#108157) (#109047)
it shows up now

![image](https://github.com/pytorch/pytorch/assets/31798555/0aa86839-b9c5-4b4b-b1b1-aa1c0c0abbab)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/108157
Approved by: https://github.com/soulitzer, https://github.com/albanD
2023-09-11 17:03:39 -04:00
bd372d460b [ONNX] Add initial support for FP8 ONNX export (#107962) (#108939)
This PR resurrects @tcherckez-nvidia's #106379 with changes to resolve conflicts against newer `main` and defines our own constants for the new ONNX types to [avoid breaking Meta's internal usage of an old ONNX](https://github.com/pytorch/pytorch/pull/106379#issuecomment-1675189340).

- `::torch::onnx::TensorProto_DataType_FLOAT8E4M3FN=17`
- `::torch::onnx::TensorProto_DataType_FLOAT8E5M2=19`
Pull Request resolved: https://github.com/pytorch/pytorch/pull/107962
Approved by: https://github.com/justinchuby, https://github.com/titaiwangms

Co-authored-by: Aaron Bockover <abock@microsoft.com>
2023-09-11 14:58:24 -04:00
12b8c26f35 [export] torch.export landing page (#108783) (#108962)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/108783
Approved by: https://github.com/avikchaudhuri, https://github.com/gmagogsfm
2023-09-11 10:13:24 -04:00
7397cf324c Don't fastpath conj copy when conj/neg bit mismatch (#108881) (#108961)
Fixes https://github.com/pytorch/pytorch/issues/106051

Signed-off-by: Edward Z. Yang <ezyang@meta.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/108881
Approved by: https://github.com/soulitzer
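
A hedged sketch of a conjugate-aware copy of the kind this change touches (not the exact reproducer from the issue):
```python
import torch

x = torch.randn(3, dtype=torch.complex64)
y = x.conj()   # lazily sets the conjugate bit; no data is copied yet
z = torch.empty_like(x)
z.copy_(y)     # the copy must materialize the conjugation correctly
print(torch.equal(z, x.conj().resolve_conj()))  # True
```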
2023-09-11 10:06:42 -04:00
fa8259db8d Revert and reland fix clang-tidy warnings in torch/csrc (#108825)
* Revert "[1/N] fix clang-tidy warnings in torch/csrc (#107648)"

This reverts commit 49eeca00d1e76dd0158758f2c29da6b1d06bf54a.

Reverted https://github.com/pytorch/pytorch/pull/107648 on behalf of https://github.com/osalpekar due to This causes breakages due to underspecified type ([comment](https://github.com/pytorch/pytorch/pull/107648#issuecomment-1696372588))

* [Reland] [1/N] fix clang-tidy warnings in torch/csrc (#108114)

Reland of PR #107648 with auto replaced with Py_ssize_t in eval_frame.c. This PR applies fixes to some found issues by clang-tidy in torch/csrc.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/108114
Approved by: https://github.com/Skylion007

---------

Co-authored-by: PyTorch MergeBot <pytorchmergebot@users.noreply.github.com>
Co-authored-by: cyy <cyyever@outlook.com>
2023-09-08 17:55:48 -04:00
d83c8287ea Use contiguous() to handle noncontiguous outputs during elementwise decomposition (#108140) (#108555)
Fixes https://github.com/pytorch/pytorch/issues/108218

Use contiguous() API to handle noncontiguous outputs during elementwise decomp

With this change, the op decomposes properly (test case from the bug):
```
graph():
    %arg0_1 : [#users=3] = placeholder[target=arg0_1]
    %abs_1 : [#users=1] = call_function[target=torch.ops.aten.abs.default](args = (%arg0_1,), kwargs = {})
    %floor : [#users=1] = call_function[target=torch.ops.aten.floor.default](args = (%abs_1,), kwargs = {})
    %sign : [#users=1] = call_function[target=torch.ops.aten.sign.default](args = (%arg0_1,), kwargs = {})
    %mul : [#users=1] = call_function[target=torch.ops.aten.mul.Tensor](args = (%floor, %sign), kwargs = {})
    %sub : [#users=1] = call_function[target=torch.ops.aten.sub.Tensor](args = (%arg0_1, %mul), kwargs = {})
    return (sub,)
```
Output:
```
tensor([[ 0.2871,  0.7189,  0.7297],
        [ 0.8782, -0.4899,  0.7055]], device='hpu:0')
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/108140
Approved by: https://github.com/ezyang
2023-09-07 13:44:04 -04:00
ba19c52e31 Fix multi output layout error in indexing dtype calculation (#108085) (#108693)
Differential Revision: [D48757829](https://our.internmc.facebook.com/intern/diff/D48757829)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/108085
Approved by: https://github.com/yanboliang, https://github.com/davidberard98, https://github.com/jansel, https://github.com/peterbell10
2023-09-07 13:29:06 -04:00
c5c9536aa7 move IPEX backend to training/inference category (#108737) 2023-09-07 13:24:20 -04:00
6b7a777661 [dtensor] fix two more requires_grad callsite (#108358) (#108738)
redistribute returns a new DTensor, and those returned DTensors should
follow the input DTensor's requires_grad instead of the requires_grad of
the input DTensor's local tensor.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/108358
Approved by: https://github.com/fduwjj
2023-09-07 13:10:15 -04:00
ebd3224303 add torch_api (#108617) 2023-09-07 13:08:29 -04:00
6e4ae13657 Release only change, test against test channel (#108688) 2023-09-06 17:56:41 -04:00
265e46e193 Revert "docs: Match open bracket with close bracket in unsqueeze (#95215)" (#108680)
This reverts commit 9d04d376d81be2f01e5ea6b68943390346f2494c.

Reverted https://github.com/pytorch/pytorch/pull/95215 on behalf of https://github.com/kit1980 due to Incorrect assumptions ([comment](https://github.com/pytorch/pytorch/pull/95215#issuecomment-1708852420))

Co-authored-by: PyTorch MergeBot <pytorchmergebot@users.noreply.github.com>
2023-09-06 17:55:59 -04:00
da7290dfbd [ONNX] Show sarif_report_path (#108398) (#108679)
`sarif_report_path` was not formatted correctly in the error message

@BowenBao

Pull Request resolved: https://github.com/pytorch/pytorch/pull/108398
Approved by: https://github.com/thiagocrepaldi
2023-09-06 17:53:54 -04:00
828992cf13 Inductor cpp wrapper: fix codegen of positional args with default value (#108652)
* Inductor cpp wrapper: fix codegen of positional args with default value (#108552)

Fixes https://github.com/pytorch/pytorch/issues/108323.
The cpp wrapper has a functionality regression on `llama` and `tnt_s_patch16_224` due to the recent support of scaled dot product flash attention in inductor.

The schema of this OP is as follows:
```
- func: _scaled_dot_product_flash_attention(Tensor query, Tensor key, Tensor value, float dropout_p=0.0, bool is_causal=False, bool return_debug_mask=False, *, float? scale=None) -> (Tensor output, Tensor logsumexp, Tensor cum_seq_q, Tensor cum_seq_k, int max_q, int max_k, Tensor philox_seed, Tensor philox_offset, Tensor debug_attn_mask)
```

For `llama` and `tnt_s_patch16_224`, the op is called as shown below, where the three positional args with default values (`float dropout_p=0.0, bool is_causal=False, bool return_debug_mask=False`) are not passed.
```python
y = torch.ops.aten._scaled_dot_product_flash_attention.default(x0, x1, x2, scale = 0.125)
```

This PR fixes the cpp wrapper support for this case.
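As a hedged illustration of the general idea (not inductor's actual codegen), the omitted positional defaults can be read off the op schema and filled in before emitting a fully positional call. This sketch assumes the `_schema` attribute and the `Argument.default_value`/`kwarg_only` fields available on recent PyTorch op overloads.

```python
import torch

op = torch.ops.aten._scaled_dot_product_flash_attention.default
schema = op._schema  # torch._C.FunctionSchema

def fill_positional_defaults(passed):
    """Append default values for positional args omitted at the call site."""
    args = list(passed)
    for arg in schema.arguments[len(args):]:
        if arg.kwarg_only:          # stop at the keyword-only section (e.g. scale)
            break
        args.append(arg.default_value)
    return args

# the llama/tnt call above passes only (query, key, value); the wrapper still
# needs dropout_p, is_causal and return_debug_mask filled in positionally
print(fill_positional_defaults(["q", "k", "v"]))
```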

Pull Request resolved: https://github.com/pytorch/pytorch/pull/108552
Approved by: https://github.com/jgong5, https://github.com/desertfire, https://github.com/jansel

* ut: update function name on release branch
2023-09-06 13:41:25 -04:00
48246f3dfb Add check for out of range pointer. (#107510) (#108649)
### Summary

Hi! We've been fuzzing pytorch with [sydr-fuzz](https://github.com/ispras/oss-sydr-fuzz) and found an error that allows accessing an arbitrary address while parsing the flatbuffer format with the `torch::load` function.

pytorch version: 18bcf62bbcf7ffd47e3bcf2596f72aa07a07d65f (the last commit at the moment of reporting the issue)

### Details
The vulnerability appears while loading arbitrary user input with the `torch::load` function. To trigger the error, the input must correspond to `FlatbufferFileFormat`, so the flatbuffer-parsing path in the `import_ir_module` function must be executed.

First, an error can occur in `GetMutableRoot` in `module.h`, where we add to the input data buffer pointer a value obtained by dereferencing that same pointer (data that fully depends on the user input and can be arbitrary), so the resulting `flatbuffer_module` address can be corrupted.

Moreover, we can obtain an arbitrary address later at `flatbuffer_loader.cpp:305`, when we get the `ival` pointer with the `Get` method. There, in the `IndirectHelper::Read` function, we add to the pointer an offset obtained by dereferencing that pointer, so the address can be corrupted again.

The corrupted `ival` pointer is dereferenced in `table.h` in the flatbuffers project, where it is used to get another address, which is later dereferenced again in `table.h`. The resulting corrupted address is written to the `func` pointer at `flatbuffer_loader.cpp:274`, which is then used in `parseFunction`, where a write access to that address occurs.

To fix the problem, we can compute the end of the memory area in the `parse_and_initialize_mobile_module` function like this:
```
auto* end = static_cast<char*>(data) + size;
```
And then pass it to all the callees and insert corresponding checks.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/107510
Approved by: https://github.com/albanD

Co-authored-by: Eli Kobrin <kobrineli@ispras.ru>
2023-09-06 13:21:04 -04:00
7d6971dcee [dtensor] fix new_empty_strided op (#107835) (#108600)
This PR fixes the new_empty_strided op to become replicate from sharding
when necessary; this is a quick fix to resolve https://github.com/pytorch/pytorch/issues/107661

We'll need to think more about the behavior of this op when it comes to
sharding. One possibility is to follow the input sharding, but given that the
output shape of this op might not be the same as the input's, it's hard to
say we should follow the input sharding. Further improvement is needed once
we figure out the op syntax.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/107835
Approved by: https://github.com/fduwjj
2023-09-06 09:28:20 -04:00
5417e23ba8 torch.compile-functorch interaction: update docs (#108130) (#108628)
Doc Preview: https://docs-preview.pytorch.org/pytorch/pytorch/108130/torch.compiler_faq.html#torch-func-works-with-torch-compile-for-grad-and-vmap-transforms

Will also cherry-pick this for release branch.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/108130
Approved by: https://github.com/zou3519
2023-09-06 08:25:19 -04:00
7a9101951d Improve docs for torch.unique dim argument (#108292) (#108596)
Fixes #103142

Pull Request resolved: https://github.com/pytorch/pytorch/pull/108292
Approved by: https://github.com/albanD

Co-authored-by: Kurt Mohler <kmohler@quansight.com>
2023-09-05 17:47:48 -04:00
03e7f0b99d [Inductor] Add fused_attention pattern matcher with additional clone (#108141) (#108327)
A previous PR https://github.com/pytorch/pytorch/pull/106274 decomposes `aten.dropout` and creates a `clone()` when in `eval()` mode or when `p=0`. This makes many SDPA-related models fail to match the fused_attention pattern matchers.

This PR adds new fused_attention pattern matchers with an additional clone to re-enable the SDPA op matching.
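For illustration only (a hedged sketch, not the pattern-matcher code itself): an eval-mode dropout after SDPA decomposes to a `clone()`, which is the extra node the new pattern variants tolerate.

```python
import torch
import torch.nn.functional as F

class SDPA(torch.nn.Module):
    def forward(self, q, k, v):
        attn = F.scaled_dot_product_attention(q, k, v)
        # with p=0 (or in eval mode) this decomposes to a clone() under inductor
        return F.dropout(attn, p=0.0)

m = SDPA().eval()
q = k = v = torch.randn(1, 4, 16, 8)
out = torch.compile(m)(q, k, v)
print(out.shape)
```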

Pull Request resolved: https://github.com/pytorch/pytorch/pull/108141
Approved by: https://github.com/jgong5, https://github.com/eellison
2023-09-05 17:09:39 -04:00
c0e7239f43 Pin pandas version for inductor Docker image (#108355) (#108593)
Building Docker images on trunk is failing at the moment https://github.com/pytorch/pytorch/actions/runs/6033657019/job/16370683676 with the following error:

```
+ conda_reinstall numpy=1.24.4
+ as_jenkins conda install -q -n py_3.10 -y --force-reinstall numpy=1.24.4
+ sudo -E -H -u jenkins env -u SUDO_UID -u SUDO_GID -u SUDO_COMMAND -u SUDO_USER env PATH=/opt/conda/envs/py_3.10/bin:/opt/conda/bin:/usr/local/nvidia/bin:/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin LD_LIBRARY_PATH=/usr/local/nvidia/lib:/usr/local/nvidia/lib64 conda install -q -n py_3.10 -y --force-reinstall numpy=1.24.4
Collecting package metadata (current_repodata.json): ...working... done
Solving environment: ...working... unsuccessful initial attempt using frozen solve. Retrying with flexible solve.
Collecting package metadata (repodata.json): ...working... done
Solving environment: ...working... unsuccessful initial attempt using frozen solve. Retrying with flexible solve.

PackagesNotFoundError: The following packages are not available from current channels:

  - numpy=1.24.4

Current channels:

  - https://repo.anaconda.com/pkgs/main/linux-64
  - https://repo.anaconda.com/pkgs/main/noarch
  - https://repo.anaconda.com/pkgs/r/linux-64
  - https://repo.anaconda.com/pkgs/r/noarch
```

This was pulled in by pandas 2.1.0, released yesterday: https://pypi.org/project/pandas/2.1.0
Pull Request resolved: https://github.com/pytorch/pytorch/pull/108355
Approved by: https://github.com/kit1980, https://github.com/atalman, https://github.com/malfet
2023-09-05 17:05:54 -04:00
04c1e07fd7 [quant] Move dropout replacement to move_model_to_eval (#108184) (#108255)
Summary: This commit adds a public-facing
`torch.ao.quantization.move_model_to_eval` util function
for QAT users. Instead of calling model.eval() on an exported
model (which doesn't work, see
https://github.com/pytorch/pytorch/issues/103681), the user
would call this new util function instead. This ensures special
ops such as dropout and batchnorm (not supported yet) will have
the right behavior when the graph is later used for inference.

Note: Support for an equivalent `move_model_to_train` will be
added in the future. This is difficult to do for dropout
currently because the eval pattern of dropout is simply a clone
op, which we cannot just match and replace with a dropout op.
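A hedged usage sketch based on the description above; the exact import path and capture API (`capture_pre_autograd_graph` here) vary across PyTorch 2.x releases, so treat the names as assumptions rather than a stable API.

```python
import torch
from torch._export import capture_pre_autograd_graph   # API name assumed, ~2.1 era
from torch.ao.quantization import move_model_to_eval   # name per this commit

class M(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.dropout = torch.nn.Dropout(p=0.5)

    def forward(self, x):
        return self.dropout(x)

m = capture_pre_autograd_graph(M(), (torch.randn(2, 2),))
# m.eval() has no effect on the exported graph (see issue #103681);
# use the util instead so dropout behaves correctly at inference time:
m = move_model_to_eval(m)
```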

Test Plan:
python test/test_quantization.py TestQuantizePT2E.test_move_model_to_eval

Reviewers: jerryzh168, kimishpatel

Subscribers: jerryzh168, kimishpatel, supriyar

Differential Revision: [D48814735](https://our.internmc.facebook.com/intern/diff/D48814735)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/108184
Approved by: https://github.com/jerryzh168
2023-09-05 13:42:37 -07:00
cb4362ba5f Error when someone calls train/eval on pre_autograd graph (#108143) (#108258)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/108143
Approved by: https://github.com/andrewor14

Co-authored-by: Tugsbayasgalan Manlaibaatar <tmanlaibaatar@fb.com>
2023-09-05 13:41:16 -07:00
bddd30ca7a [inductor] Fix inputs with existing offsets (#108259)
Cherry pick of #108168
2023-09-05 16:24:48 -04:00
9cc99906e9 When byteorder record is missing load as little endian by default (#108523)
* When the byteorder record is missing, load as little endian by default

Fixes #101688

* Add test for warning

Also change warning type from DeprecationWarning
to UserWarning to make it visible by default.
2023-09-05 16:06:22 -04:00
a49fca4dd4 inductor change needed to update triton pin (#108129)
ghstack-source-id: 5d421f734d5d7d9428b5fed54388cc95e559cd95
Pull Request resolved: https://github.com/pytorch/pytorch/pull/107722
2023-09-05 14:40:23 -04:00
83964c761e [inductor] Add aten.multinomial to disallowed cudagraphs ops (#108122)
Cherry pick of #108105
2023-09-05 14:37:58 -04:00
085bd1da62 [dynamo] Fix setattr nn.Module with new attribute (#108121)
Cherry pick of #108098
2023-09-05 14:36:40 -04:00
90452f41e3 [dynamo] Graph break on pack_padded_sequence (#108120)
Release branch cherrypick of #108096
2023-09-05 14:34:47 -04:00
35c3d5a080 [inductor] Fix constant_to_device issue with ir.Constant (#108119)
Cherry pick of #108087
2023-09-05 14:33:32 -04:00
d07ac50e26 Only add triton dependency to CUDA and ROCm binaries if it hasn't been set as an installation requirement yet (#108424) (#108471)
The dependency was added twice before in CUDA and ROCm binaries: once as an installation dependency from builder, and later again as an extra dependency for dynamo, for example:

```
Requires-Python: >=3.8.0
Description-Content-Type: text/markdown
License-File: LICENSE
License-File: NOTICE
Requires-Dist: filelock
Requires-Dist: typing-extensions
Requires-Dist: sympy
Requires-Dist: networkx
Requires-Dist: jinja2
Requires-Dist: fsspec
Requires-Dist: pytorch-triton (==2.1.0+e6216047b8)
Provides-Extra: dynamo
Requires-Dist: pytorch-triton (==2.1.0+e6216047b8) ; extra == 'dynamo'
Requires-Dist: jinja2 ; extra == 'dynamo'
Provides-Extra: opt-einsum
Requires-Dist: opt-einsum (>=3.3) ; extra == 'opt-einsum'
```

In the previous release, we needed to remove this part from `setup.py` to build release binaries (https://github.com/pytorch/pytorch/pull/96010). With this change, that step isn't needed anymore because the dependency will come from builder.
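As a quick, hedged diagnostic (not part of this PR), one can check an installed torch wheel for the duplicated triton requirement with `importlib.metadata`; this assumes torch is installed in the current environment.

```python
from importlib.metadata import requires

reqs = requires("torch") or []
triton = [r for r in reqs if r.split(";")[0].strip().startswith(("triton", "pytorch-triton"))]
base = [r for r in triton if "extra ==" not in r]
dynamo_extra = [r for r in triton if "extra ==" in r and "dynamo" in r]

print("base requirement:", base)
# non-empty only on wheels built before this change:
print("duplicated under the dynamo extra:", dynamo_extra)
```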

### Testing

Using the draft https://github.com/pytorch/pytorch/pull/108374 for testing and manually inspecting the wheels artifact at https://github.com/pytorch/pytorch/actions/runs/6045878399 (I don't want to go through all of `ciflow/binaries` again).

* torch-2.1.0.dev20230901+cu121-cp39-cp39-linux_x86_64
```
Requires-Python: >=3.8.0
Description-Content-Type: text/markdown
Requires-Dist: filelock
Requires-Dist: typing-extensions
Requires-Dist: sympy
Requires-Dist: networkx
Requires-Dist: jinja2
Requires-Dist: fsspec
Requires-Dist: pytorch-triton (==2.1.0+e6216047b8) <-- This will be 2.1.0 on the release branch after https://github.com/pytorch/builder/pull/1515
Provides-Extra: dynamo
Requires-Dist: jinja2 ; extra == 'dynamo'
Provides-Extra: opt-einsum
Requires-Dist: opt-einsum (>=3.3) ; extra == 'opt-einsum'
```

* torch-2.1.0.dev20230901+cu121.with.pypi.cudnn-cp39-cp39-linux_x86_64
```
Requires-Python: >=3.8.0
Description-Content-Type: text/markdown
Requires-Dist: filelock
Requires-Dist: typing-extensions
Requires-Dist: sympy
Requires-Dist: networkx
Requires-Dist: jinja2
Requires-Dist: fsspec
Requires-Dist: pytorch-triton (==2.1.0+e6216047b8)
Requires-Dist: nvidia-cuda-nvrtc-cu12 (==12.1.105) ; platform_system == "Linux" and platform_machine == "x86_64"
Requires-Dist: nvidia-cuda-runtime-cu12 (==12.1.105) ; platform_system == "Linux" and platform_machine == "x86_64"
Requires-Dist: nvidia-cuda-cupti-cu12 (==12.1.105) ; platform_system == "Linux" and platform_machine == "x86_64"
Requires-Dist: nvidia-cudnn-cu12 (==8.9.2.26) ; platform_system == "Linux" and platform_machine == "x86_64"
Requires-Dist: nvidia-cublas-cu12 (==12.1.3.1) ; platform_system == "Linux" and platform_machine == "x86_64"
Requires-Dist: nvidia-cufft-cu12 (==11.0.2.54) ; platform_system == "Linux" and platform_machine == "x86_64"
Requires-Dist: nvidia-curand-cu12 (==10.3.2.106) ; platform_system == "Linux" and platform_machine == "x86_64"
Requires-Dist: nvidia-cusolver-cu12 (==11.4.5.107) ; platform_system == "Linux" and platform_machine == "x86_64"
Requires-Dist: nvidia-cusparse-cu12 (==12.1.0.106) ; platform_system == "Linux" and platform_machine == "x86_64"
Requires-Dist: nvidia-nccl-cu12 (==2.18.1) ; platform_system == "Linux" and platform_machine == "x86_64"
Requires-Dist: nvidia-nvtx-cu12 (==12.1.105) ; platform_system == "Linux" and platform_machine == "x86_64"
Requires-Dist: triton (==2.1.0) ; platform_system == "Linux" and platform_machine == "x86_64" <--This is 2.1.0 because it already has https://github.com/pytorch/pytorch/pull/108423, but the package doesn't exist yet atm
Provides-Extra: dynamo
Requires-Dist: jinja2 ; extra == 'dynamo'
Provides-Extra: opt-einsum
Requires-Dist: opt-einsum (>=3.3) ; extra == 'opt-einsum'
```

* torch-2.1.0.dev20230901+rocm5.6-cp38-cp38-linux_x86_64
```
Requires-Python: >=3.8.0
Description-Content-Type: text/markdown
Requires-Dist: filelock
Requires-Dist: typing-extensions
Requires-Dist: sympy
Requires-Dist: networkx
Requires-Dist: jinja2
Requires-Dist: fsspec
Requires-Dist: pytorch-triton-rocm (==2.1.0+34f8189eae) <-- This will be 2.1.0 on the release branch after https://github.com/pytorch/builder/pull/1515
Provides-Extra: dynamo
Requires-Dist: jinja2 ; extra == 'dynamo'
Provides-Extra: opt-einsum
Requires-Dist: opt-einsum (>=3.3) ; extra == 'opt-einsum'
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/108424
Approved by: https://github.com/atalman
2023-09-05 09:22:31 -04:00
8a3b017769 Add triton dependency to PyPI PyTorch package (#108423) 2023-09-01 16:51:10 -04:00
a82894b0d3 Added info for each artifact option, added a help option to TORCH_LOGS, and changed the error message (#107758) (#108365)
New message when an invalid option is provided:
<img width="1551" alt="image" src="https://github.com/pytorch/pytorch/assets/6355099/8b61534a-ee55-431e-94fe-2ffa25b7fd5c">

TORCH_LOGS="help"
<img width="1558" alt="image" src="https://github.com/pytorch/pytorch/assets/6355099/72e8939c-92fa-4141-8114-79db71451d42">

TORCH_LOGS="+help"
<img width="1551" alt="image" src="https://github.com/pytorch/pytorch/assets/6355099/2cdc94ac-505a-478c-aa58-0175526075d2">
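A hedged sketch of how to poke at the new help output from a script: `TORCH_LOGS` is read from the environment when torch's logging system initializes, so the simplest way to trigger the help text is to set the variable before the interpreter imports `torch._dynamo` (the exact output format is version-dependent).

```python
import os
import subprocess
import sys

env = dict(os.environ, TORCH_LOGS="help")
# run in a child process so the env var is seen at import time
subprocess.run([sys.executable, "-c", "import torch._dynamo"], env=env)
```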

Pull Request resolved: https://github.com/pytorch/pytorch/pull/107758
Approved by: https://github.com/ezyang, https://github.com/mlazos
ghstack dependencies: #106192
2023-09-01 16:11:59 -04:00
050fc31538 [MPS] Fix .item() for multi-dim scalar (#107913) (#108410)
By refactoring `_local_scalar_dense_mps` to use `_empty_like` to allocate the CPU tensor.
Also, print a more reasonable error message when the destination dim is less than the source in `mps_copy_`.

This fixes regression introduced by https://github.com/pytorch/pytorch/pull/105617 and adds regression test.

<!--
copilot:poem
-->
### <samp>🤖 Generated by Copilot at abd06e6</samp>

> _Sing, O Muse, of the valiant deeds of the PyTorch developers_
> _Who strive to improve the performance and usability of tensors_
> _And who, with skill and wisdom, fixed a bug in the MPS backend_
> _That caused confusion and dismay to many a user of `item()`_

Fixes https://github.com/pytorch/pytorch/issues/107867
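A hedged regression-style snippet for the fix (skipped when no MPS device is present):

```python
import torch

if torch.backends.mps.is_available():
    x = torch.tensor([[[3.14]]], device="mps")  # multi-dim, single-element tensor
    assert abs(x.item() - 3.14) < 1e-6
else:
    print("MPS not available; skipping")
```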

Pull Request resolved: https://github.com/pytorch/pytorch/pull/107913
Approved by: https://github.com/albanD

Co-authored-by: Nikita Shulga <nikita.shulga@gmail.com>
2023-09-01 11:58:26 -04:00
b3cb05b396 Update to RNN documentation (issue #106085) (#106222) (#108385)
Addresses [issue #106085](https://github.com/pytorch/pytorch/issues/106085).

In `torch/nn/modules/rnn.py`:
- Adds documentation string to RNNBase class.
- Adds parameters to the __init__ methods of the RNN, LSTM, and GRU classes.
- Adds type annotations to __init__ methods for RNN, LSTM, and GRU.

In `torch/ao/nn/quantized/dynamic/modules/rnn.py`:
- Adds type specifications to the `_FLOAT_MODULE` attributes in the RNNBase, RNN, LSTM, and GRU classes.
> This resolves a `mypy` assignment error `Incompatible types in assignment (expression has type "Type[LSTM]", base class "RNNBase" defined the type as "Type[RNNBase]")` that seemed to be a result of the fully specified type annotations in `torch/nn/modules/rnn.py`.
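A hedged, self-contained sketch of the typing pattern involved (the class names here are hypothetical, not the actual `rnn.py` code): explicitly annotating the class attribute on the base class lets subclasses assign a narrower class object without tripping mypy's incompatible-assignment check.

```python
from typing import Type

class FloatRNNBase: ...
class FloatLSTM(FloatRNNBase): ...

class QuantRNNBase:
    _FLOAT_MODULE: Type[FloatRNNBase] = FloatRNNBase

class QuantLSTM(QuantRNNBase):
    # with the explicit Type[...] annotations, mypy accepts the override
    _FLOAT_MODULE: Type[FloatRNNBase] = FloatLSTM
```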
Pull Request resolved: https://github.com/pytorch/pytorch/pull/106222
Approved by: https://github.com/mikaylagawarecki
2023-09-01 11:46:16 -04:00
fec68a2799 Add channels_last3d support for mkldnn conv and mkldnn deconv (#95271) (#108216)
### Motivation

- Add channels_last3d support for mkldnn conv and mkldnn deconv.
- Use `ideep::convolution_transpose_forward::compute_v3` instead of `ideep::convolution_transpose_forward::compute`. `compute_v3` uses `is_channels_last` to tell ideep whether to use channels last, aligning with PyTorch's memory format check.
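A hedged usage sketch of what this enables on the CPU path (the shapes are arbitrary):

```python
import torch

conv = torch.nn.Conv3d(16, 16, kernel_size=3, padding=1)
conv = conv.to(memory_format=torch.channels_last_3d)

x = torch.randn(4, 16, 8, 32, 32).to(memory_format=torch.channels_last_3d)
y = conv(x)
# with channels-last-3d support in the mkldnn path, the layout can be preserved
print(y.is_contiguous(memory_format=torch.channels_last_3d))
```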

### Testing
1 socket (28 cores):

- memory format: torch.contiguous_format

module | shape | forward / ms | backward / ms
-- | -- | -- | --
conv3d | input size: (32, 32, 10, 100, 100), weight size: (32, 32, 3, 3, 3) | 64.56885 | 150.1796
conv3d | input size: (32, 16, 10, 200, 200), weight size: (16, 16, 3, 3, 3) | 100.6754 | 231.8883
conv3d | input size: (16, 4, 5, 300, 300), weight size: (4, 4, 3, 3, 3) | 19.31751 | 68.31131

module | shape | forward / ms | backward / ms
-- | -- | -- | --
ConvTranspose3d | input size: (32, 32, 10, 100, 100), weight size: (32, 32, 3, 3, 3) | 122.7646 | 207.5125
ConvTranspose3d | input size: (32, 16, 10, 200, 200), weight size: (16, 16, 3, 3, 3) | 202.4542 | 368.5492
ConvTranspose3d | input size: (16, 4, 5, 300, 300), weight size: (4, 4, 3, 3, 3) | 122.959 | 84.62577

- memory format: torch.channels_last_3d

module | shape | forward / ms | backward / ms
-- | -- | -- | --
conv3d | input size: (32, 32, 10, 100, 100), weight size: (32, 32, 3, 3, 3) | 40.06993 | 114.317
conv3d | input size: (32, 16, 10, 200, 200), weight size: (16, 16, 3, 3, 3) | 49.08249 | 133.4079
conv3d | input size: (16, 4, 5, 300, 300), weight size: (4, 4, 3, 3, 3) | 5.873911 | 17.58647

module | shape | forward / ms | backward / ms
-- | -- | -- | --
ConvTranspose3d | input size: (32, 32, 10, 100, 100), weight size: (32, 32, 3, 3, 3) | 88.4246 | 208.2269
ConvTranspose3d | input size: (32, 16, 10, 200, 200), weight size: (16, 16, 3, 3, 3) | 140.0725 | 270.4172
ConvTranspose3d | input size: (16, 4, 5, 300, 300), weight size: (4, 4, 3, 3, 3) | 23.0223 | 37.16972

Pull Request resolved: https://github.com/pytorch/pytorch/pull/95271
Approved by: https://github.com/jgong5, https://github.com/cpuhrsch
2023-09-01 11:44:48 -04:00
f139dda1cc [functorch] make torch.compile support opt-in (#108134) 2023-09-01 10:38:41 -04:00
5252dfb762 Fix triton upload channel detection (#108291) (#108311)
This should be nightly for nightly and test for release candidates.  There are 2 bugs:

* The shell needs to be set to `bash` explicitly; otherwise, GHA uses `sh`, which doesn't recognize `[[`, as shown in https://github.com/pytorch/pytorch/actions/runs/6030476858/job/16362717792#step:6:10
* `${GITHUB_REF_NAME}` is unquoted. This is basically https://www.shellcheck.net/wiki/SC2248, but it wasn't captured by actionlint, and shellcheck doesn't work with workflow YAML files. I will think about how to add a lint rule for this later.
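For clarity, the check the workflow performs in bash is equivalent to this small Python sketch (illustrative only; the real logic stays in the workflow YAML):

```python
import re

def upload_channel(ref_name: str) -> str:
    # an RC tag such as "v2.1.0-rc3" should route uploads to the "test" channel
    return "test" if re.search(r"-rc[0-9]+", ref_name) else "nightly"

assert upload_channel("v2.1.0-rc3") == "test"
assert upload_channel("nightly") == "nightly"
```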

### Testing

https://github.com/pytorch/pytorch/actions/runs/6031330411 to confirm that setting the channel is performed correctly.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/108291
Approved by: https://github.com/osalpekar, https://github.com/atalman
2023-09-01 09:41:27 -04:00
da1ccca830 Remove commit hash when building triton wheel and conda in release mode (#108203) (#108251)
This is the follow-up of https://github.com/pytorch/pytorch/pull/108187 to set the correct release version without commit hash for triton wheel and conda binaries when building them in release mode.

### Testing

* With commit hash (nightly): https://github.com/pytorch/pytorch/actions/runs/6019021716
* Without commit hash https://github.com/pytorch/pytorch/actions/runs/6019378616 (by adding `--release` into the PR)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/108203
Approved by: https://github.com/atalman
2023-08-30 14:27:06 -04:00
c9cbdaf24f [ROCm] Update ROCm pin to fix triton wheel lib issue (#108229)
main PR already merged: https://github.com/pytorch/pytorch/pull/108137
2023-08-30 09:39:59 -04:00
f187e42a54 Fix various issues on build-triton-wheel workflow (#108187) (#108200)
There are more issues than I expected at the beginning:

* Triton was uploaded on `main` instead of `nightly` and the release branch
* The environment `conda-aws-upload` wasn't used correctly in either the wheel or the conda upload
* The conda update wasn't run in a separate ephemeral runner
* Duplicated upload logic; should have just used `bash .circleci/scripts/binary_upload.sh` instead
* Handle the `CONDA_PYTORCHBOT_TOKEN` and `CONDA_PYTORCHBOT_TOKEN_TEST` tokens in a similar way as https://github.com/pytorch/test-infra/pull/4530

Part of https://github.com/pytorch/pytorch/issues/108154
2023-08-30 09:37:49 -04:00
9175987fcc Fix the use of inputs.build_environment in #107868 (#108075) (#108177)
It should be `${{ inputs.build_environment }}`, although I wonder why we don't just clean up the artifacts directory for all builds instead of just `aarch64`.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/108075
Approved by: https://github.com/atalman, https://github.com/seemethere
2023-08-30 09:36:31 -04:00
d8e6594fb8 skip dynamic shape test for test_conv_bn_fuse (#108113) (#108139)
For the test_conv_bn_fuse dynamic case, we always fuse bn with convolution, and there is only an external convolution call, not loops, so it fails when we do the dynamic loop vars check. This PR skips this case.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/108113
Approved by: https://github.com/huydhn
2023-08-30 09:35:08 -04:00
f82c027774 Fix LayerNorm(bias=False) error (#108078)
ghstack-source-id: 613c4f3608b1a375013fc9da64545c1084025650
Pull Request resolved: https://github.com/pytorch/pytorch/pull/108060
2023-08-30 09:30:22 -04:00
6d20b39d3f [CI] Release only changes use anaconda token for test env (#108064) 2023-08-28 12:41:57 -04:00
17f400404f [CI] Release only changes for 2.1 release (#108053)
* [CI] Release only changes for 2.1 release

* include circle script

* release only changes for test-infra

* More test-infra related
2023-08-28 11:55:58 -04:00
276 changed files with 6143 additions and 2780 deletions


@ -1 +1 @@
05d67b9418cacda0d356c2102d7c1a887948b013
34f8189eae57a23cc15b4b4f032fe25757e0db8e


@ -7,18 +7,14 @@ source "$(dirname "${BASH_SOURCE[0]}")/common_utils.sh"
function install_huggingface() {
local version
version=$(get_pinned_commit huggingface)
pip_install pandas
pip_install scipy
pip_install z3-solver
pip_install pandas==2.0.3
pip_install "transformers==${version}"
}
function install_timm() {
local commit
commit=$(get_pinned_commit timm)
pip_install pandas
pip_install scipy
pip_install z3-solver
pip_install pandas==2.0.3
pip_install "git+https://github.com/rwightman/pytorch-image-models@${commit}"
}

.ci/docker/common/install_onnx.sh Normal file → Executable file

@ -4,6 +4,10 @@ set -ex
source "$(dirname "${BASH_SOURCE[0]}")/common_utils.sh"
retry () {
"$@" || (sleep 10 && "$@") || (sleep 20 && "$@") || (sleep 40 && "$@")
}
# A bunch of custom pip dependencies for ONNX
pip_install \
beartype==0.10.4 \
@ -18,22 +22,17 @@ pip_install \
# onnx-weekly. Otherwise, onnx-weekly could be
# overwritten by onnx.
pip_install \
onnxruntime==1.15.1 \
parameterized==0.8.1 \
pytest-cov==4.0.0 \
pytest-subtests==0.10.0 \
tabulate==0.9.0 \
transformers==4.31.0
# Using 1.15dev branch for the following not yet released features and fixes.
# - Segfault fix for shape inference.
# - Inliner to workaround ORT segfault.
pip_install onnx-weekly==1.15.0.dev20230717
pip_install coloredlogs packaging
retry pip_install -i https://aiinfra.pkgs.visualstudio.com/PublicPackages/_packaging/ORT-Nightly/pypi/simple/ --no-cache-dir --no-input ort-nightly==1.16.0.dev20230908001
# TODO: change this when onnx-script is on testPypi
# pip_install onnxscript-preview==0.1.0.dev20230809 --no-deps
# NOTE: temp change for CI to run on unpublished onnxscript PR.
pip_install "onnxscript@git+https://github.com/microsoft/onnxscript@f69be19ebd3f2e0d7efe64b0c7be3329cbab3822" --no-deps
pip_install onnx==1.14.1
pip_install onnxscript-preview==0.1.0.dev20230828 --no-deps
# Cache the transformers model to be used later by ONNX tests. We need to run the transformers
# package to download the model. By default, the model is cached at ~/.cache/huggingface/hub/


@ -271,7 +271,12 @@ pytest-cpp==2.3.0
#Pinned versions: 2.3.0
#test that import:
z3-solver
z3-solver==4.12.2.0
#Description: The Z3 Theorem Prover Project
#Pinned versions:
#test that import:
tensorboard==2.13.0
#Description: Also included in .ci/docker/requirements-docs.txt
#Pinned versions:
#test that import: test_tensorboard


@ -180,7 +180,7 @@ function install_numpy_pytorch_interop() {
function clone_pytorch_xla() {
if [[ ! -d ./xla ]]; then
git clone --recursive --quiet https://github.com/pytorch/xla.git
git clone --recursive -b r2.1 https://github.com/pytorch/xla.git
pushd xla
# pin the xla hash so that we don't get broken by changes to xla
git checkout "$(cat ../.github/ci_commit_pins/xla.txt)"


@ -544,6 +544,10 @@ test_without_numpy() {
python -c "import sys;sys.path.insert(0, 'fake_numpy');from unittest import TestCase;import torch;x=torch.randn(3,3);TestCase().assertRaises(RuntimeError, lambda: x.numpy())"
# Regression test for https://github.com/pytorch/pytorch/issues/66353
python -c "import sys;sys.path.insert(0, 'fake_numpy');import torch;print(torch.tensor([torch.tensor(0.), torch.tensor(1.)]))"
# Regression test for https://github.com/pytorch/pytorch/issues/109387
if [[ "${TEST_CONFIG}" == *dynamo* ]]; then
python -c "import sys;sys.path.insert(0, 'fake_numpy');import torch;torch.compile(lambda x:print(x))('Hello World')"
fi
popd
}


@ -35,7 +35,7 @@ if [[ "$BUILD_ENVIRONMENT" == *cuda* ]]; then
fi
# TODO: Move both of them to Windows AMI
python -m pip install pytest-rerunfailures==10.3 pytest-cpp==2.3.0
python -m pip install pytest-rerunfailures==10.3 pytest-cpp==2.3.0 tensorboard==2.13.0
# Install Z3 optional dependency for Windows builds.
python -m pip install z3-solver


@ -62,7 +62,7 @@ git --no-pager log --max-count 1
popd
# Clone the Builder main repo
retry git clone -q https://github.com/pytorch/builder.git "$BUILDER_ROOT"
retry git clone -q https://github.com/pytorch/builder.git -b release/2.1 "$BUILDER_ROOT"
pushd "$BUILDER_ROOT"
echo "Using builder from "
git --no-pager log --max-count 1


@ -90,7 +90,7 @@ if [[ "$PACKAGE_TYPE" == conda ]]; then
if [[ "\${TORCH_CONDA_BUILD_FOLDER}" == "pytorch-nightly" ]]; then
PYTORCH_CHANNEL="pytorch-nightly"
fi
retry conda install \${EXTRA_CONDA_FLAGS} -yq -c nvidia -c "\${PYTORCH_CHANNEL}" "pytorch-cuda=\${cu_ver}"
retry conda install \${EXTRA_CONDA_FLAGS} -yq -c nvidia -c pytorch-test "pytorch-cuda=\${cu_ver}"
fi
conda install \${EXTRA_CONDA_FLAGS} -y "\$pkg" --offline
)
@ -98,9 +98,9 @@ elif [[ "$PACKAGE_TYPE" != libtorch ]]; then
if [[ "$(uname -m)" == aarch64 ]]; then
# Using "extra-index-url" until all needed aarch64 dependencies are
# added to "https://download.pytorch.org/whl/nightly/"
pip install "\$pkg" --extra-index-url "https://download.pytorch.org/whl/nightly/${DESIRED_CUDA}"
pip install "\$pkg" --extra-index-url "https://download.pytorch.org/whl/test/${DESIRED_CUDA}"
else
pip install "\$pkg" --index-url "https://download.pytorch.org/whl/nightly/${DESIRED_CUDA}"
pip install "\$pkg" --index-url "https://download.pytorch.org/whl/test/${DESIRED_CUDA}"
fi
retry pip install -q numpy protobuf typing-extensions
fi


@ -77,7 +77,9 @@ else
export PYTORCH_BUILD_VERSION="${BASE_BUILD_VERSION}+$DESIRED_CUDA"
fi
if [[ -n "${PYTORCH_EXTRA_INSTALL_REQUIREMENTS:-}" ]]; then
# The build with the with-pypi-cudnn suffix is only applicable to
# pypi small wheel Linux x86 build
if [[ -n "${PYTORCH_EXTRA_INSTALL_REQUIREMENTS:-}" ]] && [[ "$(uname)" == 'Linux' && "$(uname -m)" == "x86_64" ]]; then
export PYTORCH_BUILD_VERSION="${PYTORCH_BUILD_VERSION}-with-pypi-cudnn"
fi


@ -11,7 +11,7 @@ PKG_DIR=${PKG_DIR:-/tmp/workspace/final_pkgs}
# currently set within `designate_upload_channel`
UPLOAD_CHANNEL=${UPLOAD_CHANNEL:-nightly}
# Designates what subfolder to put packages into
UPLOAD_SUBFOLDER=${UPLOAD_SUBFOLDER:-cpu}
UPLOAD_SUBFOLDER=${UPLOAD_SUBFOLDER:-}
UPLOAD_BUCKET="s3://pytorch"
BACKUP_BUCKET="s3://pytorch-backup"
BUILD_NAME=${BUILD_NAME:-}
@ -64,12 +64,17 @@ s3_upload() {
local pkg_type
extension="$1"
pkg_type="$2"
s3_dir="${UPLOAD_BUCKET}/${pkg_type}/${UPLOAD_CHANNEL}/${UPLOAD_SUBFOLDER}/"
s3_root_dir="${UPLOAD_BUCKET}/${pkg_type}/${UPLOAD_CHANNEL}"
if [[ -z ${UPLOAD_SUBFOLDER:-} ]]; then
s3_upload_dir="${s3_root_dir}/"
else
s3_upload_dir="${s3_root_dir}/${UPLOAD_SUBFOLDER}/"
fi
(
for pkg in ${PKG_DIR}/*.${extension}; do
(
set -x
${AWS_S3_CP} --no-progress --acl public-read "${pkg}" "${s3_dir}"
${AWS_S3_CP} --no-progress --acl public-read "${pkg}" "${s3_upload_dir}"
)
done
)
@ -82,15 +87,17 @@ pip install -q awscli
case "${PACKAGE_TYPE}" in
conda)
conda_upload
# Fetch platform (eg. win-64, linux-64, etc.) from index file
# Because there's no actual conda command to read this
subdir=$(\
tar -xOf ${PKG_DIR}/*.bz2 info/index.json \
| grep subdir \
| cut -d ':' -f2 \
| sed -e 's/[[:space:]]//' -e 's/"//g' -e 's/,//' \
)
BACKUP_DIR="conda/${subdir}"
for conda_archive in ${PKG_DIR}/*.tar.bz2; do
# Fetch platform (eg. win-64, linux-64, etc.) from index file because
# there's no actual conda command to read this
subdir=$(\
tar -xOf "${conda_archive}" info/index.json \
| grep subdir \
| cut -d ':' -f2 \
| sed -e 's/[[:space:]]//' -e 's/"//g' -e 's/,//' \
)
BACKUP_DIR="conda/${subdir}"
done
;;
libtorch)
s3_upload "zip" "libtorch"


@ -1 +1 @@
e1ee592d9806216d7ac0bb711cae6307b0c5b68a
r2.1


@ -7,6 +7,7 @@
- docs/source/onnx.rst
- docs/source/onnx*
- docs/source/scripts/onnx/**
- docs/source/_static/img/onnx/**
- scripts/onnx/**
- test/onnx/**
- tools/onnx/**


@ -25,3 +25,4 @@ sympy==1.11.1
pytest-cpp==2.3.0
rockset==1.0.3
z3-solver==4.12.2.0
tensorboard==2.13.0


@ -60,12 +60,18 @@ def build_triton(
build_conda: bool = False,
build_rocm: bool = False,
py_version: Optional[str] = None,
release: bool = False,
) -> Path:
env = os.environ.copy()
if "MAX_JOBS" not in env:
max_jobs = os.cpu_count() or 1
env["MAX_JOBS"] = str(max_jobs)
if not release:
# Nightly binaries include the triton commit hash, i.e. 2.1.0+e6216047b8
# while release build should only include the version, i.e. 2.1.0
version = f"{version}+{commit_hash[:10]}"
with TemporaryDirectory() as tmpdir:
triton_basedir = Path(tmpdir) / "triton"
triton_pythondir = triton_basedir / "python"
@ -80,7 +86,7 @@ def build_triton(
if build_conda:
with open(triton_basedir / "meta.yaml", "w") as meta:
print(
f"package:\n name: torchtriton\n version: {version}+{commit_hash[:10]}\n",
f"package:\n name: torchtriton\n version: {version}\n",
file=meta,
)
print("source:\n path: .\n", file=meta)
@ -103,7 +109,7 @@ def build_triton(
patch_init_py(
triton_pythondir / "triton" / "__init__.py",
version=f"{version}+{commit_hash[:10]}",
version=f"{version}",
)
if py_version is None:
py_version = f"{sys.version_info.major}.{sys.version_info.minor}"
@ -129,11 +135,11 @@ def build_triton(
patch_setup_py(
triton_pythondir / "setup.py",
name=triton_pkg_name,
version=f"{version}+{commit_hash[:10]}",
version=f"{version}",
)
patch_init_py(
triton_pythondir / "triton" / "__init__.py",
version=f"{version}+{commit_hash[:10]}",
version=f"{version}",
)
if build_rocm:
@ -157,12 +163,14 @@ def main() -> None:
from argparse import ArgumentParser
parser = ArgumentParser("Build Triton binaries")
parser.add_argument("--release", action="store_true")
parser.add_argument("--build-conda", action="store_true")
parser.add_argument("--build-rocm", action="store_true")
parser.add_argument("--py-version", type=str)
parser.add_argument("--commit-hash", type=str)
parser.add_argument("--triton-version", type=str, default=read_triton_version())
args = parser.parse_args()
build_triton(
build_rocm=args.build_rocm,
commit_hash=args.commit_hash
@ -171,6 +179,7 @@ def main() -> None:
version=args.triton_version,
build_conda=args.build_conda,
py_version=args.py_version,
release=args.release,
)


@ -1,6 +1,5 @@
#!/usr/bin/env python3
import argparse
import sys
from pathlib import Path
@ -10,9 +9,11 @@ import yaml
REPO_ROOT = Path(__file__).resolve().parent.parent.parent
WORKFLOWS = REPO_ROOT / ".github" / "workflows"
EXPECTED_GROUP = (
EXPECTED_GROUP_PREFIX = (
"${{ github.workflow }}-${{ github.event.pull_request.number || github.sha }}"
"-${{ github.event_name == 'workflow_dispatch' }}"
)
EXPECTED_GROUP = (
EXPECTED_GROUP_PREFIX + "-${{ github.event_name == 'workflow_dispatch' }}"
)
@ -26,15 +27,8 @@ def should_check(filename: Path) -> bool:
if __name__ == "__main__":
parser = argparse.ArgumentParser(
description="Ensure all relevant GitHub actions jobs will be cancelled based on a concurrency key"
)
args = parser.parse_args()
files = list(WORKFLOWS.glob("*.yml"))
errors_found = False
files = [f for f in files if should_check(f)]
files = [f for f in WORKFLOWS.glob("*.yml") if should_check(f)]
names = set()
for filename in files:
with open(filename) as f:
@ -46,7 +40,18 @@ if __name__ == "__main__":
errors_found = True
names.add(name)
actual = data.get("concurrency", {})
if not actual.get("group", "").startswith(EXPECTED_GROUP):
if filename.name == "create_release.yml":
if not actual.get("group", "").startswith(EXPECTED_GROUP_PREFIX):
print(
f"'concurrency' incorrect or not found in '{filename.relative_to(REPO_ROOT)}'",
file=sys.stderr,
)
print(
f"concurrency group should start with {EXPECTED_GROUP_PREFIX} but found {actual.get('group', None)}",
file=sys.stderr,
)
errors_found = True
elif not actual.get("group", "").startswith(EXPECTED_GROUP):
print(
f"'concurrency' incorrect or not found in '{filename.relative_to(REPO_ROOT)}'",
file=sys.stderr,


@ -62,9 +62,10 @@ SUPPORTED_PERIODICAL_MODES: Dict[str, Callable[[Optional[str]], bool]] = {
}
# The link to the published list of disabled jobs
DISABLED_JOBS_URL = "https://ossci-metrics.s3.amazonaws.com/disabled-jobs.json"
# Pinning Disabled and Unstable job to Oct 4, 2023.
DISABLED_JOBS_URL = "https://ossci-metrics.s3.amazonaws.com/disabled-jobs.json?versionId=EniFrNbB6taGjwKyN94j4oqUeeN8ALfI"
# and unstable jobs
UNSTABLE_JOBS_URL = "https://ossci-metrics.s3.amazonaws.com/unstable-jobs.json"
UNSTABLE_JOBS_URL = "https://ossci-metrics.s3.amazonaws.com/unstable-jobs.json?versionId=2voGK5DSv0Hzvxhc23ChGcOLEBIO2vHf"
# Some constants used to handle disabled and unstable jobs
JOB_NAME_SEP = "/"


@ -16,6 +16,12 @@ from typing import Dict, List, Optional, Tuple
CUDA_ARCHES = ["11.8", "12.1"]
CUDA_ARCHES_FULL_VERSION = {"11.8": "11.8.0", "12.1": "12.1.1"}
CUDA_ARCHES_CUDNN_VERSION = {"11.8": "8", "12.1": "8"}
ROCM_ARCHES = ["5.5", "5.6"]
@ -24,6 +30,21 @@ CPU_CXX11_ABI_ARCH = ["cpu-cxx11-abi"]
CPU_AARCH64_ARCH = ["cpu-aarch64"]
PYTORCH_EXTRA_INSTALL_REQUIREMENTS = (
"nvidia-cuda-nvrtc-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64' | " # noqa: B950
"nvidia-cuda-runtime-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64' | "
"nvidia-cuda-cupti-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64' | "
"nvidia-cudnn-cu12==8.9.2.26; platform_system == 'Linux' and platform_machine == 'x86_64' | "
"nvidia-cublas-cu12==12.1.3.1; platform_system == 'Linux' and platform_machine == 'x86_64' | "
"nvidia-cufft-cu12==11.0.2.54; platform_system == 'Linux' and platform_machine == 'x86_64' | "
"nvidia-curand-cu12==10.3.2.106; platform_system == 'Linux' and platform_machine == 'x86_64' | "
"nvidia-cusolver-cu12==11.4.5.107; platform_system == 'Linux' and platform_machine == 'x86_64' | "
"nvidia-cusparse-cu12==12.1.0.106; platform_system == 'Linux' and platform_machine == 'x86_64' | "
"nvidia-nccl-cu12==2.18.1; platform_system == 'Linux' and platform_machine == 'x86_64' | "
"nvidia-nvtx-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64' | "
"triton==2.1.0; platform_system == 'Linux' and platform_machine == 'x86_64'"
)
def arch_type(arch_version: str) -> str:
if arch_version in CUDA_ARCHES:
@ -39,22 +60,19 @@ def arch_type(arch_version: str) -> str:
WHEEL_CONTAINER_IMAGES = {
**{
gpu_arch: f"pytorch/manylinux-builder:cuda{gpu_arch}"
for gpu_arch in CUDA_ARCHES
},
**{
gpu_arch: f"pytorch/manylinux-builder:rocm{gpu_arch}"
for gpu_arch in ROCM_ARCHES
},
"cpu": "pytorch/manylinux-builder:cpu",
"cpu-cxx11-abi": "pytorch/manylinuxcxx11-abi-builder:cpu-cxx11-abi",
"cpu-aarch64": "pytorch/manylinuxaarch64-builder:cpu-aarch64",
"11.8": "pytorch/manylinux-builder:cuda11.8-2.1",
"12.1": "pytorch/manylinux-builder:cuda12.1-2.1",
"5.5": "pytorch/manylinux-builder:rocm5.5-2.1",
"5.6": "pytorch/manylinux-builder:rocm5.6-2.1",
"cpu": "pytorch/manylinux-builder:cpu-2.1",
"cpu-cxx11-abi": "pytorch/manylinuxcxx11-abi-builder:cpu-cxx11-abi-2.1",
"cpu-aarch64": "pytorch/manylinuxaarch64-builder:cpu-aarch64-2.1",
}
CONDA_CONTAINER_IMAGES = {
**{gpu_arch: f"pytorch/conda-builder:cuda{gpu_arch}" for gpu_arch in CUDA_ARCHES},
"cpu": "pytorch/conda-builder:cpu",
"11.8": "pytorch/conda-builder:cuda11.8-2.1",
"12.1": "pytorch/conda-builder:cuda12.1-2.1",
"cpu": "pytorch/conda-builder:cpu-2.1",
}
PRE_CXX11_ABI = "pre-cxx11"
@ -63,24 +81,46 @@ RELEASE = "release"
DEBUG = "debug"
LIBTORCH_CONTAINER_IMAGES: Dict[Tuple[str, str], str] = {
**{
(gpu_arch, PRE_CXX11_ABI): f"pytorch/manylinux-builder:cuda{gpu_arch}"
for gpu_arch in CUDA_ARCHES
},
**{
(gpu_arch, CXX11_ABI): f"pytorch/libtorch-cxx11-builder:cuda{gpu_arch}"
for gpu_arch in CUDA_ARCHES
},
**{
(gpu_arch, PRE_CXX11_ABI): f"pytorch/manylinux-builder:rocm{gpu_arch}"
for gpu_arch in ROCM_ARCHES
},
**{
(gpu_arch, CXX11_ABI): f"pytorch/libtorch-cxx11-builder:rocm{gpu_arch}"
for gpu_arch in ROCM_ARCHES
},
("cpu", PRE_CXX11_ABI): "pytorch/manylinux-builder:cpu",
("cpu", CXX11_ABI): "pytorch/libtorch-cxx11-builder:cpu",
(
"11.8",
PRE_CXX11_ABI,
): "pytorch/manylinux-builder:cuda11.8-2.1",
(
"12.1",
PRE_CXX11_ABI,
): "pytorch/manylinux-builder:cuda12.1-2.1",
(
"11.8",
CXX11_ABI,
): "pytorch/libtorch-cxx11-builder:cuda11.8-2.1",
(
"12.1",
CXX11_ABI,
): "pytorch/libtorch-cxx11-builder:cuda12.1-2.1",
(
"5.5",
PRE_CXX11_ABI,
): "pytorch/manylinux-builder:rocm5.5-2.1",
(
"5.6",
PRE_CXX11_ABI,
): "pytorch/manylinux-builder:rocm5.6-2.1",
(
"5.5",
CXX11_ABI,
): "pytorch/libtorch-cxx11-builder:rocm5.5-2.1",
(
"5.6",
CXX11_ABI,
): "pytorch/libtorch-cxx11-builder:rocm5.6-2.1",
(
"cpu",
PRE_CXX11_ABI,
): "pytorch/manylinux-builder:cpu-2.1",
(
"cpu",
CXX11_ABI,
): "pytorch/libtorch-cxx11-builder:cpu-2.1",
}
FULL_PYTHON_VERSIONS = ["3.8", "3.9", "3.10", "3.11"]
@ -238,17 +278,7 @@ def generate_wheels_matrix(
"devtoolset": "",
"container_image": WHEEL_CONTAINER_IMAGES[arch_version],
"package_type": package_type,
"pytorch_extra_install_requirements": "nvidia-cuda-nvrtc-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64' | " # noqa: B950
"nvidia-cuda-runtime-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64' | "
"nvidia-cuda-cupti-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64' | "
"nvidia-cudnn-cu12==8.9.2.26; platform_system == 'Linux' and platform_machine == 'x86_64' | "
"nvidia-cublas-cu12==12.1.3.1; platform_system == 'Linux' and platform_machine == 'x86_64' | "
"nvidia-cufft-cu12==11.0.2.54; platform_system == 'Linux' and platform_machine == 'x86_64' | "
"nvidia-curand-cu12==10.3.2.106; platform_system == 'Linux' and platform_machine == 'x86_64' | "
"nvidia-cusolver-cu12==11.4.5.107; platform_system == 'Linux' and platform_machine == 'x86_64' | "
"nvidia-cusparse-cu12==12.1.0.106; platform_system == 'Linux' and platform_machine == 'x86_64' | "
"nvidia-nccl-cu12==2.18.1; platform_system == 'Linux' and platform_machine == 'x86_64' | "
"nvidia-nvtx-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64'",
"pytorch_extra_install_requirements": PYTORCH_EXTRA_INSTALL_REQUIREMENTS,
"build_name": f"{package_type}-py{python_version}-{gpu_arch_type}{gpu_arch_version}-with-pypi-cudnn".replace( # noqa: B950
".", "_"
),
@ -273,6 +303,9 @@ def generate_wheels_matrix(
"build_name": f"{package_type}-py{python_version}-{gpu_arch_type}{gpu_arch_version}".replace(
".", "_"
),
"pytorch_extra_install_requirements": PYTORCH_EXTRA_INSTALL_REQUIREMENTS
if os != "linux"
else "",
}
)
return ret


@ -0,0 +1,45 @@
#!/usr/bin/env python3
"""Generates a matrix for docker releases through github actions

Will output a condensed version of the matrix. Will include the following:
    * CUDA version short
    * CUDA full version
    * CUDNN version short
    * Image type either runtime or devel
    * Platform linux/arm64,linux/amd64
"""
import json
from typing import Dict, List

import generate_binary_build_matrix

DOCKER_IMAGE_TYPES = ["runtime", "devel"]


def generate_docker_matrix() -> Dict[str, List[Dict[str, str]]]:
    ret: List[Dict[str, str]] = []
    for cuda, version in generate_binary_build_matrix.CUDA_ARCHES_FULL_VERSION.items():
        for image in DOCKER_IMAGE_TYPES:
            platform = (
                "linux/arm64,linux/amd64" if image == "runtime" else "linux/amd64"
            )
            ret.append(
                {
                    "cuda": cuda,
                    "cuda_full_version": version,
                    "cudnn_version": generate_binary_build_matrix.CUDA_ARCHES_CUDNN_VERSION[
                        cuda
                    ],
                    "image_type": image,
                    "platform": platform,
                }
            )
    return {"include": ret}


if __name__ == "__main__":
    build_matrix = generate_docker_matrix()
    print(json.dumps(build_matrix))


@ -8,7 +8,7 @@
# NOTE: If testing pytorch/builder changes you can change this variable to change what pytorch/builder reference
# the binary builds will check out
{%- set builder_repo = "pytorch/builder" -%}
{%- set builder_branch = "main" -%}
{%- set builder_branch = "release/2.1" -%}
{%- macro concurrency(build_environment) -%}
concurrency:
@ -37,6 +37,7 @@ concurrency:
!{{ display_ec2_information() }}
- name: "[FB EMPLOYEES] Enable SSH (Click me for login details)"
uses: seemethere/add-github-ssh-key@v1
continue-on-error: true
with:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
# Needed for binary builds, see: https://github.com/pytorch/pytorch/issues/73339#issuecomment-1058981560


@ -55,12 +55,12 @@ jobs:
uses: ./.github/workflows/_binary-build-linux.yml
with:!{{ upload.binary_env_as_input(config) }}
{%- if "aarch64" in build_environment %}
runs_on: linux.t4g.2xlarge
runs_on: linux.arm64.2xlarge
ALPINE_IMAGE: "arm64v8/alpine"
{%- endif %}
build_name: !{{ config["build_name"] }}
build_environment: !{{ build_environment }}
{%- if config.pytorch_extra_install_requirements is defined %}
{%- if config.pytorch_extra_install_requirements is defined and config.pytorch_extra_install_requirements|d('')|length > 0 %}
PYTORCH_EXTRA_INSTALL_REQUIREMENTS: !{{ config.pytorch_extra_install_requirements }}
{%- endif %}
secrets:
@ -74,7 +74,7 @@ jobs:
build_name: !{{ config["build_name"] }}
build_environment: !{{ build_environment }}
{%- if "aarch64" in build_environment %}
runs_on: linux.t4g.2xlarge
runs_on: linux.arm64.2xlarge
ALPINE_IMAGE: "arm64v8/alpine"
{%- elif config["gpu_arch_type"] == "rocm" %}
runs_on: linux.rocm.gpu
@ -97,13 +97,13 @@ jobs:
with:
name: !{{ config["build_name"] }}
path: "${{ runner.temp }}/artifacts/"
!{{ common.checkout(deep_clone=False, directory="pytorch") }}
!{{ common.checkout(deep_clone=False, directory="builder", repository=common.builder_repo, branch=common.builder_branch) }}
!{{ common.checkout(deep_clone=False, directory="pytorch", checkout_pr_head=False) }}
!{{ common.checkout(deep_clone=False, directory="builder", repository=common.builder_repo, branch=common.builder_branch, checkout_pr_head=False) }}
- name: ROCm set GPU_FLAG
run: |
echo "GPU_FLAG=--device=/dev/mem --device=/dev/kfd --device=/dev/dri --group-add video --group-add daemon" >> "${GITHUB_ENV}"
- name: Pull Docker image
uses: pytorch/test-infra/.github/actions/pull-docker-image@main
uses: pytorch/test-infra/.github/actions/pull-docker-image@release/2.1
with:
docker-image: !{{ config["container_image"] }}
- name: Test Pytorch binary


@ -61,6 +61,9 @@ jobs:
runs-on: macos-12-xl
timeout-minutes: !{{ common.timeout_minutes }}
!{{ upload.binary_env(config, true) }}
{%- if config.pytorch_extra_install_requirements is defined and config.pytorch_extra_install_requirements|d('')|length > 0 %}
PYTORCH_EXTRA_INSTALL_REQUIREMENTS: !{{ config.pytorch_extra_install_requirements }}
{%- endif %}
# For sccache access (only on non-forked PRs)
AWS_ACCESS_KEY_ID: ${{ secrets.MACOS_SCCACHE_S3_ACCESS_KEY_ID }}
AWS_SECRET_ACCESS_KEY: ${{ secrets.MACOS_SCCACHE_S3_SECRET_ACCESS_KEY }}
@ -74,8 +77,8 @@ jobs:
/bin/bash "${RUNNER_TEMP}/conda.sh" -b -p "${RUNNER_TEMP}/anaconda"
echo "${RUNNER_TEMP}/anaconda/bin" >> "${GITHUB_PATH}"
echo "DEVELOPER_DIR=/Applications/Xcode_13.3.1.app/Contents/Developer" >> "${GITHUB_ENV}"
!{{ common.checkout(deep_clone=False, directory="pytorch") }}
!{{ common.checkout(deep_clone=False, directory="builder", repository=common.builder_repo, branch=common.builder_branch) }}
!{{ common.checkout(deep_clone=False, directory="pytorch", checkout_pr_head=False) }}
!{{ common.checkout(deep_clone=False, directory="builder", repository=common.builder_repo, branch=common.builder_branch, checkout_pr_head=False) }}
- name: Install sccache (only for non-forked PRs, and pushes to trunk)
uses: nick-fields/retry@v2.8.2
if: ${{ github.event_name == 'push' || github.event.pull_request.head.repo.full_name == github.repository }}


@ -67,6 +67,6 @@
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
{%- endmacro %}


@ -59,11 +59,14 @@ jobs:
runs-on: windows.4xlarge.nonephemeral
timeout-minutes: !{{ common.timeout_minutes }}
!{{ upload.binary_env(config, True) }}
{%- if config.pytorch_extra_install_requirements is defined and config.pytorch_extra_install_requirements|d('')|length > 0 %}
PYTORCH_EXTRA_INSTALL_REQUIREMENTS: !{{ config.pytorch_extra_install_requirements }}
{%- endif %}
steps:
!{{ common.setup_ec2_windows() }}
!{{ set_runner_specific_vars() }}
!{{ common.checkout(deep_clone=False, directory="pytorch") }}
!{{ common.checkout(deep_clone=False, directory="builder", repository=common.builder_repo, branch=common.builder_branch) }}
!{{ common.checkout(deep_clone=False, directory="pytorch", checkout_pr_head=False) }}
!{{ common.checkout(deep_clone=False, directory="builder", repository=common.builder_repo, branch=common.builder_branch, checkout_pr_head=False) }}
- name: Populate binary env
shell: bash
run: |
@ -102,8 +105,8 @@ jobs:
with:
name: !{{ config["build_name"] }}
path: "${{ env.PYTORCH_FINAL_PACKAGE_DIR }}"
!{{ common.checkout(deep_clone=False, directory="pytorch") }}
!{{ common.checkout(deep_clone=False, directory="builder", repository=common.builder_repo, branch=common.builder_branch) }}
!{{ common.checkout(deep_clone=False, directory="pytorch", checkout_pr_head=False) }}
!{{ common.checkout(deep_clone=False, directory="builder", repository=common.builder_repo, branch=common.builder_branch, checkout_pr_head=False) }}
- name: Populate binary env
shell: bash
run: |


@ -36,7 +36,7 @@ jobs:
keep-going: ${{ steps.filter.outputs.keep-going }}
steps:
- name: Checkout PyTorch
uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.1
with:
fetch-depth: 1
submodules: false
@ -58,25 +58,25 @@ jobs:
runs-on: ${{ matrix.runner }}
steps:
- name: Setup SSH (Click me for login details)
uses: pytorch/test-infra/.github/actions/setup-ssh@main
uses: pytorch/test-infra/.github/actions/setup-ssh@release/2.1
with:
github-secret: ${{ secrets.GITHUB_TOKEN }}
# [see note: pytorch repo ref]
- name: Checkout PyTorch
uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.1
- name: Setup Linux
uses: ./.github/actions/setup-linux
- name: Calculate docker image
id: calculate-docker-image
uses: pytorch/test-infra/.github/actions/calculate-docker-image@main
uses: pytorch/test-infra/.github/actions/calculate-docker-image@release/2.1
with:
docker-image-name: ${{ inputs.docker-image-name }}
- name: Pull docker image
uses: pytorch/test-infra/.github/actions/pull-docker-image@main
uses: pytorch/test-infra/.github/actions/pull-docker-image@release/2.1
with:
docker-image: ${{ steps.calculate-docker-image.outputs.docker-image }}
@ -140,5 +140,5 @@ jobs:
if: always()
- name: Teardown Linux
uses: pytorch/test-infra/.github/actions/teardown-linux@main
uses: pytorch/test-infra/.github/actions/teardown-linux@release/2.1
if: always()


@ -36,7 +36,7 @@ jobs:
keep-going: ${{ steps.filter.outputs.keep-going }}
steps:
- name: Checkout PyTorch
uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.1
with:
fetch-depth: 1
submodules: false
@ -58,25 +58,25 @@ jobs:
runs-on: ${{ matrix.runner }}
steps:
- name: Setup SSH (Click me for login details)
uses: pytorch/test-infra/.github/actions/setup-ssh@main
uses: pytorch/test-infra/.github/actions/setup-ssh@release/2.1
with:
github-secret: ${{ secrets.GITHUB_TOKEN }}
# [see note: pytorch repo ref]
- name: Checkout PyTorch
uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.1
- name: Setup Linux
uses: ./.github/actions/setup-linux
- name: Calculate docker image
id: calculate-docker-image
uses: pytorch/test-infra/.github/actions/calculate-docker-image@main
uses: pytorch/test-infra/.github/actions/calculate-docker-image@release/2.1
with:
docker-image-name: ${{ inputs.docker-image-name }}
- name: Pull docker image
uses: pytorch/test-infra/.github/actions/pull-docker-image@main
uses: pytorch/test-infra/.github/actions/pull-docker-image@release/2.1
with:
docker-image: ${{ steps.calculate-docker-image.outputs.docker-image }}
@ -185,5 +185,5 @@ jobs:
if: always()
- name: Teardown Linux
uses: pytorch/test-infra/.github/actions/teardown-linux@main
uses: pytorch/test-infra/.github/actions/teardown-linux@release/2.1
if: always()


@ -41,7 +41,7 @@ jobs:
reenabled-issues: ${{ steps.filter.outputs.reenabled-issues }}
steps:
- name: Checkout PyTorch
uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.1
with:
fetch-depth: 1
submodules: false
@ -63,30 +63,30 @@ jobs:
runs-on: ${{ matrix.runner }}
steps:
- name: Setup SSH (Click me for login details)
uses: pytorch/test-infra/.github/actions/setup-ssh@main
uses: pytorch/test-infra/.github/actions/setup-ssh@release/2.1
with:
github-secret: ${{ secrets.GITHUB_TOKEN }}
# [see note: pytorch repo ref]
- name: Checkout PyTorch
uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.1
- name: Setup Linux
uses: ./.github/actions/setup-linux
- name: Calculate docker image
id: calculate-docker-image
uses: pytorch/test-infra/.github/actions/calculate-docker-image@main
uses: pytorch/test-infra/.github/actions/calculate-docker-image@release/2.1
with:
docker-image-name: ${{ inputs.docker-image-name }}
- name: Pull docker image
uses: pytorch/test-infra/.github/actions/pull-docker-image@main
uses: pytorch/test-infra/.github/actions/pull-docker-image@release/2.1
with:
docker-image: ${{ steps.calculate-docker-image.outputs.docker-image }}
- name: Install nvidia driver, nvidia-docker runtime, set GPU_FLAG
uses: pytorch/test-infra/.github/actions/setup-nvidia@main
uses: pytorch/test-infra/.github/actions/setup-nvidia@release/2.1
if: ${{ inputs.cuda-version != 'cpu' }}
- name: Output disk space left
@ -197,5 +197,5 @@ jobs:
file-suffix: bazel-${{ github.job }}_${{ steps.get-job-id.outputs.job-id }}
- name: Teardown Linux
uses: pytorch/test-infra/.github/actions/teardown-linux@main
uses: pytorch/test-infra/.github/actions/teardown-linux@release/2.1
if: always()


@ -139,12 +139,13 @@ jobs:
run: env
- name: "[FB EMPLOYEES] Enable SSH (Click me for login details)"
uses: pytorch/test-infra/.github/actions/setup-ssh@main
uses: pytorch/test-infra/.github/actions/setup-ssh@release/2.1
continue-on-error: true
with:
github-secret: ${{ secrets.github-token }}
- name: Checkout PyTorch
uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.1
with:
no-sudo: ${{ inputs.build_environment == 'linux-aarch64-binary-manywheel' }}
@ -159,10 +160,12 @@ jobs:
- name: Clean workspace
shell: bash
run: |
set -eux
rm -rf "${GITHUB_WORKSPACE}"
mkdir "${GITHUB_WORKSPACE}"
if [[ inputs.build_environment == 'linux-aarch64-binary-manywheel' ]]; then
if [[ ${{ inputs.build_environment }} == 'linux-aarch64-binary-manywheel' ]]; then
rm -rf "${RUNNER_TEMP}/artifacts"
mkdir "${RUNNER_TEMP}/artifacts"
fi
@ -170,7 +173,6 @@ jobs:
- name: Checkout PyTorch to pytorch dir
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -184,7 +186,7 @@ jobs:
- name: Checkout pytorch/builder to builder dir
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -210,7 +212,7 @@ jobs:
- name: Pull Docker image
if: ${{ steps.filter.outputs.is-test-matrix-empty == 'False' }}
uses: pytorch/test-infra/.github/actions/pull-docker-image@main
uses: pytorch/test-infra/.github/actions/pull-docker-image@release/2.1
with:
docker-image: ${{ inputs.DOCKER_IMAGE }}
@ -267,7 +269,7 @@ jobs:
- name: Teardown Linux
if: always()
uses: pytorch/test-infra/.github/actions/teardown-linux@main
uses: pytorch/test-infra/.github/actions/teardown-linux@release/2.1
- name: Chown workspace
if: always()


@ -127,13 +127,14 @@ jobs:
} >> "${GITHUB_ENV} }}"
- name: "[FB EMPLOYEES] Enable SSH (Click me for login details)"
uses: pytorch/test-infra/.github/actions/setup-ssh@main
uses: pytorch/test-infra/.github/actions/setup-ssh@release/2.1
continue-on-error: true
with:
github-secret: ${{ secrets.github-token }}
# Setup the environment
- name: Checkout PyTorch
uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.1
with:
no-sudo: ${{ inputs.build_environment == 'linux-aarch64-binary-manywheel' }}
@ -154,7 +155,6 @@ jobs:
- name: Checkout PyTorch to pytorch dir
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
@ -167,7 +167,7 @@ jobs:
- name: Checkout pytorch/builder to builder dir
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -198,12 +198,12 @@ jobs:
path: "${{ runner.temp }}/artifacts/"
- name: Install nvidia driver, nvidia-docker runtime, set GPU_FLAG
uses: pytorch/test-infra/.github/actions/setup-nvidia@main
uses: pytorch/test-infra/.github/actions/setup-nvidia@release/2.1
if: ${{ inputs.GPU_ARCH_TYPE == 'cuda' && steps.filter.outputs.is-test-matrix-empty == 'False' }}
- name: Pull Docker image
if: ${{ steps.filter.outputs.is-test-matrix-empty == 'False' }}
uses: pytorch/test-infra/.github/actions/pull-docker-image@main
uses: pytorch/test-infra/.github/actions/pull-docker-image@release/2.1
with:
docker-image: ${{ inputs.DOCKER_IMAGE }}
@ -213,7 +213,7 @@ jobs:
- name: Teardown Linux
if: always()
uses: pytorch/test-infra/.github/actions/teardown-linux@main
uses: pytorch/test-infra/.github/actions/teardown-linux@release/2.1
- name: Chown workspace
if: always()

View File

@ -97,7 +97,7 @@ jobs:
SHA1: ${{ github.event.pull_request.head.sha || github.sha }}
steps:
- name: Checkout PyTorch
uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.1
with:
no-sudo: true
@ -121,7 +121,7 @@ jobs:
shell: bash -e -l {0}
run: |
# reference ends with an RC suffix
if [[ ${GITHUB_REF_NAME} = *-rc[0-9]* ]]; then
if [[ "${GITHUB_REF_NAME}" = *-rc[0-9]* ]]; then
echo "UPLOAD_CHANNEL=test" >> "$GITHUB_ENV"
fi
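The glob above matches release-candidate tags of the form v1.11.0-rc1. A quick illustrative check with a hypothetical tag value:
GITHUB_REF_NAME="v2.1.2-rc1"   # hypothetical release-candidate tag
if [[ "${GITHUB_REF_NAME}" = *-rc[0-9]* ]]; then
  echo "UPLOAD_CHANNEL=test"   # the workflow appends this line to "$GITHUB_ENV"
fi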

View File

@ -22,7 +22,7 @@ jobs:
keep-going: ${{ steps.filter.outputs.keep-going }}
steps:
- name: Checkout PyTorch
uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.1
with:
fetch-depth: 1
submodules: false
@ -43,7 +43,7 @@ jobs:
runs-on: ${{ matrix.runner }}
steps:
- name: Checkout PyTorch
uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.1
- name: Set up JDK 8
uses: actions/setup-java@v3
@ -52,7 +52,7 @@ jobs:
distribution: 'temurin'
- name: Setup miniconda
uses: pytorch/test-infra/.github/actions/setup-miniconda@main
uses: pytorch/test-infra/.github/actions/setup-miniconda@release/2.1
with:
python-version: 3.8
environment-file: .github/requirements/conda-env-${{ runner.os }}-${{ runner.arch }}

View File

@ -66,7 +66,7 @@ jobs:
name: build-docs-${{ matrix.docs_type }}-${{ inputs.push }}
steps:
- name: Setup SSH (Click me for login details)
uses: pytorch/test-infra/.github/actions/setup-ssh@main
uses: pytorch/test-infra/.github/actions/setup-ssh@release/2.1
with:
github-secret: ${{ secrets.GITHUB_TOKEN }}
instructions: |
@ -77,19 +77,19 @@ jobs:
# [see note: pytorch repo ref]
- name: Checkout PyTorch
uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.1
- name: Setup Linux
uses: ./.github/actions/setup-linux
- name: Calculate docker image
id: calculate-docker-image
uses: pytorch/test-infra/.github/actions/calculate-docker-image@main
uses: pytorch/test-infra/.github/actions/calculate-docker-image@release/2.1
with:
docker-image-name: ${{ inputs.docker-image }}
- name: Pull docker image
uses: pytorch/test-infra/.github/actions/pull-docker-image@main
uses: pytorch/test-infra/.github/actions/pull-docker-image@release/2.1
with:
docker-image: ${{ steps.calculate-docker-image.outputs.docker-image }}
@ -187,5 +187,5 @@ jobs:
s3-prefix: pytorch/pytorch/${{ github.event.pull_request.number }}/functorchdocs
- name: Teardown Linux
uses: pytorch/test-infra/.github/actions/teardown-linux@main
uses: pytorch/test-infra/.github/actions/teardown-linux@release/2.1
if: always()

View File

@ -7,14 +7,6 @@ on:
required: true
type: string
description: Top-level label for what's being built/tested.
ios-platform:
required: true
type: string
description: Which iOS platform to build for.
ios-arch:
required: true
type: string
description: Which iOS arch to build for.
sync-tag:
required: false
type: string
@ -31,8 +23,6 @@ on:
env:
GIT_DEFAULT_BRANCH: ${{ github.event.repository.default_branch }}
BUILD_ENVIRONMENT: ${{ inputs.build-environment }}
IOS_PLATFORM: ${{ inputs.ios-platform }}
IOS_ARCH: ${{ inputs.ios-arch }}
jobs:
filter:
@ -43,7 +33,7 @@ jobs:
keep-going: ${{ steps.filter.outputs.keep-going }}
steps:
- name: Checkout PyTorch
uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.1
with:
fetch-depth: 1
submodules: false
@ -63,33 +53,30 @@ jobs:
matrix: ${{ fromJSON(needs.filter.outputs.test-matrix) }}
fail-fast: false
runs-on: ${{ matrix.runner }}
env:
IOS_PLATFORM: ${{ matrix.ios_platform }}
IOS_ARCH: ${{ matrix.ios_arch }}
BUILD_LITE_INTERPRETER: ${{ matrix.use_lite_interpreter }}
USE_PYTORCH_METAL: ${{ matrix.use_metal }}
USE_COREML_DELEGATE: ${{ matrix.use_coreml }}
CUSTOM_OP_LIST: ${{ matrix.use_custom_op_list }}
# TODO: Bump it to 2.2.0 after cherry-picking this, or figure out a better way
# to get this version instead of hard-coding it here
PYTORCH_VERSION: 2.1.0
timeout-minutes: 240
steps:
# [see note: pytorch repo ref]
- name: Checkout PyTorch
uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.1
- name: Populate CI build options
shell: bash
run: |
# Most builds use the lite interpreter, if certain builds shouldn't
# build the lite interpreter this env variable should get over-written
# in the following case statement
echo "BUILD_LITE_INTERPRETER=1" >> "${GITHUB_ENV}"
set -ex
case ${BUILD_ENVIRONMENT} in
*metal*)
echo "USE_PYTORCH_METAL=1" >> "${GITHUB_ENV}"
;;
*full_jit*)
echo "BUILD_LITE_INTERPRETER=0" >> "${GITHUB_ENV}"
;;
*custom*)
echo "SELECTED_OP_LIST=${GITHUB_WORKSPACE}/ios/TestApp/custom_build/mobilenetv2.yaml" >> "${GITHUB_ENV}"
;;
*coreml*)
echo "USE_COREML_DELEGATE=1" >> "${GITHUB_ENV}"
;;
esac
if [ -n "${CUSTOM_OP_LIST:-}" ]; then
echo "SELECTED_OP_LIST=${GITHUB_WORKSPACE}/ios/TestApp/custom_build/${CUSTOM_OP_LIST}" >> "${GITHUB_ENV}"
fi
- name: Install brew dependencies
uses: nick-fields/retry@v2.8.2
@ -102,7 +89,7 @@ jobs:
brew install libtool
- name: Setup miniconda for iOS
uses: pytorch/test-infra/.github/actions/setup-miniconda@main
uses: pytorch/test-infra/.github/actions/setup-miniconda@release/2.1
with:
python-version: "3.9"
environment-file: .github/requirements/conda-env-iOS
@ -116,54 +103,67 @@ jobs:
retry_wait_seconds: 90
command: |
set -x
cd ios/TestApp
# install fastlane
pushd ios/TestApp
# Install fastlane
sudo gem install bundler && bundle install
bundle update fastlane
popd
- name: Build PyTorch Mobile Runtime
- name: Build PyTorch mobile runtime
shell: bash
run: |
set -eux
# shellcheck disable=SC1091
export TCLLIBPATH="/usr/local/lib"
python -VV
${CONDA_RUN} scripts/build_ios.sh
- name: Build TestApp
if: inputs.ios-platform == 'SIMULATOR'
if: matrix.ios_platform == 'SIMULATOR'
timeout-minutes: 15
run: |
# run the ruby build script
# Run the ruby build script
if ! [ -x "$(command -v xcodebuild)" ]; then
echo 'Error: xcodebuild is not installed.'
exit 1
fi
ruby scripts/xcode_build.rb -i build_ios/install -x ios/TestApp/TestApp.xcodeproj -p "${IOS_PLATFORM}"
- name: Run Simulator Tests
if: inputs.ios-platform == 'SIMULATOR'
- name: Run simulator tests
if: matrix.ios_platform == 'SIMULATOR'
shell: bash
run: |
set -eux
# shellcheck disable=SC1091
# use the pytorch nightly build to generate models
${CONDA_RUN} pip3 install --pre torch torchvision torchaudio -f https://download.pytorch.org/whl/nightly/cpu/torch_nightly.html
# generate models for different backends
cd "${GITHUB_WORKSPACE}/ios/TestApp/benchmark"
# Use the pytorch nightly build to generate models
${CONDA_RUN} pip install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cpu
# Generate models for different backends
pushd "${GITHUB_WORKSPACE}/ios/TestApp/benchmark"
mkdir -p ../models
# NB: Both of the following scripts only export models with lite interpreter
if [ "${USE_COREML_DELEGATE}" == 1 ]; then
${CONDA_RUN} python coreml_backend.py
else
cd "${GITHUB_WORKSPACE}"
pushd "${GITHUB_WORKSPACE}"
${CONDA_RUN} python test/mobile/model_test/gen_test_model.py ios-test
popd
fi
cd "${GITHUB_WORKSPACE}/ios/TestApp/benchmark"
if [ "${BUILD_LITE_INTERPRETER}" == 1 ]; then
echo "Setting up the TestApp for LiteInterpreter"
ruby setup.rb --lite 1
else
# Generate some models for JIT without lite interpreter
${CONDA_RUN} python trace_model.py
echo "Setting up the TestApp for Full JIT"
ruby setup.rb
fi
cd "${GITHUB_WORKSPACE}/ios/TestApp"
# instruments -s -devices
popd
pushd "${GITHUB_WORKSPACE}/ios/TestApp"
# Instruments -s -devices
if [ "${BUILD_LITE_INTERPRETER}" == 1 ]; then
if [ "${USE_COREML_DELEGATE}" == 1 ]; then
bundle exec fastlane scan --only_testing TestAppTests/TestAppTests/testCoreML
@ -173,9 +173,282 @@ jobs:
else
bundle exec fastlane scan --only_testing TestAppTests/TestAppTests/testFullJIT
fi
popd
- name: Dump Simulator Tests On a Failure
if: failure() && inputs.ios-platform == 'SIMULATOR'
- name: Dump simulator tests on failure
if: failure() && matrix.ios_platform == 'SIMULATOR'
run: |
echo "Simulator Tests Logs:"
cat /Users/runner/Library/Logs/scan/*.log
- name: Prepare the build artifacts for upload
shell: bash
run: |
set -eux
# The structure of the folder is as follows:
#
# RUNNER_TEMP/
# └── IOS_ARCH/
# ├── LICENSE
# ├── install
# │ ├── include
# │ │ └── headers
# │ └── lib
# │ ├── libXNNPACK.a
# │ ├── libc10.a
# │ ├── libclog.a
# │ ├── libcpuinfo.a
# │ ├── libeigen_blas.a
# │ ├── libpthreadpool.a
# │ ├── libpytorch_qnnpack.a
# │ ├── libtorch.a
# │ └── libtorch_cpu.a
# ├── src
# │ └── LibTorch-Lite.h
# └── version.txt
SETUP_DIR="${RUNNER_TEMP}/${IOS_ARCH}"
mkdir -p "${SETUP_DIR}/src"
cp -R "${GITHUB_WORKSPACE}/build_ios/install" "${SETUP_DIR}"
# Copy the umbrella header and license
if [ "${BUILD_LITE_INTERPRETER}" == 1 ]; then
cp "${GITHUB_WORKSPACE}/ios/LibTorch-Lite.h" "${SETUP_DIR}/src"
else
cp "${GITHUB_WORKSPACE}/ios/LibTorch.h" "${SETUP_DIR}/src"
fi
# Copy license and version
cp "${GITHUB_WORKSPACE}/LICENSE" "${SETUP_DIR}"
echo "${PYTORCH_VERSION}" > "${SETUP_DIR}"/version.txt
# Save the podspec for the upload job later
if [ "${BUILD_LITE_INTERPRETER}" == "1" ]; then
DATE=$(date -u +%Y%m%d)
cp "${GITHUB_WORKSPACE}"/ios/LibTorch-Lite-Nightly.podspec.template "${SETUP_DIR}"/LibTorch-Lite-Nightly.podspec
sed -i '' -e "s/IOS_NIGHTLY_BUILD_VERSION/${PYTORCH_VERSION}.${DATE}/g" "${SETUP_DIR}"/LibTorch-Lite-Nightly.podspec
cp "${GITHUB_WORKSPACE}"/ios/LibTorch-Lite.podspec.template "${SETUP_DIR}"/LibTorch-Lite.podspec
sed -i '' -e "s/IOS_BUILD_VERSION/${PYTORCH_VERSION}/g" "${SETUP_DIR}"/LibTorch-Lite.podspec
else
# NB: There is no nightly build without lite interpreter atm
cp "${GITHUB_WORKSPACE}"/ios/LibTorch.podspec.template "${SETUP_DIR}"/LibTorch.podspec
sed -i '' -e "s/IOS_BUILD_VERSION/${PYTORCH_VERSION}/g" "${SETUP_DIR}"/LibTorch.podspec
fi
pushd "${SETUP_DIR}"
# NB: It's important to zip all the files before uploading because GHA uploads
# files sequentially, which is slow and generates too many requests. More info is at
# https://github.com/actions/upload-artifact#too-many-uploads-resulting-in-429-responses
zip -r "${IOS_ARCH}.zip" install src version.txt LICENSE ./*.podspec
popd
- uses: actions/upload-artifact@v3
with:
name: pytorch-ios-build-artifacts-${{ matrix.ios_arch }}
if-no-files-found: error
path: ${{ runner.temp }}/${{ matrix.ios_arch }}/${{ matrix.ios_arch }}.zip
upload-ios-artifacts:
# NB: this job runs on a GitHub macOS ephemeral runner so that it can use lipo
# to create the fat iOS binaries for both x86_64 and arm64
runs-on: macos-12
needs: build
# NB: Only upload release builds; if we need it, we could also turn on nightly here
environment: ${{ (github.event_name == 'push' && (github.event.ref == 'refs/heads/nightly' || startsWith(github.event.ref, 'refs/tags/v'))) && 'ios-upload' || '' }}
steps:
- uses: actions/checkout@v3
# For awscli S3 upload
- uses: actions/setup-python@v4
with:
python-version: '3.10'
cache: pip
# For cocoapods pod upload
- uses: ruby/setup-ruby@v1
with:
ruby-version: '3.2'
bundler-cache: true
- name: Download arm64 artifacts
uses: actions/download-artifact@v3
with:
name: pytorch-ios-build-artifacts-arm64
- name: Download x86_64 artifacts
uses: actions/download-artifact@v3
with:
name: pytorch-ios-build-artifacts-x86_64
- name: Unzip arm64 and x86_64 artifacts
shell: bash
run: |
set -eux
for ARCH in "arm64" "x86_64"; do
TMP_DIR="${RUNNER_TEMP}/${ARCH}"
mkdir -p "${TMP_DIR}"
cp "${ARCH}.zip" "${TMP_DIR}"
pushd "${TMP_DIR}"
unzip -o "${ARCH}.zip"
popd
done
- name: Prepare the artifact
env:
IS_NIGHTLY: ${{ github.event.ref == 'refs/heads/nightly' }}
shell: bash
working-directory: ${{ runner.temp }}/arm64
run: |
set -eux
DEST_DIR="${RUNNER_TEMP}"/ios
echo "DEST_DIR=${DEST_DIR}" >> "$GITHUB_ENV"
# Prepare all the sub directories
mkdir -p "${DEST_DIR}"/install/lib
# Copy header and share files, arm64 or x86_64 both work
cp -R install/include "${DEST_DIR}"/install
cp -R install/share "${DEST_DIR}"/install
# The last dash is important to copy only files under src
cp -R src "${DEST_DIR}"
cp LICENSE "${DEST_DIR}"
if [ "${IS_NIGHTLY}" == true ]; then
PYTORCH_VERSION=$(cat version.txt)
DATE=$(date -u +%Y%m%d)
echo "${PYTORCH_VERSION}.${DATE}" > "${DEST_DIR}"/version.txt
else
cp version.txt "${DEST_DIR}"
fi
PYTORCH_VERSION=$(cat "${DEST_DIR}"/version.txt)
echo "PYTORCH_VERSION=${PYTORCH_VERSION}" >> "$GITHUB_ENV"
pushd install/lib
# shellcheck disable=SC2207
LIBRARIES=($(ls ./*.a))
popd
for LIB in "${LIBRARIES[@]}"; do
FROM_LIBS=("${RUNNER_TEMP}"/arm64/install/lib/"${LIB}" "${RUNNER_TEMP}"/x86_64/install/lib/"${LIB}")
# Create a fat binary for both arm64 and x86_64
lipo -create "${FROM_LIBS[@]}" -o "${DEST_DIR}"/install/lib/"${LIB}"
# Print the info
lipo -i "${DEST_DIR}"/install/lib/"${LIB}"
done
BUILD_LITE_INTERPRETER=1
if [ -f "${RUNNER_TEMP}"/arm64/LibTorch.podspec ]; then
# If LibTorch.podspec is used instead of LibTorch-Lite.podspec, the artifact is built
# without lite interpreter
BUILD_LITE_INTERPRETER=0
fi
echo "BUILD_LITE_INTERPRETER=${BUILD_LITE_INTERPRETER}" >> "$GITHUB_ENV"
- name: Prepare the podspec
env:
IS_NIGHTLY: ${{ github.event.ref == 'refs/heads/nightly' }}
shell: bash
working-directory: ${{ env.DEST_DIR }}
run: |
set -eux
ARTIFACT_NAME=libtorch
SPEC_NAME=LibTorch
if [ "${BUILD_LITE_INTERPRETER}" == "1" ]; then
ARTIFACT_NAME="${ARTIFACT_NAME}_lite_ios"
SPEC_NAME="${SPEC_NAME}-Lite"
else
ARTIFACT_NAME="${ARTIFACT_NAME}_ios"
fi
if [ "${IS_NIGHTLY}" == true ]; then
ARTIFACT_NAME="${ARTIFACT_NAME}_nightly_${PYTORCH_VERSION}.zip"
SPEC_NAME="${SPEC_NAME}-Nightly"
else
ARTIFACT_NAME="${ARTIFACT_NAME}_${PYTORCH_VERSION}.zip"
fi
SPEC_NAME_WITH_VERSION="${SPEC_NAME}-${PYTORCH_VERSION}.podspec"
SPEC_NAME="${SPEC_NAME}.podspec"
# Also copy the spec file
cp "${RUNNER_TEMP}"/arm64/"${SPEC_NAME}" "${SPEC_NAME_WITH_VERSION}"
# NB: It's important to zip all the files before uploading because GHA uploads
# files sequentially, which is slow and generates too many requests. More info is at
# https://github.com/actions/upload-artifact#too-many-uploads-resulting-in-429-responses
zip -r "${ARTIFACT_NAME}" install src version.txt LICENSE
{
echo "ARTIFACT_NAME=${ARTIFACT_NAME}"
echo "SPEC_NAME_WITH_VERSION=${SPEC_NAME_WITH_VERSION}"
echo "SPEC_NAME=${SPEC_NAME}"
} >> "$GITHUB_ENV"
- uses: actions/upload-artifact@v3
with:
name: pytorch-ios-artifacts
if-no-files-found: error
path: ${{ env.DEST_DIR }}/${{ env.ARTIFACT_NAME }}
- uses: actions/upload-artifact@v3
with:
name: pytorch-ios-podspec
if-no-files-found: error
path: ${{ env.DEST_DIR }}/${{ env.SPEC_NAME_WITH_VERSION }}
- name: Set DRY_RUN
if: ${{ github.event_name == 'push' && (github.event.ref == 'refs/heads/nightly' || (startsWith(github.event.ref, 'refs/tags/v'))) }}
shell: bash
run: |
echo "DRY_RUN=disabled" >> "$GITHUB_ENV"
- name: Upload the artifact to S3
env:
AWS_ACCESS_KEY_ID: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
IS_NIGHTLY: ${{ github.event.ref == 'refs/heads/nightly' }}
shell: bash
working-directory: ${{ env.DEST_DIR }}
run: |
set -eux
pip install -q awscli==1.29.40
DRY_RUN=${DRY_RUN:-enabled}
AWS_S3_CP="aws s3 cp --dryrun"
if [ "${DRY_RUN}" == "disabled" ]; then
AWS_S3_CP="aws s3 cp"
fi
if [ "${IS_NIGHTLY}" == true ]; then
BUCKET_NAME="ossci-ios-build"
else
BUCKET_NAME="ossci-ios"
fi
${AWS_S3_CP} "${ARTIFACT_NAME}" "s3://${BUCKET_NAME}/" --acl public-read
${AWS_S3_CP} "${SPEC_NAME_WITH_VERSION}" "s3://${BUCKET_NAME}/" --acl public-read
- name: Upload the artifact to cocoapods (nightly only)
env:
# We need to set this secret to upload to cocoapods. However, we might want
# to NOT set this for PROD release so that we can upload the artifacts manually
COCOAPODS_TRUNK_TOKEN: ${{ secrets.COCOAPODS_TRUNK_TOKEN || '' }}
if: ${{ github.event_name == 'push' && github.event.ref == 'refs/heads/nightly' && env.COCOAPODS_TRUNK_TOKEN != '' }}
shell: bash
working-directory: ${{ runner.temp }}/arm64
run: |
set -eux
gem install cocoapods
pod trunk me
# Upload the spec to cocoapods
pod trunk push --verbose --allow-warnings --use-libraries --skip-import-validation "${SPEC_NAME}"
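Two details of the upload job above are worth spelling out: the lipo loop turns the per-arch static libraries into fat binaries, and the artifact and podspec names are assembled from the interpreter mode, the nightly flag, and the version. A minimal sketch under illustrative values (library and version chosen as examples):
# Combine the arm64 and x86_64 slices of one library into a fat binary,
# then confirm both architectures are present.
lipo -create \
  "${RUNNER_TEMP}/arm64/install/lib/libtorch_cpu.a" \
  "${RUNNER_TEMP}/x86_64/install/lib/libtorch_cpu.a" \
  -output "${DEST_DIR}/install/lib/libtorch_cpu.a"
lipo -info "${DEST_DIR}/install/lib/libtorch_cpu.a"
# Naming, traced for a lite-interpreter release build of 2.1.2:
#   ARTIFACT_NAME=libtorch_lite_ios_2.1.2.zip
#   SPEC_NAME_WITH_VERSION=LibTorch-Lite-2.1.2.podspec
#   SPEC_NAME=LibTorch-Lite.podspec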

View File

@ -73,7 +73,7 @@ jobs:
test-matrix: ${{ steps.filter.outputs.test-matrix }}
steps:
- name: Setup SSH (Click me for login details)
uses: pytorch/test-infra/.github/actions/setup-ssh@main
uses: pytorch/test-infra/.github/actions/setup-ssh@release/2.1
with:
github-secret: ${{ secrets.GITHUB_TOKEN }}
@ -82,19 +82,19 @@ jobs:
# checkout because when we run this action we don't *have* a local
# checkout. In other cases you should prefer a local checkout.
- name: Checkout PyTorch
uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.1
- name: Setup Linux
uses: ./.github/actions/setup-linux
- name: Calculate docker image
id: calculate-docker-image
uses: pytorch/test-infra/.github/actions/calculate-docker-image@main
uses: pytorch/test-infra/.github/actions/calculate-docker-image@release/2.1
with:
docker-image-name: ${{ inputs.docker-image-name }}
- name: Pull docker image
uses: pytorch/test-infra/.github/actions/pull-docker-image@main
uses: pytorch/test-infra/.github/actions/pull-docker-image@release/2.1
with:
docker-image: ${{ steps.calculate-docker-image.outputs.docker-image }}
@ -192,5 +192,5 @@ jobs:
path: sccache-stats-*.json
- name: Teardown Linux
uses: pytorch/test-infra/.github/actions/teardown-linux@main
uses: pytorch/test-infra/.github/actions/teardown-linux@release/2.1
if: always()

View File

@ -57,7 +57,7 @@ jobs:
timeout-minutes: ${{ inputs.timeout-minutes }}
steps:
- name: Setup SSH (Click me for login details)
uses: pytorch/test-infra/.github/actions/setup-ssh@main
uses: pytorch/test-infra/.github/actions/setup-ssh@release/2.1
if: ${{ !contains(matrix.runner, 'gcp.a100') }}
with:
github-secret: ${{ secrets.GITHUB_TOKEN }}
@ -66,25 +66,25 @@ jobs:
docker exec -it $(docker container ps --format '{{.ID}}') bash
- name: Checkout PyTorch
uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.1
- name: Setup Linux
uses: ./.github/actions/setup-linux
- name: Calculate docker image
id: calculate-docker-image
uses: pytorch/test-infra/.github/actions/calculate-docker-image@main
uses: pytorch/test-infra/.github/actions/calculate-docker-image@release/2.1
with:
docker-image-name: ${{ inputs.docker-image }}
- name: Pull docker image
uses: pytorch/test-infra/.github/actions/pull-docker-image@main
uses: pytorch/test-infra/.github/actions/pull-docker-image@release/2.1
with:
docker-image: ${{ steps.calculate-docker-image.outputs.docker-image }}
- name: Install nvidia driver, nvidia-docker runtime, set GPU_FLAG
id: install-nvidia-driver
uses: pytorch/test-infra/.github/actions/setup-nvidia@main
uses: pytorch/test-infra/.github/actions/setup-nvidia@release/2.1
if: contains(inputs.build-environment, 'cuda') && !contains(matrix.config, 'nogpu')
- name: Lock NVIDIA A100 40GB Frequency
@ -292,7 +292,7 @@ jobs:
path: ./**/core.[1-9]*
- name: Teardown Linux
uses: pytorch/test-infra/.github/actions/teardown-linux@main
uses: pytorch/test-infra/.github/actions/teardown-linux@release/2.1
if: always()
# NB: We are currently having an intermittent GPU-related issue on G5 runners with

View File

@ -71,11 +71,11 @@ jobs:
test-matrix: ${{ steps.filter.outputs.test-matrix }}
steps:
- name: Clean up disk space before running MacOS workflow
uses: pytorch/test-infra/.github/actions/check-disk-space@main
uses: pytorch/test-infra/.github/actions/check-disk-space@release/2.1
# [see note: pytorch repo ref]
- name: Checkout PyTorch
uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.1
- name: Set xcode version
env:
@ -87,7 +87,7 @@ jobs:
- name: Setup miniconda
if: inputs.environment-file == ''
uses: pytorch/test-infra/.github/actions/setup-miniconda@main
uses: pytorch/test-infra/.github/actions/setup-miniconda@release/2.1
with:
python-version: ${{ inputs.python-version }}
environment-file: .github/requirements/conda-env-${{ runner.os }}-${{ runner.arch }}
@ -97,7 +97,7 @@ jobs:
# environment even though the arch is x86-64
- name: Setup miniconda using the provided environment file
if: inputs.environment-file != ''
uses: pytorch/test-infra/.github/actions/setup-miniconda@main
uses: pytorch/test-infra/.github/actions/setup-miniconda@release/2.1
with:
python-version: ${{ inputs.python-version }}
environment-file: ${{ inputs.environment-file }}
@ -206,4 +206,4 @@ jobs:
- name: Clean up disk space
if: always()
continue-on-error: true
uses: pytorch/test-infra/.github/actions/check-disk-space@main
uses: pytorch/test-infra/.github/actions/check-disk-space@release/2.1

View File

@ -41,7 +41,7 @@ jobs:
reenabled-issues: ${{ steps.filter.outputs.reenabled-issues }}
steps:
- name: Checkout PyTorch
uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.1
with:
fetch-depth: 1
submodules: false
@ -85,7 +85,7 @@ jobs:
use-gha: true
- name: Setup miniconda
uses: pytorch/test-infra/.github/actions/setup-miniconda@main
uses: pytorch/test-infra/.github/actions/setup-miniconda@release/2.1
with:
python-version: ${{ inputs.python-version }}
environment-file: .github/requirements/conda-env-${{ runner.os }}-${{ runner.arch }}

View File

@ -69,11 +69,11 @@ jobs:
done
- name: Clean up disk space before running MacOS workflow
uses: pytorch/test-infra/.github/actions/check-disk-space@main
uses: pytorch/test-infra/.github/actions/check-disk-space@release/2.1
# [see note: pytorch repo ref]
- name: Checkout PyTorch
uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.1
- name: Download build artifacts
uses: ./.github/actions/download-build-artifacts
@ -82,7 +82,7 @@ jobs:
use-gha: true
- name: Setup miniconda
uses: pytorch/test-infra/.github/actions/setup-miniconda@main
uses: pytorch/test-infra/.github/actions/setup-miniconda@release/2.1
with:
python-version: ${{ inputs.python-version }}
environment-file: .github/requirements/conda-env-${{ runner.os }}-${{ runner.arch }}
@ -205,4 +205,4 @@ jobs:
- name: Clean up disk space
if: always()
continue-on-error: true
uses: pytorch/test-infra/.github/actions/check-disk-space@main
uses: pytorch/test-infra/.github/actions/check-disk-space@release/2.1

View File

@ -48,7 +48,7 @@ jobs:
steps:
# [see note: pytorch repo ref]
- name: Checkout PyTorch
uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.1
with:
no-sudo: true
@ -57,12 +57,12 @@ jobs:
- name: Calculate docker image
id: calculate-docker-image
uses: pytorch/test-infra/.github/actions/calculate-docker-image@main
uses: pytorch/test-infra/.github/actions/calculate-docker-image@release/2.1
with:
docker-image-name: ${{ inputs.docker-image }}
- name: Pull docker image
uses: pytorch/test-infra/.github/actions/pull-docker-image@main
uses: pytorch/test-infra/.github/actions/pull-docker-image@release/2.1
with:
docker-image: ${{ steps.calculate-docker-image.outputs.docker-image }}

View File

@ -22,7 +22,7 @@ jobs:
keep-going: ${{ steps.filter.outputs.keep-going }}
steps:
- name: Checkout PyTorch
uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.1
with:
fetch-depth: 1
submodules: false
@ -41,17 +41,25 @@ jobs:
strategy:
matrix: ${{ fromJSON(needs.filter.outputs.test-matrix) }}
fail-fast: false
# NB: This job can only run on a GitHub Linux runner atm. This is OK though
# because that runner is ephemeral and can access upload secrets
runs-on: ${{ matrix.runner }}
env:
# GitHub runner installs Android SDK on this path
ANDROID_ROOT: /usr/local/lib/android
ANDROID_NDK_VERSION: '21.4.7075529'
BUILD_LITE_INTERPRETER: ${{ matrix.use_lite_interpreter }}
# 4 of them are supported atm: armeabi-v7a, arm64-v8a, x86, x86_64
SUPPORT_ABI: '${{ matrix.support_abi }}'
steps:
# [see note: pytorch repo ref]
- name: Checkout PyTorch
uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.1
- name: Setup miniconda
uses: pytorch/test-infra/.github/actions/setup-miniconda@main
uses: pytorch/test-infra/.github/actions/setup-miniconda@release/2.1
with:
python-version: 3.8
environment-file: .github/requirements/conda-env-${{ runner.os }}-${{ runner.arch }}
environment-file: .github/requirements/conda-env-${{ runner.os }}-${{ runner.arch }}.txt
- name: Install NDK
uses: nick-fields/retry@v2.8.2
@ -60,12 +68,12 @@ jobs:
max_attempts: 3
retry_wait_seconds: 90
command: |
set -eux
# Install NDK 21 after GitHub update
# https://github.com/actions/virtual-environments/issues/5595
ANDROID_ROOT="/usr/local/lib/android"
ANDROID_SDK_ROOT="${ANDROID_ROOT}/sdk"
ANDROID_NDK="${ANDROID_SDK_ROOT}/ndk-bundle"
ANDROID_NDK_VERSION="21.4.7075529"
SDKMANAGER="${ANDROID_SDK_ROOT}/cmdline-tools/latest/bin/sdkmanager"
# NB: This step downloads and installs NDK, thus it could be flaky.
@ -86,8 +94,10 @@ jobs:
- name: Build PyTorch Android
run: |
set -eux
echo "CMAKE_PREFIX_PATH=${CONDA_PREFIX:-"$(dirname "$(which conda)")/../"}" >> "${GITHUB_ENV}"
${CONDA_RUN} ./scripts/build_pytorch_android.sh x86
${CONDA_RUN} ./scripts/build_pytorch_android.sh "${SUPPORT_ABI}"
- name: Run tests
uses: reactivecircus/android-emulator-runner@v2
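The NDK step above pins ANDROID_NDK_VERSION and points at the cmdline-tools sdkmanager; a typical invocation with those variables looks like this (a sketch only, the exact flags used by the workflow are not shown here):
# Accept the license prompt and install the pinned NDK; sdkmanager places it
# under ${ANDROID_SDK_ROOT}/ndk/${ANDROID_NDK_VERSION}
echo "y" | ${SDKMANAGER} --install "ndk;${ANDROID_NDK_VERSION}"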

View File

@ -60,10 +60,10 @@ jobs:
git config --global core.fsmonitor false
- name: Clean up leftover processes on non-ephemeral Windows runner
uses: pytorch/test-infra/.github/actions/cleanup-runner@main
uses: pytorch/test-infra/.github/actions/cleanup-runner@release/2.1
- name: Setup SSH (Click me for login details)
uses: pytorch/test-infra/.github/actions/setup-ssh@main
uses: pytorch/test-infra/.github/actions/setup-ssh@release/2.1
with:
github-secret: ${{ secrets.GITHUB_TOKEN }}
instructions: |
@ -78,7 +78,7 @@ jobs:
# [see note: pytorch repo ref]
- name: Checkout PyTorch
uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.1
with:
no-sudo: true

View File

@ -48,10 +48,10 @@ jobs:
git config --global core.fsmonitor false
- name: Clean up leftover processes on non-ephemeral Windows runner
uses: pytorch/test-infra/.github/actions/cleanup-runner@main
uses: pytorch/test-infra/.github/actions/cleanup-runner@release/2.1
- name: Setup SSH (Click me for login details)
uses: pytorch/test-infra/.github/actions/setup-ssh@main
uses: pytorch/test-infra/.github/actions/setup-ssh@release/2.1
with:
github-secret: ${{ secrets.GITHUB_TOKEN }}
instructions: |
@ -67,7 +67,7 @@ jobs:
# [see note: pytorch repo ref]
- name: Checkout PyTorch
uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.1
with:
no-sudo: true

View File

@ -0,0 +1,48 @@
name: Build Android binaries
on:
push:
branches:
- nightly
tags:
# NOTE: Binary build pipelines should only get triggered on release candidate builds
# Release candidate tags look like: v1.11.0-rc1
- v[0-9]+.[0-9]+.[0-9]+-rc[0-9]+
paths:
- .github/workflows/build-android-binaries.yml
- .github/workflows/_run_android_tests.yml
- android/**
pull_request:
paths:
- .github/workflows/build-android-binaries.yml
- .github/workflows/_run_android_tests.yml
- android/**
# NB: We can use this workflow dispatch to test and build the binaries manually
workflow_dispatch:
concurrency:
group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.sha }}-${{ github.event_name == 'workflow_dispatch' }}
cancel-in-progress: true
jobs:
android-build-test:
name: android-build-test
uses: ./.github/workflows/_run_android_tests.yml
with:
test-matrix: |
{ include: [
{ config: 'default',
shard: 1,
num_shards: 1,
runner: 'ubuntu-20.04-16x',
use_lite_interpreter: 1,
support_abi: 'armeabi-v7a,arm64-v8a,x86,x86_64',
},
{ config: 'default',
shard: 1,
num_shards: 1,
runner: 'ubuntu-20.04-16x',
use_lite_interpreter: 0,
support_abi: 'armeabi-v7a,arm64-v8a,x86,x86_64',
},
]}

View File

@ -0,0 +1,70 @@
name: Build iOS binaries
on:
push:
branches:
- nightly
tags:
# NOTE: Binary build pipelines should only get triggered on release candidate builds
# Release candidate tags look like: v1.11.0-rc1
- v[0-9]+.[0-9]+.[0-9]+-rc[0-9]+
paths:
- .github/workflows/build-ios-binaries.yml
- .github/workflows/_ios-build-test.yml
pull_request:
paths:
- .github/workflows/build-ios-binaries.yml
- .github/workflows/_ios-build-test.yml
# NB: We can use this workflow dispatch to test and build iOS binaries manually
workflow_dispatch:
inputs:
use_lite_interpreter:
description: "Use PyTorch lite interpreter?"
type: string
default: 1
use_coreml:
description: "Use Apple Core ML?"
type: string
default: 1
use_custom_op_list:
description: "Specify the custom ops list to include in the binaries"
type: string
default: ""
concurrency:
group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.sha }}-${{ github.event_name == 'workflow_dispatch' }}
cancel-in-progress: true
jobs:
# TODO: Figure out how to migrate this job to an M1 runner
ios-build-test:
name: ios-build-test
uses: ./.github/workflows/_ios-build-test.yml
with:
build-environment: ios-build-test
sync-tag: ios-build-test
test-matrix: |
{ include: [
{ config: "default",
shard: 1,
num_shards: 1,
runner: "macos-12",
ios_platform: "SIMULATOR",
ios_arch: "x86_64",
use_lite_interpreter: ${{ inputs.use_lite_interpreter || 1 }},
use_metal: 0,
use_coreml: ${{ inputs.use_coreml || 1 }},
use_custom_op_list: ${{ inputs.use_custom_op_list || '' }}
},
{ config: "default",
shard: 1,
num_shards: 1,
runner: "macos-12",
ios_platform: "OS",
ios_arch: "arm64",
use_lite_interpreter: ${{ inputs.use_lite_interpreter || 1 }},
use_metal: 1,
use_coreml: ${{ inputs.use_coreml || 1 }},
use_custom_op_list: ${{ inputs.use_custom_op_list || '' }}
}
]}

View File

@ -3,7 +3,11 @@ name: Build Triton wheels
on:
push:
branches:
- main
- release/2.1
tags:
# NOTE: Binary build pipelines should only get triggered on release candidate builds
# Release candidate tags look like: v1.11.0-rc1
- v[0-9]+.[0-9]+.[0-9]+-rc[0-9]+
paths:
- .github/workflows/build-triton-wheel.yml
- .github/scripts/build_triton_wheel.py
@ -43,12 +47,12 @@ jobs:
BUILD_DEVICE: ${{ matrix.device }}
steps:
- name: Setup SSH (Click me for login details)
uses: pytorch/test-infra/.github/actions/setup-ssh@main
uses: pytorch/test-infra/.github/actions/setup-ssh@release/2.1
with:
github-secret: ${{ secrets.GITHUB_TOKEN }}
- name: Checkout PyTorch
uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.1
with:
submodules: false
@ -56,11 +60,13 @@ jobs:
uses: ./.github/actions/setup-linux
- name: Pull Docker image
uses: pytorch/test-infra/.github/actions/pull-docker-image@main
uses: pytorch/test-infra/.github/actions/pull-docker-image@release/2.1
with:
docker-image: ${{ env.DOCKER_IMAGE }}
- name: Build Triton wheel
env:
IS_RELEASE_TAG: ${{ startsWith(github.event.ref, 'refs/tags/v') }}
run: |
set -x
mkdir -p "${RUNNER_TEMP}/artifacts/"
@ -98,64 +104,75 @@ jobs:
BUILD_ROCM="--build-rocm"
fi
RELEASE=""
if [[ "${IS_RELEASE_TAG}" == true ]]; then
RELEASE="--release"
fi
docker exec -t "${container_name}" yum install -y zlib-devel zip
docker exec -t "${container_name}" "${PYTHON_EXECUTABLE}" -m pip install -U setuptools==67.4.0
docker exec -t "${container_name}" "${PYTHON_EXECUTABLE}" /pytorch/.github/scripts/build_triton_wheel.py $BUILD_ROCM
docker exec -t "${container_name}" "${PYTHON_EXECUTABLE}" /pytorch/.github/scripts/build_triton_wheel.py $BUILD_ROCM $RELEASE
docker exec -t "${container_name}" chown -R 1000.1000 /artifacts
- uses: actions/upload-artifact@v3
with:
name: "pytorch-triton-wheel-${{ matrix.py_vers }}"
# NB: Use the same name here so that all wheels can be downloaded by referring to the same artifact
name: pytorch-triton-wheel
if-no-files-found: error
path:
${{ runner.temp }}/artifacts/*
path: ${{ runner.temp }}/artifacts/*
- name: Teardown Linux
uses: pytorch/test-infra/.github/actions/teardown-linux@main
uses: pytorch/test-infra/.github/actions/teardown-linux@release/2.1
if: always()
upload-wheel:
runs-on: linux.20_04.4x
runs-on: ubuntu-22.04
needs: build-wheel
container:
image: continuumio/miniconda3:4.12.0
env:
GITHUB_TOKEN: ${{ secrets.github-token }}
environment: ${{ (github.event_name == 'push' && (github.event.ref == 'refs/heads/nightly' || startsWith(github.event.ref, 'refs/tags/v'))) && 'conda-aws-upload' || '' }}
steps:
- name: Download Build Artifacts (3.8)
- uses: actions/checkout@v3
- name: Download Build Artifacts
uses: actions/download-artifact@v3
with:
name: "pytorch-triton-wheel-3.8"
path: "${{ runner.temp }}/artifacts/"
- name: Download Build Artifacts (3.9)
uses: actions/download-artifact@v3
with:
name: "pytorch-triton-wheel-3.9"
path: "${{ runner.temp }}/artifacts/"
- name: Download Build Artifacts (3.10)
uses: actions/download-artifact@v3
with:
name: "pytorch-triton-wheel-3.10"
path: "${{ runner.temp }}/artifacts/"
- name: Download Build Artifacts (3.11)
uses: actions/download-artifact@v3
with:
name: "pytorch-triton-wheel-3.11"
path: "${{ runner.temp }}/artifacts/"
- name: Upload binaries
if: ${{ github.event_name == 'push' && github.event.ref == 'refs/heads/main' }}
env:
PKG_DIR: "${{ runner.temp }}/artifacts"
# When running these on pull_request events these should be blank
AWS_ACCESS_KEY_ID: ${{ secrets.AWS_S3_UPDATE_ACCESS_KEY_ID }}
AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_S3_UPDATE_SECRET_ACCESS_KEY }}
UPLOAD_BUCKET: "s3://pytorch"
name: pytorch-triton-wheel
path: ${{ runner.temp }}/artifacts/
- name: Set DRY_RUN (only for tagged pushes)
if: ${{ github.event_name == 'push' && (github.event.ref == 'refs/heads/nightly' || (startsWith(github.event.ref, 'refs/tags/v'))) }}
shell: bash
run: |
set -ex
pip install -q awscli
s3_dir="${UPLOAD_BUCKET}/whl/nightly/"
for pkg in "${PKG_DIR}/"*.whl; do
aws s3 cp --no-progress --acl public-read "${pkg}" "${s3_dir}"
done
echo "DRY_RUN=disabled" >> "$GITHUB_ENV"
- name: Set UPLOAD_CHANNEL (only for tagged pushes)
if: ${{ github.event_name == 'push' && startsWith(github.event.ref, 'refs/tags/v') }}
shell: bash
run: |
set -ex
# reference ends with an RC suffix
if [[ "${GITHUB_REF_NAME}" = *-rc[0-9]* ]]; then
echo "UPLOAD_CHANNEL=test" >> "$GITHUB_ENV"
fi
# NB: This step is gated by DRY_RUN, which is enabled everywhere except on the nightly branch and release tags
- name: Upload binaries
env:
PACKAGE_TYPE: wheel
# The UPLOAD_SUBFOLDER needs to be empty here so that triton wheels are uploaded
# to nightly or test
UPLOAD_SUBFOLDER: ""
PKG_DIR: ${{ runner.temp }}/artifacts
# When running these on pull_request events these should be blank
AWS_ACCESS_KEY_ID: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
shell: bash
run: |
set -ex
bash .circleci/scripts/binary_upload.sh
build-conda:
name: "Build Triton Conda"
runs-on: [self-hosted, linux.2xlarge]
@ -164,19 +181,17 @@ jobs:
matrix:
py_vers: [ "3.8", "3.9", "3.10", "3.11" ]
timeout-minutes: 40
environment: ${{ (github.event_name == 'push' && github.event.ref == 'refs/heads/main') && 'conda-aws-upload' || '' }}
env:
DOCKER_IMAGE: pytorch/conda-builder:cpu
PY_VERS: ${{ matrix.py_vers }}
ANACONDA_API_TOKEN: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
steps:
- name: Setup SSH (Click me for login details)
uses: pytorch/test-infra/.github/actions/setup-ssh@main
uses: pytorch/test-infra/.github/actions/setup-ssh@release/2.1
with:
github-secret: ${{ secrets.GITHUB_TOKEN }}
- name: Checkout PyTorch
uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.1
with:
submodules: false
@ -184,11 +199,13 @@ jobs:
uses: ./.github/actions/setup-linux
- name: Pull Docker image
uses: pytorch/test-infra/.github/actions/pull-docker-image@main
uses: pytorch/test-infra/.github/actions/pull-docker-image@release/2.1
with:
docker-image: ${{ env.DOCKER_IMAGE }}
- name: Build Triton conda package
env:
IS_RELEASE_TAG: ${{ startsWith(github.event.ref, 'refs/tags/v') }}
run: |
set -x
mkdir -p "${RUNNER_TEMP}/artifacts/"
@ -198,31 +215,76 @@ jobs:
-v "${GITHUB_WORKSPACE}:/pytorch" \
-v "${RUNNER_TEMP}/artifacts:/artifacts" \
-w /artifacts/ \
-e ANACONDA_API_TOKEN \
"${DOCKER_IMAGE}" \
)
RELEASE=""
if [[ "${IS_RELEASE_TAG}" == true ]]; then
RELEASE="--release"
fi
docker exec -t "${container_name}" yum install -y llvm11 llvm11-devel llvm11-static llvm11-libs zlib-devel
docker exec -t "${container_name}" python /pytorch/.github/scripts/build_triton_wheel.py --build-conda --py-version="${PY_VERS}"
- name: Upload artifacts to Anaconda
if: ${{ github.event_name == 'push' && github.event.ref == 'refs/heads/main' }}
run: |
container_name=$(docker container ps --format '{{.ID}}')
docker exec -t "${container_name}" sh -c "anaconda upload /artifacts/torch*.tar.bz2 -u pytorch-nightly --label main --no-progress --force"
- name: Chown artifacts
run: |
container_name=$(docker container ps --format '{{.ID}}')
docker exec -t "${container_name}" python /pytorch/.github/scripts/build_triton_wheel.py --build-conda --py-version="${PY_VERS}" $RELEASE
docker exec -t "${container_name}" chown -R 1000.1000 /artifacts
- uses: actions/upload-artifact@v3
with:
name: "pytorch-triton-conda-${{ matrix.py_vers }}"
# NB: Use the same name here so that all packages can be downloaded by referring to the same artifact
name: pytorch-triton-conda
if-no-files-found: error
path:
${{ runner.temp }}/artifacts/*
path: ${{ runner.temp }}/artifacts/*
- name: Teardown Linux
uses: pytorch/test-infra/.github/actions/teardown-linux@main
uses: pytorch/test-infra/.github/actions/teardown-linux@release/2.1
if: always()
upload-conda:
runs-on: ubuntu-22.04
needs: build-conda
container:
image: continuumio/miniconda3:4.12.0
environment: ${{ (github.event_name == 'push' && (github.event.ref == 'refs/heads/nightly' || startsWith(github.event.ref, 'refs/tags/v'))) && 'conda-aws-upload' || '' }}
steps:
- uses: actions/checkout@v3
- name: Download Build Artifacts
uses: actions/download-artifact@v3
with:
name: pytorch-triton-conda
path: ${{ runner.temp }}/artifacts/
- name: Set DRY_RUN (only for tagged pushes)
if: ${{ github.event_name == 'push' && (github.event.ref == 'refs/heads/nightly' || (startsWith(github.event.ref, 'refs/tags/v'))) }}
shell: bash
run: |
echo "DRY_RUN=disabled" >> "$GITHUB_ENV"
- name: Set UPLOAD_CHANNEL (only for tagged pushes)
if: ${{ github.event_name == 'push' && startsWith(github.event.ref, 'refs/tags/v') }}
shell: bash
run: |
set -ex
# reference ends with an RC suffix
if [[ "${GITHUB_REF_NAME}" = *-rc[0-9]* ]]; then
echo "UPLOAD_CHANNEL=test" >> "$GITHUB_ENV"
fi
# NB: This step is gated by DRY_RUN, which is enabled everywhere except on the nightly branch and release tags
- name: Upload binaries to Anaconda
env:
PACKAGE_TYPE: conda
PKG_DIR: ${{ runner.temp }}/artifacts
# When running these on pull_request events these should be blank
CONDA_PYTORCHBOT_TOKEN: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
CONDA_PYTORCHBOT_TOKEN_TEST: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
shell: bash
run: |
set -ex
if [[ "${UPLOAD_CHANNEL}" = "nightly" ]]; then
export ANACONDA_API_TOKEN="${CONDA_PYTORCHBOT_TOKEN}"
else
export ANACONDA_API_TOKEN="${CONDA_PYTORCHBOT_TOKEN_TEST}"
fi
bash .circleci/scripts/binary_upload.sh

View File

@ -29,7 +29,7 @@ jobs:
runs-on: linux.20_04.4x
steps:
- name: Checkout PyTorch
uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.1
with:
submodules: false
fetch-depth: 1

View File

@ -10,7 +10,7 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Checkout PyTorch
uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.1
- name: Run close_nonexistent_disable_issues.py
env:

View File

@ -2,10 +2,9 @@ name: Create Release
on:
push:
tags: ['v*']
branches:
- main
- nightly
- release/*
release:
types: [published]
pull_request:
@ -31,6 +30,8 @@ jobs:
run: |
tag_or_branch="${PT_GITHUB_REF#refs/tags/}"
tag_or_branch="${tag_or_branch#refs/heads/}"
# replace directory separators with _ in branch name
tag_or_branch="${tag_or_branch//\//_}"
echo "PT_RELEASE_NAME=pytorch-$tag_or_branch" >> "$GITHUB_ENV"
echo "PT_RELEASE_FILE=pytorch-$tag_or_branch.tar.gz" >> "$GITHUB_ENV"
- name: Create source distribution
@ -52,5 +53,5 @@ jobs:
files: ${{env.PT_RELEASE_FILE}}
concurrency:
group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.sha }}-${{ github.event_name == 'workflow_dispatch' }}
group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.sha }}-${{ github.event_name }}
cancel-in-progress: true

View File

@ -61,21 +61,21 @@ jobs:
# [see note: pytorch repo ref]
# deep clone (fetch-depth 0) required for git merge-base
- name: Checkout PyTorch
uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.1
- name: Setup Linux
uses: ./.github/actions/setup-linux
- name: Build docker image
id: build-docker-image
uses: pytorch/test-infra/.github/actions/calculate-docker-image@main
uses: pytorch/test-infra/.github/actions/calculate-docker-image@release/2.1
with:
docker-image-name: ${{ matrix.docker-image-name }}
always-rebuild: true
push: true
- name: Pull docker image
uses: pytorch/test-infra/.github/actions/pull-docker-image@main
uses: pytorch/test-infra/.github/actions/pull-docker-image@release/2.1
with:
docker-image: ${{ steps.build-docker-image.outputs.docker-image }}
@ -105,5 +105,5 @@ jobs:
if: always()
- name: Teardown Linux
uses: pytorch/test-infra/.github/actions/teardown-linux@main
uses: pytorch/test-infra/.github/actions/teardown-linux@release/2.1
if: always()

View File

@ -2,6 +2,15 @@ name: Build Official Docker Images
on:
workflow_dispatch:
inputs:
channel:
description: "Channel to use (nightly, test, release)"
required: false
type: choice
default: test
options:
- release
- test
pull_request:
paths:
- Dockerfile
@ -15,6 +24,7 @@ on:
- v[0-9]+.[0-9]+.[0-9]+-rc[0-9]+
- ciflow/nightly/*
concurrency:
group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.sha }}-${{ github.event_name == 'workflow_dispatch' }}
cancel-in-progress: true
@ -26,28 +36,46 @@ env:
DOCKER_REGISTRY: ghcr.io
NO_BUILD_SUFFIX: true
USE_BUILDX: 1
WITH_PUSH: ${{ github.event_name == 'push' && (github.event.ref == 'refs/heads/nightly' || (startsWith(github.event.ref, 'refs/tags/') && !startsWith(github.event.ref, 'refs/tags/ciflow/'))) }}
WITH_PUSH: ${{ inputs.channel == 'release' }}
jobs:
generate-matrix:
if: github.repository_owner == 'pytorch'
runs-on: [self-hosted, linux.large]
outputs:
matrix: ${{ steps.generate-matrix.outputs.matrix }}
steps:
- name: Checkout PyTorch
uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
with:
fetch-depth: 1
submodules: true
- name: Get docker release matrix
id: generate-matrix
run: |
MATRIX_BLOB="$(python3 .github/scripts/generate_docker_release_matrix.py)"
echo "${MATRIX_BLOB}"
echo "matrix=${MATRIX_BLOB}" >> "${GITHUB_OUTPUT}"
build:
if: ${{ github.repository == 'pytorch/pytorch' }}
runs-on: [self-hosted, linux.2xlarge]
environment: ${{ (github.ref == 'refs/heads/main' || startsWith(github.event.ref, 'refs/tags/v')) && 'docker-build' || '' }}
timeout-minutes: 240
needs: generate-matrix
strategy:
matrix:
include:
# nvidia specific images don't exist for arm64 so only build the runtime image
- image_type: runtime
platform: linux/arm64,linux/amd64
- image_type: devel
platform: linux/amd64
matrix: ${{ fromJson(needs.generate-matrix.outputs.matrix) }}
fail-fast: false
env:
BUILD_IMAGE_TYPE: ${{ matrix.image_type }}
BUILD_PLATFORMS: ${{ matrix.platform }}
CHANNEL: ${{ inputs.channel }}
CUDA_VERSION: ${{ matrix.cuda_full_version }}
CUDA_VERSION_SHORT: ${{ matrix.cuda }}
CUDNN_VERSION: ${{ matrix.cudnn_version }}
steps:
- name: Setup SSH (Click me for login details)
uses: pytorch/test-infra/.github/actions/setup-ssh@main
uses: pytorch/test-infra/.github/actions/setup-ssh@release/2.1
with:
github-secret: ${{ secrets.GITHUB_TOKEN }}
# [see note: pytorch repo ref]
@ -60,7 +88,7 @@ jobs:
- name: Setup Linux
uses: ./.github/actions/setup-linux
- name: Login to GitHub Container Registry
if: ${{ env.WITH_PUSH == 'true' }}
if: ${{ inputs.channel == 'release' }}
uses: docker/login-action@v2
with:
registry: ghcr.io
@ -80,14 +108,16 @@ jobs:
set -eou pipefail
# To get QEMU binaries in our PATH
echo "${RUNNER_TEMP}/bin" >> "${GITHUB_PATH}"
# Generate PyTorch version to use
echo "PYTORCH_VERSION=$(python3 .github/scripts/generate_pytorch_version.py)" >> "${GITHUB_ENV}"
- name: Setup nightly specific variables
if: ${{ github.event.ref == 'refs/heads/nightly' || startsWith(github.event.ref, 'refs/tags/ciflow/nightly/') }}
# Generate PyTorch version to use without suffix
echo "PYTORCH_VERSION=2.1.2" >> "${GITHUB_ENV}"
- name: Setup release specific variables
run: |
{
echo "DOCKER_IMAGE=pytorch-nightly";
echo "INSTALL_CHANNEL=pytorch-nightly";
echo "INSTALL_CHANNEL=pytorch-test";
if [[ ${CHANNEL} == "release" ]]; then
echo "INSTALL_CHANNEL=pytorch";
fi
echo "TRITON_VERSION=$(cut -f 1 .ci/docker/triton_version.txt)+$(cut -c -10 .ci/docker/ci_commit_pins/triton.txt)";
} >> "${GITHUB_ENV}"
- name: Run docker build / push
@ -97,10 +127,11 @@ jobs:
- name: Push nightly tags
if: ${{ github.event.ref == 'refs/heads/nightly' && matrix.image_type == 'runtime' }}
run: |
PYTORCH_DOCKER_TAG="${PYTORCH_VERSION}-runtime"
CUDA_VERSION=$(python3 -c "import re;print(re.search('CUDA_VERSION\s+=\s+([0-9\.]+)',open('docker.Makefile').read())[1],end='')")
PYTORCH_DOCKER_TAG="${PYTORCH_VERSION}-cuda${CUDA_VERSION_SHORT}-cudnn${CUDNN_VERSION}-runtime"
PYTORCH_NIGHTLY_COMMIT=$(docker run ghcr.io/pytorch/pytorch-nightly:"${PYTORCH_DOCKER_TAG}" \
python -c 'import torch; print(torch.version.git_version[:7],end="")')
docker tag ghcr.io/pytorch/pytorch-nightly:"${PYTORCH_DOCKER_TAG}" \
ghcr.io/pytorch/pytorch-nightly:"${PYTORCH_NIGHTLY_COMMIT}-cu${CUDA_VERSION}"
docker push ghcr.io/pytorch/pytorch-nightly:"${PYTORCH_NIGHTLY_COMMIT}-cu${CUDA_VERSION}"
@ -109,5 +140,5 @@ jobs:
ghcr.io/pytorch/pytorch-nightly:latest
docker push ghcr.io/pytorch/pytorch-nightly:latest
- name: Teardown Linux
uses: pytorch/test-infra/.github/actions/teardown-linux@main
uses: pytorch/test-infra/.github/actions/teardown-linux@release/2.1
if: always()
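With the tag refactor above, the pushed tag carries the CUDA and cuDNN versions instead of baking them into the image name. Traced with illustrative values:
PYTORCH_VERSION=2.1.2
CUDA_VERSION_SHORT=12.1
CUDNN_VERSION=8
PYTORCH_DOCKER_TAG="${PYTORCH_VERSION}-cuda${CUDA_VERSION_SHORT}-cudnn${CUDNN_VERSION}-runtime"
echo "${PYTORCH_DOCKER_TAG}"   # 2.1.2-cuda12.1-cudnn8-runtime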

View File

@ -47,12 +47,13 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu-aarch64
DOCKER_IMAGE: pytorch/manylinuxaarch64-builder:cpu-aarch64
DOCKER_IMAGE: pytorch/manylinuxaarch64-builder:cpu-aarch64-2.1
DESIRED_PYTHON: "3.8"
runs_on: linux.t4g.2xlarge
runs_on: linux.arm64.2xlarge
ALPINE_IMAGE: "arm64v8/alpine"
build_name: manywheel-py3_8-cpu-aarch64
build_environment: linux-aarch64-binary-manywheel
PYTORCH_EXTRA_INSTALL_REQUIREMENTS: nvidia-cuda-nvrtc-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cuda-runtime-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cuda-cupti-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cudnn-cu12==8.9.2.26; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cublas-cu12==12.1.3.1; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cufft-cu12==11.0.2.54; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-curand-cu12==10.3.2.106; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusolver-cu12==11.4.5.107; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusparse-cu12==12.1.0.106; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nccl-cu12==2.18.1; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nvtx-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64' | triton==2.1.0; platform_system == 'Linux' and platform_machine == 'x86_64'
secrets:
github-token: ${{ secrets.GITHUB_TOKEN }}
manywheel-py3_8-cpu-aarch64-test: # Testing
@ -67,11 +68,11 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu-aarch64
DOCKER_IMAGE: pytorch/manylinuxaarch64-builder:cpu-aarch64
DOCKER_IMAGE: pytorch/manylinuxaarch64-builder:cpu-aarch64-2.1
DESIRED_PYTHON: "3.8"
build_name: manywheel-py3_8-cpu-aarch64
build_environment: linux-aarch64-binary-manywheel
runs_on: linux.t4g.2xlarge
runs_on: linux.arm64.2xlarge
ALPINE_IMAGE: "arm64v8/alpine"
secrets:
github-token: ${{ secrets.GITHUB_TOKEN }}
@ -86,14 +87,14 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu-aarch64
DOCKER_IMAGE: pytorch/manylinuxaarch64-builder:cpu-aarch64
DOCKER_IMAGE: pytorch/manylinuxaarch64-builder:cpu-aarch64-2.1
DESIRED_PYTHON: "3.8"
build_name: manywheel-py3_8-cpu-aarch64
secrets:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
manywheel-py3_9-cpu-aarch64-build:
@ -107,12 +108,13 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu-aarch64
DOCKER_IMAGE: pytorch/manylinuxaarch64-builder:cpu-aarch64
DOCKER_IMAGE: pytorch/manylinuxaarch64-builder:cpu-aarch64-2.1
DESIRED_PYTHON: "3.9"
runs_on: linux.t4g.2xlarge
runs_on: linux.arm64.2xlarge
ALPINE_IMAGE: "arm64v8/alpine"
build_name: manywheel-py3_9-cpu-aarch64
build_environment: linux-aarch64-binary-manywheel
PYTORCH_EXTRA_INSTALL_REQUIREMENTS: nvidia-cuda-nvrtc-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cuda-runtime-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cuda-cupti-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cudnn-cu12==8.9.2.26; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cublas-cu12==12.1.3.1; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cufft-cu12==11.0.2.54; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-curand-cu12==10.3.2.106; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusolver-cu12==11.4.5.107; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusparse-cu12==12.1.0.106; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nccl-cu12==2.18.1; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nvtx-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64' | triton==2.1.0; platform_system == 'Linux' and platform_machine == 'x86_64'
secrets:
github-token: ${{ secrets.GITHUB_TOKEN }}
manywheel-py3_9-cpu-aarch64-test: # Testing
@ -127,11 +129,11 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu-aarch64
DOCKER_IMAGE: pytorch/manylinuxaarch64-builder:cpu-aarch64
DOCKER_IMAGE: pytorch/manylinuxaarch64-builder:cpu-aarch64-2.1
DESIRED_PYTHON: "3.9"
build_name: manywheel-py3_9-cpu-aarch64
build_environment: linux-aarch64-binary-manywheel
runs_on: linux.t4g.2xlarge
runs_on: linux.arm64.2xlarge
ALPINE_IMAGE: "arm64v8/alpine"
secrets:
github-token: ${{ secrets.GITHUB_TOKEN }}
@ -146,14 +148,14 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu-aarch64
DOCKER_IMAGE: pytorch/manylinuxaarch64-builder:cpu-aarch64
DOCKER_IMAGE: pytorch/manylinuxaarch64-builder:cpu-aarch64-2.1
DESIRED_PYTHON: "3.9"
build_name: manywheel-py3_9-cpu-aarch64
secrets:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
manywheel-py3_10-cpu-aarch64-build:
@ -167,12 +169,13 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu-aarch64
DOCKER_IMAGE: pytorch/manylinuxaarch64-builder:cpu-aarch64
DOCKER_IMAGE: pytorch/manylinuxaarch64-builder:cpu-aarch64-2.1
DESIRED_PYTHON: "3.10"
runs_on: linux.t4g.2xlarge
runs_on: linux.arm64.2xlarge
ALPINE_IMAGE: "arm64v8/alpine"
build_name: manywheel-py3_10-cpu-aarch64
build_environment: linux-aarch64-binary-manywheel
PYTORCH_EXTRA_INSTALL_REQUIREMENTS: nvidia-cuda-nvrtc-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cuda-runtime-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cuda-cupti-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cudnn-cu12==8.9.2.26; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cublas-cu12==12.1.3.1; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cufft-cu12==11.0.2.54; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-curand-cu12==10.3.2.106; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusolver-cu12==11.4.5.107; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusparse-cu12==12.1.0.106; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nccl-cu12==2.18.1; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nvtx-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64' | triton==2.1.0; platform_system == 'Linux' and platform_machine == 'x86_64'
secrets:
github-token: ${{ secrets.GITHUB_TOKEN }}
manywheel-py3_10-cpu-aarch64-test: # Testing
@ -187,11 +190,11 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu-aarch64
DOCKER_IMAGE: pytorch/manylinuxaarch64-builder:cpu-aarch64
DOCKER_IMAGE: pytorch/manylinuxaarch64-builder:cpu-aarch64-2.1
DESIRED_PYTHON: "3.10"
build_name: manywheel-py3_10-cpu-aarch64
build_environment: linux-aarch64-binary-manywheel
runs_on: linux.t4g.2xlarge
runs_on: linux.arm64.2xlarge
ALPINE_IMAGE: "arm64v8/alpine"
secrets:
github-token: ${{ secrets.GITHUB_TOKEN }}
@ -206,14 +209,14 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu-aarch64
DOCKER_IMAGE: pytorch/manylinuxaarch64-builder:cpu-aarch64
DOCKER_IMAGE: pytorch/manylinuxaarch64-builder:cpu-aarch64-2.1
DESIRED_PYTHON: "3.10"
build_name: manywheel-py3_10-cpu-aarch64
secrets:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
manywheel-py3_11-cpu-aarch64-build:
@ -227,12 +230,13 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu-aarch64
DOCKER_IMAGE: pytorch/manylinuxaarch64-builder:cpu-aarch64
DOCKER_IMAGE: pytorch/manylinuxaarch64-builder:cpu-aarch64-2.1
DESIRED_PYTHON: "3.11"
runs_on: linux.t4g.2xlarge
runs_on: linux.arm64.2xlarge
ALPINE_IMAGE: "arm64v8/alpine"
build_name: manywheel-py3_11-cpu-aarch64
build_environment: linux-aarch64-binary-manywheel
PYTORCH_EXTRA_INSTALL_REQUIREMENTS: nvidia-cuda-nvrtc-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cuda-runtime-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cuda-cupti-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cudnn-cu12==8.9.2.26; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cublas-cu12==12.1.3.1; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cufft-cu12==11.0.2.54; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-curand-cu12==10.3.2.106; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusolver-cu12==11.4.5.107; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusparse-cu12==12.1.0.106; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nccl-cu12==2.18.1; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nvtx-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64' | triton==2.1.0; platform_system == 'Linux' and platform_machine == 'x86_64'
secrets:
github-token: ${{ secrets.GITHUB_TOKEN }}
manywheel-py3_11-cpu-aarch64-test: # Testing
@ -247,11 +251,11 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu-aarch64
DOCKER_IMAGE: pytorch/manylinuxaarch64-builder:cpu-aarch64
DOCKER_IMAGE: pytorch/manylinuxaarch64-builder:cpu-aarch64-2.1
DESIRED_PYTHON: "3.11"
build_name: manywheel-py3_11-cpu-aarch64
build_environment: linux-aarch64-binary-manywheel
runs_on: linux.t4g.2xlarge
runs_on: linux.arm64.2xlarge
ALPINE_IMAGE: "arm64v8/alpine"
secrets:
github-token: ${{ secrets.GITHUB_TOKEN }}
@ -266,12 +270,12 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu-aarch64
DOCKER_IMAGE: pytorch/manylinuxaarch64-builder:cpu-aarch64
DOCKER_IMAGE: pytorch/manylinuxaarch64-builder:cpu-aarch64-2.1
DESIRED_PYTHON: "3.11"
build_name: manywheel-py3_11-cpu-aarch64
secrets:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
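Every aarch64 wheel job in this workflow picks up the same three release-branch changes: the manylinux aarch64 builder image gains an explicit 2.1 tag, the jobs move from the linux.t4g.2xlarge runner to linux.arm64.2xlarge, and the upload jobs switch to the _TEST variant of the conda upload token. A minimal sketch of one build/upload pair showing only the fields this diff touches; the job names and the with:/secrets: nesting are condensed from the hunks above, not the full generated jobs:

  manywheel-py3_9-cpu-aarch64-build:
    with:
      DESIRED_CUDA: cpu
      GPU_ARCH_TYPE: cpu-aarch64
      DOCKER_IMAGE: pytorch/manylinuxaarch64-builder:cpu-aarch64-2.1  # release-pinned builder tag
      runs_on: linux.arm64.2xlarge                                    # was linux.t4g.2xlarge
      ALPINE_IMAGE: "arm64v8/alpine"
      build_environment: linux-aarch64-binary-manywheel
  manywheel-py3_9-cpu-aarch64-upload:
    uses: ./.github/workflows/_binary-upload.yml
    secrets:
      conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}  # test-channel token for the RC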


@ -47,7 +47,7 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu
DOCKER_IMAGE: pytorch/conda-builder:cpu
DOCKER_IMAGE: pytorch/conda-builder:cpu-2.1
DESIRED_PYTHON: "3.8"
build_name: conda-py3_8-cpu
build_environment: linux-binary-conda
@ -65,7 +65,7 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu
DOCKER_IMAGE: pytorch/conda-builder:cpu
DOCKER_IMAGE: pytorch/conda-builder:cpu-2.1
DESIRED_PYTHON: "3.8"
build_name: conda-py3_8-cpu
build_environment: linux-binary-conda
@ -83,14 +83,14 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu
DOCKER_IMAGE: pytorch/conda-builder:cpu
DOCKER_IMAGE: pytorch/conda-builder:cpu-2.1
DESIRED_PYTHON: "3.8"
build_name: conda-py3_8-cpu
secrets:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
conda-py3_8-cuda11_8-build:
@ -105,7 +105,7 @@ jobs:
DESIRED_CUDA: cu118
GPU_ARCH_VERSION: 11.8
GPU_ARCH_TYPE: cuda
DOCKER_IMAGE: pytorch/conda-builder:cuda11.8
DOCKER_IMAGE: pytorch/conda-builder:cuda11.8-2.1
DESIRED_PYTHON: "3.8"
build_name: conda-py3_8-cuda11_8
build_environment: linux-binary-conda
@ -124,7 +124,7 @@ jobs:
DESIRED_CUDA: cu118
GPU_ARCH_VERSION: 11.8
GPU_ARCH_TYPE: cuda
DOCKER_IMAGE: pytorch/conda-builder:cuda11.8
DOCKER_IMAGE: pytorch/conda-builder:cuda11.8-2.1
DESIRED_PYTHON: "3.8"
build_name: conda-py3_8-cuda11_8
build_environment: linux-binary-conda
@ -143,14 +143,14 @@ jobs:
DESIRED_CUDA: cu118
GPU_ARCH_VERSION: 11.8
GPU_ARCH_TYPE: cuda
DOCKER_IMAGE: pytorch/conda-builder:cuda11.8
DOCKER_IMAGE: pytorch/conda-builder:cuda11.8-2.1
DESIRED_PYTHON: "3.8"
build_name: conda-py3_8-cuda11_8
secrets:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
conda-py3_8-cuda12_1-build:
@ -165,7 +165,7 @@ jobs:
DESIRED_CUDA: cu121
GPU_ARCH_VERSION: 12.1
GPU_ARCH_TYPE: cuda
DOCKER_IMAGE: pytorch/conda-builder:cuda12.1
DOCKER_IMAGE: pytorch/conda-builder:cuda12.1-2.1
DESIRED_PYTHON: "3.8"
build_name: conda-py3_8-cuda12_1
build_environment: linux-binary-conda
@ -184,7 +184,7 @@ jobs:
DESIRED_CUDA: cu121
GPU_ARCH_VERSION: 12.1
GPU_ARCH_TYPE: cuda
DOCKER_IMAGE: pytorch/conda-builder:cuda12.1
DOCKER_IMAGE: pytorch/conda-builder:cuda12.1-2.1
DESIRED_PYTHON: "3.8"
build_name: conda-py3_8-cuda12_1
build_environment: linux-binary-conda
@ -203,14 +203,14 @@ jobs:
DESIRED_CUDA: cu121
GPU_ARCH_VERSION: 12.1
GPU_ARCH_TYPE: cuda
DOCKER_IMAGE: pytorch/conda-builder:cuda12.1
DOCKER_IMAGE: pytorch/conda-builder:cuda12.1-2.1
DESIRED_PYTHON: "3.8"
build_name: conda-py3_8-cuda12_1
secrets:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
conda-py3_9-cpu-build:
@ -224,7 +224,7 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu
DOCKER_IMAGE: pytorch/conda-builder:cpu
DOCKER_IMAGE: pytorch/conda-builder:cpu-2.1
DESIRED_PYTHON: "3.9"
build_name: conda-py3_9-cpu
build_environment: linux-binary-conda
@ -242,7 +242,7 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu
DOCKER_IMAGE: pytorch/conda-builder:cpu
DOCKER_IMAGE: pytorch/conda-builder:cpu-2.1
DESIRED_PYTHON: "3.9"
build_name: conda-py3_9-cpu
build_environment: linux-binary-conda
@ -260,14 +260,14 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu
DOCKER_IMAGE: pytorch/conda-builder:cpu
DOCKER_IMAGE: pytorch/conda-builder:cpu-2.1
DESIRED_PYTHON: "3.9"
build_name: conda-py3_9-cpu
secrets:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
conda-py3_9-cuda11_8-build:
@ -282,7 +282,7 @@ jobs:
DESIRED_CUDA: cu118
GPU_ARCH_VERSION: 11.8
GPU_ARCH_TYPE: cuda
DOCKER_IMAGE: pytorch/conda-builder:cuda11.8
DOCKER_IMAGE: pytorch/conda-builder:cuda11.8-2.1
DESIRED_PYTHON: "3.9"
build_name: conda-py3_9-cuda11_8
build_environment: linux-binary-conda
@ -301,7 +301,7 @@ jobs:
DESIRED_CUDA: cu118
GPU_ARCH_VERSION: 11.8
GPU_ARCH_TYPE: cuda
DOCKER_IMAGE: pytorch/conda-builder:cuda11.8
DOCKER_IMAGE: pytorch/conda-builder:cuda11.8-2.1
DESIRED_PYTHON: "3.9"
build_name: conda-py3_9-cuda11_8
build_environment: linux-binary-conda
@ -320,14 +320,14 @@ jobs:
DESIRED_CUDA: cu118
GPU_ARCH_VERSION: 11.8
GPU_ARCH_TYPE: cuda
DOCKER_IMAGE: pytorch/conda-builder:cuda11.8
DOCKER_IMAGE: pytorch/conda-builder:cuda11.8-2.1
DESIRED_PYTHON: "3.9"
build_name: conda-py3_9-cuda11_8
secrets:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
conda-py3_9-cuda12_1-build:
@ -342,7 +342,7 @@ jobs:
DESIRED_CUDA: cu121
GPU_ARCH_VERSION: 12.1
GPU_ARCH_TYPE: cuda
DOCKER_IMAGE: pytorch/conda-builder:cuda12.1
DOCKER_IMAGE: pytorch/conda-builder:cuda12.1-2.1
DESIRED_PYTHON: "3.9"
build_name: conda-py3_9-cuda12_1
build_environment: linux-binary-conda
@ -361,7 +361,7 @@ jobs:
DESIRED_CUDA: cu121
GPU_ARCH_VERSION: 12.1
GPU_ARCH_TYPE: cuda
DOCKER_IMAGE: pytorch/conda-builder:cuda12.1
DOCKER_IMAGE: pytorch/conda-builder:cuda12.1-2.1
DESIRED_PYTHON: "3.9"
build_name: conda-py3_9-cuda12_1
build_environment: linux-binary-conda
@ -380,14 +380,14 @@ jobs:
DESIRED_CUDA: cu121
GPU_ARCH_VERSION: 12.1
GPU_ARCH_TYPE: cuda
DOCKER_IMAGE: pytorch/conda-builder:cuda12.1
DOCKER_IMAGE: pytorch/conda-builder:cuda12.1-2.1
DESIRED_PYTHON: "3.9"
build_name: conda-py3_9-cuda12_1
secrets:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
conda-py3_10-cpu-build:
@ -401,7 +401,7 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu
DOCKER_IMAGE: pytorch/conda-builder:cpu
DOCKER_IMAGE: pytorch/conda-builder:cpu-2.1
DESIRED_PYTHON: "3.10"
build_name: conda-py3_10-cpu
build_environment: linux-binary-conda
@ -419,7 +419,7 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu
DOCKER_IMAGE: pytorch/conda-builder:cpu
DOCKER_IMAGE: pytorch/conda-builder:cpu-2.1
DESIRED_PYTHON: "3.10"
build_name: conda-py3_10-cpu
build_environment: linux-binary-conda
@ -437,14 +437,14 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu
DOCKER_IMAGE: pytorch/conda-builder:cpu
DOCKER_IMAGE: pytorch/conda-builder:cpu-2.1
DESIRED_PYTHON: "3.10"
build_name: conda-py3_10-cpu
secrets:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
conda-py3_10-cuda11_8-build:
@ -459,7 +459,7 @@ jobs:
DESIRED_CUDA: cu118
GPU_ARCH_VERSION: 11.8
GPU_ARCH_TYPE: cuda
DOCKER_IMAGE: pytorch/conda-builder:cuda11.8
DOCKER_IMAGE: pytorch/conda-builder:cuda11.8-2.1
DESIRED_PYTHON: "3.10"
build_name: conda-py3_10-cuda11_8
build_environment: linux-binary-conda
@ -478,7 +478,7 @@ jobs:
DESIRED_CUDA: cu118
GPU_ARCH_VERSION: 11.8
GPU_ARCH_TYPE: cuda
DOCKER_IMAGE: pytorch/conda-builder:cuda11.8
DOCKER_IMAGE: pytorch/conda-builder:cuda11.8-2.1
DESIRED_PYTHON: "3.10"
build_name: conda-py3_10-cuda11_8
build_environment: linux-binary-conda
@ -497,14 +497,14 @@ jobs:
DESIRED_CUDA: cu118
GPU_ARCH_VERSION: 11.8
GPU_ARCH_TYPE: cuda
DOCKER_IMAGE: pytorch/conda-builder:cuda11.8
DOCKER_IMAGE: pytorch/conda-builder:cuda11.8-2.1
DESIRED_PYTHON: "3.10"
build_name: conda-py3_10-cuda11_8
secrets:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
conda-py3_10-cuda12_1-build:
@ -519,7 +519,7 @@ jobs:
DESIRED_CUDA: cu121
GPU_ARCH_VERSION: 12.1
GPU_ARCH_TYPE: cuda
DOCKER_IMAGE: pytorch/conda-builder:cuda12.1
DOCKER_IMAGE: pytorch/conda-builder:cuda12.1-2.1
DESIRED_PYTHON: "3.10"
build_name: conda-py3_10-cuda12_1
build_environment: linux-binary-conda
@ -538,7 +538,7 @@ jobs:
DESIRED_CUDA: cu121
GPU_ARCH_VERSION: 12.1
GPU_ARCH_TYPE: cuda
DOCKER_IMAGE: pytorch/conda-builder:cuda12.1
DOCKER_IMAGE: pytorch/conda-builder:cuda12.1-2.1
DESIRED_PYTHON: "3.10"
build_name: conda-py3_10-cuda12_1
build_environment: linux-binary-conda
@ -557,14 +557,14 @@ jobs:
DESIRED_CUDA: cu121
GPU_ARCH_VERSION: 12.1
GPU_ARCH_TYPE: cuda
DOCKER_IMAGE: pytorch/conda-builder:cuda12.1
DOCKER_IMAGE: pytorch/conda-builder:cuda12.1-2.1
DESIRED_PYTHON: "3.10"
build_name: conda-py3_10-cuda12_1
secrets:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
conda-py3_11-cpu-build:
@ -578,7 +578,7 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu
DOCKER_IMAGE: pytorch/conda-builder:cpu
DOCKER_IMAGE: pytorch/conda-builder:cpu-2.1
DESIRED_PYTHON: "3.11"
build_name: conda-py3_11-cpu
build_environment: linux-binary-conda
@ -596,7 +596,7 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu
DOCKER_IMAGE: pytorch/conda-builder:cpu
DOCKER_IMAGE: pytorch/conda-builder:cpu-2.1
DESIRED_PYTHON: "3.11"
build_name: conda-py3_11-cpu
build_environment: linux-binary-conda
@ -614,14 +614,14 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu
DOCKER_IMAGE: pytorch/conda-builder:cpu
DOCKER_IMAGE: pytorch/conda-builder:cpu-2.1
DESIRED_PYTHON: "3.11"
build_name: conda-py3_11-cpu
secrets:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
conda-py3_11-cuda11_8-build:
@ -636,7 +636,7 @@ jobs:
DESIRED_CUDA: cu118
GPU_ARCH_VERSION: 11.8
GPU_ARCH_TYPE: cuda
DOCKER_IMAGE: pytorch/conda-builder:cuda11.8
DOCKER_IMAGE: pytorch/conda-builder:cuda11.8-2.1
DESIRED_PYTHON: "3.11"
build_name: conda-py3_11-cuda11_8
build_environment: linux-binary-conda
@ -655,7 +655,7 @@ jobs:
DESIRED_CUDA: cu118
GPU_ARCH_VERSION: 11.8
GPU_ARCH_TYPE: cuda
DOCKER_IMAGE: pytorch/conda-builder:cuda11.8
DOCKER_IMAGE: pytorch/conda-builder:cuda11.8-2.1
DESIRED_PYTHON: "3.11"
build_name: conda-py3_11-cuda11_8
build_environment: linux-binary-conda
@ -674,14 +674,14 @@ jobs:
DESIRED_CUDA: cu118
GPU_ARCH_VERSION: 11.8
GPU_ARCH_TYPE: cuda
DOCKER_IMAGE: pytorch/conda-builder:cuda11.8
DOCKER_IMAGE: pytorch/conda-builder:cuda11.8-2.1
DESIRED_PYTHON: "3.11"
build_name: conda-py3_11-cuda11_8
secrets:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
conda-py3_11-cuda12_1-build:
@ -696,7 +696,7 @@ jobs:
DESIRED_CUDA: cu121
GPU_ARCH_VERSION: 12.1
GPU_ARCH_TYPE: cuda
DOCKER_IMAGE: pytorch/conda-builder:cuda12.1
DOCKER_IMAGE: pytorch/conda-builder:cuda12.1-2.1
DESIRED_PYTHON: "3.11"
build_name: conda-py3_11-cuda12_1
build_environment: linux-binary-conda
@ -715,7 +715,7 @@ jobs:
DESIRED_CUDA: cu121
GPU_ARCH_VERSION: 12.1
GPU_ARCH_TYPE: cuda
DOCKER_IMAGE: pytorch/conda-builder:cuda12.1
DOCKER_IMAGE: pytorch/conda-builder:cuda12.1-2.1
DESIRED_PYTHON: "3.11"
build_name: conda-py3_11-cuda12_1
build_environment: linux-binary-conda
@ -734,12 +734,12 @@ jobs:
DESIRED_CUDA: cu121
GPU_ARCH_VERSION: 12.1
GPU_ARCH_TYPE: cuda
DOCKER_IMAGE: pytorch/conda-builder:cuda12.1
DOCKER_IMAGE: pytorch/conda-builder:cuda12.1-2.1
DESIRED_PYTHON: "3.11"
build_name: conda-py3_11-cuda12_1
secrets:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
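The conda workflow follows the same convention: every pytorch/conda-builder image reference gains a -2.1 suffix on its cpu, cuda11.8, or cuda12.1 tag, and the upload jobs move from CONDA_PYTORCHBOT_TOKEN to CONDA_PYTORCHBOT_TOKEN_TEST. A condensed sketch of the fields a single CUDA 12.1 job carries after this diff; nesting is flattened and only the changed or identifying keys are shown:

  # build/test jobs:
  DESIRED_CUDA: cu121
  GPU_ARCH_VERSION: 12.1
  GPU_ARCH_TYPE: cuda
  DOCKER_IMAGE: pytorch/conda-builder:cuda12.1-2.1   # tag is <flavor>-2.1: cpu-2.1, cuda11.8-2.1, cuda12.1-2.1
  # upload jobs:
  conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}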


@ -42,7 +42,7 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu
DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:cpu
DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:cpu-2.1
LIBTORCH_VARIANT: shared-with-deps
DESIRED_DEVTOOLSET: cxx11-abi
build_name: libtorch-cpu-shared-with-deps-cxx11-abi
@ -61,7 +61,7 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu
DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:cpu
DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:cpu-2.1
LIBTORCH_VARIANT: shared-with-deps
DESIRED_DEVTOOLSET: cxx11-abi
build_name: libtorch-cpu-shared-with-deps-cxx11-abi


@ -47,7 +47,7 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu
DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:cpu
DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:cpu-2.1
LIBTORCH_VARIANT: shared-with-deps
DESIRED_DEVTOOLSET: cxx11-abi
build_name: libtorch-cpu-shared-with-deps-cxx11-abi
@ -66,7 +66,7 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu
DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:cpu
DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:cpu-2.1
LIBTORCH_VARIANT: shared-with-deps
DESIRED_DEVTOOLSET: cxx11-abi
build_name: libtorch-cpu-shared-with-deps-cxx11-abi
@ -85,7 +85,7 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu
DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:cpu
DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:cpu-2.1
LIBTORCH_VARIANT: shared-with-deps
DESIRED_DEVTOOLSET: cxx11-abi
build_name: libtorch-cpu-shared-with-deps-cxx11-abi
@ -93,7 +93,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
libtorch-cpu-shared-without-deps-cxx11-abi-build:
@ -107,7 +107,7 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu
DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:cpu
DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:cpu-2.1
LIBTORCH_VARIANT: shared-without-deps
DESIRED_DEVTOOLSET: cxx11-abi
build_name: libtorch-cpu-shared-without-deps-cxx11-abi
@ -126,7 +126,7 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu
DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:cpu
DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:cpu-2.1
LIBTORCH_VARIANT: shared-without-deps
DESIRED_DEVTOOLSET: cxx11-abi
build_name: libtorch-cpu-shared-without-deps-cxx11-abi
@ -145,7 +145,7 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu
DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:cpu
DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:cpu-2.1
LIBTORCH_VARIANT: shared-without-deps
DESIRED_DEVTOOLSET: cxx11-abi
build_name: libtorch-cpu-shared-without-deps-cxx11-abi
@ -153,7 +153,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
libtorch-cpu-static-with-deps-cxx11-abi-build:
@ -167,7 +167,7 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu
DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:cpu
DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:cpu-2.1
LIBTORCH_VARIANT: static-with-deps
DESIRED_DEVTOOLSET: cxx11-abi
build_name: libtorch-cpu-static-with-deps-cxx11-abi
@ -186,7 +186,7 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu
DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:cpu
DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:cpu-2.1
LIBTORCH_VARIANT: static-with-deps
DESIRED_DEVTOOLSET: cxx11-abi
build_name: libtorch-cpu-static-with-deps-cxx11-abi
@ -205,7 +205,7 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu
DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:cpu
DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:cpu-2.1
LIBTORCH_VARIANT: static-with-deps
DESIRED_DEVTOOLSET: cxx11-abi
build_name: libtorch-cpu-static-with-deps-cxx11-abi
@ -213,7 +213,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
libtorch-cpu-static-without-deps-cxx11-abi-build:
@ -227,7 +227,7 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu
DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:cpu
DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:cpu-2.1
LIBTORCH_VARIANT: static-without-deps
DESIRED_DEVTOOLSET: cxx11-abi
build_name: libtorch-cpu-static-without-deps-cxx11-abi
@ -246,7 +246,7 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu
DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:cpu
DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:cpu-2.1
LIBTORCH_VARIANT: static-without-deps
DESIRED_DEVTOOLSET: cxx11-abi
build_name: libtorch-cpu-static-without-deps-cxx11-abi
@ -265,7 +265,7 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu
DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:cpu
DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:cpu-2.1
LIBTORCH_VARIANT: static-without-deps
DESIRED_DEVTOOLSET: cxx11-abi
build_name: libtorch-cpu-static-without-deps-cxx11-abi
@ -273,7 +273,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
libtorch-cuda11_8-shared-with-deps-cxx11-abi-build:
@ -288,7 +288,7 @@ jobs:
DESIRED_CUDA: cu118
GPU_ARCH_VERSION: 11.8
GPU_ARCH_TYPE: cuda
DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:cuda11.8
DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:cuda11.8-2.1
LIBTORCH_VARIANT: shared-with-deps
DESIRED_DEVTOOLSET: cxx11-abi
build_name: libtorch-cuda11_8-shared-with-deps-cxx11-abi
@ -308,7 +308,7 @@ jobs:
DESIRED_CUDA: cu118
GPU_ARCH_VERSION: 11.8
GPU_ARCH_TYPE: cuda
DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:cuda11.8
DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:cuda11.8-2.1
LIBTORCH_VARIANT: shared-with-deps
DESIRED_DEVTOOLSET: cxx11-abi
build_name: libtorch-cuda11_8-shared-with-deps-cxx11-abi
@ -328,7 +328,7 @@ jobs:
DESIRED_CUDA: cu118
GPU_ARCH_VERSION: 11.8
GPU_ARCH_TYPE: cuda
DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:cuda11.8
DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:cuda11.8-2.1
LIBTORCH_VARIANT: shared-with-deps
DESIRED_DEVTOOLSET: cxx11-abi
build_name: libtorch-cuda11_8-shared-with-deps-cxx11-abi
@ -336,7 +336,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
libtorch-cuda11_8-shared-without-deps-cxx11-abi-build:
@ -351,7 +351,7 @@ jobs:
DESIRED_CUDA: cu118
GPU_ARCH_VERSION: 11.8
GPU_ARCH_TYPE: cuda
DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:cuda11.8
DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:cuda11.8-2.1
LIBTORCH_VARIANT: shared-without-deps
DESIRED_DEVTOOLSET: cxx11-abi
build_name: libtorch-cuda11_8-shared-without-deps-cxx11-abi
@ -371,7 +371,7 @@ jobs:
DESIRED_CUDA: cu118
GPU_ARCH_VERSION: 11.8
GPU_ARCH_TYPE: cuda
DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:cuda11.8
DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:cuda11.8-2.1
LIBTORCH_VARIANT: shared-without-deps
DESIRED_DEVTOOLSET: cxx11-abi
build_name: libtorch-cuda11_8-shared-without-deps-cxx11-abi
@ -391,7 +391,7 @@ jobs:
DESIRED_CUDA: cu118
GPU_ARCH_VERSION: 11.8
GPU_ARCH_TYPE: cuda
DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:cuda11.8
DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:cuda11.8-2.1
LIBTORCH_VARIANT: shared-without-deps
DESIRED_DEVTOOLSET: cxx11-abi
build_name: libtorch-cuda11_8-shared-without-deps-cxx11-abi
@ -399,7 +399,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
libtorch-cuda11_8-static-with-deps-cxx11-abi-build:
@ -414,7 +414,7 @@ jobs:
DESIRED_CUDA: cu118
GPU_ARCH_VERSION: 11.8
GPU_ARCH_TYPE: cuda
DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:cuda11.8
DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:cuda11.8-2.1
LIBTORCH_VARIANT: static-with-deps
DESIRED_DEVTOOLSET: cxx11-abi
build_name: libtorch-cuda11_8-static-with-deps-cxx11-abi
@ -434,7 +434,7 @@ jobs:
DESIRED_CUDA: cu118
GPU_ARCH_VERSION: 11.8
GPU_ARCH_TYPE: cuda
DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:cuda11.8
DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:cuda11.8-2.1
LIBTORCH_VARIANT: static-with-deps
DESIRED_DEVTOOLSET: cxx11-abi
build_name: libtorch-cuda11_8-static-with-deps-cxx11-abi
@ -454,7 +454,7 @@ jobs:
DESIRED_CUDA: cu118
GPU_ARCH_VERSION: 11.8
GPU_ARCH_TYPE: cuda
DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:cuda11.8
DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:cuda11.8-2.1
LIBTORCH_VARIANT: static-with-deps
DESIRED_DEVTOOLSET: cxx11-abi
build_name: libtorch-cuda11_8-static-with-deps-cxx11-abi
@ -462,7 +462,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
libtorch-cuda11_8-static-without-deps-cxx11-abi-build:
@ -477,7 +477,7 @@ jobs:
DESIRED_CUDA: cu118
GPU_ARCH_VERSION: 11.8
GPU_ARCH_TYPE: cuda
DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:cuda11.8
DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:cuda11.8-2.1
LIBTORCH_VARIANT: static-without-deps
DESIRED_DEVTOOLSET: cxx11-abi
build_name: libtorch-cuda11_8-static-without-deps-cxx11-abi
@ -497,7 +497,7 @@ jobs:
DESIRED_CUDA: cu118
GPU_ARCH_VERSION: 11.8
GPU_ARCH_TYPE: cuda
DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:cuda11.8
DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:cuda11.8-2.1
LIBTORCH_VARIANT: static-without-deps
DESIRED_DEVTOOLSET: cxx11-abi
build_name: libtorch-cuda11_8-static-without-deps-cxx11-abi
@ -517,7 +517,7 @@ jobs:
DESIRED_CUDA: cu118
GPU_ARCH_VERSION: 11.8
GPU_ARCH_TYPE: cuda
DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:cuda11.8
DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:cuda11.8-2.1
LIBTORCH_VARIANT: static-without-deps
DESIRED_DEVTOOLSET: cxx11-abi
build_name: libtorch-cuda11_8-static-without-deps-cxx11-abi
@ -525,7 +525,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
libtorch-cuda12_1-shared-with-deps-cxx11-abi-build:
@ -540,7 +540,7 @@ jobs:
DESIRED_CUDA: cu121
GPU_ARCH_VERSION: 12.1
GPU_ARCH_TYPE: cuda
DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:cuda12.1
DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:cuda12.1-2.1
LIBTORCH_VARIANT: shared-with-deps
DESIRED_DEVTOOLSET: cxx11-abi
build_name: libtorch-cuda12_1-shared-with-deps-cxx11-abi
@ -560,7 +560,7 @@ jobs:
DESIRED_CUDA: cu121
GPU_ARCH_VERSION: 12.1
GPU_ARCH_TYPE: cuda
DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:cuda12.1
DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:cuda12.1-2.1
LIBTORCH_VARIANT: shared-with-deps
DESIRED_DEVTOOLSET: cxx11-abi
build_name: libtorch-cuda12_1-shared-with-deps-cxx11-abi
@ -580,7 +580,7 @@ jobs:
DESIRED_CUDA: cu121
GPU_ARCH_VERSION: 12.1
GPU_ARCH_TYPE: cuda
DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:cuda12.1
DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:cuda12.1-2.1
LIBTORCH_VARIANT: shared-with-deps
DESIRED_DEVTOOLSET: cxx11-abi
build_name: libtorch-cuda12_1-shared-with-deps-cxx11-abi
@ -588,7 +588,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
libtorch-cuda12_1-shared-without-deps-cxx11-abi-build:
@ -603,7 +603,7 @@ jobs:
DESIRED_CUDA: cu121
GPU_ARCH_VERSION: 12.1
GPU_ARCH_TYPE: cuda
DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:cuda12.1
DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:cuda12.1-2.1
LIBTORCH_VARIANT: shared-without-deps
DESIRED_DEVTOOLSET: cxx11-abi
build_name: libtorch-cuda12_1-shared-without-deps-cxx11-abi
@ -623,7 +623,7 @@ jobs:
DESIRED_CUDA: cu121
GPU_ARCH_VERSION: 12.1
GPU_ARCH_TYPE: cuda
DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:cuda12.1
DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:cuda12.1-2.1
LIBTORCH_VARIANT: shared-without-deps
DESIRED_DEVTOOLSET: cxx11-abi
build_name: libtorch-cuda12_1-shared-without-deps-cxx11-abi
@ -643,7 +643,7 @@ jobs:
DESIRED_CUDA: cu121
GPU_ARCH_VERSION: 12.1
GPU_ARCH_TYPE: cuda
DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:cuda12.1
DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:cuda12.1-2.1
LIBTORCH_VARIANT: shared-without-deps
DESIRED_DEVTOOLSET: cxx11-abi
build_name: libtorch-cuda12_1-shared-without-deps-cxx11-abi
@ -651,7 +651,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
libtorch-cuda12_1-static-with-deps-cxx11-abi-build:
@ -666,7 +666,7 @@ jobs:
DESIRED_CUDA: cu121
GPU_ARCH_VERSION: 12.1
GPU_ARCH_TYPE: cuda
DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:cuda12.1
DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:cuda12.1-2.1
LIBTORCH_VARIANT: static-with-deps
DESIRED_DEVTOOLSET: cxx11-abi
build_name: libtorch-cuda12_1-static-with-deps-cxx11-abi
@ -686,7 +686,7 @@ jobs:
DESIRED_CUDA: cu121
GPU_ARCH_VERSION: 12.1
GPU_ARCH_TYPE: cuda
DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:cuda12.1
DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:cuda12.1-2.1
LIBTORCH_VARIANT: static-with-deps
DESIRED_DEVTOOLSET: cxx11-abi
build_name: libtorch-cuda12_1-static-with-deps-cxx11-abi
@ -706,7 +706,7 @@ jobs:
DESIRED_CUDA: cu121
GPU_ARCH_VERSION: 12.1
GPU_ARCH_TYPE: cuda
DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:cuda12.1
DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:cuda12.1-2.1
LIBTORCH_VARIANT: static-with-deps
DESIRED_DEVTOOLSET: cxx11-abi
build_name: libtorch-cuda12_1-static-with-deps-cxx11-abi
@ -714,7 +714,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
libtorch-cuda12_1-static-without-deps-cxx11-abi-build:
@ -729,7 +729,7 @@ jobs:
DESIRED_CUDA: cu121
GPU_ARCH_VERSION: 12.1
GPU_ARCH_TYPE: cuda
DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:cuda12.1
DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:cuda12.1-2.1
LIBTORCH_VARIANT: static-without-deps
DESIRED_DEVTOOLSET: cxx11-abi
build_name: libtorch-cuda12_1-static-without-deps-cxx11-abi
@ -749,7 +749,7 @@ jobs:
DESIRED_CUDA: cu121
GPU_ARCH_VERSION: 12.1
GPU_ARCH_TYPE: cuda
DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:cuda12.1
DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:cuda12.1-2.1
LIBTORCH_VARIANT: static-without-deps
DESIRED_DEVTOOLSET: cxx11-abi
build_name: libtorch-cuda12_1-static-without-deps-cxx11-abi
@ -769,7 +769,7 @@ jobs:
DESIRED_CUDA: cu121
GPU_ARCH_VERSION: 12.1
GPU_ARCH_TYPE: cuda
DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:cuda12.1
DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:cuda12.1-2.1
LIBTORCH_VARIANT: static-without-deps
DESIRED_DEVTOOLSET: cxx11-abi
build_name: libtorch-cuda12_1-static-without-deps-cxx11-abi
@ -777,7 +777,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
libtorch-rocm5_5-shared-with-deps-cxx11-abi-build:
@ -792,7 +792,7 @@ jobs:
DESIRED_CUDA: rocm5.5
GPU_ARCH_VERSION: 5.5
GPU_ARCH_TYPE: rocm
DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:rocm5.5
DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:rocm5.5-2.1
LIBTORCH_VARIANT: shared-with-deps
DESIRED_DEVTOOLSET: cxx11-abi
build_name: libtorch-rocm5_5-shared-with-deps-cxx11-abi
@ -814,7 +814,7 @@ jobs:
GPU_ARCH_VERSION: 5.5
GPU_ARCH_TYPE: rocm
SKIP_ALL_TESTS: 1
DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:rocm5.5
DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:rocm5.5-2.1
LIBTORCH_VARIANT: shared-with-deps
DESIRED_DEVTOOLSET: cxx11-abi
steps:
@ -828,7 +828,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -840,7 +839,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -854,9 +853,9 @@ jobs:
run: |
echo "GPU_FLAG=--device=/dev/mem --device=/dev/kfd --device=/dev/dri --group-add video --group-add daemon" >> "${GITHUB_ENV}"
- name: Pull Docker image
uses: pytorch/test-infra/.github/actions/pull-docker-image@main
uses: pytorch/test-infra/.github/actions/pull-docker-image@release/2.1
with:
docker-image: pytorch/libtorch-cxx11-builder:rocm5.5
docker-image: pytorch/libtorch-cxx11-builder:rocm5.5-2.1
- name: Test Pytorch binary
uses: ./pytorch/.github/actions/test-pytorch-binary
- name: Teardown ROCm
@ -873,7 +872,7 @@ jobs:
DESIRED_CUDA: rocm5.5
GPU_ARCH_VERSION: 5.5
GPU_ARCH_TYPE: rocm
DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:rocm5.5
DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:rocm5.5-2.1
LIBTORCH_VARIANT: shared-with-deps
DESIRED_DEVTOOLSET: cxx11-abi
build_name: libtorch-rocm5_5-shared-with-deps-cxx11-abi
@ -881,7 +880,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
libtorch-rocm5_5-static-with-deps-cxx11-abi-build:
@ -896,7 +895,7 @@ jobs:
DESIRED_CUDA: rocm5.5
GPU_ARCH_VERSION: 5.5
GPU_ARCH_TYPE: rocm
DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:rocm5.5
DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:rocm5.5-2.1
LIBTORCH_VARIANT: static-with-deps
DESIRED_DEVTOOLSET: cxx11-abi
build_name: libtorch-rocm5_5-static-with-deps-cxx11-abi
@ -918,7 +917,7 @@ jobs:
GPU_ARCH_VERSION: 5.5
GPU_ARCH_TYPE: rocm
SKIP_ALL_TESTS: 1
DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:rocm5.5
DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:rocm5.5-2.1
LIBTORCH_VARIANT: static-with-deps
DESIRED_DEVTOOLSET: cxx11-abi
steps:
@ -932,7 +931,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -944,7 +942,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -958,9 +956,9 @@ jobs:
run: |
echo "GPU_FLAG=--device=/dev/mem --device=/dev/kfd --device=/dev/dri --group-add video --group-add daemon" >> "${GITHUB_ENV}"
- name: Pull Docker image
uses: pytorch/test-infra/.github/actions/pull-docker-image@main
uses: pytorch/test-infra/.github/actions/pull-docker-image@release/2.1
with:
docker-image: pytorch/libtorch-cxx11-builder:rocm5.5
docker-image: pytorch/libtorch-cxx11-builder:rocm5.5-2.1
- name: Test Pytorch binary
uses: ./pytorch/.github/actions/test-pytorch-binary
- name: Teardown ROCm
@ -977,7 +975,7 @@ jobs:
DESIRED_CUDA: rocm5.5
GPU_ARCH_VERSION: 5.5
GPU_ARCH_TYPE: rocm
DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:rocm5.5
DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:rocm5.5-2.1
LIBTORCH_VARIANT: static-with-deps
DESIRED_DEVTOOLSET: cxx11-abi
build_name: libtorch-rocm5_5-static-with-deps-cxx11-abi
@ -985,7 +983,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
libtorch-rocm5_6-shared-with-deps-cxx11-abi-build:
@ -1000,7 +998,7 @@ jobs:
DESIRED_CUDA: rocm5.6
GPU_ARCH_VERSION: 5.6
GPU_ARCH_TYPE: rocm
DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:rocm5.6
DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:rocm5.6-2.1
LIBTORCH_VARIANT: shared-with-deps
DESIRED_DEVTOOLSET: cxx11-abi
build_name: libtorch-rocm5_6-shared-with-deps-cxx11-abi
@ -1022,7 +1020,7 @@ jobs:
GPU_ARCH_VERSION: 5.6
GPU_ARCH_TYPE: rocm
SKIP_ALL_TESTS: 1
DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:rocm5.6
DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:rocm5.6-2.1
LIBTORCH_VARIANT: shared-with-deps
DESIRED_DEVTOOLSET: cxx11-abi
steps:
@ -1036,7 +1034,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -1048,7 +1045,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -1062,9 +1059,9 @@ jobs:
run: |
echo "GPU_FLAG=--device=/dev/mem --device=/dev/kfd --device=/dev/dri --group-add video --group-add daemon" >> "${GITHUB_ENV}"
- name: Pull Docker image
uses: pytorch/test-infra/.github/actions/pull-docker-image@main
uses: pytorch/test-infra/.github/actions/pull-docker-image@release/2.1
with:
docker-image: pytorch/libtorch-cxx11-builder:rocm5.6
docker-image: pytorch/libtorch-cxx11-builder:rocm5.6-2.1
- name: Test Pytorch binary
uses: ./pytorch/.github/actions/test-pytorch-binary
- name: Teardown ROCm
@ -1081,7 +1078,7 @@ jobs:
DESIRED_CUDA: rocm5.6
GPU_ARCH_VERSION: 5.6
GPU_ARCH_TYPE: rocm
DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:rocm5.6
DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:rocm5.6-2.1
LIBTORCH_VARIANT: shared-with-deps
DESIRED_DEVTOOLSET: cxx11-abi
build_name: libtorch-rocm5_6-shared-with-deps-cxx11-abi
@ -1089,7 +1086,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
libtorch-rocm5_6-static-with-deps-cxx11-abi-build:
@ -1104,7 +1101,7 @@ jobs:
DESIRED_CUDA: rocm5.6
GPU_ARCH_VERSION: 5.6
GPU_ARCH_TYPE: rocm
DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:rocm5.6
DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:rocm5.6-2.1
LIBTORCH_VARIANT: static-with-deps
DESIRED_DEVTOOLSET: cxx11-abi
build_name: libtorch-rocm5_6-static-with-deps-cxx11-abi
@ -1126,7 +1123,7 @@ jobs:
GPU_ARCH_VERSION: 5.6
GPU_ARCH_TYPE: rocm
SKIP_ALL_TESTS: 1
DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:rocm5.6
DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:rocm5.6-2.1
LIBTORCH_VARIANT: static-with-deps
DESIRED_DEVTOOLSET: cxx11-abi
steps:
@ -1140,7 +1137,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -1152,7 +1148,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -1166,9 +1162,9 @@ jobs:
run: |
echo "GPU_FLAG=--device=/dev/mem --device=/dev/kfd --device=/dev/dri --group-add video --group-add daemon" >> "${GITHUB_ENV}"
- name: Pull Docker image
uses: pytorch/test-infra/.github/actions/pull-docker-image@main
uses: pytorch/test-infra/.github/actions/pull-docker-image@release/2.1
with:
docker-image: pytorch/libtorch-cxx11-builder:rocm5.6
docker-image: pytorch/libtorch-cxx11-builder:rocm5.6-2.1
- name: Test Pytorch binary
uses: ./pytorch/.github/actions/test-pytorch-binary
- name: Teardown ROCm
@ -1185,7 +1181,7 @@ jobs:
DESIRED_CUDA: rocm5.6
GPU_ARCH_VERSION: 5.6
GPU_ARCH_TYPE: rocm
DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:rocm5.6
DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:rocm5.6-2.1
LIBTORCH_VARIANT: static-with-deps
DESIRED_DEVTOOLSET: cxx11-abi
build_name: libtorch-rocm5_6-static-with-deps-cxx11-abi
@ -1193,5 +1189,5 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
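Besides the image-tag and token changes, the inline ROCm test jobs in this workflow are re-pinned from main to the release branch: the conditional ref: line is dropped from the PyTorch checkout, the pytorch/builder checkout now uses ref: release/2.1, and the pull-docker-image action is taken from pytorch/test-infra at release/2.1. A condensed sketch of the re-pinned steps for the rocm5.6 variant, with the step list shortened and the lines taken from the hunks above:

    steps:
      - name: Checkout pytorch/builder
        uses: malfet/checkout@silent-checkout
        with:
          ref: release/2.1          # was: main
          submodules: recursive
          repository: pytorch/builder
          path: builder
      - name: Pull Docker image
        uses: pytorch/test-infra/.github/actions/pull-docker-image@release/2.1   # was: @main
        with:
          docker-image: pytorch/libtorch-cxx11-builder:rocm5.6-2.1
      - name: Test Pytorch binary
        uses: ./pytorch/.github/actions/test-pytorch-binary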


@ -42,7 +42,7 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu
DOCKER_IMAGE: pytorch/manylinux-builder:cpu
DOCKER_IMAGE: pytorch/manylinux-builder:cpu-2.1
LIBTORCH_VARIANT: shared-with-deps
DESIRED_DEVTOOLSET: pre-cxx11
build_name: libtorch-cpu-shared-with-deps-pre-cxx11
@ -61,7 +61,7 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu
DOCKER_IMAGE: pytorch/manylinux-builder:cpu
DOCKER_IMAGE: pytorch/manylinux-builder:cpu-2.1
LIBTORCH_VARIANT: shared-with-deps
DESIRED_DEVTOOLSET: pre-cxx11
build_name: libtorch-cpu-shared-with-deps-pre-cxx11


@ -47,7 +47,7 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu
DOCKER_IMAGE: pytorch/manylinux-builder:cpu
DOCKER_IMAGE: pytorch/manylinux-builder:cpu-2.1
LIBTORCH_VARIANT: shared-with-deps
DESIRED_DEVTOOLSET: pre-cxx11
build_name: libtorch-cpu-shared-with-deps-pre-cxx11
@ -66,7 +66,7 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu
DOCKER_IMAGE: pytorch/manylinux-builder:cpu
DOCKER_IMAGE: pytorch/manylinux-builder:cpu-2.1
LIBTORCH_VARIANT: shared-with-deps
DESIRED_DEVTOOLSET: pre-cxx11
build_name: libtorch-cpu-shared-with-deps-pre-cxx11
@ -85,7 +85,7 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu
DOCKER_IMAGE: pytorch/manylinux-builder:cpu
DOCKER_IMAGE: pytorch/manylinux-builder:cpu-2.1
LIBTORCH_VARIANT: shared-with-deps
DESIRED_DEVTOOLSET: pre-cxx11
build_name: libtorch-cpu-shared-with-deps-pre-cxx11
@ -93,7 +93,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
libtorch-cpu-shared-without-deps-pre-cxx11-build:
@ -107,7 +107,7 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu
DOCKER_IMAGE: pytorch/manylinux-builder:cpu
DOCKER_IMAGE: pytorch/manylinux-builder:cpu-2.1
LIBTORCH_VARIANT: shared-without-deps
DESIRED_DEVTOOLSET: pre-cxx11
build_name: libtorch-cpu-shared-without-deps-pre-cxx11
@ -126,7 +126,7 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu
DOCKER_IMAGE: pytorch/manylinux-builder:cpu
DOCKER_IMAGE: pytorch/manylinux-builder:cpu-2.1
LIBTORCH_VARIANT: shared-without-deps
DESIRED_DEVTOOLSET: pre-cxx11
build_name: libtorch-cpu-shared-without-deps-pre-cxx11
@ -145,7 +145,7 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu
DOCKER_IMAGE: pytorch/manylinux-builder:cpu
DOCKER_IMAGE: pytorch/manylinux-builder:cpu-2.1
LIBTORCH_VARIANT: shared-without-deps
DESIRED_DEVTOOLSET: pre-cxx11
build_name: libtorch-cpu-shared-without-deps-pre-cxx11
@ -153,7 +153,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
libtorch-cpu-static-with-deps-pre-cxx11-build:
@ -167,7 +167,7 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu
DOCKER_IMAGE: pytorch/manylinux-builder:cpu
DOCKER_IMAGE: pytorch/manylinux-builder:cpu-2.1
LIBTORCH_VARIANT: static-with-deps
DESIRED_DEVTOOLSET: pre-cxx11
build_name: libtorch-cpu-static-with-deps-pre-cxx11
@ -186,7 +186,7 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu
DOCKER_IMAGE: pytorch/manylinux-builder:cpu
DOCKER_IMAGE: pytorch/manylinux-builder:cpu-2.1
LIBTORCH_VARIANT: static-with-deps
DESIRED_DEVTOOLSET: pre-cxx11
build_name: libtorch-cpu-static-with-deps-pre-cxx11
@ -205,7 +205,7 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu
DOCKER_IMAGE: pytorch/manylinux-builder:cpu
DOCKER_IMAGE: pytorch/manylinux-builder:cpu-2.1
LIBTORCH_VARIANT: static-with-deps
DESIRED_DEVTOOLSET: pre-cxx11
build_name: libtorch-cpu-static-with-deps-pre-cxx11
@ -213,7 +213,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
libtorch-cpu-static-without-deps-pre-cxx11-build:
@ -227,7 +227,7 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu
DOCKER_IMAGE: pytorch/manylinux-builder:cpu
DOCKER_IMAGE: pytorch/manylinux-builder:cpu-2.1
LIBTORCH_VARIANT: static-without-deps
DESIRED_DEVTOOLSET: pre-cxx11
build_name: libtorch-cpu-static-without-deps-pre-cxx11
@ -246,7 +246,7 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu
DOCKER_IMAGE: pytorch/manylinux-builder:cpu
DOCKER_IMAGE: pytorch/manylinux-builder:cpu-2.1
LIBTORCH_VARIANT: static-without-deps
DESIRED_DEVTOOLSET: pre-cxx11
build_name: libtorch-cpu-static-without-deps-pre-cxx11
@ -265,7 +265,7 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu
DOCKER_IMAGE: pytorch/manylinux-builder:cpu
DOCKER_IMAGE: pytorch/manylinux-builder:cpu-2.1
LIBTORCH_VARIANT: static-without-deps
DESIRED_DEVTOOLSET: pre-cxx11
build_name: libtorch-cpu-static-without-deps-pre-cxx11
@ -273,7 +273,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
libtorch-cuda11_8-shared-with-deps-pre-cxx11-build:
@ -288,7 +288,7 @@ jobs:
DESIRED_CUDA: cu118
GPU_ARCH_VERSION: 11.8
GPU_ARCH_TYPE: cuda
DOCKER_IMAGE: pytorch/manylinux-builder:cuda11.8
DOCKER_IMAGE: pytorch/manylinux-builder:cuda11.8-2.1
LIBTORCH_VARIANT: shared-with-deps
DESIRED_DEVTOOLSET: pre-cxx11
build_name: libtorch-cuda11_8-shared-with-deps-pre-cxx11
@ -308,7 +308,7 @@ jobs:
DESIRED_CUDA: cu118
GPU_ARCH_VERSION: 11.8
GPU_ARCH_TYPE: cuda
DOCKER_IMAGE: pytorch/manylinux-builder:cuda11.8
DOCKER_IMAGE: pytorch/manylinux-builder:cuda11.8-2.1
LIBTORCH_VARIANT: shared-with-deps
DESIRED_DEVTOOLSET: pre-cxx11
build_name: libtorch-cuda11_8-shared-with-deps-pre-cxx11
@ -328,7 +328,7 @@ jobs:
DESIRED_CUDA: cu118
GPU_ARCH_VERSION: 11.8
GPU_ARCH_TYPE: cuda
DOCKER_IMAGE: pytorch/manylinux-builder:cuda11.8
DOCKER_IMAGE: pytorch/manylinux-builder:cuda11.8-2.1
LIBTORCH_VARIANT: shared-with-deps
DESIRED_DEVTOOLSET: pre-cxx11
build_name: libtorch-cuda11_8-shared-with-deps-pre-cxx11
@ -336,7 +336,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
libtorch-cuda11_8-shared-without-deps-pre-cxx11-build:
@ -351,7 +351,7 @@ jobs:
DESIRED_CUDA: cu118
GPU_ARCH_VERSION: 11.8
GPU_ARCH_TYPE: cuda
DOCKER_IMAGE: pytorch/manylinux-builder:cuda11.8
DOCKER_IMAGE: pytorch/manylinux-builder:cuda11.8-2.1
LIBTORCH_VARIANT: shared-without-deps
DESIRED_DEVTOOLSET: pre-cxx11
build_name: libtorch-cuda11_8-shared-without-deps-pre-cxx11
@ -371,7 +371,7 @@ jobs:
DESIRED_CUDA: cu118
GPU_ARCH_VERSION: 11.8
GPU_ARCH_TYPE: cuda
DOCKER_IMAGE: pytorch/manylinux-builder:cuda11.8
DOCKER_IMAGE: pytorch/manylinux-builder:cuda11.8-2.1
LIBTORCH_VARIANT: shared-without-deps
DESIRED_DEVTOOLSET: pre-cxx11
build_name: libtorch-cuda11_8-shared-without-deps-pre-cxx11
@ -391,7 +391,7 @@ jobs:
DESIRED_CUDA: cu118
GPU_ARCH_VERSION: 11.8
GPU_ARCH_TYPE: cuda
DOCKER_IMAGE: pytorch/manylinux-builder:cuda11.8
DOCKER_IMAGE: pytorch/manylinux-builder:cuda11.8-2.1
LIBTORCH_VARIANT: shared-without-deps
DESIRED_DEVTOOLSET: pre-cxx11
build_name: libtorch-cuda11_8-shared-without-deps-pre-cxx11
@ -399,7 +399,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
libtorch-cuda11_8-static-with-deps-pre-cxx11-build:
@ -414,7 +414,7 @@ jobs:
DESIRED_CUDA: cu118
GPU_ARCH_VERSION: 11.8
GPU_ARCH_TYPE: cuda
DOCKER_IMAGE: pytorch/manylinux-builder:cuda11.8
DOCKER_IMAGE: pytorch/manylinux-builder:cuda11.8-2.1
LIBTORCH_VARIANT: static-with-deps
DESIRED_DEVTOOLSET: pre-cxx11
build_name: libtorch-cuda11_8-static-with-deps-pre-cxx11
@ -434,7 +434,7 @@ jobs:
DESIRED_CUDA: cu118
GPU_ARCH_VERSION: 11.8
GPU_ARCH_TYPE: cuda
DOCKER_IMAGE: pytorch/manylinux-builder:cuda11.8
DOCKER_IMAGE: pytorch/manylinux-builder:cuda11.8-2.1
LIBTORCH_VARIANT: static-with-deps
DESIRED_DEVTOOLSET: pre-cxx11
build_name: libtorch-cuda11_8-static-with-deps-pre-cxx11
@ -454,7 +454,7 @@ jobs:
DESIRED_CUDA: cu118
GPU_ARCH_VERSION: 11.8
GPU_ARCH_TYPE: cuda
DOCKER_IMAGE: pytorch/manylinux-builder:cuda11.8
DOCKER_IMAGE: pytorch/manylinux-builder:cuda11.8-2.1
LIBTORCH_VARIANT: static-with-deps
DESIRED_DEVTOOLSET: pre-cxx11
build_name: libtorch-cuda11_8-static-with-deps-pre-cxx11
@ -462,7 +462,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
libtorch-cuda11_8-static-without-deps-pre-cxx11-build:
@ -477,7 +477,7 @@ jobs:
DESIRED_CUDA: cu118
GPU_ARCH_VERSION: 11.8
GPU_ARCH_TYPE: cuda
DOCKER_IMAGE: pytorch/manylinux-builder:cuda11.8
DOCKER_IMAGE: pytorch/manylinux-builder:cuda11.8-2.1
LIBTORCH_VARIANT: static-without-deps
DESIRED_DEVTOOLSET: pre-cxx11
build_name: libtorch-cuda11_8-static-without-deps-pre-cxx11
@ -497,7 +497,7 @@ jobs:
DESIRED_CUDA: cu118
GPU_ARCH_VERSION: 11.8
GPU_ARCH_TYPE: cuda
DOCKER_IMAGE: pytorch/manylinux-builder:cuda11.8
DOCKER_IMAGE: pytorch/manylinux-builder:cuda11.8-2.1
LIBTORCH_VARIANT: static-without-deps
DESIRED_DEVTOOLSET: pre-cxx11
build_name: libtorch-cuda11_8-static-without-deps-pre-cxx11
@ -517,7 +517,7 @@ jobs:
DESIRED_CUDA: cu118
GPU_ARCH_VERSION: 11.8
GPU_ARCH_TYPE: cuda
DOCKER_IMAGE: pytorch/manylinux-builder:cuda11.8
DOCKER_IMAGE: pytorch/manylinux-builder:cuda11.8-2.1
LIBTORCH_VARIANT: static-without-deps
DESIRED_DEVTOOLSET: pre-cxx11
build_name: libtorch-cuda11_8-static-without-deps-pre-cxx11
@ -525,7 +525,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
libtorch-cuda12_1-shared-with-deps-pre-cxx11-build:
@ -540,7 +540,7 @@ jobs:
DESIRED_CUDA: cu121
GPU_ARCH_VERSION: 12.1
GPU_ARCH_TYPE: cuda
DOCKER_IMAGE: pytorch/manylinux-builder:cuda12.1
DOCKER_IMAGE: pytorch/manylinux-builder:cuda12.1-2.1
LIBTORCH_VARIANT: shared-with-deps
DESIRED_DEVTOOLSET: pre-cxx11
build_name: libtorch-cuda12_1-shared-with-deps-pre-cxx11
@ -560,7 +560,7 @@ jobs:
DESIRED_CUDA: cu121
GPU_ARCH_VERSION: 12.1
GPU_ARCH_TYPE: cuda
DOCKER_IMAGE: pytorch/manylinux-builder:cuda12.1
DOCKER_IMAGE: pytorch/manylinux-builder:cuda12.1-2.1
LIBTORCH_VARIANT: shared-with-deps
DESIRED_DEVTOOLSET: pre-cxx11
build_name: libtorch-cuda12_1-shared-with-deps-pre-cxx11
@ -580,7 +580,7 @@ jobs:
DESIRED_CUDA: cu121
GPU_ARCH_VERSION: 12.1
GPU_ARCH_TYPE: cuda
DOCKER_IMAGE: pytorch/manylinux-builder:cuda12.1
DOCKER_IMAGE: pytorch/manylinux-builder:cuda12.1-2.1
LIBTORCH_VARIANT: shared-with-deps
DESIRED_DEVTOOLSET: pre-cxx11
build_name: libtorch-cuda12_1-shared-with-deps-pre-cxx11
@ -588,7 +588,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
libtorch-cuda12_1-shared-without-deps-pre-cxx11-build:
@ -603,7 +603,7 @@ jobs:
DESIRED_CUDA: cu121
GPU_ARCH_VERSION: 12.1
GPU_ARCH_TYPE: cuda
DOCKER_IMAGE: pytorch/manylinux-builder:cuda12.1
DOCKER_IMAGE: pytorch/manylinux-builder:cuda12.1-2.1
LIBTORCH_VARIANT: shared-without-deps
DESIRED_DEVTOOLSET: pre-cxx11
build_name: libtorch-cuda12_1-shared-without-deps-pre-cxx11
@ -623,7 +623,7 @@ jobs:
DESIRED_CUDA: cu121
GPU_ARCH_VERSION: 12.1
GPU_ARCH_TYPE: cuda
DOCKER_IMAGE: pytorch/manylinux-builder:cuda12.1
DOCKER_IMAGE: pytorch/manylinux-builder:cuda12.1-2.1
LIBTORCH_VARIANT: shared-without-deps
DESIRED_DEVTOOLSET: pre-cxx11
build_name: libtorch-cuda12_1-shared-without-deps-pre-cxx11
@ -643,7 +643,7 @@ jobs:
DESIRED_CUDA: cu121
GPU_ARCH_VERSION: 12.1
GPU_ARCH_TYPE: cuda
DOCKER_IMAGE: pytorch/manylinux-builder:cuda12.1
DOCKER_IMAGE: pytorch/manylinux-builder:cuda12.1-2.1
LIBTORCH_VARIANT: shared-without-deps
DESIRED_DEVTOOLSET: pre-cxx11
build_name: libtorch-cuda12_1-shared-without-deps-pre-cxx11
@ -651,7 +651,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
libtorch-cuda12_1-static-with-deps-pre-cxx11-build:
@ -666,7 +666,7 @@ jobs:
DESIRED_CUDA: cu121
GPU_ARCH_VERSION: 12.1
GPU_ARCH_TYPE: cuda
DOCKER_IMAGE: pytorch/manylinux-builder:cuda12.1
DOCKER_IMAGE: pytorch/manylinux-builder:cuda12.1-2.1
LIBTORCH_VARIANT: static-with-deps
DESIRED_DEVTOOLSET: pre-cxx11
build_name: libtorch-cuda12_1-static-with-deps-pre-cxx11
@ -686,7 +686,7 @@ jobs:
DESIRED_CUDA: cu121
GPU_ARCH_VERSION: 12.1
GPU_ARCH_TYPE: cuda
DOCKER_IMAGE: pytorch/manylinux-builder:cuda12.1
DOCKER_IMAGE: pytorch/manylinux-builder:cuda12.1-2.1
LIBTORCH_VARIANT: static-with-deps
DESIRED_DEVTOOLSET: pre-cxx11
build_name: libtorch-cuda12_1-static-with-deps-pre-cxx11
@ -706,7 +706,7 @@ jobs:
DESIRED_CUDA: cu121
GPU_ARCH_VERSION: 12.1
GPU_ARCH_TYPE: cuda
DOCKER_IMAGE: pytorch/manylinux-builder:cuda12.1
DOCKER_IMAGE: pytorch/manylinux-builder:cuda12.1-2.1
LIBTORCH_VARIANT: static-with-deps
DESIRED_DEVTOOLSET: pre-cxx11
build_name: libtorch-cuda12_1-static-with-deps-pre-cxx11
@ -714,7 +714,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
libtorch-cuda12_1-static-without-deps-pre-cxx11-build:
@ -729,7 +729,7 @@ jobs:
DESIRED_CUDA: cu121
GPU_ARCH_VERSION: 12.1
GPU_ARCH_TYPE: cuda
DOCKER_IMAGE: pytorch/manylinux-builder:cuda12.1
DOCKER_IMAGE: pytorch/manylinux-builder:cuda12.1-2.1
LIBTORCH_VARIANT: static-without-deps
DESIRED_DEVTOOLSET: pre-cxx11
build_name: libtorch-cuda12_1-static-without-deps-pre-cxx11
@ -749,7 +749,7 @@ jobs:
DESIRED_CUDA: cu121
GPU_ARCH_VERSION: 12.1
GPU_ARCH_TYPE: cuda
DOCKER_IMAGE: pytorch/manylinux-builder:cuda12.1
DOCKER_IMAGE: pytorch/manylinux-builder:cuda12.1-2.1
LIBTORCH_VARIANT: static-without-deps
DESIRED_DEVTOOLSET: pre-cxx11
build_name: libtorch-cuda12_1-static-without-deps-pre-cxx11
@ -769,7 +769,7 @@ jobs:
DESIRED_CUDA: cu121
GPU_ARCH_VERSION: 12.1
GPU_ARCH_TYPE: cuda
DOCKER_IMAGE: pytorch/manylinux-builder:cuda12.1
DOCKER_IMAGE: pytorch/manylinux-builder:cuda12.1-2.1
LIBTORCH_VARIANT: static-without-deps
DESIRED_DEVTOOLSET: pre-cxx11
build_name: libtorch-cuda12_1-static-without-deps-pre-cxx11
@ -777,7 +777,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
libtorch-rocm5_5-shared-with-deps-pre-cxx11-build:
@ -792,7 +792,7 @@ jobs:
DESIRED_CUDA: rocm5.5
GPU_ARCH_VERSION: 5.5
GPU_ARCH_TYPE: rocm
DOCKER_IMAGE: pytorch/manylinux-builder:rocm5.5
DOCKER_IMAGE: pytorch/manylinux-builder:rocm5.5-2.1
LIBTORCH_VARIANT: shared-with-deps
DESIRED_DEVTOOLSET: pre-cxx11
build_name: libtorch-rocm5_5-shared-with-deps-pre-cxx11
@ -814,7 +814,7 @@ jobs:
GPU_ARCH_VERSION: 5.5
GPU_ARCH_TYPE: rocm
SKIP_ALL_TESTS: 1
DOCKER_IMAGE: pytorch/manylinux-builder:rocm5.5
DOCKER_IMAGE: pytorch/manylinux-builder:rocm5.5-2.1
LIBTORCH_VARIANT: shared-with-deps
DESIRED_DEVTOOLSET: pre-cxx11
steps:
@ -828,7 +828,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -840,7 +839,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -854,9 +853,9 @@ jobs:
run: |
echo "GPU_FLAG=--device=/dev/mem --device=/dev/kfd --device=/dev/dri --group-add video --group-add daemon" >> "${GITHUB_ENV}"
- name: Pull Docker image
uses: pytorch/test-infra/.github/actions/pull-docker-image@main
uses: pytorch/test-infra/.github/actions/pull-docker-image@release/2.1
with:
docker-image: pytorch/manylinux-builder:rocm5.5
docker-image: pytorch/manylinux-builder:rocm5.5-2.1
- name: Test Pytorch binary
uses: ./pytorch/.github/actions/test-pytorch-binary
- name: Teardown ROCm
@ -873,7 +872,7 @@ jobs:
DESIRED_CUDA: rocm5.5
GPU_ARCH_VERSION: 5.5
GPU_ARCH_TYPE: rocm
DOCKER_IMAGE: pytorch/manylinux-builder:rocm5.5
DOCKER_IMAGE: pytorch/manylinux-builder:rocm5.5-2.1
LIBTORCH_VARIANT: shared-with-deps
DESIRED_DEVTOOLSET: pre-cxx11
build_name: libtorch-rocm5_5-shared-with-deps-pre-cxx11
@ -881,7 +880,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
libtorch-rocm5_5-static-with-deps-pre-cxx11-build:
@ -896,7 +895,7 @@ jobs:
DESIRED_CUDA: rocm5.5
GPU_ARCH_VERSION: 5.5
GPU_ARCH_TYPE: rocm
DOCKER_IMAGE: pytorch/manylinux-builder:rocm5.5
DOCKER_IMAGE: pytorch/manylinux-builder:rocm5.5-2.1
LIBTORCH_VARIANT: static-with-deps
DESIRED_DEVTOOLSET: pre-cxx11
build_name: libtorch-rocm5_5-static-with-deps-pre-cxx11
@ -918,7 +917,7 @@ jobs:
GPU_ARCH_VERSION: 5.5
GPU_ARCH_TYPE: rocm
SKIP_ALL_TESTS: 1
DOCKER_IMAGE: pytorch/manylinux-builder:rocm5.5
DOCKER_IMAGE: pytorch/manylinux-builder:rocm5.5-2.1
LIBTORCH_VARIANT: static-with-deps
DESIRED_DEVTOOLSET: pre-cxx11
steps:
@ -932,7 +931,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -944,7 +942,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -958,9 +956,9 @@ jobs:
run: |
echo "GPU_FLAG=--device=/dev/mem --device=/dev/kfd --device=/dev/dri --group-add video --group-add daemon" >> "${GITHUB_ENV}"
- name: Pull Docker image
uses: pytorch/test-infra/.github/actions/pull-docker-image@main
uses: pytorch/test-infra/.github/actions/pull-docker-image@release/2.1
with:
docker-image: pytorch/manylinux-builder:rocm5.5
docker-image: pytorch/manylinux-builder:rocm5.5-2.1
- name: Test Pytorch binary
uses: ./pytorch/.github/actions/test-pytorch-binary
- name: Teardown ROCm
@ -977,7 +975,7 @@ jobs:
DESIRED_CUDA: rocm5.5
GPU_ARCH_VERSION: 5.5
GPU_ARCH_TYPE: rocm
DOCKER_IMAGE: pytorch/manylinux-builder:rocm5.5
DOCKER_IMAGE: pytorch/manylinux-builder:rocm5.5-2.1
LIBTORCH_VARIANT: static-with-deps
DESIRED_DEVTOOLSET: pre-cxx11
build_name: libtorch-rocm5_5-static-with-deps-pre-cxx11
@ -985,7 +983,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
libtorch-rocm5_6-shared-with-deps-pre-cxx11-build:
@ -1000,7 +998,7 @@ jobs:
DESIRED_CUDA: rocm5.6
GPU_ARCH_VERSION: 5.6
GPU_ARCH_TYPE: rocm
DOCKER_IMAGE: pytorch/manylinux-builder:rocm5.6
DOCKER_IMAGE: pytorch/manylinux-builder:rocm5.6-2.1
LIBTORCH_VARIANT: shared-with-deps
DESIRED_DEVTOOLSET: pre-cxx11
build_name: libtorch-rocm5_6-shared-with-deps-pre-cxx11
@ -1022,7 +1020,7 @@ jobs:
GPU_ARCH_VERSION: 5.6
GPU_ARCH_TYPE: rocm
SKIP_ALL_TESTS: 1
DOCKER_IMAGE: pytorch/manylinux-builder:rocm5.6
DOCKER_IMAGE: pytorch/manylinux-builder:rocm5.6-2.1
LIBTORCH_VARIANT: shared-with-deps
DESIRED_DEVTOOLSET: pre-cxx11
steps:
@ -1036,7 +1034,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -1048,7 +1045,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -1062,9 +1059,9 @@ jobs:
run: |
echo "GPU_FLAG=--device=/dev/mem --device=/dev/kfd --device=/dev/dri --group-add video --group-add daemon" >> "${GITHUB_ENV}"
- name: Pull Docker image
uses: pytorch/test-infra/.github/actions/pull-docker-image@main
uses: pytorch/test-infra/.github/actions/pull-docker-image@release/2.1
with:
docker-image: pytorch/manylinux-builder:rocm5.6
docker-image: pytorch/manylinux-builder:rocm5.6-2.1
- name: Test Pytorch binary
uses: ./pytorch/.github/actions/test-pytorch-binary
- name: Teardown ROCm
@ -1081,7 +1078,7 @@ jobs:
DESIRED_CUDA: rocm5.6
GPU_ARCH_VERSION: 5.6
GPU_ARCH_TYPE: rocm
DOCKER_IMAGE: pytorch/manylinux-builder:rocm5.6
DOCKER_IMAGE: pytorch/manylinux-builder:rocm5.6-2.1
LIBTORCH_VARIANT: shared-with-deps
DESIRED_DEVTOOLSET: pre-cxx11
build_name: libtorch-rocm5_6-shared-with-deps-pre-cxx11
@ -1089,7 +1086,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
libtorch-rocm5_6-static-with-deps-pre-cxx11-build:
@ -1104,7 +1101,7 @@ jobs:
DESIRED_CUDA: rocm5.6
GPU_ARCH_VERSION: 5.6
GPU_ARCH_TYPE: rocm
DOCKER_IMAGE: pytorch/manylinux-builder:rocm5.6
DOCKER_IMAGE: pytorch/manylinux-builder:rocm5.6-2.1
LIBTORCH_VARIANT: static-with-deps
DESIRED_DEVTOOLSET: pre-cxx11
build_name: libtorch-rocm5_6-static-with-deps-pre-cxx11
@ -1126,7 +1123,7 @@ jobs:
GPU_ARCH_VERSION: 5.6
GPU_ARCH_TYPE: rocm
SKIP_ALL_TESTS: 1
DOCKER_IMAGE: pytorch/manylinux-builder:rocm5.6
DOCKER_IMAGE: pytorch/manylinux-builder:rocm5.6-2.1
LIBTORCH_VARIANT: static-with-deps
DESIRED_DEVTOOLSET: pre-cxx11
steps:
@ -1140,7 +1137,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -1152,7 +1148,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -1166,9 +1162,9 @@ jobs:
run: |
echo "GPU_FLAG=--device=/dev/mem --device=/dev/kfd --device=/dev/dri --group-add video --group-add daemon" >> "${GITHUB_ENV}"
- name: Pull Docker image
uses: pytorch/test-infra/.github/actions/pull-docker-image@main
uses: pytorch/test-infra/.github/actions/pull-docker-image@release/2.1
with:
docker-image: pytorch/manylinux-builder:rocm5.6
docker-image: pytorch/manylinux-builder:rocm5.6-2.1
- name: Test Pytorch binary
uses: ./pytorch/.github/actions/test-pytorch-binary
- name: Teardown ROCm
@ -1185,7 +1181,7 @@ jobs:
DESIRED_CUDA: rocm5.6
GPU_ARCH_VERSION: 5.6
GPU_ARCH_TYPE: rocm
DOCKER_IMAGE: pytorch/manylinux-builder:rocm5.6
DOCKER_IMAGE: pytorch/manylinux-builder:rocm5.6-2.1
LIBTORCH_VARIANT: static-with-deps
DESIRED_DEVTOOLSET: pre-cxx11
build_name: libtorch-rocm5_6-static-with-deps-pre-cxx11
@ -1193,5 +1189,5 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
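The pattern repeated throughout the file above is that every builder image reference gains a -2.1 suffix (pytorch/manylinux-builder:cuda11.8-2.1, :cuda12.1-2.1, :rocm5.5-2.1, and so on) and every pytorch/test-infra action is taken from its release/2.1 branch instead of main. A minimal sketch of the pinned ROCm test steps after the change follows; the action reference and image tag are taken from the hunks above, and the surrounding step list is abbreviated.

# Sketch of the release-pinned docker pull in a ROCm test job (abbreviated;
# the real job also sets GPU device flags and environment beforehand).
- name: Pull Docker image
  # Composite action pinned to test-infra's release/2.1 branch rather than
  # the floating main branch.
  uses: pytorch/test-infra/.github/actions/pull-docker-image@release/2.1
  with:
    # The builder image tag now carries the release suffix as well.
    docker-image: pytorch/manylinux-builder:rocm5.5-2.1
- name: Test Pytorch binary
  uses: ./pytorch/.github/actions/test-pytorch-binary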


@ -43,7 +43,7 @@ jobs:
DESIRED_CUDA: cu118
GPU_ARCH_VERSION: 11.8
GPU_ARCH_TYPE: cuda
DOCKER_IMAGE: pytorch/manylinux-builder:cuda11.8
DOCKER_IMAGE: pytorch/manylinux-builder:cuda11.8-2.1
DESIRED_PYTHON: "3.8"
build_name: manywheel-py3_8-cuda11_8
build_environment: linux-binary-manywheel
@ -62,7 +62,7 @@ jobs:
DESIRED_CUDA: cu118
GPU_ARCH_VERSION: 11.8
GPU_ARCH_TYPE: cuda
DOCKER_IMAGE: pytorch/manylinux-builder:cuda11.8
DOCKER_IMAGE: pytorch/manylinux-builder:cuda11.8-2.1
DESIRED_PYTHON: "3.8"
build_name: manywheel-py3_8-cuda11_8
build_environment: linux-binary-manywheel
@ -82,11 +82,11 @@ jobs:
DESIRED_CUDA: cu121
GPU_ARCH_VERSION: 12.1
GPU_ARCH_TYPE: cuda
DOCKER_IMAGE: pytorch/manylinux-builder:cuda12.1
DOCKER_IMAGE: pytorch/manylinux-builder:cuda12.1-2.1
DESIRED_PYTHON: "3.8"
build_name: manywheel-py3_8-cuda12_1-with-pypi-cudnn
build_environment: linux-binary-manywheel
PYTORCH_EXTRA_INSTALL_REQUIREMENTS: nvidia-cuda-nvrtc-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cuda-runtime-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cuda-cupti-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cudnn-cu12==8.9.2.26; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cublas-cu12==12.1.3.1; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cufft-cu12==11.0.2.54; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-curand-cu12==10.3.2.106; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusolver-cu12==11.4.5.107; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusparse-cu12==12.1.0.106; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nccl-cu12==2.18.1; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nvtx-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64'
PYTORCH_EXTRA_INSTALL_REQUIREMENTS: nvidia-cuda-nvrtc-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cuda-runtime-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cuda-cupti-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cudnn-cu12==8.9.2.26; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cublas-cu12==12.1.3.1; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cufft-cu12==11.0.2.54; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-curand-cu12==10.3.2.106; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusolver-cu12==11.4.5.107; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusparse-cu12==12.1.0.106; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nccl-cu12==2.18.1; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nvtx-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64' | triton==2.1.0; platform_system == 'Linux' and platform_machine == 'x86_64'
secrets:
github-token: ${{ secrets.GITHUB_TOKEN }}
manywheel-py3_8-cuda12_1-with-pypi-cudnn-test: # Testing
@ -102,7 +102,7 @@ jobs:
DESIRED_CUDA: cu121
GPU_ARCH_VERSION: 12.1
GPU_ARCH_TYPE: cuda
DOCKER_IMAGE: pytorch/manylinux-builder:cuda12.1
DOCKER_IMAGE: pytorch/manylinux-builder:cuda12.1-2.1
DESIRED_PYTHON: "3.8"
build_name: manywheel-py3_8-cuda12_1-with-pypi-cudnn
build_environment: linux-binary-manywheel
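Besides the image tag, the only change to this job is that triton==2.1.0 is appended to PYTORCH_EXTRA_INSTALL_REQUIREMENTS. The value is one long string of PEP 508 requirements joined with " | ", each guarded by platform_system/platform_machine markers so the extras only apply to Linux x86_64 wheels. A trimmed, illustrative form of the pattern is below; the real value stays on a single line and also pins the remaining nvidia-* packages listed above.

# Illustrative, trimmed form of the extra-requirements string; shown here as
# a bare key, whereas in the workflow it sits among the job's other inputs.
PYTORCH_EXTRA_INSTALL_REQUIREMENTS: >-
  nvidia-cudnn-cu12==8.9.2.26; platform_system == 'Linux' and platform_machine == 'x86_64' |
  nvidia-nccl-cu12==2.18.1; platform_system == 'Linux' and platform_machine == 'x86_64' |
  triton==2.1.0; platform_system == 'Linux' and platform_machine == 'x86_64'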

File diff suppressed because it is too large.


@ -75,7 +75,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -87,7 +86,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -136,7 +135,7 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu
DOCKER_IMAGE: pytorch/conda-builder:cpu
DOCKER_IMAGE: pytorch/conda-builder:cpu-2.1
DESIRED_PYTHON: "3.8"
build_name: conda-py3_8-cpu
use_s3: False
@ -144,7 +143,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
conda-py3_9-cpu-build:
if: ${{ github.repository_owner == 'pytorch' }}
@ -187,7 +186,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -199,7 +197,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -248,7 +246,7 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu
DOCKER_IMAGE: pytorch/conda-builder:cpu
DOCKER_IMAGE: pytorch/conda-builder:cpu-2.1
DESIRED_PYTHON: "3.9"
build_name: conda-py3_9-cpu
use_s3: False
@ -256,7 +254,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
conda-py3_10-cpu-build:
if: ${{ github.repository_owner == 'pytorch' }}
@ -299,7 +297,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -311,7 +308,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -360,7 +357,7 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu
DOCKER_IMAGE: pytorch/conda-builder:cpu
DOCKER_IMAGE: pytorch/conda-builder:cpu-2.1
DESIRED_PYTHON: "3.10"
build_name: conda-py3_10-cpu
use_s3: False
@ -368,7 +365,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
conda-py3_11-cpu-build:
if: ${{ github.repository_owner == 'pytorch' }}
@ -411,7 +408,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -423,7 +419,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -472,7 +468,7 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu
DOCKER_IMAGE: pytorch/conda-builder:cpu
DOCKER_IMAGE: pytorch/conda-builder:cpu-2.1
DESIRED_PYTHON: "3.11"
build_name: conda-py3_11-cpu
use_s3: False
@ -480,5 +476,5 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
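Two changes repeat in every job of this file: the explicit ref on the PyTorch checkout is removed, so malfet/checkout falls back to the commit the workflow run was triggered for, and the pytorch/builder checkout is pinned to release/2.1 instead of main, so the 2.1 binaries are always built with the 2.1 build scripts. A sketch of the resulting pair of steps, with the other inputs unchanged from the hunks above:

# After the change: no explicit ref, so the action checks out the SHA that
# triggered the run (the release-branch commit) by default.
- name: Checkout PyTorch
  uses: malfet/checkout@silent-checkout
  with:
    submodules: recursive
    path: pytorch
    quiet-checkout: true
# The builder repository is pinned to its matching release branch.
- name: Checkout pytorch/builder
  uses: malfet/checkout@silent-checkout
  with:
    ref: release/2.1
    submodules: recursive
    repository: pytorch/builder
    path: builder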


@ -48,6 +48,7 @@ jobs:
GPU_ARCH_TYPE: cpu
SKIP_ALL_TESTS: 1
DESIRED_PYTHON: "3.8"
PYTORCH_EXTRA_INSTALL_REQUIREMENTS: nvidia-cuda-nvrtc-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cuda-runtime-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cuda-cupti-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cudnn-cu12==8.9.2.26; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cublas-cu12==12.1.3.1; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cufft-cu12==11.0.2.54; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-curand-cu12==10.3.2.106; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusolver-cu12==11.4.5.107; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusparse-cu12==12.1.0.106; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nccl-cu12==2.18.1; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nvtx-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64' | triton==2.1.0; platform_system == 'Linux' and platform_machine == 'x86_64'
# For sccache access (only on non-forked PRs)
AWS_ACCESS_KEY_ID: ${{ secrets.MACOS_SCCACHE_S3_ACCESS_KEY_ID }}
AWS_SECRET_ACCESS_KEY: ${{ secrets.MACOS_SCCACHE_S3_SECRET_ACCESS_KEY }}
@ -75,7 +76,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -87,7 +87,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -136,7 +136,7 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu
DOCKER_IMAGE: pytorch/manylinux-builder:cpu
DOCKER_IMAGE: pytorch/manylinux-builder:cpu-2.1
DESIRED_PYTHON: "3.8"
build_name: wheel-py3_8-cpu
use_s3: False
@ -144,7 +144,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
wheel-py3_9-cpu-build:
if: ${{ github.repository_owner == 'pytorch' }}
@ -160,6 +160,7 @@ jobs:
GPU_ARCH_TYPE: cpu
SKIP_ALL_TESTS: 1
DESIRED_PYTHON: "3.9"
PYTORCH_EXTRA_INSTALL_REQUIREMENTS: nvidia-cuda-nvrtc-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cuda-runtime-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cuda-cupti-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cudnn-cu12==8.9.2.26; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cublas-cu12==12.1.3.1; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cufft-cu12==11.0.2.54; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-curand-cu12==10.3.2.106; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusolver-cu12==11.4.5.107; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusparse-cu12==12.1.0.106; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nccl-cu12==2.18.1; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nvtx-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64' | triton==2.1.0; platform_system == 'Linux' and platform_machine == 'x86_64'
# For sccache access (only on non-forked PRs)
AWS_ACCESS_KEY_ID: ${{ secrets.MACOS_SCCACHE_S3_ACCESS_KEY_ID }}
AWS_SECRET_ACCESS_KEY: ${{ secrets.MACOS_SCCACHE_S3_SECRET_ACCESS_KEY }}
@ -187,7 +188,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -199,7 +199,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -248,7 +248,7 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu
DOCKER_IMAGE: pytorch/manylinux-builder:cpu
DOCKER_IMAGE: pytorch/manylinux-builder:cpu-2.1
DESIRED_PYTHON: "3.9"
build_name: wheel-py3_9-cpu
use_s3: False
@ -256,7 +256,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
wheel-py3_10-cpu-build:
if: ${{ github.repository_owner == 'pytorch' }}
@ -272,6 +272,7 @@ jobs:
GPU_ARCH_TYPE: cpu
SKIP_ALL_TESTS: 1
DESIRED_PYTHON: "3.10"
PYTORCH_EXTRA_INSTALL_REQUIREMENTS: nvidia-cuda-nvrtc-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cuda-runtime-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cuda-cupti-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cudnn-cu12==8.9.2.26; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cublas-cu12==12.1.3.1; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cufft-cu12==11.0.2.54; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-curand-cu12==10.3.2.106; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusolver-cu12==11.4.5.107; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusparse-cu12==12.1.0.106; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nccl-cu12==2.18.1; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nvtx-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64' | triton==2.1.0; platform_system == 'Linux' and platform_machine == 'x86_64'
# For sccache access (only on non-forked PRs)
AWS_ACCESS_KEY_ID: ${{ secrets.MACOS_SCCACHE_S3_ACCESS_KEY_ID }}
AWS_SECRET_ACCESS_KEY: ${{ secrets.MACOS_SCCACHE_S3_SECRET_ACCESS_KEY }}
@ -299,7 +300,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -311,7 +311,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -360,7 +360,7 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu
DOCKER_IMAGE: pytorch/manylinux-builder:cpu
DOCKER_IMAGE: pytorch/manylinux-builder:cpu-2.1
DESIRED_PYTHON: "3.10"
build_name: wheel-py3_10-cpu
use_s3: False
@ -368,7 +368,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
wheel-py3_11-cpu-build:
if: ${{ github.repository_owner == 'pytorch' }}
@ -384,6 +384,7 @@ jobs:
GPU_ARCH_TYPE: cpu
SKIP_ALL_TESTS: 1
DESIRED_PYTHON: "3.11"
PYTORCH_EXTRA_INSTALL_REQUIREMENTS: nvidia-cuda-nvrtc-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cuda-runtime-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cuda-cupti-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cudnn-cu12==8.9.2.26; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cublas-cu12==12.1.3.1; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cufft-cu12==11.0.2.54; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-curand-cu12==10.3.2.106; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusolver-cu12==11.4.5.107; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusparse-cu12==12.1.0.106; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nccl-cu12==2.18.1; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nvtx-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64' | triton==2.1.0; platform_system == 'Linux' and platform_machine == 'x86_64'
# For sccache access (only on non-forked PRs)
AWS_ACCESS_KEY_ID: ${{ secrets.MACOS_SCCACHE_S3_ACCESS_KEY_ID }}
AWS_SECRET_ACCESS_KEY: ${{ secrets.MACOS_SCCACHE_S3_SECRET_ACCESS_KEY }}
@ -411,7 +412,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -423,7 +423,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -472,7 +472,7 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu
DOCKER_IMAGE: pytorch/manylinux-builder:cpu
DOCKER_IMAGE: pytorch/manylinux-builder:cpu-2.1
DESIRED_PYTHON: "3.11"
build_name: wheel-py3_11-cpu
use_s3: False
@ -480,5 +480,5 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
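Every upload job in these workflows also switches its conda token from CONDA_PYTORCHBOT_TOKEN to CONDA_PYTORCHBOT_TOKEN_TEST, presumably so that release-candidate packages are uploaded with the test-channel credentials rather than the production ones. A hedged sketch of an upload job after the change; the job name is illustrative, the with: inputs are abbreviated, and only the values shown in the wheel-py3_11-cpu hunks above are taken from the diff.

wheel-py3_11-cpu-upload:   # illustrative job name, other inputs omitted
  uses: ./.github/workflows/_binary-upload.yml
  with:
    build_name: wheel-py3_11-cpu
    use_s3: False
  secrets:
    github-token: ${{ secrets.GITHUB_TOKEN }}
    aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
    aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
    # Release branches upload with the test-channel token:
    conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}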


@ -73,7 +73,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -85,7 +84,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -134,7 +133,7 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu
DOCKER_IMAGE: pytorch/conda-builder:cpu
DOCKER_IMAGE: pytorch/conda-builder:cpu-2.1
DESIRED_PYTHON: "3.8"
build_name: conda-py3_8-cpu
use_s3: False
@ -142,7 +141,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
conda-py3_9-cpu-build:
if: ${{ github.repository_owner == 'pytorch' }}
@ -185,7 +184,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -197,7 +195,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -246,7 +244,7 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu
DOCKER_IMAGE: pytorch/conda-builder:cpu
DOCKER_IMAGE: pytorch/conda-builder:cpu-2.1
DESIRED_PYTHON: "3.9"
build_name: conda-py3_9-cpu
use_s3: False
@ -254,7 +252,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
conda-py3_10-cpu-build:
if: ${{ github.repository_owner == 'pytorch' }}
@ -297,7 +295,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -309,7 +306,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -358,7 +355,7 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu
DOCKER_IMAGE: pytorch/conda-builder:cpu
DOCKER_IMAGE: pytorch/conda-builder:cpu-2.1
DESIRED_PYTHON: "3.10"
build_name: conda-py3_10-cpu
use_s3: False
@ -366,7 +363,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
conda-py3_11-cpu-build:
if: ${{ github.repository_owner == 'pytorch' }}
@ -409,7 +406,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -421,7 +417,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -470,7 +466,7 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu
DOCKER_IMAGE: pytorch/conda-builder:cpu
DOCKER_IMAGE: pytorch/conda-builder:cpu-2.1
DESIRED_PYTHON: "3.11"
build_name: conda-py3_11-cpu
use_s3: False
@ -478,5 +474,5 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml


@ -77,7 +77,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -89,7 +88,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -138,7 +137,7 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu
DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:cpu
DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:cpu-2.1
LIBTORCH_VARIANT: shared-with-deps
DESIRED_DEVTOOLSET: cxx11-abi
build_name: libtorch-cpu-shared-with-deps-cxx11-abi
@ -147,7 +146,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
libtorch-cpu-shared-without-deps-cxx11-abi-build:
if: ${{ github.repository_owner == 'pytorch' }}
@ -194,7 +193,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -206,7 +204,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -255,7 +253,7 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu
DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:cpu
DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:cpu-2.1
LIBTORCH_VARIANT: shared-without-deps
DESIRED_DEVTOOLSET: cxx11-abi
build_name: libtorch-cpu-shared-without-deps-cxx11-abi
@ -264,7 +262,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
libtorch-cpu-static-with-deps-cxx11-abi-build:
if: ${{ github.repository_owner == 'pytorch' }}
@ -311,7 +309,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -323,7 +320,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -372,7 +369,7 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu
DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:cpu
DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:cpu-2.1
LIBTORCH_VARIANT: static-with-deps
DESIRED_DEVTOOLSET: cxx11-abi
build_name: libtorch-cpu-static-with-deps-cxx11-abi
@ -381,7 +378,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
libtorch-cpu-static-without-deps-cxx11-abi-build:
if: ${{ github.repository_owner == 'pytorch' }}
@ -428,7 +425,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -440,7 +436,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -489,7 +485,7 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu
DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:cpu
DOCKER_IMAGE: pytorch/libtorch-cxx11-builder:cpu-2.1
LIBTORCH_VARIANT: static-without-deps
DESIRED_DEVTOOLSET: cxx11-abi
build_name: libtorch-cpu-static-without-deps-cxx11-abi
@ -498,5 +494,5 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
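The same release pinning applies to the libtorch cxx11-abi jobs above, whose builder image is now pytorch/libtorch-cxx11-builder:cpu-2.1. One practical consequence is that the exact 2.1 build environment can be pulled by tag; a small, hypothetical workflow step illustrating that (the step itself is not part of the diff):

# Hypothetical step: pull the exact builder image the 2.1 libtorch
# cxx11-abi jobs use, e.g. to reproduce a build locally or in CI.
- name: Pull the pinned libtorch builder image
  run: |
    docker pull pytorch/libtorch-cxx11-builder:cpu-2.1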


@ -46,6 +46,7 @@ jobs:
GPU_ARCH_TYPE: cpu
SKIP_ALL_TESTS: 1
DESIRED_PYTHON: "3.8"
PYTORCH_EXTRA_INSTALL_REQUIREMENTS: nvidia-cuda-nvrtc-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cuda-runtime-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cuda-cupti-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cudnn-cu12==8.9.2.26; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cublas-cu12==12.1.3.1; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cufft-cu12==11.0.2.54; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-curand-cu12==10.3.2.106; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusolver-cu12==11.4.5.107; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusparse-cu12==12.1.0.106; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nccl-cu12==2.18.1; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nvtx-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64' | triton==2.1.0; platform_system == 'Linux' and platform_machine == 'x86_64'
# For sccache access (only on non-forked PRs)
AWS_ACCESS_KEY_ID: ${{ secrets.MACOS_SCCACHE_S3_ACCESS_KEY_ID }}
AWS_SECRET_ACCESS_KEY: ${{ secrets.MACOS_SCCACHE_S3_SECRET_ACCESS_KEY }}
@ -73,7 +74,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -85,7 +85,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -134,7 +134,7 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu
DOCKER_IMAGE: pytorch/manylinux-builder:cpu
DOCKER_IMAGE: pytorch/manylinux-builder:cpu-2.1
DESIRED_PYTHON: "3.8"
build_name: wheel-py3_8-cpu
use_s3: False
@ -142,7 +142,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
wheel-py3_9-cpu-build:
if: ${{ github.repository_owner == 'pytorch' }}
@ -158,6 +158,7 @@ jobs:
GPU_ARCH_TYPE: cpu
SKIP_ALL_TESTS: 1
DESIRED_PYTHON: "3.9"
PYTORCH_EXTRA_INSTALL_REQUIREMENTS: nvidia-cuda-nvrtc-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cuda-runtime-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cuda-cupti-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cudnn-cu12==8.9.2.26; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cublas-cu12==12.1.3.1; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cufft-cu12==11.0.2.54; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-curand-cu12==10.3.2.106; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusolver-cu12==11.4.5.107; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusparse-cu12==12.1.0.106; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nccl-cu12==2.18.1; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nvtx-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64' | triton==2.1.0; platform_system == 'Linux' and platform_machine == 'x86_64'
# For sccache access (only on non-forked PRs)
AWS_ACCESS_KEY_ID: ${{ secrets.MACOS_SCCACHE_S3_ACCESS_KEY_ID }}
AWS_SECRET_ACCESS_KEY: ${{ secrets.MACOS_SCCACHE_S3_SECRET_ACCESS_KEY }}
@ -185,7 +186,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -197,7 +197,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -246,7 +246,7 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu
DOCKER_IMAGE: pytorch/manylinux-builder:cpu
DOCKER_IMAGE: pytorch/manylinux-builder:cpu-2.1
DESIRED_PYTHON: "3.9"
build_name: wheel-py3_9-cpu
use_s3: False
@ -254,7 +254,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
wheel-py3_10-cpu-build:
if: ${{ github.repository_owner == 'pytorch' }}
@ -270,6 +270,7 @@ jobs:
GPU_ARCH_TYPE: cpu
SKIP_ALL_TESTS: 1
DESIRED_PYTHON: "3.10"
PYTORCH_EXTRA_INSTALL_REQUIREMENTS: nvidia-cuda-nvrtc-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cuda-runtime-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cuda-cupti-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cudnn-cu12==8.9.2.26; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cublas-cu12==12.1.3.1; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cufft-cu12==11.0.2.54; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-curand-cu12==10.3.2.106; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusolver-cu12==11.4.5.107; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusparse-cu12==12.1.0.106; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nccl-cu12==2.18.1; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nvtx-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64' | triton==2.1.0; platform_system == 'Linux' and platform_machine == 'x86_64'
# For sccache access (only on non-forked PRs)
AWS_ACCESS_KEY_ID: ${{ secrets.MACOS_SCCACHE_S3_ACCESS_KEY_ID }}
AWS_SECRET_ACCESS_KEY: ${{ secrets.MACOS_SCCACHE_S3_SECRET_ACCESS_KEY }}
@ -297,7 +298,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -309,7 +309,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -358,7 +358,7 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu
DOCKER_IMAGE: pytorch/manylinux-builder:cpu
DOCKER_IMAGE: pytorch/manylinux-builder:cpu-2.1
DESIRED_PYTHON: "3.10"
build_name: wheel-py3_10-cpu
use_s3: False
@ -366,7 +366,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
wheel-py3_11-cpu-build:
if: ${{ github.repository_owner == 'pytorch' }}
@ -382,6 +382,7 @@ jobs:
GPU_ARCH_TYPE: cpu
SKIP_ALL_TESTS: 1
DESIRED_PYTHON: "3.11"
PYTORCH_EXTRA_INSTALL_REQUIREMENTS: nvidia-cuda-nvrtc-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cuda-runtime-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cuda-cupti-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cudnn-cu12==8.9.2.26; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cublas-cu12==12.1.3.1; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cufft-cu12==11.0.2.54; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-curand-cu12==10.3.2.106; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusolver-cu12==11.4.5.107; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusparse-cu12==12.1.0.106; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nccl-cu12==2.18.1; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nvtx-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64' | triton==2.1.0; platform_system == 'Linux' and platform_machine == 'x86_64'
# For sccache access (only on non-forked PRs)
AWS_ACCESS_KEY_ID: ${{ secrets.MACOS_SCCACHE_S3_ACCESS_KEY_ID }}
AWS_SECRET_ACCESS_KEY: ${{ secrets.MACOS_SCCACHE_S3_SECRET_ACCESS_KEY }}
@ -409,7 +410,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -421,7 +421,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -470,7 +470,7 @@ jobs:
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu
DOCKER_IMAGE: pytorch/manylinux-builder:cpu
DOCKER_IMAGE: pytorch/manylinux-builder:cpu-2.1
DESIRED_PYTHON: "3.11"
build_name: wheel-py3_11-cpu
use_s3: False
@ -478,5 +478,5 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml


@ -63,6 +63,7 @@ jobs:
echo "system info $(uname -a)"
- name: "[FB EMPLOYEES] Enable SSH (Click me for login details)"
uses: seemethere/add-github-ssh-key@v1
continue-on-error: true
with:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
# Needed for binary builds, see: https://github.com/pytorch/pytorch/issues/73339#issuecomment-1058981560
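In this file the "[FB EMPLOYEES] Enable SSH" step additionally gains continue-on-error: true, presumably so that a failure in the optional SSH-debugging helper does not fail the binary build itself. The step after the change, reproduced from the hunk above:

- name: "[FB EMPLOYEES] Enable SSH (Click me for login details)"
  uses: seemethere/add-github-ssh-key@v1
  # Newly added: the SSH helper is best-effort and must not fail the job.
  continue-on-error: true
  with:
    GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}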
@ -92,7 +93,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -104,7 +104,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -174,6 +174,7 @@ jobs:
echo "system info $(uname -a)"
- name: "[FB EMPLOYEES] Enable SSH (Click me for login details)"
uses: seemethere/add-github-ssh-key@v1
continue-on-error: true
with:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
# Needed for binary builds, see: https://github.com/pytorch/pytorch/issues/73339#issuecomment-1058981560
@ -208,7 +209,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -220,7 +220,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -268,7 +268,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
conda-py3_8-cuda11_8-build:
if: ${{ github.repository_owner == 'pytorch' }}
@ -302,6 +302,7 @@ jobs:
echo "system info $(uname -a)"
- name: "[FB EMPLOYEES] Enable SSH (Click me for login details)"
uses: seemethere/add-github-ssh-key@v1
continue-on-error: true
with:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
# Needed for binary builds, see: https://github.com/pytorch/pytorch/issues/73339#issuecomment-1058981560
@ -331,7 +332,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -343,7 +343,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -414,6 +414,7 @@ jobs:
echo "system info $(uname -a)"
- name: "[FB EMPLOYEES] Enable SSH (Click me for login details)"
uses: seemethere/add-github-ssh-key@v1
continue-on-error: true
with:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
# Needed for binary builds, see: https://github.com/pytorch/pytorch/issues/73339#issuecomment-1058981560
@ -448,7 +449,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -460,7 +460,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -509,7 +509,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
conda-py3_8-cuda12_1-build:
if: ${{ github.repository_owner == 'pytorch' }}
@ -543,6 +543,7 @@ jobs:
echo "system info $(uname -a)"
- name: "[FB EMPLOYEES] Enable SSH (Click me for login details)"
uses: seemethere/add-github-ssh-key@v1
continue-on-error: true
with:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
# Needed for binary builds, see: https://github.com/pytorch/pytorch/issues/73339#issuecomment-1058981560
@ -572,7 +573,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -584,7 +584,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -655,6 +655,7 @@ jobs:
echo "system info $(uname -a)"
- name: "[FB EMPLOYEES] Enable SSH (Click me for login details)"
uses: seemethere/add-github-ssh-key@v1
continue-on-error: true
with:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
# Needed for binary builds, see: https://github.com/pytorch/pytorch/issues/73339#issuecomment-1058981560
@ -689,7 +690,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -701,7 +701,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -750,7 +750,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
conda-py3_9-cpu-build:
if: ${{ github.repository_owner == 'pytorch' }}
@ -783,6 +783,7 @@ jobs:
echo "system info $(uname -a)"
- name: "[FB EMPLOYEES] Enable SSH (Click me for login details)"
uses: seemethere/add-github-ssh-key@v1
continue-on-error: true
with:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
# Needed for binary builds, see: https://github.com/pytorch/pytorch/issues/73339#issuecomment-1058981560
@ -812,7 +813,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -824,7 +824,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -894,6 +894,7 @@ jobs:
echo "system info $(uname -a)"
- name: "[FB EMPLOYEES] Enable SSH (Click me for login details)"
uses: seemethere/add-github-ssh-key@v1
continue-on-error: true
with:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
# Needed for binary builds, see: https://github.com/pytorch/pytorch/issues/73339#issuecomment-1058981560
@ -928,7 +929,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -940,7 +940,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -988,7 +988,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
conda-py3_9-cuda11_8-build:
if: ${{ github.repository_owner == 'pytorch' }}
@ -1022,6 +1022,7 @@ jobs:
echo "system info $(uname -a)"
- name: "[FB EMPLOYEES] Enable SSH (Click me for login details)"
uses: seemethere/add-github-ssh-key@v1
continue-on-error: true
with:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
# Needed for binary builds, see: https://github.com/pytorch/pytorch/issues/73339#issuecomment-1058981560
@ -1051,7 +1052,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -1063,7 +1063,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -1134,6 +1134,7 @@ jobs:
echo "system info $(uname -a)"
- name: "[FB EMPLOYEES] Enable SSH (Click me for login details)"
uses: seemethere/add-github-ssh-key@v1
continue-on-error: true
with:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
# Needed for binary builds, see: https://github.com/pytorch/pytorch/issues/73339#issuecomment-1058981560
@ -1168,7 +1169,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -1180,7 +1180,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -1229,7 +1229,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
conda-py3_9-cuda12_1-build:
if: ${{ github.repository_owner == 'pytorch' }}
@ -1263,6 +1263,7 @@ jobs:
echo "system info $(uname -a)"
- name: "[FB EMPLOYEES] Enable SSH (Click me for login details)"
uses: seemethere/add-github-ssh-key@v1
continue-on-error: true
with:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
# Needed for binary builds, see: https://github.com/pytorch/pytorch/issues/73339#issuecomment-1058981560
@ -1292,7 +1293,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -1304,7 +1304,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -1375,6 +1375,7 @@ jobs:
echo "system info $(uname -a)"
- name: "[FB EMPLOYEES] Enable SSH (Click me for login details)"
uses: seemethere/add-github-ssh-key@v1
continue-on-error: true
with:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
# Needed for binary builds, see: https://github.com/pytorch/pytorch/issues/73339#issuecomment-1058981560
@ -1409,7 +1410,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -1421,7 +1421,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -1470,7 +1470,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
conda-py3_10-cpu-build:
if: ${{ github.repository_owner == 'pytorch' }}
@ -1503,6 +1503,7 @@ jobs:
echo "system info $(uname -a)"
- name: "[FB EMPLOYEES] Enable SSH (Click me for login details)"
uses: seemethere/add-github-ssh-key@v1
continue-on-error: true
with:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
# Needed for binary builds, see: https://github.com/pytorch/pytorch/issues/73339#issuecomment-1058981560
@ -1532,7 +1533,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -1544,7 +1544,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -1614,6 +1614,7 @@ jobs:
echo "system info $(uname -a)"
- name: "[FB EMPLOYEES] Enable SSH (Click me for login details)"
uses: seemethere/add-github-ssh-key@v1
continue-on-error: true
with:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
# Needed for binary builds, see: https://github.com/pytorch/pytorch/issues/73339#issuecomment-1058981560
@ -1648,7 +1649,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -1660,7 +1660,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -1708,7 +1708,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
conda-py3_10-cuda11_8-build:
if: ${{ github.repository_owner == 'pytorch' }}
@ -1742,6 +1742,7 @@ jobs:
echo "system info $(uname -a)"
- name: "[FB EMPLOYEES] Enable SSH (Click me for login details)"
uses: seemethere/add-github-ssh-key@v1
continue-on-error: true
with:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
# Needed for binary builds, see: https://github.com/pytorch/pytorch/issues/73339#issuecomment-1058981560
@ -1771,7 +1772,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -1783,7 +1783,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -1854,6 +1854,7 @@ jobs:
echo "system info $(uname -a)"
- name: "[FB EMPLOYEES] Enable SSH (Click me for login details)"
uses: seemethere/add-github-ssh-key@v1
continue-on-error: true
with:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
# Needed for binary builds, see: https://github.com/pytorch/pytorch/issues/73339#issuecomment-1058981560
@ -1888,7 +1889,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -1900,7 +1900,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -1949,7 +1949,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
conda-py3_10-cuda12_1-build:
if: ${{ github.repository_owner == 'pytorch' }}
@ -1983,6 +1983,7 @@ jobs:
echo "system info $(uname -a)"
- name: "[FB EMPLOYEES] Enable SSH (Click me for login details)"
uses: seemethere/add-github-ssh-key@v1
continue-on-error: true
with:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
# Needed for binary builds, see: https://github.com/pytorch/pytorch/issues/73339#issuecomment-1058981560
@ -2012,7 +2013,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -2024,7 +2024,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -2095,6 +2095,7 @@ jobs:
echo "system info $(uname -a)"
- name: "[FB EMPLOYEES] Enable SSH (Click me for login details)"
uses: seemethere/add-github-ssh-key@v1
continue-on-error: true
with:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
# Needed for binary builds, see: https://github.com/pytorch/pytorch/issues/73339#issuecomment-1058981560
@ -2129,7 +2130,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -2141,7 +2141,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -2190,7 +2190,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
conda-py3_11-cpu-build:
if: ${{ github.repository_owner == 'pytorch' }}
@ -2223,6 +2223,7 @@ jobs:
echo "system info $(uname -a)"
- name: "[FB EMPLOYEES] Enable SSH (Click me for login details)"
uses: seemethere/add-github-ssh-key@v1
continue-on-error: true
with:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
# Needed for binary builds, see: https://github.com/pytorch/pytorch/issues/73339#issuecomment-1058981560
@ -2252,7 +2253,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -2264,7 +2264,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -2334,6 +2334,7 @@ jobs:
echo "system info $(uname -a)"
- name: "[FB EMPLOYEES] Enable SSH (Click me for login details)"
uses: seemethere/add-github-ssh-key@v1
continue-on-error: true
with:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
# Needed for binary builds, see: https://github.com/pytorch/pytorch/issues/73339#issuecomment-1058981560
@ -2368,7 +2369,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -2380,7 +2380,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -2428,7 +2428,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
conda-py3_11-cuda11_8-build:
if: ${{ github.repository_owner == 'pytorch' }}
@ -2462,6 +2462,7 @@ jobs:
echo "system info $(uname -a)"
- name: "[FB EMPLOYEES] Enable SSH (Click me for login details)"
uses: seemethere/add-github-ssh-key@v1
continue-on-error: true
with:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
# Needed for binary builds, see: https://github.com/pytorch/pytorch/issues/73339#issuecomment-1058981560
@ -2491,7 +2492,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -2503,7 +2503,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -2574,6 +2574,7 @@ jobs:
echo "system info $(uname -a)"
- name: "[FB EMPLOYEES] Enable SSH (Click me for login details)"
uses: seemethere/add-github-ssh-key@v1
continue-on-error: true
with:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
# Needed for binary builds, see: https://github.com/pytorch/pytorch/issues/73339#issuecomment-1058981560
@ -2608,7 +2609,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -2620,7 +2620,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -2669,7 +2669,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
conda-py3_11-cuda12_1-build:
if: ${{ github.repository_owner == 'pytorch' }}
@ -2703,6 +2703,7 @@ jobs:
echo "system info $(uname -a)"
- name: "[FB EMPLOYEES] Enable SSH (Click me for login details)"
uses: seemethere/add-github-ssh-key@v1
continue-on-error: true
with:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
# Needed for binary builds, see: https://github.com/pytorch/pytorch/issues/73339#issuecomment-1058981560
@ -2732,7 +2733,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -2744,7 +2744,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -2815,6 +2815,7 @@ jobs:
echo "system info $(uname -a)"
- name: "[FB EMPLOYEES] Enable SSH (Click me for login details)"
uses: seemethere/add-github-ssh-key@v1
continue-on-error: true
with:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
# Needed for binary builds, see: https://github.com/pytorch/pytorch/issues/73339#issuecomment-1058981560
@ -2849,7 +2850,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -2861,7 +2861,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -2910,5 +2910,5 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
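
Read together, the recurring edits in this conda binary-build workflow do two things: the pytorch/builder checkout is pinned to the release branch instead of main, and every *-upload job hands the reusable upload workflow the _TEST variant of the conda token. A minimal sketch of how the edited lines read once applied, assuming the surrounding generated job structure (job keys and untouched inputs trimmed):

    # Sketch only: values copied from the hunks above, enclosing job omitted.
    - name: Checkout pytorch/builder
      uses: malfet/checkout@silent-checkout
      with:
        ref: release/2.1              # previously: ref: main
        submodules: recursive
        repository: pytorch/builder
        path: builder

    # ...and in each *-upload job:
    #   conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}   # previously CONDA_PYTORCHBOT_TOKEN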


@ -60,6 +60,7 @@ jobs:
echo "system info $(uname -a)"
- name: "[FB EMPLOYEES] Enable SSH (Click me for login details)"
uses: seemethere/add-github-ssh-key@v1
continue-on-error: true
with:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
# Needed for binary builds, see: https://github.com/pytorch/pytorch/issues/73339#issuecomment-1058981560
@ -89,7 +90,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -101,7 +101,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -175,6 +175,7 @@ jobs:
echo "system info $(uname -a)"
- name: "[FB EMPLOYEES] Enable SSH (Click me for login details)"
uses: seemethere/add-github-ssh-key@v1
continue-on-error: true
with:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
# Needed for binary builds, see: https://github.com/pytorch/pytorch/issues/73339#issuecomment-1058981560
@ -209,7 +210,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -221,7 +221,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
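
The hunks above also show the other two recurring edits: the SSH debugging step gains continue-on-error, and the explicit ref: on the PyTorch checkout is dropped. A rough sketch of those two steps after the change, with the rest of the job trimmed (the note on continue-on-error is standard GitHub Actions step semantics, not something stated in the diff):

    - name: "[FB EMPLOYEES] Enable SSH (Click me for login details)"
      uses: seemethere/add-github-ssh-key@v1
      continue-on-error: true   # added: a failure of this step no longer fails the job
      with:
        GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}

    - name: Checkout PyTorch
      uses: malfet/checkout@silent-checkout
      with:
        # explicit ref: removed; the step now relies on the action's default checkout behavior
        submodules: recursive
        path: pytorch
        quiet-checkout: true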


@ -67,6 +67,7 @@ jobs:
echo "system info $(uname -a)"
- name: "[FB EMPLOYEES] Enable SSH (Click me for login details)"
uses: seemethere/add-github-ssh-key@v1
continue-on-error: true
with:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
# Needed for binary builds, see: https://github.com/pytorch/pytorch/issues/73339#issuecomment-1058981560
@ -96,7 +97,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -108,7 +108,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -182,6 +182,7 @@ jobs:
echo "system info $(uname -a)"
- name: "[FB EMPLOYEES] Enable SSH (Click me for login details)"
uses: seemethere/add-github-ssh-key@v1
continue-on-error: true
with:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
# Needed for binary builds, see: https://github.com/pytorch/pytorch/issues/73339#issuecomment-1058981560
@ -216,7 +217,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -228,7 +228,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -280,7 +280,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
libtorch-cpu-shared-without-deps-debug-build:
if: ${{ github.repository_owner == 'pytorch' }}
@ -317,6 +317,7 @@ jobs:
echo "system info $(uname -a)"
- name: "[FB EMPLOYEES] Enable SSH (Click me for login details)"
uses: seemethere/add-github-ssh-key@v1
continue-on-error: true
with:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
# Needed for binary builds, see: https://github.com/pytorch/pytorch/issues/73339#issuecomment-1058981560
@ -346,7 +347,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -358,7 +358,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -432,6 +432,7 @@ jobs:
echo "system info $(uname -a)"
- name: "[FB EMPLOYEES] Enable SSH (Click me for login details)"
uses: seemethere/add-github-ssh-key@v1
continue-on-error: true
with:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
# Needed for binary builds, see: https://github.com/pytorch/pytorch/issues/73339#issuecomment-1058981560
@ -466,7 +467,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -478,7 +478,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -530,7 +530,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
libtorch-cpu-static-with-deps-debug-build:
if: ${{ github.repository_owner == 'pytorch' }}
@ -567,6 +567,7 @@ jobs:
echo "system info $(uname -a)"
- name: "[FB EMPLOYEES] Enable SSH (Click me for login details)"
uses: seemethere/add-github-ssh-key@v1
continue-on-error: true
with:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
# Needed for binary builds, see: https://github.com/pytorch/pytorch/issues/73339#issuecomment-1058981560
@ -596,7 +597,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -608,7 +608,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -682,6 +682,7 @@ jobs:
echo "system info $(uname -a)"
- name: "[FB EMPLOYEES] Enable SSH (Click me for login details)"
uses: seemethere/add-github-ssh-key@v1
continue-on-error: true
with:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
# Needed for binary builds, see: https://github.com/pytorch/pytorch/issues/73339#issuecomment-1058981560
@ -716,7 +717,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -728,7 +728,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -780,7 +780,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
libtorch-cpu-static-without-deps-debug-build:
if: ${{ github.repository_owner == 'pytorch' }}
@ -817,6 +817,7 @@ jobs:
echo "system info $(uname -a)"
- name: "[FB EMPLOYEES] Enable SSH (Click me for login details)"
uses: seemethere/add-github-ssh-key@v1
continue-on-error: true
with:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
# Needed for binary builds, see: https://github.com/pytorch/pytorch/issues/73339#issuecomment-1058981560
@ -846,7 +847,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -858,7 +858,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -932,6 +932,7 @@ jobs:
echo "system info $(uname -a)"
- name: "[FB EMPLOYEES] Enable SSH (Click me for login details)"
uses: seemethere/add-github-ssh-key@v1
continue-on-error: true
with:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
# Needed for binary builds, see: https://github.com/pytorch/pytorch/issues/73339#issuecomment-1058981560
@ -966,7 +967,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -978,7 +978,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -1030,7 +1030,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
libtorch-cuda11_8-shared-with-deps-debug-build:
if: ${{ github.repository_owner == 'pytorch' }}
@ -1068,6 +1068,7 @@ jobs:
echo "system info $(uname -a)"
- name: "[FB EMPLOYEES] Enable SSH (Click me for login details)"
uses: seemethere/add-github-ssh-key@v1
continue-on-error: true
with:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
# Needed for binary builds, see: https://github.com/pytorch/pytorch/issues/73339#issuecomment-1058981560
@ -1097,7 +1098,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -1109,7 +1109,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -1184,6 +1184,7 @@ jobs:
echo "system info $(uname -a)"
- name: "[FB EMPLOYEES] Enable SSH (Click me for login details)"
uses: seemethere/add-github-ssh-key@v1
continue-on-error: true
with:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
# Needed for binary builds, see: https://github.com/pytorch/pytorch/issues/73339#issuecomment-1058981560
@ -1218,7 +1219,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -1230,7 +1230,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -1283,7 +1283,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
libtorch-cuda11_8-shared-without-deps-debug-build:
if: ${{ github.repository_owner == 'pytorch' }}
@ -1321,6 +1321,7 @@ jobs:
echo "system info $(uname -a)"
- name: "[FB EMPLOYEES] Enable SSH (Click me for login details)"
uses: seemethere/add-github-ssh-key@v1
continue-on-error: true
with:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
# Needed for binary builds, see: https://github.com/pytorch/pytorch/issues/73339#issuecomment-1058981560
@ -1350,7 +1351,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -1362,7 +1362,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -1437,6 +1437,7 @@ jobs:
echo "system info $(uname -a)"
- name: "[FB EMPLOYEES] Enable SSH (Click me for login details)"
uses: seemethere/add-github-ssh-key@v1
continue-on-error: true
with:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
# Needed for binary builds, see: https://github.com/pytorch/pytorch/issues/73339#issuecomment-1058981560
@ -1471,7 +1472,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -1483,7 +1483,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -1536,7 +1536,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
libtorch-cuda11_8-static-with-deps-debug-build:
if: ${{ github.repository_owner == 'pytorch' }}
@ -1574,6 +1574,7 @@ jobs:
echo "system info $(uname -a)"
- name: "[FB EMPLOYEES] Enable SSH (Click me for login details)"
uses: seemethere/add-github-ssh-key@v1
continue-on-error: true
with:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
# Needed for binary builds, see: https://github.com/pytorch/pytorch/issues/73339#issuecomment-1058981560
@ -1603,7 +1604,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -1615,7 +1615,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -1690,6 +1690,7 @@ jobs:
echo "system info $(uname -a)"
- name: "[FB EMPLOYEES] Enable SSH (Click me for login details)"
uses: seemethere/add-github-ssh-key@v1
continue-on-error: true
with:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
# Needed for binary builds, see: https://github.com/pytorch/pytorch/issues/73339#issuecomment-1058981560
@ -1724,7 +1725,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -1736,7 +1736,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -1789,7 +1789,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
libtorch-cuda11_8-static-without-deps-debug-build:
if: ${{ github.repository_owner == 'pytorch' }}
@ -1827,6 +1827,7 @@ jobs:
echo "system info $(uname -a)"
- name: "[FB EMPLOYEES] Enable SSH (Click me for login details)"
uses: seemethere/add-github-ssh-key@v1
continue-on-error: true
with:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
# Needed for binary builds, see: https://github.com/pytorch/pytorch/issues/73339#issuecomment-1058981560
@ -1856,7 +1857,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -1868,7 +1868,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -1943,6 +1943,7 @@ jobs:
echo "system info $(uname -a)"
- name: "[FB EMPLOYEES] Enable SSH (Click me for login details)"
uses: seemethere/add-github-ssh-key@v1
continue-on-error: true
with:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
# Needed for binary builds, see: https://github.com/pytorch/pytorch/issues/73339#issuecomment-1058981560
@ -1977,7 +1978,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -1989,7 +1989,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -2042,7 +2042,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
libtorch-cuda12_1-shared-with-deps-debug-build:
if: ${{ github.repository_owner == 'pytorch' }}
@ -2080,6 +2080,7 @@ jobs:
echo "system info $(uname -a)"
- name: "[FB EMPLOYEES] Enable SSH (Click me for login details)"
uses: seemethere/add-github-ssh-key@v1
continue-on-error: true
with:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
# Needed for binary builds, see: https://github.com/pytorch/pytorch/issues/73339#issuecomment-1058981560
@ -2109,7 +2110,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -2121,7 +2121,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -2196,6 +2196,7 @@ jobs:
echo "system info $(uname -a)"
- name: "[FB EMPLOYEES] Enable SSH (Click me for login details)"
uses: seemethere/add-github-ssh-key@v1
continue-on-error: true
with:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
# Needed for binary builds, see: https://github.com/pytorch/pytorch/issues/73339#issuecomment-1058981560
@ -2230,7 +2231,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -2242,7 +2242,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -2295,7 +2295,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
libtorch-cuda12_1-shared-without-deps-debug-build:
if: ${{ github.repository_owner == 'pytorch' }}
@ -2333,6 +2333,7 @@ jobs:
echo "system info $(uname -a)"
- name: "[FB EMPLOYEES] Enable SSH (Click me for login details)"
uses: seemethere/add-github-ssh-key@v1
continue-on-error: true
with:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
# Needed for binary builds, see: https://github.com/pytorch/pytorch/issues/73339#issuecomment-1058981560
@ -2362,7 +2363,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -2374,7 +2374,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -2449,6 +2449,7 @@ jobs:
echo "system info $(uname -a)"
- name: "[FB EMPLOYEES] Enable SSH (Click me for login details)"
uses: seemethere/add-github-ssh-key@v1
continue-on-error: true
with:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
# Needed for binary builds, see: https://github.com/pytorch/pytorch/issues/73339#issuecomment-1058981560
@ -2483,7 +2484,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -2495,7 +2495,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -2548,7 +2548,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
libtorch-cuda12_1-static-with-deps-debug-build:
if: ${{ github.repository_owner == 'pytorch' }}
@ -2586,6 +2586,7 @@ jobs:
echo "system info $(uname -a)"
- name: "[FB EMPLOYEES] Enable SSH (Click me for login details)"
uses: seemethere/add-github-ssh-key@v1
continue-on-error: true
with:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
# Needed for binary builds, see: https://github.com/pytorch/pytorch/issues/73339#issuecomment-1058981560
@ -2615,7 +2616,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -2627,7 +2627,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -2702,6 +2702,7 @@ jobs:
echo "system info $(uname -a)"
- name: "[FB EMPLOYEES] Enable SSH (Click me for login details)"
uses: seemethere/add-github-ssh-key@v1
continue-on-error: true
with:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
# Needed for binary builds, see: https://github.com/pytorch/pytorch/issues/73339#issuecomment-1058981560
@ -2736,7 +2737,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -2748,7 +2748,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -2801,7 +2801,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
libtorch-cuda12_1-static-without-deps-debug-build:
if: ${{ github.repository_owner == 'pytorch' }}
@ -2839,6 +2839,7 @@ jobs:
echo "system info $(uname -a)"
- name: "[FB EMPLOYEES] Enable SSH (Click me for login details)"
uses: seemethere/add-github-ssh-key@v1
continue-on-error: true
with:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
# Needed for binary builds, see: https://github.com/pytorch/pytorch/issues/73339#issuecomment-1058981560
@ -2868,7 +2869,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -2880,7 +2880,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -2955,6 +2955,7 @@ jobs:
echo "system info $(uname -a)"
- name: "[FB EMPLOYEES] Enable SSH (Click me for login details)"
uses: seemethere/add-github-ssh-key@v1
continue-on-error: true
with:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
# Needed for binary builds, see: https://github.com/pytorch/pytorch/issues/73339#issuecomment-1058981560
@ -2989,7 +2990,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -3001,7 +3001,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -3054,5 +3054,5 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
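
The libtorch debug jobs above get the same treatment, including the upload wiring. A sketch of the inputs handed to the reusable upload workflow after this diff; the key grouping these inputs (e.g. secrets:) is not visible in the hunks and is an assumption here:

    # Assumed grouping: the four inputs shown in the hunks, under secrets:
    secrets:
      github-token: ${{ secrets.GITHUB_TOKEN }}
      aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
      aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
      conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}   # previously CONDA_PYTORCHBOT_TOKEN
    uses: ./.github/workflows/_binary-upload.yml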


@ -60,6 +60,7 @@ jobs:
echo "system info $(uname -a)"
- name: "[FB EMPLOYEES] Enable SSH (Click me for login details)"
uses: seemethere/add-github-ssh-key@v1
continue-on-error: true
with:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
# Needed for binary builds, see: https://github.com/pytorch/pytorch/issues/73339#issuecomment-1058981560
@ -89,7 +90,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -101,7 +101,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -175,6 +175,7 @@ jobs:
echo "system info $(uname -a)"
- name: "[FB EMPLOYEES] Enable SSH (Click me for login details)"
uses: seemethere/add-github-ssh-key@v1
continue-on-error: true
with:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
# Needed for binary builds, see: https://github.com/pytorch/pytorch/issues/73339#issuecomment-1058981560
@ -209,7 +210,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -221,7 +221,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder


@ -67,6 +67,7 @@ jobs:
echo "system info $(uname -a)"
- name: "[FB EMPLOYEES] Enable SSH (Click me for login details)"
uses: seemethere/add-github-ssh-key@v1
continue-on-error: true
with:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
# Needed for binary builds, see: https://github.com/pytorch/pytorch/issues/73339#issuecomment-1058981560
@ -96,7 +97,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -108,7 +108,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -182,6 +182,7 @@ jobs:
echo "system info $(uname -a)"
- name: "[FB EMPLOYEES] Enable SSH (Click me for login details)"
uses: seemethere/add-github-ssh-key@v1
continue-on-error: true
with:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
# Needed for binary builds, see: https://github.com/pytorch/pytorch/issues/73339#issuecomment-1058981560
@ -216,7 +217,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -228,7 +228,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -280,7 +280,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
libtorch-cpu-shared-without-deps-release-build:
if: ${{ github.repository_owner == 'pytorch' }}
@ -317,6 +317,7 @@ jobs:
echo "system info $(uname -a)"
- name: "[FB EMPLOYEES] Enable SSH (Click me for login details)"
uses: seemethere/add-github-ssh-key@v1
continue-on-error: true
with:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
# Needed for binary builds, see: https://github.com/pytorch/pytorch/issues/73339#issuecomment-1058981560
@ -346,7 +347,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -358,7 +358,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -432,6 +432,7 @@ jobs:
echo "system info $(uname -a)"
- name: "[FB EMPLOYEES] Enable SSH (Click me for login details)"
uses: seemethere/add-github-ssh-key@v1
continue-on-error: true
with:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
# Needed for binary builds, see: https://github.com/pytorch/pytorch/issues/73339#issuecomment-1058981560
@ -466,7 +467,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -478,7 +478,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -530,7 +530,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
libtorch-cpu-static-with-deps-release-build:
if: ${{ github.repository_owner == 'pytorch' }}
@ -567,6 +567,7 @@ jobs:
echo "system info $(uname -a)"
- name: "[FB EMPLOYEES] Enable SSH (Click me for login details)"
uses: seemethere/add-github-ssh-key@v1
continue-on-error: true
with:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
# Needed for binary builds, see: https://github.com/pytorch/pytorch/issues/73339#issuecomment-1058981560
@ -596,7 +597,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -608,7 +608,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -682,6 +682,7 @@ jobs:
echo "system info $(uname -a)"
- name: "[FB EMPLOYEES] Enable SSH (Click me for login details)"
uses: seemethere/add-github-ssh-key@v1
continue-on-error: true
with:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
# Needed for binary builds, see: https://github.com/pytorch/pytorch/issues/73339#issuecomment-1058981560
@ -716,7 +717,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -728,7 +728,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -780,7 +780,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
libtorch-cpu-static-without-deps-release-build:
if: ${{ github.repository_owner == 'pytorch' }}
@ -817,6 +817,7 @@ jobs:
echo "system info $(uname -a)"
- name: "[FB EMPLOYEES] Enable SSH (Click me for login details)"
uses: seemethere/add-github-ssh-key@v1
continue-on-error: true
with:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
# Needed for binary builds, see: https://github.com/pytorch/pytorch/issues/73339#issuecomment-1058981560
@ -846,7 +847,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -858,7 +858,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -932,6 +932,7 @@ jobs:
echo "system info $(uname -a)"
- name: "[FB EMPLOYEES] Enable SSH (Click me for login details)"
uses: seemethere/add-github-ssh-key@v1
continue-on-error: true
with:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
# Needed for binary builds, see: https://github.com/pytorch/pytorch/issues/73339#issuecomment-1058981560
@ -966,7 +967,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -978,7 +978,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -1030,7 +1030,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
libtorch-cuda11_8-shared-with-deps-release-build:
if: ${{ github.repository_owner == 'pytorch' }}
@ -1068,6 +1068,7 @@ jobs:
echo "system info $(uname -a)"
- name: "[FB EMPLOYEES] Enable SSH (Click me for login details)"
uses: seemethere/add-github-ssh-key@v1
continue-on-error: true
with:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
# Needed for binary builds, see: https://github.com/pytorch/pytorch/issues/73339#issuecomment-1058981560
@ -1097,7 +1098,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -1109,7 +1109,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -1184,6 +1184,7 @@ jobs:
echo "system info $(uname -a)"
- name: "[FB EMPLOYEES] Enable SSH (Click me for login details)"
uses: seemethere/add-github-ssh-key@v1
continue-on-error: true
with:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
# Needed for binary builds, see: https://github.com/pytorch/pytorch/issues/73339#issuecomment-1058981560
@ -1218,7 +1219,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -1230,7 +1230,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -1283,7 +1283,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
libtorch-cuda11_8-shared-without-deps-release-build:
if: ${{ github.repository_owner == 'pytorch' }}
@ -1321,6 +1321,7 @@ jobs:
echo "system info $(uname -a)"
- name: "[FB EMPLOYEES] Enable SSH (Click me for login details)"
uses: seemethere/add-github-ssh-key@v1
continue-on-error: true
with:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
# Needed for binary builds, see: https://github.com/pytorch/pytorch/issues/73339#issuecomment-1058981560
@ -1350,7 +1351,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -1362,7 +1362,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -1437,6 +1437,7 @@ jobs:
echo "system info $(uname -a)"
- name: "[FB EMPLOYEES] Enable SSH (Click me for login details)"
uses: seemethere/add-github-ssh-key@v1
continue-on-error: true
with:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
# Needed for binary builds, see: https://github.com/pytorch/pytorch/issues/73339#issuecomment-1058981560
@ -1471,7 +1472,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -1483,7 +1483,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -1536,7 +1536,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
libtorch-cuda11_8-static-with-deps-release-build:
if: ${{ github.repository_owner == 'pytorch' }}
@ -1574,6 +1574,7 @@ jobs:
echo "system info $(uname -a)"
- name: "[FB EMPLOYEES] Enable SSH (Click me for login details)"
uses: seemethere/add-github-ssh-key@v1
continue-on-error: true
with:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
# Needed for binary builds, see: https://github.com/pytorch/pytorch/issues/73339#issuecomment-1058981560
@ -1603,7 +1604,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -1615,7 +1615,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -1690,6 +1690,7 @@ jobs:
echo "system info $(uname -a)"
- name: "[FB EMPLOYEES] Enable SSH (Click me for login details)"
uses: seemethere/add-github-ssh-key@v1
continue-on-error: true
with:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
# Needed for binary builds, see: https://github.com/pytorch/pytorch/issues/73339#issuecomment-1058981560
@ -1724,7 +1725,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -1736,7 +1736,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -1789,7 +1789,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
libtorch-cuda11_8-static-without-deps-release-build:
if: ${{ github.repository_owner == 'pytorch' }}
@ -1827,6 +1827,7 @@ jobs:
echo "system info $(uname -a)"
- name: "[FB EMPLOYEES] Enable SSH (Click me for login details)"
uses: seemethere/add-github-ssh-key@v1
continue-on-error: true
with:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
# Needed for binary builds, see: https://github.com/pytorch/pytorch/issues/73339#issuecomment-1058981560
@ -1856,7 +1857,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -1868,7 +1868,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -1943,6 +1943,7 @@ jobs:
echo "system info $(uname -a)"
- name: "[FB EMPLOYEES] Enable SSH (Click me for login details)"
uses: seemethere/add-github-ssh-key@v1
continue-on-error: true
with:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
# Needed for binary builds, see: https://github.com/pytorch/pytorch/issues/73339#issuecomment-1058981560
@ -1977,7 +1978,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -1989,7 +1989,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -2042,7 +2042,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
libtorch-cuda12_1-shared-with-deps-release-build:
if: ${{ github.repository_owner == 'pytorch' }}
@ -2080,6 +2080,7 @@ jobs:
echo "system info $(uname -a)"
- name: "[FB EMPLOYEES] Enable SSH (Click me for login details)"
uses: seemethere/add-github-ssh-key@v1
continue-on-error: true
with:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
# Needed for binary builds, see: https://github.com/pytorch/pytorch/issues/73339#issuecomment-1058981560
@ -2109,7 +2110,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -2121,7 +2121,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -2196,6 +2196,7 @@ jobs:
echo "system info $(uname -a)"
- name: "[FB EMPLOYEES] Enable SSH (Click me for login details)"
uses: seemethere/add-github-ssh-key@v1
continue-on-error: true
with:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
# Needed for binary builds, see: https://github.com/pytorch/pytorch/issues/73339#issuecomment-1058981560
@ -2230,7 +2231,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -2242,7 +2242,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -2295,7 +2295,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
libtorch-cuda12_1-shared-without-deps-release-build:
if: ${{ github.repository_owner == 'pytorch' }}
@ -2333,6 +2333,7 @@ jobs:
echo "system info $(uname -a)"
- name: "[FB EMPLOYEES] Enable SSH (Click me for login details)"
uses: seemethere/add-github-ssh-key@v1
continue-on-error: true
with:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
# Needed for binary builds, see: https://github.com/pytorch/pytorch/issues/73339#issuecomment-1058981560
@ -2362,7 +2363,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -2374,7 +2374,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -2449,6 +2449,7 @@ jobs:
echo "system info $(uname -a)"
- name: "[FB EMPLOYEES] Enable SSH (Click me for login details)"
uses: seemethere/add-github-ssh-key@v1
continue-on-error: true
with:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
# Needed for binary builds, see: https://github.com/pytorch/pytorch/issues/73339#issuecomment-1058981560
@ -2483,7 +2484,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -2495,7 +2495,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -2548,7 +2548,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
libtorch-cuda12_1-static-with-deps-release-build:
if: ${{ github.repository_owner == 'pytorch' }}
@ -2586,6 +2586,7 @@ jobs:
echo "system info $(uname -a)"
- name: "[FB EMPLOYEES] Enable SSH (Click me for login details)"
uses: seemethere/add-github-ssh-key@v1
continue-on-error: true
with:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
# Needed for binary builds, see: https://github.com/pytorch/pytorch/issues/73339#issuecomment-1058981560
@ -2615,7 +2616,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -2627,7 +2627,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -2702,6 +2702,7 @@ jobs:
echo "system info $(uname -a)"
- name: "[FB EMPLOYEES] Enable SSH (Click me for login details)"
uses: seemethere/add-github-ssh-key@v1
continue-on-error: true
with:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
# Needed for binary builds, see: https://github.com/pytorch/pytorch/issues/73339#issuecomment-1058981560
@ -2736,7 +2737,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -2748,7 +2748,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -2801,7 +2801,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
libtorch-cuda12_1-static-without-deps-release-build:
if: ${{ github.repository_owner == 'pytorch' }}
@ -2839,6 +2839,7 @@ jobs:
echo "system info $(uname -a)"
- name: "[FB EMPLOYEES] Enable SSH (Click me for login details)"
uses: seemethere/add-github-ssh-key@v1
continue-on-error: true
with:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
# Needed for binary builds, see: https://github.com/pytorch/pytorch/issues/73339#issuecomment-1058981560
@ -2868,7 +2869,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -2880,7 +2880,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -2955,6 +2955,7 @@ jobs:
echo "system info $(uname -a)"
- name: "[FB EMPLOYEES] Enable SSH (Click me for login details)"
uses: seemethere/add-github-ssh-key@v1
continue-on-error: true
with:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
# Needed for binary builds, see: https://github.com/pytorch/pytorch/issues/73339#issuecomment-1058981560
@ -2989,7 +2990,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -3001,7 +3001,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -3054,5 +3054,5 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
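The hunks above repeat the same four release-branch changes across every libtorch build and upload job: the SSH helper step gains `continue-on-error: true`, the explicit PR/sha `ref:` is dropped from the PyTorch checkout, the pytorch/builder checkout moves from `main` to `release/2.1`, and the upload jobs switch from `CONDA_PYTORCHBOT_TOKEN` to `CONDA_PYTORCHBOT_TOKEN_TEST`. A condensed sketch of the resulting step sequence (not a full generated job; the build env vars, build/test steps, and upload call are omitted):

```yaml
steps:
  - name: "[FB EMPLOYEES] Enable SSH (Click me for login details)"
    uses: seemethere/add-github-ssh-key@v1
    continue-on-error: true              # newly added: an SSH-setup failure no longer fails the job
    with:
      GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
  - name: Checkout PyTorch
    uses: malfet/checkout@silent-checkout
    with:                                # the explicit ref (PR head sha / github.sha) was removed
      submodules: recursive
      path: pytorch
      quiet-checkout: true
  - name: Checkout pytorch/builder
    uses: malfet/checkout@silent-checkout
    with:
      ref: release/2.1                   # was: main
      submodules: recursive
      repository: pytorch/builder
      path: builder
```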


@ -46,6 +46,7 @@ jobs:
GPU_ARCH_TYPE: cpu
SKIP_ALL_TESTS: 1
DESIRED_PYTHON: "3.8"
PYTORCH_EXTRA_INSTALL_REQUIREMENTS: nvidia-cuda-nvrtc-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cuda-runtime-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cuda-cupti-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cudnn-cu12==8.9.2.26; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cublas-cu12==12.1.3.1; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cufft-cu12==11.0.2.54; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-curand-cu12==10.3.2.106; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusolver-cu12==11.4.5.107; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusparse-cu12==12.1.0.106; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nccl-cu12==2.18.1; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nvtx-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64' | triton==2.1.0; platform_system == 'Linux' and platform_machine == 'x86_64'
steps:
- name: Display EC2 information
shell: bash
@ -63,6 +64,7 @@ jobs:
echo "system info $(uname -a)"
- name: "[FB EMPLOYEES] Enable SSH (Click me for login details)"
uses: seemethere/add-github-ssh-key@v1
continue-on-error: true
with:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
# Needed for binary builds, see: https://github.com/pytorch/pytorch/issues/73339#issuecomment-1058981560
@ -92,7 +94,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -104,7 +105,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -174,6 +175,7 @@ jobs:
echo "system info $(uname -a)"
- name: "[FB EMPLOYEES] Enable SSH (Click me for login details)"
uses: seemethere/add-github-ssh-key@v1
continue-on-error: true
with:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
# Needed for binary builds, see: https://github.com/pytorch/pytorch/issues/73339#issuecomment-1058981560
@ -208,7 +210,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -220,7 +221,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -268,7 +269,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
wheel-py3_8-cuda11_8-build:
if: ${{ github.repository_owner == 'pytorch' }}
@ -285,6 +286,7 @@ jobs:
GPU_ARCH_TYPE: cuda
SKIP_ALL_TESTS: 1
DESIRED_PYTHON: "3.8"
PYTORCH_EXTRA_INSTALL_REQUIREMENTS: nvidia-cuda-nvrtc-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cuda-runtime-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cuda-cupti-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cudnn-cu12==8.9.2.26; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cublas-cu12==12.1.3.1; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cufft-cu12==11.0.2.54; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-curand-cu12==10.3.2.106; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusolver-cu12==11.4.5.107; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusparse-cu12==12.1.0.106; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nccl-cu12==2.18.1; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nvtx-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64' | triton==2.1.0; platform_system == 'Linux' and platform_machine == 'x86_64'
steps:
- name: Display EC2 information
shell: bash
@ -302,6 +304,7 @@ jobs:
echo "system info $(uname -a)"
- name: "[FB EMPLOYEES] Enable SSH (Click me for login details)"
uses: seemethere/add-github-ssh-key@v1
continue-on-error: true
with:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
# Needed for binary builds, see: https://github.com/pytorch/pytorch/issues/73339#issuecomment-1058981560
@ -331,7 +334,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -343,7 +345,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -414,6 +416,7 @@ jobs:
echo "system info $(uname -a)"
- name: "[FB EMPLOYEES] Enable SSH (Click me for login details)"
uses: seemethere/add-github-ssh-key@v1
continue-on-error: true
with:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
# Needed for binary builds, see: https://github.com/pytorch/pytorch/issues/73339#issuecomment-1058981560
@ -448,7 +451,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -460,7 +462,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -509,7 +511,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
wheel-py3_8-cuda12_1-build:
if: ${{ github.repository_owner == 'pytorch' }}
@ -526,6 +528,7 @@ jobs:
GPU_ARCH_TYPE: cuda
SKIP_ALL_TESTS: 1
DESIRED_PYTHON: "3.8"
PYTORCH_EXTRA_INSTALL_REQUIREMENTS: nvidia-cuda-nvrtc-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cuda-runtime-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cuda-cupti-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cudnn-cu12==8.9.2.26; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cublas-cu12==12.1.3.1; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cufft-cu12==11.0.2.54; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-curand-cu12==10.3.2.106; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusolver-cu12==11.4.5.107; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusparse-cu12==12.1.0.106; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nccl-cu12==2.18.1; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nvtx-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64' | triton==2.1.0; platform_system == 'Linux' and platform_machine == 'x86_64'
steps:
- name: Display EC2 information
shell: bash
@ -543,6 +546,7 @@ jobs:
echo "system info $(uname -a)"
- name: "[FB EMPLOYEES] Enable SSH (Click me for login details)"
uses: seemethere/add-github-ssh-key@v1
continue-on-error: true
with:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
# Needed for binary builds, see: https://github.com/pytorch/pytorch/issues/73339#issuecomment-1058981560
@ -572,7 +576,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -584,7 +587,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -655,6 +658,7 @@ jobs:
echo "system info $(uname -a)"
- name: "[FB EMPLOYEES] Enable SSH (Click me for login details)"
uses: seemethere/add-github-ssh-key@v1
continue-on-error: true
with:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
# Needed for binary builds, see: https://github.com/pytorch/pytorch/issues/73339#issuecomment-1058981560
@ -689,7 +693,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -701,7 +704,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -750,7 +753,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
wheel-py3_9-cpu-build:
if: ${{ github.repository_owner == 'pytorch' }}
@ -766,6 +769,7 @@ jobs:
GPU_ARCH_TYPE: cpu
SKIP_ALL_TESTS: 1
DESIRED_PYTHON: "3.9"
PYTORCH_EXTRA_INSTALL_REQUIREMENTS: nvidia-cuda-nvrtc-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cuda-runtime-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cuda-cupti-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cudnn-cu12==8.9.2.26; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cublas-cu12==12.1.3.1; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cufft-cu12==11.0.2.54; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-curand-cu12==10.3.2.106; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusolver-cu12==11.4.5.107; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusparse-cu12==12.1.0.106; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nccl-cu12==2.18.1; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nvtx-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64' | triton==2.1.0; platform_system == 'Linux' and platform_machine == 'x86_64'
steps:
- name: Display EC2 information
shell: bash
@ -783,6 +787,7 @@ jobs:
echo "system info $(uname -a)"
- name: "[FB EMPLOYEES] Enable SSH (Click me for login details)"
uses: seemethere/add-github-ssh-key@v1
continue-on-error: true
with:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
# Needed for binary builds, see: https://github.com/pytorch/pytorch/issues/73339#issuecomment-1058981560
@ -812,7 +817,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -824,7 +828,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -894,6 +898,7 @@ jobs:
echo "system info $(uname -a)"
- name: "[FB EMPLOYEES] Enable SSH (Click me for login details)"
uses: seemethere/add-github-ssh-key@v1
continue-on-error: true
with:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
# Needed for binary builds, see: https://github.com/pytorch/pytorch/issues/73339#issuecomment-1058981560
@ -928,7 +933,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -940,7 +944,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -988,7 +992,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
wheel-py3_9-cuda11_8-build:
if: ${{ github.repository_owner == 'pytorch' }}
@ -1005,6 +1009,7 @@ jobs:
GPU_ARCH_TYPE: cuda
SKIP_ALL_TESTS: 1
DESIRED_PYTHON: "3.9"
PYTORCH_EXTRA_INSTALL_REQUIREMENTS: nvidia-cuda-nvrtc-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cuda-runtime-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cuda-cupti-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cudnn-cu12==8.9.2.26; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cublas-cu12==12.1.3.1; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cufft-cu12==11.0.2.54; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-curand-cu12==10.3.2.106; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusolver-cu12==11.4.5.107; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusparse-cu12==12.1.0.106; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nccl-cu12==2.18.1; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nvtx-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64' | triton==2.1.0; platform_system == 'Linux' and platform_machine == 'x86_64'
steps:
- name: Display EC2 information
shell: bash
@ -1022,6 +1027,7 @@ jobs:
echo "system info $(uname -a)"
- name: "[FB EMPLOYEES] Enable SSH (Click me for login details)"
uses: seemethere/add-github-ssh-key@v1
continue-on-error: true
with:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
# Needed for binary builds, see: https://github.com/pytorch/pytorch/issues/73339#issuecomment-1058981560
@ -1051,7 +1057,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -1063,7 +1068,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -1134,6 +1139,7 @@ jobs:
echo "system info $(uname -a)"
- name: "[FB EMPLOYEES] Enable SSH (Click me for login details)"
uses: seemethere/add-github-ssh-key@v1
continue-on-error: true
with:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
# Needed for binary builds, see: https://github.com/pytorch/pytorch/issues/73339#issuecomment-1058981560
@ -1168,7 +1174,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -1180,7 +1185,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -1229,7 +1234,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
wheel-py3_9-cuda12_1-build:
if: ${{ github.repository_owner == 'pytorch' }}
@ -1246,6 +1251,7 @@ jobs:
GPU_ARCH_TYPE: cuda
SKIP_ALL_TESTS: 1
DESIRED_PYTHON: "3.9"
PYTORCH_EXTRA_INSTALL_REQUIREMENTS: nvidia-cuda-nvrtc-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cuda-runtime-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cuda-cupti-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cudnn-cu12==8.9.2.26; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cublas-cu12==12.1.3.1; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cufft-cu12==11.0.2.54; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-curand-cu12==10.3.2.106; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusolver-cu12==11.4.5.107; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusparse-cu12==12.1.0.106; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nccl-cu12==2.18.1; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nvtx-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64' | triton==2.1.0; platform_system == 'Linux' and platform_machine == 'x86_64'
steps:
- name: Display EC2 information
shell: bash
@ -1263,6 +1269,7 @@ jobs:
echo "system info $(uname -a)"
- name: "[FB EMPLOYEES] Enable SSH (Click me for login details)"
uses: seemethere/add-github-ssh-key@v1
continue-on-error: true
with:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
# Needed for binary builds, see: https://github.com/pytorch/pytorch/issues/73339#issuecomment-1058981560
@ -1292,7 +1299,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -1304,7 +1310,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -1375,6 +1381,7 @@ jobs:
echo "system info $(uname -a)"
- name: "[FB EMPLOYEES] Enable SSH (Click me for login details)"
uses: seemethere/add-github-ssh-key@v1
continue-on-error: true
with:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
# Needed for binary builds, see: https://github.com/pytorch/pytorch/issues/73339#issuecomment-1058981560
@ -1409,7 +1416,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -1421,7 +1427,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -1470,7 +1476,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
wheel-py3_10-cpu-build:
if: ${{ github.repository_owner == 'pytorch' }}
@ -1486,6 +1492,7 @@ jobs:
GPU_ARCH_TYPE: cpu
SKIP_ALL_TESTS: 1
DESIRED_PYTHON: "3.10"
PYTORCH_EXTRA_INSTALL_REQUIREMENTS: nvidia-cuda-nvrtc-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cuda-runtime-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cuda-cupti-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cudnn-cu12==8.9.2.26; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cublas-cu12==12.1.3.1; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cufft-cu12==11.0.2.54; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-curand-cu12==10.3.2.106; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusolver-cu12==11.4.5.107; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusparse-cu12==12.1.0.106; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nccl-cu12==2.18.1; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nvtx-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64' | triton==2.1.0; platform_system == 'Linux' and platform_machine == 'x86_64'
steps:
- name: Display EC2 information
shell: bash
@ -1503,6 +1510,7 @@ jobs:
echo "system info $(uname -a)"
- name: "[FB EMPLOYEES] Enable SSH (Click me for login details)"
uses: seemethere/add-github-ssh-key@v1
continue-on-error: true
with:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
# Needed for binary builds, see: https://github.com/pytorch/pytorch/issues/73339#issuecomment-1058981560
@ -1532,7 +1540,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -1544,7 +1551,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -1614,6 +1621,7 @@ jobs:
echo "system info $(uname -a)"
- name: "[FB EMPLOYEES] Enable SSH (Click me for login details)"
uses: seemethere/add-github-ssh-key@v1
continue-on-error: true
with:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
# Needed for binary builds, see: https://github.com/pytorch/pytorch/issues/73339#issuecomment-1058981560
@ -1648,7 +1656,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -1660,7 +1667,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -1708,7 +1715,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
wheel-py3_10-cuda11_8-build:
if: ${{ github.repository_owner == 'pytorch' }}
@ -1725,6 +1732,7 @@ jobs:
GPU_ARCH_TYPE: cuda
SKIP_ALL_TESTS: 1
DESIRED_PYTHON: "3.10"
PYTORCH_EXTRA_INSTALL_REQUIREMENTS: nvidia-cuda-nvrtc-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cuda-runtime-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cuda-cupti-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cudnn-cu12==8.9.2.26; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cublas-cu12==12.1.3.1; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cufft-cu12==11.0.2.54; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-curand-cu12==10.3.2.106; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusolver-cu12==11.4.5.107; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusparse-cu12==12.1.0.106; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nccl-cu12==2.18.1; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nvtx-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64' | triton==2.1.0; platform_system == 'Linux' and platform_machine == 'x86_64'
steps:
- name: Display EC2 information
shell: bash
@ -1742,6 +1750,7 @@ jobs:
echo "system info $(uname -a)"
- name: "[FB EMPLOYEES] Enable SSH (Click me for login details)"
uses: seemethere/add-github-ssh-key@v1
continue-on-error: true
with:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
# Needed for binary builds, see: https://github.com/pytorch/pytorch/issues/73339#issuecomment-1058981560
@ -1771,7 +1780,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -1783,7 +1791,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -1854,6 +1862,7 @@ jobs:
echo "system info $(uname -a)"
- name: "[FB EMPLOYEES] Enable SSH (Click me for login details)"
uses: seemethere/add-github-ssh-key@v1
continue-on-error: true
with:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
# Needed for binary builds, see: https://github.com/pytorch/pytorch/issues/73339#issuecomment-1058981560
@ -1888,7 +1897,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -1900,7 +1908,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -1949,7 +1957,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
wheel-py3_10-cuda12_1-build:
if: ${{ github.repository_owner == 'pytorch' }}
@ -1966,6 +1974,7 @@ jobs:
GPU_ARCH_TYPE: cuda
SKIP_ALL_TESTS: 1
DESIRED_PYTHON: "3.10"
PYTORCH_EXTRA_INSTALL_REQUIREMENTS: nvidia-cuda-nvrtc-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cuda-runtime-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cuda-cupti-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cudnn-cu12==8.9.2.26; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cublas-cu12==12.1.3.1; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cufft-cu12==11.0.2.54; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-curand-cu12==10.3.2.106; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusolver-cu12==11.4.5.107; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusparse-cu12==12.1.0.106; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nccl-cu12==2.18.1; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nvtx-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64' | triton==2.1.0; platform_system == 'Linux' and platform_machine == 'x86_64'
steps:
- name: Display EC2 information
shell: bash
@ -1983,6 +1992,7 @@ jobs:
echo "system info $(uname -a)"
- name: "[FB EMPLOYEES] Enable SSH (Click me for login details)"
uses: seemethere/add-github-ssh-key@v1
continue-on-error: true
with:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
# Needed for binary builds, see: https://github.com/pytorch/pytorch/issues/73339#issuecomment-1058981560
@ -2012,7 +2022,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -2024,7 +2033,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -2095,6 +2104,7 @@ jobs:
echo "system info $(uname -a)"
- name: "[FB EMPLOYEES] Enable SSH (Click me for login details)"
uses: seemethere/add-github-ssh-key@v1
continue-on-error: true
with:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
# Needed for binary builds, see: https://github.com/pytorch/pytorch/issues/73339#issuecomment-1058981560
@ -2129,7 +2139,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -2141,7 +2150,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -2190,7 +2199,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
wheel-py3_11-cpu-build:
if: ${{ github.repository_owner == 'pytorch' }}
@ -2206,6 +2215,7 @@ jobs:
GPU_ARCH_TYPE: cpu
SKIP_ALL_TESTS: 1
DESIRED_PYTHON: "3.11"
PYTORCH_EXTRA_INSTALL_REQUIREMENTS: nvidia-cuda-nvrtc-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cuda-runtime-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cuda-cupti-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cudnn-cu12==8.9.2.26; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cublas-cu12==12.1.3.1; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cufft-cu12==11.0.2.54; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-curand-cu12==10.3.2.106; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusolver-cu12==11.4.5.107; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusparse-cu12==12.1.0.106; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nccl-cu12==2.18.1; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nvtx-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64' | triton==2.1.0; platform_system == 'Linux' and platform_machine == 'x86_64'
steps:
- name: Display EC2 information
shell: bash
@ -2223,6 +2233,7 @@ jobs:
echo "system info $(uname -a)"
- name: "[FB EMPLOYEES] Enable SSH (Click me for login details)"
uses: seemethere/add-github-ssh-key@v1
continue-on-error: true
with:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
# Needed for binary builds, see: https://github.com/pytorch/pytorch/issues/73339#issuecomment-1058981560
@ -2252,7 +2263,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -2264,7 +2274,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -2334,6 +2344,7 @@ jobs:
echo "system info $(uname -a)"
- name: "[FB EMPLOYEES] Enable SSH (Click me for login details)"
uses: seemethere/add-github-ssh-key@v1
continue-on-error: true
with:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
# Needed for binary builds, see: https://github.com/pytorch/pytorch/issues/73339#issuecomment-1058981560
@ -2368,7 +2379,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -2380,7 +2390,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -2428,7 +2438,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
wheel-py3_11-cuda11_8-build:
if: ${{ github.repository_owner == 'pytorch' }}
@ -2445,6 +2455,7 @@ jobs:
GPU_ARCH_TYPE: cuda
SKIP_ALL_TESTS: 1
DESIRED_PYTHON: "3.11"
PYTORCH_EXTRA_INSTALL_REQUIREMENTS: nvidia-cuda-nvrtc-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cuda-runtime-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cuda-cupti-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cudnn-cu12==8.9.2.26; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cublas-cu12==12.1.3.1; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cufft-cu12==11.0.2.54; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-curand-cu12==10.3.2.106; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusolver-cu12==11.4.5.107; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusparse-cu12==12.1.0.106; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nccl-cu12==2.18.1; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nvtx-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64' | triton==2.1.0; platform_system == 'Linux' and platform_machine == 'x86_64'
steps:
- name: Display EC2 information
shell: bash
@ -2462,6 +2473,7 @@ jobs:
echo "system info $(uname -a)"
- name: "[FB EMPLOYEES] Enable SSH (Click me for login details)"
uses: seemethere/add-github-ssh-key@v1
continue-on-error: true
with:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
# Needed for binary builds, see: https://github.com/pytorch/pytorch/issues/73339#issuecomment-1058981560
@ -2491,7 +2503,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -2503,7 +2514,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -2574,6 +2585,7 @@ jobs:
echo "system info $(uname -a)"
- name: "[FB EMPLOYEES] Enable SSH (Click me for login details)"
uses: seemethere/add-github-ssh-key@v1
continue-on-error: true
with:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
# Needed for binary builds, see: https://github.com/pytorch/pytorch/issues/73339#issuecomment-1058981560
@ -2608,7 +2620,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -2620,7 +2631,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -2669,7 +2680,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
wheel-py3_11-cuda12_1-build:
if: ${{ github.repository_owner == 'pytorch' }}
@ -2686,6 +2697,7 @@ jobs:
GPU_ARCH_TYPE: cuda
SKIP_ALL_TESTS: 1
DESIRED_PYTHON: "3.11"
PYTORCH_EXTRA_INSTALL_REQUIREMENTS: nvidia-cuda-nvrtc-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cuda-runtime-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cuda-cupti-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cudnn-cu12==8.9.2.26; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cublas-cu12==12.1.3.1; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cufft-cu12==11.0.2.54; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-curand-cu12==10.3.2.106; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusolver-cu12==11.4.5.107; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusparse-cu12==12.1.0.106; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nccl-cu12==2.18.1; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nvtx-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64' | triton==2.1.0; platform_system == 'Linux' and platform_machine == 'x86_64'
steps:
- name: Display EC2 information
shell: bash
@ -2703,6 +2715,7 @@ jobs:
echo "system info $(uname -a)"
- name: "[FB EMPLOYEES] Enable SSH (Click me for login details)"
uses: seemethere/add-github-ssh-key@v1
continue-on-error: true
with:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
# Needed for binary builds, see: https://github.com/pytorch/pytorch/issues/73339#issuecomment-1058981560
@ -2732,7 +2745,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -2744,7 +2756,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -2815,6 +2827,7 @@ jobs:
echo "system info $(uname -a)"
- name: "[FB EMPLOYEES] Enable SSH (Click me for login details)"
uses: seemethere/add-github-ssh-key@v1
continue-on-error: true
with:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
# Needed for binary builds, see: https://github.com/pytorch/pytorch/issues/73339#issuecomment-1058981560
@ -2849,7 +2862,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -2861,7 +2873,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -2910,5 +2922,5 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml

View File

@ -26,7 +26,7 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Run BC Lint Action
uses: pytorch/test-infra/.github/actions/bc-lint@main
uses: pytorch/test-infra/.github/actions/bc-lint@release/2.1
with:
repo: ${{ github.event.pull_request.head.repo.full_name }}
base_sha: ${{ github.event.pull_request.base.sha }}

View File

@ -15,12 +15,13 @@ on:
# When any other step fails, its job will be retried once by retryBot.
jobs:
lintrunner:
uses: pytorch/test-infra/.github/workflows/linux_job.yml@main
uses: pytorch/test-infra/.github/workflows/linux_job.yml@release/2.1
with:
runner: linux.2xlarge
docker-image: pytorch-linux-focal-linter
fetch-depth: 0
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
test-infra-ref: 'release/2.1'
script: |
# The generic Linux job chooses to use base env, not the one setup by the image
CONDA_ENV=$(conda env list --json | jq -r ".envs | .[-1]")
@ -62,12 +63,13 @@ jobs:
exit $RC
quick-checks:
uses: pytorch/test-infra/.github/workflows/linux_job.yml@main
uses: pytorch/test-infra/.github/workflows/linux_job.yml@release/2.1
with:
runner: linux.2xlarge
docker-image: pytorch-linux-focal-linter
fetch-depth: 0
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
test-infra-ref: 'release/2.1'
script: |
# The generic Linux job chooses to use base env, not the one setup by the image
CONDA_ENV=$(conda env list --json | jq -r ".envs | .[-1]")
@ -103,7 +105,7 @@ jobs:
if: github.event_name == 'pull_request' && !contains(github.event.pull_request.labels.*.name, 'skip-pr-sanity-checks')
steps:
- name: Checkout PyTorch
uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.1
with:
submodules: false
fetch-depth: -1
@ -116,12 +118,13 @@ jobs:
bash .github/scripts/pr-sanity-check.sh
workflow-checks:
uses: pytorch/test-infra/.github/workflows/linux_job.yml@main
uses: pytorch/test-infra/.github/workflows/linux_job.yml@release/2.1
with:
runner: linux.2xlarge
docker-image: pytorch-linux-focal-linter
fetch-depth: 0
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
test-infra-ref: 'release/2.1'
script: |
# The generic Linux job chooses to use base env, not the one setup by the image
CONDA_ENV=$(conda env list --json | jq -r ".envs | .[-1]")
@ -151,12 +154,13 @@ jobs:
exit $RC
toc:
uses: pytorch/test-infra/.github/workflows/linux_job.yml@main
uses: pytorch/test-infra/.github/workflows/linux_job.yml@release/2.1
with:
runner: linux.2xlarge
docker-image: pytorch-linux-focal-linter
fetch-depth: 0
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
test-infra-ref: 'release/2.1'
script: |
# The generic Linux job chooses to use base env, not the one setup by the image
CONDA_ENV=$(conda env list --json | jq -r ".envs | .[-1]")
@ -189,12 +193,13 @@ jobs:
test-tools:
name: Test tools
if: ${{ github.repository == 'pytorch/pytorch' }}
uses: pytorch/test-infra/.github/workflows/linux_job.yml@main
uses: pytorch/test-infra/.github/workflows/linux_job.yml@release/2.1
with:
runner: linux.2xlarge
docker-image: pytorch-linux-focal-linter
fetch-depth: 0
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
test-infra-ref: 'release/2.1'
script: |
# The generic Linux job chooses to use base env, not the one setup by the image
CONDA_ENV=$(conda env list --json | jq -r ".envs | .[-1]")
@ -210,7 +215,7 @@ jobs:
runs-on: linux.20_04.4x
steps:
- name: Checkout PyTorch
uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.1
with:
submodules: false
fetch-depth: 1
@ -240,7 +245,7 @@ jobs:
# [see note: pytorch repo ref]
# deep clone (fetch-depth 0) required, to allow us to use git log
- name: Checkout PyTorch
uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.1
with:
submodules: false
fetch-depth: 1

View File

@ -21,7 +21,7 @@ jobs:
environment: upload-stats
steps:
- name: Checkout PyTorch
uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.1
with:
fetch-depth: 1
submodules: false

View File

@ -112,30 +112,38 @@ jobs:
cuda-version: "11.8"
test-matrix: ${{ needs.win-vs2019-cuda11_8-py3-build.outputs.test-matrix }}
ios-12-5-1-x86-64-coreml:
name: ios-12-5-1-x86-64-coreml
# TODO: Figure out how to migrate this job to M1 runner
ios-build-test:
name: ios-build-test
if: github.event_name != 'schedule' || github.event.schedule == '45 0,8,16 * * 1-5' || github.event.schedule == '45 4 * * 0,6'
uses: ./.github/workflows/_ios-build-test.yml
with:
build-environment: ios-12-5-1-x86-64-coreml
ios-platform: SIMULATOR
ios-arch: x86_64
build-environment: ios-build-test
sync-tag: ios-build-test
test-matrix: |
{ include: [
{ config: "default", shard: 1, num_shards: 1, runner: "macos-12" },
]}
ios-12-5-1-arm64-custom-ops:
name: ios-12-5-1-arm64-custom-ops
if: github.event_name != 'schedule' || github.event.schedule == '45 0,8,16 * * 1-5' || github.event.schedule == '45 4 * * 0,6'
uses: ./.github/workflows/_ios-build-test.yml
with:
build-environment: ios-12-5-1-arm64-custom-ops
ios-platform: OS
ios-arch: arm64
test-matrix: |
{ include: [
{ config: "default", shard: 1, num_shards: 1, runner: "macos-12" },
{ config: "default",
shard: 1,
num_shards: 1,
runner: "macos-12",
ios_platform: "SIMULATOR",
ios_arch: "x86_64",
use_lite_interpreter: 1,
use_metal: 0,
use_coreml: 1,
use_custom_op_list: ""
},
{ config: "default",
shard: 1,
num_shards: 1,
runner: "macos-12",
ios_platform: "OS",
ios_arch: "arm64",
use_lite_interpreter: 1,
use_metal: 1,
use_coreml: 1,
use_custom_op_list: "mobilenetv2.yaml"
}
]}
buck-build-test:
@ -180,7 +188,14 @@ jobs:
with:
test-matrix: |
{ include: [
{ config: "default", shard: 1, num_shards: 1, runner: "ubuntu-20.04-16x" },
{ config: "default",
shard: 1,
num_shards: 1,
runner: "ubuntu-20.04-16x"
use_lite_interpreter: 1,
# Just set x86 for testing here
support_abi: x86,
},
]}
linux-vulkan-focal-py3_11-clang10-build:

View File

@ -14,7 +14,7 @@ jobs:
if: ${{ github.repository == 'pytorch/pytorch' }}
steps:
- name: Checkout PyTorch
uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.1
with:
fetch-depth: 1
submodules: false

View File

@ -44,7 +44,7 @@ jobs:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
uses: pytorch/test-infra/.github/actions/upload-alerts@main
uses: pytorch/test-infra/.github/actions/upload-alerts@release/2.1
with:
alerts: '${{ steps.alert_creation_step.outputs.script-output }}'
organization: "pytorch"

View File

@ -37,7 +37,7 @@ jobs:
run: echo "${TRIGGERING_WORKFLOW}"
- name: Checkout PyTorch
uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.1
- uses: actions/setup-python@v4
with:

View File

@ -29,7 +29,7 @@ jobs:
name: Upload dynamo performance stats for ${{ github.event.workflow_run.id }}, attempt ${{ github.event.workflow_run.run_attempt }}
steps:
- name: Checkout PyTorch
uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.1
with:
submodules: false
fetch-depth: 1

.gitignore vendored
View File

@ -364,3 +364,7 @@ venv/
# Log files
*.log
sweep/
# Android build artifacts
android/pytorch_android/.cxx
android/pytorch_android_torchvision/.cxx

View File

@ -63,7 +63,7 @@ RUN --mount=type=cache,target=/opt/ccache \
FROM conda as conda-installs
ARG PYTHON_VERSION=3.8
ARG CUDA_VERSION=11.7
ARG CUDA_VERSION=12.1
ARG CUDA_CHANNEL=nvidia
ARG INSTALL_CHANNEL=pytorch-nightly
# Automatically set by buildx
@ -73,8 +73,8 @@ ARG TARGETPLATFORM
# On arm64 we can only install wheel packages.
RUN case ${TARGETPLATFORM} in \
"linux/arm64") pip install --extra-index-url https://download.pytorch.org/whl/cpu/ torch torchvision torchaudio torchtext ;; \
*) /opt/conda/bin/conda install -c "${INSTALL_CHANNEL}" -c "${CUDA_CHANNEL}" -y "python=${PYTHON_VERSION}" pytorch torchvision torchaudio torchtext "pytorch-cuda=$(echo $CUDA_VERSION | cut -d'.' -f 1-2)" ;; \
"linux/arm64") pip install --extra-index-url https://download.pytorch.org/whl/cpu/ torch torchvision torchaudio ;; \
*) /opt/conda/bin/conda install -c "${INSTALL_CHANNEL}" -c "${CUDA_CHANNEL}" -y "python=${PYTHON_VERSION}" pytorch torchvision torchaudio "pytorch-cuda=$(echo $CUDA_VERSION | cut -d'.' -f 1-2)" ;; \
esac && \
/opt/conda/bin/conda clean -ya
RUN /opt/conda/bin/pip install torchelastic

View File

@ -1,6 +1,6 @@
ABI_FILTERS=armeabi-v7a,arm64-v8a,x86,x86_64
VERSION_NAME=2.1.0-SNAPSHOT
VERSION_NAME=2.1.0
GROUP=org.pytorch
MAVEN_GROUP=org.pytorch
SONATYPE_STAGING_PROFILE=orgpytorch

View File

@ -41,6 +41,7 @@ android {
println 'Build pytorch_jni'
exclude 'org/pytorch/LiteModuleLoader.java'
exclude 'org/pytorch/LiteNativePeer.java'
exclude 'org/pytorch/LitePyTorchAndroid.java'
} else {
println 'Build pytorch_jni_lite'
}

View File

@ -17,6 +17,7 @@ sourceSets {
java {
srcDir '../src/main/java'
exclude 'org/pytorch/PyTorchAndroid.java'
exclude 'org/pytorch/LitePyTorchAndroid.java'
exclude 'org/pytorch/LiteModuleLoader.java'
exclude 'org/pytorch/LiteNativePeer.java'
}

View File

@ -10,6 +10,7 @@ import java.io.IOException;
import java.util.HashMap;
import java.util.Map;
import org.junit.Test;
import org.junit.Ignore;
public abstract class PytorchTestBase {
private static final String TEST_MODULE_ASSET_NAME = "android_api_module.ptl";
@ -413,7 +414,10 @@ public abstract class PytorchTestBase {
}
@Test
@Ignore
public void testSpectralOps() throws IOException {
// NB: This model fails without lite interpreter. The error is as follows:
// RuntimeError: stft requires the return_complex parameter be given for real inputs
runModel("spectral_ops");
}

View File

@ -10,12 +10,6 @@
#include <fbjni/fbjni.h>
#include "pytorch_jni_common.h"
#if defined(__ANDROID__)
#ifndef USE_PTHREADPOOL
#define USE_PTHREADPOOL
#endif /* USE_PTHREADPOOL */
#include <caffe2/utils/threadpool/pthreadpool-cpp.h>
#endif
namespace pytorch_jni {
@ -666,32 +660,4 @@ at::IValue JIValue::JIValueToAtIValue(
typeCode);
}
#if defined(__ANDROID__)
class PyTorchAndroidJni : public facebook::jni::JavaClass<PyTorchAndroidJni> {
public:
constexpr static auto kJavaDescriptor = "Lorg/pytorch/PyTorchAndroid;";
static void registerNatives() {
javaClassStatic()->registerNatives({
makeNativeMethod(
"nativeSetNumThreads", PyTorchAndroidJni::setNumThreads),
});
}
static void setNumThreads(facebook::jni::alias_ref<jclass>, jint numThreads) {
caffe2::pthreadpool()->set_thread_count(numThreads);
}
};
#endif
void common_registerNatives() {
static const int once = []() {
#if defined(__ANDROID__)
pytorch_jni::PyTorchAndroidJni::registerNatives();
#endif
return 0;
}();
((void)once);
}
} // namespace pytorch_jni

View File

@ -17,6 +17,11 @@
#include <android/asset_manager.h>
#include <android/asset_manager_jni.h>
#include <android/log.h>
#ifndef USE_PTHREADPOOL
#define USE_PTHREADPOOL
#endif /* USE_PTHREADPOOL */
#include <caffe2/utils/threadpool/pthreadpool-cpp.h>
#endif
namespace pytorch_jni {
@ -235,6 +240,34 @@ class PytorchJni : public facebook::jni::HybridClass<PytorchJni> {
}
};
#if defined(__ANDROID__)
class PyTorchAndroidJni : public facebook::jni::JavaClass<PyTorchAndroidJni> {
public:
constexpr static auto kJavaDescriptor = "Lorg/pytorch/PyTorchAndroid;";
static void registerNatives() {
javaClassStatic()->registerNatives({
makeNativeMethod(
"nativeSetNumThreads", PyTorchAndroidJni::setNumThreads),
});
}
static void setNumThreads(facebook::jni::alias_ref<jclass>, jint numThreads) {
caffe2::pthreadpool()->set_thread_count(numThreads);
}
};
#endif
void common_registerNatives() {
static const int once = []() {
#if defined(__ANDROID__)
pytorch_jni::PyTorchAndroidJni::registerNatives();
#endif
return 0;
}();
((void)once);
}
} // namespace pytorch_jni
JNIEXPORT jint JNICALL JNI_OnLoad(JavaVM* vm, void*) {

View File

@ -18,6 +18,11 @@
#include <android/asset_manager.h>
#include <android/asset_manager_jni.h>
#include <android/log.h>
#ifndef USE_PTHREADPOOL
#define USE_PTHREADPOOL
#endif /* USE_PTHREADPOOL */
#include <caffe2/utils/threadpool/pthreadpool-cpp.h>
#endif
namespace pytorch_jni {
@ -199,6 +204,34 @@ class PytorchJni : public facebook::jni::HybridClass<PytorchJni> {
}
};
#if defined(__ANDROID__)
class PyTorchAndroidJni : public facebook::jni::JavaClass<PyTorchAndroidJni> {
public:
constexpr static auto kJavaDescriptor = "Lorg/pytorch/LitePyTorchAndroid;";
static void registerNatives() {
javaClassStatic()->registerNatives({
makeNativeMethod(
"nativeSetNumThreads", PyTorchAndroidJni::setNumThreads),
});
}
static void setNumThreads(facebook::jni::alias_ref<jclass>, jint numThreads) {
caffe2::pthreadpool()->set_thread_count(numThreads);
}
};
#endif
void common_registerNatives() {
static const int once = []() {
#if defined(__ANDROID__)
pytorch_jni::PyTorchAndroidJni::registerNatives();
#endif
return 0;
}();
((void)once);
}
} // namespace pytorch_jni
JNIEXPORT jint JNICALL JNI_OnLoad(JavaVM* vm, void*) {

View File

@ -0,0 +1,50 @@
package org.pytorch;
import android.content.res.AssetManager;
import com.facebook.jni.annotations.DoNotStrip;
import com.facebook.soloader.nativeloader.NativeLoader;
import com.facebook.soloader.nativeloader.SystemDelegate;
public final class LitePyTorchAndroid {
static {
if (!NativeLoader.isInitialized()) {
NativeLoader.init(new SystemDelegate());
}
NativeLoader.loadLibrary("pytorch_jni_lite");
PyTorchCodegenLoader.loadNativeLibs();
}
/**
* Attention: This is not the recommended way of loading production modules, as prepackaged assets
* increase APK size, etc. For production usage, consider loading from a file on disk instead: {@link
* org.pytorch.Module#load(String)}.
*
* <p>This method is meant for use in tests and demos.
*/
public static Module loadModuleFromAsset(
final AssetManager assetManager, final String assetName, final Device device) {
return new Module(new LiteNativePeer(assetName, assetManager, device));
}
public static Module loadModuleFromAsset(
final AssetManager assetManager, final String assetName) {
return new Module(new LiteNativePeer(assetName, assetManager, Device.CPU));
}
/**
* Globally sets the number of threads used on the native side. Attention: this has a global effect;
* all modules share one thread pool with the specified number of threads.
*
* @param numThreads number of threads; must be a positive number.
*/
public static void setNumThreads(int numThreads) {
if (numThreads < 1) {
throw new IllegalArgumentException("Number of threads cannot be less than 1");
}
nativeSetNumThreads(numThreads);
}
@DoNotStrip
private static native void nativeSetNumThreads(int numThreads);
}

View File

@ -10,7 +10,7 @@ public final class PyTorchAndroid {
if (!NativeLoader.isInitialized()) {
NativeLoader.init(new SystemDelegate());
}
NativeLoader.loadLibrary("pytorch_jni_lite");
NativeLoader.loadLibrary("pytorch_jni");
PyTorchCodegenLoader.loadNativeLibs();
}

View File

@ -41,6 +41,11 @@ android {
buildConfigField("long[]", "INPUT_TENSOR_SHAPE", "new long[]{1, 3, 224, 224}")
buildConfigField("boolean", "NATIVE_BUILD", 'false')
buildConfigField("boolean", "USE_VULKAN_DEVICE", 'false')
buildConfigField(
"int",
"BUILD_LITE_INTERPRETER",
System.env.BUILD_LITE_INTERPRETER != null ? System.env.BUILD_LITE_INTERPRETER : "1"
)
addManifestPlaceholders([APP_NAME: "@string/app_name", MAIN_ACTIVITY: "org.pytorch.testapp.MainActivity"])
}
buildTypes {
@ -63,14 +68,15 @@ android {
mnet {
dimension "model"
applicationIdSuffix ".mnet"
buildConfigField("String", "MODULE_ASSET_NAME", "\"mnet.pt\"")
buildConfigField("String", "MODULE_ASSET_NAME", "\"mobilenet_v2.ptl\"")
addManifestPlaceholders([APP_NAME: "MNET"])
buildConfigField("String", "LOGCAT_TAG", "\"pytorch-mnet\"")
}
// NB: This is not working atm https://github.com/pytorch/pytorch/issues/102966
mnetVulkan {
dimension "model"
applicationIdSuffix ".mnet_vulkan"
buildConfigField("String", "MODULE_ASSET_NAME", "\"mnet_vulkan.pt\"")
buildConfigField("String", "MODULE_ASSET_NAME", "\"mobilenet_v2_vulkan.ptl\"")
buildConfigField("boolean", "USE_VULKAN_DEVICE", 'true')
addManifestPlaceholders([APP_NAME: "MNET_VULKAN"])
buildConfigField("String", "LOGCAT_TAG", "\"pytorch-mnet-vulkan\"")
@ -78,7 +84,7 @@ android {
resnet18 {
dimension "model"
applicationIdSuffix ".resnet18"
buildConfigField("String", "MODULE_ASSET_NAME", "\"resnet18.pt\"")
buildConfigField("String", "MODULE_ASSET_NAME", "\"resnet18.ptl\"")
addManifestPlaceholders([APP_NAME: "RN18"])
buildConfigField("String", "LOGCAT_TAG", "\"pytorch-resnet18\"")
}
@ -149,8 +155,8 @@ dependencies {
//nativeBuildImplementation(name: 'pytorch_android_torchvision-release', ext: 'aar')
//extractForNativeBuild(name: 'pytorch_android-release', ext: 'aar')
nightlyImplementation 'org.pytorch:pytorch_android:1.12.0-SNAPSHOT'
nightlyImplementation 'org.pytorch:pytorch_android_torchvision:1.12.0-SNAPSHOT'
nightlyImplementation 'org.pytorch:pytorch_android:2.2.0-SNAPSHOT'
nightlyImplementation 'org.pytorch:pytorch_android_torchvision:2.2.0-SNAPSHOT'
aarImplementation(name:'pytorch_android', ext:'aar')
aarImplementation(name:'pytorch_android_torchvision', ext:'aar')

View File

@ -1,6 +1,7 @@
package org.pytorch.testapp;
import android.content.Context;
import android.content.res.AssetManager;
import android.os.Bundle;
import android.os.Handler;
import android.os.HandlerThread;
@ -16,6 +17,8 @@ import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.lang.reflect.InvocationTargetException;
import java.lang.reflect.Method;
import java.nio.FloatBuffer;
import org.pytorch.Device;
import org.pytorch.IValue;
@ -42,7 +45,13 @@ public class MainActivity extends AppCompatActivity {
new Runnable() {
@Override
public void run() {
final Result result = doModuleForward();
final Result result;
try {
result = doModuleForward();
} catch (ClassNotFoundException | NoSuchMethodException | IllegalAccessException |
InvocationTargetException e) {
throw new RuntimeException(e);
}
runOnUiThread(
new Runnable() {
@Override
@ -118,7 +127,7 @@ public class MainActivity extends AppCompatActivity {
@WorkerThread
@Nullable
protected Result doModuleForward() {
protected Result doModuleForward() throws ClassNotFoundException, IllegalAccessException, NoSuchMethodException, InvocationTargetException {
if (mModule == null) {
final long[] shape = BuildConfig.INPUT_TENSOR_SHAPE;
long numElements = 1;
@ -129,12 +138,29 @@ public class MainActivity extends AppCompatActivity {
mInputTensor =
Tensor.fromBlob(
mInputTensorBuffer, BuildConfig.INPUT_TENSOR_SHAPE, MemoryFormat.CHANNELS_LAST);
PyTorchAndroid.setNumThreads(1);
mModule =
BuildConfig.USE_VULKAN_DEVICE
? PyTorchAndroid.loadModuleFromAsset(
getAssets(), BuildConfig.MODULE_ASSET_NAME, Device.VULKAN)
: PyTorchAndroid.loadModuleFromAsset(getAssets(), BuildConfig.MODULE_ASSET_NAME);
Class ptAndroid;
if (BuildConfig.BUILD_LITE_INTERPRETER == 1) {
ptAndroid = Class.forName("org.pytorch.LitePyTorchAndroid");
}
else {
ptAndroid = Class.forName("org.pytorch.PyTorchAndroid");
}
Method setNumThreads = ptAndroid.getMethod("setNumThreads", int.class);
setNumThreads.invoke(null,1);
Method loadModuleFromAsset = ptAndroid.getMethod(
"loadModuleFromAsset",
AssetManager.class,
String.class,
Device.class
);
mModule = (Module) (BuildConfig.USE_VULKAN_DEVICE
? loadModuleFromAsset.invoke(
null, getAssets(), BuildConfig.MODULE_ASSET_NAME, Device.VULKAN)
: loadModuleFromAsset.invoke(
null, getAssets(), BuildConfig.MODULE_ASSET_NAME, Device.CPU));
}
final long startTime = SystemClock.elapsedRealtime();

View File

@ -125,6 +125,7 @@ file(GLOB native_ao_sparse_h
"native/ao_sparse/quantized/cpu/*.h")
file(GLOB native_quantized_h "native/quantized/*.h" "native/quantized/cpu/*.h" "native/quantized/cudnn/*.h")
file(GLOB native_cpu_h "native/cpu/*.h")
file(GLOB native_utils_h "native/utils/*.h")
file(GLOB native_cuda_cu "native/cuda/*.cu")
file(GLOB native_cuda_cpp "native/cuda/*.cpp")
@ -540,7 +541,7 @@ install(FILES "${CMAKE_CURRENT_BINARY_DIR}/cmake-exports/ATenConfig.cmake"
set(INSTALL_HEADERS ${base_h} ${ATen_CORE_HEADERS})
if(NOT INTERN_BUILD_MOBILE)
list(APPEND INSTALL_HEADERS ${native_h} ${native_cpu_h} ${native_ao_sparse_h} ${native_quantized_h} ${cuda_h} ${native_cuda_h} ${native_hip_h} ${cudnn_h} ${hip_h} ${mps_h} ${native_mps_h} ${miopen_h})
list(APPEND INSTALL_HEADERS ${native_h} ${native_cpu_h} ${native_ao_sparse_h} ${native_quantized_h} ${cuda_h} ${native_cuda_h} ${native_hip_h} ${cudnn_h} ${hip_h} ${mps_h} ${native_mps_h} ${native_utils_h} ${miopen_h})
# Metal
if(USE_PYTORCH_METAL_EXPORT)
# Add files needed from exporting metal models(optimized_for_mobile)

View File

@ -371,6 +371,22 @@ inline void deprecated_AT_DISPATCH_ALL_TYPES_AND_HALF_AND_COMPLEX() {}
AT_DISPATCH_CASE_FLOATING_AND_COMPLEX_TYPES_AND3( \
SCALARTYPE1, SCALARTYPE2, SCALARTYPE3, __VA_ARGS__))
#define AT_DISPATCH_CASE_FLOATING_AND_COMPLEX_TYPES_AND4( \
SCALARTYPE1, SCALARTYPE2, SCALARTYPE3, SCALARTYPE4, ...) \
AT_DISPATCH_CASE_FLOATING_AND_COMPLEX_TYPES(__VA_ARGS__) \
AT_DISPATCH_CASE(SCALARTYPE1, __VA_ARGS__) \
AT_DISPATCH_CASE(SCALARTYPE2, __VA_ARGS__) \
AT_DISPATCH_CASE(SCALARTYPE3, __VA_ARGS__) \
AT_DISPATCH_CASE(SCALARTYPE4, __VA_ARGS__)
#define AT_DISPATCH_FLOATING_AND_COMPLEX_TYPES_AND4( \
SCALARTYPE1, SCALARTYPE2, SCALARTYPE3, SCALARTYPE4, TYPE, NAME, ...) \
AT_DISPATCH_SWITCH( \
TYPE, \
NAME, \
AT_DISPATCH_CASE_FLOATING_AND_COMPLEX_TYPES_AND4( \
SCALARTYPE1, SCALARTYPE2, SCALARTYPE3, SCALARTYPE4, __VA_ARGS__))
#define AT_DISPATCH_CASE_INTEGRAL_TYPES(...) \
AT_DISPATCH_CASE(at::ScalarType::Byte, __VA_ARGS__) \
AT_DISPATCH_CASE(at::ScalarType::Char, __VA_ARGS__) \

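The new AT_DISPATCH_FLOATING_AND_COMPLEX_TYPES_AND4 macro follows the same pattern as the existing _AND2/_AND3 variants: it expands to an AT_DISPATCH_SWITCH over the floating and complex types plus four extra scalar types, defining scalar_t inside the lambda. A minimal usage sketch, assuming an ATen build; the extra types and the check inside the lambda are arbitrary illustrations, not taken from this diff:

#include <ATen/ATen.h>
#include <ATen/Dispatch.h>

// Hypothetical helper: dispatch on the tensor's dtype using the new _AND4 variant.
void check_dispatchable(const at::Tensor& t) {
  AT_DISPATCH_FLOATING_AND_COMPLEX_TYPES_AND4(
      at::ScalarType::Half,
      at::ScalarType::BFloat16,
      at::ScalarType::Bool,
      at::ScalarType::Byte,
      t.scalar_type(),
      "check_dispatchable",
      [&] {
        // scalar_t is defined by the dispatch machinery for the matched dtype.
        static_assert(sizeof(scalar_t) > 0, "dispatched type must be complete");
      });
}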
View File

@ -187,17 +187,18 @@ expand_inplace(
// See NOTE [ ExpandUtils Borrowing ] above for `MaybeOwned` explanation.
inline std::tuple<c10::MaybeOwned<Tensor>, c10::MaybeOwned<Tensor>>
expand_outplace(const Tensor& to_expand1, const Tensor& to_expand2) {
if (to_expand1.sizes().equals(to_expand2.sizes())) {
auto s1 = to_expand1.sym_sizes();
auto s2 = to_expand2.sym_sizes();
if (s1.equals(s2)) {
return std::make_tuple(
c10::MaybeOwned<Tensor>::borrowed(to_expand1),
c10::MaybeOwned<Tensor>::borrowed(to_expand2));
}
auto expanded_size =
infer_size_dimvector(to_expand1.sizes(), to_expand2.sizes());
auto expanded_size = infer_size_symdimvector(s1, s2);
return std::make_tuple(
c10::MaybeOwned<Tensor>::owned(to_expand1.expand(expanded_size)),
c10::MaybeOwned<Tensor>::owned(to_expand2.expand(expanded_size)));
c10::MaybeOwned<Tensor>::owned(to_expand1.expand_symint(expanded_size)),
c10::MaybeOwned<Tensor>::owned(to_expand2.expand_symint(expanded_size)));
}
inline std::tuple<c10::MaybeOwned<Tensor>, c10::MaybeOwned<Tensor>>

View File
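For context, expand_outplace broadcasts two tensors to a common shape and returns MaybeOwned handles, borrowing the inputs when their sizes already match; the change above only swaps concrete sizes for symbolic ones so the same fast path works under symbolic shapes. A small sketch of the call, with made-up shapes:

#include <ATen/ATen.h>
#include <ATen/ExpandUtils.h>
#include <tuple>

void broadcast_demo() {
  at::Tensor a = at::rand({4, 1});
  at::Tensor b = at::rand({1, 5});
  // Both results view the broadcast 4x5 shape; expand creates views, not copies.
  auto expanded = at::expand_outplace(a, b);
  const at::Tensor& a_exp = *std::get<0>(expanded);
  const at::Tensor& b_exp = *std::get<1>(expanded);
  TORCH_CHECK(a_exp.sizes() == b_exp.sizes());
}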

@ -5,9 +5,13 @@
#include <c10/cuda/CUDACachingAllocator.h>
#include <c10/cuda/CUDAFunctions.h>
#include <chrono>
#include <thread>
namespace at::cuda {
static bool _cuda_graphs_debug = false;
constexpr int kSynchronizeBusyWaitMillis = 10;
MempoolId_t graph_pool_handle() {
#if !defined(USE_ROCM) || ROCM_VERSION >= 50300
@ -55,6 +59,25 @@ CaptureId_t capture_sequence_id() {
* describes memory management for captures.
*/
std::atomic<int> CUDAGraph::pending_event_queries = 0;
// Track any outstanding event queries that could happen e.g., in a NCCL watchdog so that they
// can be resolved before the capture begins. Note that event queries are not allowed during a
// graph capture in the default capture mode.
void CUDAGraph::inc_pending_event_queries() {
pending_event_queries++;
}
void CUDAGraph::dec_pending_event_queries() {
TORCH_INTERNAL_ASSERT(pending_event_queries > 0,
"Attempted to decrement the number of outstanding events to be queried, but it was <= 0.");
pending_event_queries--;
}
int CUDAGraph::num_pending_event_queries() {
return pending_event_queries;
}
CUDAGraph::CUDAGraph()
// CUDAStreams may not be default-constructed.
: capture_stream_(at::cuda::getCurrentCUDAStream()) {
@ -115,6 +138,15 @@ void CUDAGraph::capture_begin(MempoolId_t pool/*=0*/, cudaStreamCaptureMode capt
// due to the capture status being updated _after_ a capture had already started.
c10::cuda::CUDACachingAllocator::beginAllocateStreamToPool(capture_dev_, capture_stream_, mempool_id_);
// At this point, any NCCL watchdogs should be aware that we are in capture mode
// and therefore should not enqueue any additional work that could be event-queried.
// We still must wait on any existing work that has not been cleaned up.
while (num_pending_event_queries()) {
TORCH_WARN_ONCE("Waiting for pending NCCL work to finish before starting graph capture.");
std::this_thread::sleep_for(
std::chrono::milliseconds(kSynchronizeBusyWaitMillis));
}
// cudaStreamCaptureModeGlobal is the most conservative option to
// prevent potentially unsafe CUDA API calls during capture. See
// https://docs.nvidia.com/cuda/cuda-runtime-api/group__CUDART__STREAM.html#group__CUDART__STREAM_1g9d0535d93a214cbf126835257b16ba85

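The counter added above lets code that polls CUDA events (for example an NCCL watchdog thread) announce in-flight queries, so capture_begin can busy-wait until they drain before capture starts. A hedged sketch of how a caller might bracket a query with the new static helpers; the poll_event function and its error handling are illustrative only:

#include <ATen/cuda/CUDAGraph.h>
#include <cuda_runtime.h>

// Hypothetical polling helper: register the event query so that a graph capture
// starting elsewhere waits for it to complete first.
bool poll_event(cudaEvent_t event) {
  at::cuda::CUDAGraph::inc_pending_event_queries();
  cudaError_t status = cudaEventQuery(event);  // cudaErrorNotReady means still running
  at::cuda::CUDAGraph::dec_pending_event_queries();
  return status == cudaSuccess;
}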
View File

@ -5,6 +5,8 @@
#include <c10/cuda/CUDAGraphsC10Utils.h>
#include <c10/cuda/CUDAStream.h>
#include <mutex>
namespace at {
struct CUDAGeneratorImpl;
@ -19,6 +21,9 @@ struct TORCH_CUDA_CPP_API CUDAGraph {
CUDAGraph();
~CUDAGraph();
static void inc_pending_event_queries();
static void dec_pending_event_queries();
static int num_pending_event_queries();
void capture_begin(MempoolId_t pool={0, 0}, cudaStreamCaptureMode capture_mode = cudaStreamCaptureModeGlobal);
void capture_end();
void replay();
@ -33,6 +38,8 @@ struct TORCH_CUDA_CPP_API CUDAGraph {
cudaGraphExec_t graph_exec_ = NULL;
#endif
static std::atomic<int> pending_event_queries;
// internal states so reset() can do its best cleaning up
// Set to true in capture_end if cudaStreamEndCapture succeeded
// Set back to false soon after, when graph_ is consumed by cudaGraphInstantiate

View File

@ -161,8 +161,10 @@ CUDA_STUB3(cuModuleGetFunction, CUfunction *, CUmodule, const char *);
CUDA_STUB4(cuOccupancyMaxActiveBlocksPerMultiprocessor, int *, CUfunction, int, size_t);
CUDA_STUB2(cuGetErrorString, CUresult, const char **);
CUDA_STUB1(cuCtxGetCurrent, CUcontext *);
CUDA_STUB1(cuCtxSetCurrent, CUcontext);
CUDA_STUB1(cuModuleUnload, CUmodule);
CUDA_STUB3(cuDevicePrimaryCtxGetState, CUdevice, unsigned int *, int *);
CUDA_STUB2(cuDevicePrimaryCtxRetain, CUcontext *, CUdevice);
CUDA_STUB4(cuLinkCreate, unsigned int, CUjit_option *, void **, CUlinkState *);
CUDA_STUB3(cuLinkComplete, CUlinkState, void **, size_t *);
CUDA_STUB3(cuFuncSetAttribute, CUfunction, CUfunction_attribute, int);

View File

@ -51,8 +51,10 @@ namespace at { namespace cuda {
_(cuLaunchKernel) \
_(cuLaunchCooperativeKernel) \
_(cuCtxGetCurrent) \
_(cuCtxSetCurrent) \
_(cuModuleUnload) \
_(cuDevicePrimaryCtxGetState) \
_(cuDevicePrimaryCtxRetain) \
_(cuLinkCreate) \
_(cuLinkAddData) \
_(cuLinkComplete) \

View File

@ -93,10 +93,17 @@ MPSDevice::MPSDevice() : _mtl_device(nil), _mtl_indexing_library(nil) {
NSArray* devices = [MTLCopyAllDevices() autorelease];
for (unsigned long i = 0; i < [devices count]; i++) {
id<MTLDevice> device = devices[i];
if (![device isLowPower]) { // exclude Intel GPUs
_mtl_device = [device retain];
break;
if ([device isLowPower]) { // exclude Intel GPUs
continue;
}
if (![device supportsFamily:MTLGPUFamilyMac2]) {
// Exclude devices that do not support Metal 2.0
// Virtualised MPS devices on macOS 12.6 should fail this check
TORCH_WARN("Skipping device ", [[device name] UTF8String], " that does not support Metal 2.0");
continue;
}
_mtl_device = [device retain];
break;
}
TORCH_INTERNAL_ASSERT_DEBUG_ONLY(_mtl_device);
}

View File

@ -147,9 +147,9 @@ void MPSStream::addCompletedHandler(MTLCommandBufferHandler block) {
}
void MPSStream::fill(id<MTLBuffer> buffer, uint8_t value, size_t length, size_t offset, SyncType syncType) {
TORCH_INTERNAL_ASSERT(length >= offset);
if (length == 0)
if (length == 0) {
return;
}
dispatch_sync(_serialQueue, ^() {
@autoreleasepool {
endKernelCoalescing();

Some files were not shown because too many files have changed in this diff.