Compare commits


68 Commits

Author SHA1 Message Date
7bcf7da3a2 Add tensorboard to pip requirements (#109349) (#109823)
https://github.com/pytorch/pytorch/pull/108351/files is failing on macOS and Windows because we don't have the tensorboard dependency there.
It is available on Linux because it is included in `.ci/docker/requirements-docs.txt`.

Adding skips to make it green.

Here are some outputs for future debugging:
https://github.com/pytorch/pytorch/actions/runs/6192933622/job/16813841625
https://ossci-raw-job-status.s3.amazonaws.com/log/16813841625
```

2023-09-15T02:09:43.2397460Z =================================== FAILURES ===================================
2023-09-15T02:09:43.2397650Z ______________________ TestTensorBoardSummary.test_audio _______________________
2023-09-15T02:09:43.2397830Z Traceback (most recent call last):
2023-09-15T02:09:43.2398090Z   File "/Users/ec2-user/runner/_work/pytorch/pytorch/test/test_tensorboard.py", line 417, in test_audio
2023-09-15T02:09:43.2398390Z     self.assertTrue(compare_proto(summary.audio('dummy', tensor_N(shape=(42,))), self))
2023-09-15T02:09:43.2398720Z   File "/Users/ec2-user/runner/_work/_temp/conda_environment_6192933622/lib/python3.9/unittest/case.py", line 688, in assertTrue
2023-09-15T02:09:43.2399100Z ##[endgroup]
2023-09-15T02:09:43.2399240Z     raise self.failureException(msg)
2023-09-15T02:09:43.2399400Z AssertionError: False is not true
2023-09-15T02:09:43.2399490Z
2023-09-15T02:09:43.2399590Z To execute this test, run the following from the base repo dir:
2023-09-15T02:09:43.2399820Z      python test/test_tensorboard.py -k test_audio
2023-09-15T02:09:43.2399930Z
```

https://github.com/pytorch/pytorch/actions/runs/6192933622/job/16814065258
https://ossci-raw-job-status.s3.amazonaws.com/log/16814065258
```

2023-09-15T02:38:44.6284979Z ================================== FAILURES ===================================
2023-09-15T02:38:44.6285295Z ______________________ TestTensorBoardNumpy.test_scalar _______________________
2023-09-15T02:38:44.6285556Z Traceback (most recent call last):
2023-09-15T02:38:44.6285915Z   File "C:\actions-runner\_work\pytorch\pytorch\test\test_tensorboard.py", line 794, in test_scalar
2023-09-15T02:38:44.6286325Z     res = make_np(np.float128(1.00008 + 9))
2023-09-15T02:38:44.6286705Z   File "C:\Jenkins\Miniconda3\lib\site-packages\numpy\__init__.py", line 315, in __getattr__
2023-09-15T02:38:44.6287700Z     raise AttributeError("module {!r} has no attribute "
2023-09-15T02:38:44.6288060Z AttributeError: module 'numpy' has no attribute 'float128'
2023-09-15T02:38:44.6288241Z
2023-09-15T02:38:44.6288390Z To execute this test, run the following from the base repo dir:
2023-09-15T02:38:44.6288679Z      python test\test_tensorboard.py -k test_scalar
2023-09-15T02:38:44.6288846Z
```

https://github.com/pytorch/pytorch/actions/runs/6193449301/job/16815113985
https://ossci-raw-job-status.s3.amazonaws.com/log/16815113985
```
2023-09-15T03:25:53.7797550Z =================================== FAILURES ===================================
2023-09-15T03:25:53.7797790Z __________________ TestTensorBoardSummary.test_histogram_auto __________________
2023-09-15T03:25:53.7798000Z Traceback (most recent call last):
2023-09-15T03:25:53.7798310Z   File "/Users/ec2-user/runner/_work/pytorch/pytorch/test/test_tensorboard.py", line 426, in test_histogram_auto
2023-09-15T03:25:53.7798690Z     self.assertTrue(compare_proto(summary.histogram('dummy', tensor_N(shape=(1024,)), bins='auto', max_bins=5), self))
2023-09-15T03:25:53.7799090Z   File "/Users/ec2-user/runner/_work/_temp/conda_environment_6193449301/lib/python3.9/unittest/case.py", line 688, in assertTrue
2023-09-15T03:25:53.7799430Z     raise self.failureException(msg)
2023-09-15T03:25:53.7799610Z AssertionError: False is not true
2023-09-15T03:25:53.7799720Z
2023-09-15T03:25:53.7799840Z To execute this test, run the following from the base repo dir:
2023-09-15T03:25:53.7800170Z      python test/test_tensorboard.py -k test_histogram_auto
2023-09-15T03:25:53.7800310Z
2023-09-15T03:25:53.7800430Z This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
2023-09-15T03:25:53.7800870Z - generated xml file: /Users/ec2-user/runner/_work/pytorch/pytorch/test/test-reports/python-pytest/test_tensorboard/test_tensorboard-aef95b5e2d69c061.xml -
2023-09-15T03:25:53.7801200Z =========================== short test summary info ============================
```

https://github.com/pytorch/pytorch/actions/runs/6193576371/job/16815396352
https://ossci-raw-job-status.s3.amazonaws.com/log/16815396352
```
2023-09-15T03:47:02.9430070Z _________________ TestTensorBoardSummary.test_histogram_doane __________________
2023-09-15T03:47:02.9430250Z Traceback (most recent call last):
2023-09-15T03:47:02.9430520Z   File "/Users/ec2-user/runner/_work/pytorch/pytorch/test/test_tensorboard.py", line 433, in test_histogram_doane
2023-09-15T03:47:02.9430850Z     self.assertTrue(compare_proto(summary.histogram('dummy', tensor_N(shape=(1024,)), bins='doane', max_bins=5), self))
2023-09-15T03:47:02.9431180Z   File "/Users/ec2-user/runner/_work/_temp/conda_environment_6193576371/lib/python3.9/unittest/case.py", line 688, in assertTrue
2023-09-15T03:47:02.9431390Z     raise self.failureException(msg)
2023-09-15T03:47:02.9431550Z AssertionError: False is not true
2023-09-15T03:47:02.9431640Z
2023-09-15T03:47:02.9431730Z To execute this test, run the following from the base repo dir:
2023-09-15T03:47:02.9432000Z      python test/test_tensorboard.py -k test_histogram_doane
2023-09-15T03:47:02.9432120Z
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/109349
Approved by: https://github.com/huydhn

(cherry picked from commit 1cc0921eb62089392595264610cfa863d42fe9bc)

Co-authored-by: Catherine Lee <csl@fb.com>
2023-09-21 16:25:09 -06:00
1841d54370 [CI] Add torch.compile works without numpy test (#109624) (#109818)
Fixes https://github.com/pytorch/pytorch/issues/109387

Pull Request resolved: https://github.com/pytorch/pytorch/pull/109624
Approved by: https://github.com/albanD

Co-authored-by: Nikita Shulga <nshulga@meta.com>
2023-09-21 15:27:52 -06:00
fca42334be Fix the parameter error in test_device_mesh.py (#108758) (#109826)
Fix the parameter error in test_device_mesh.py

Pull Request resolved: https://github.com/pytorch/pytorch/pull/108758
Approved by: https://github.com/awgu

(cherry picked from commit 03bf745e1d6a050a9e322d41c2b7e75db8cdbedc)

Co-authored-by: humingxue <humingxue1@huawei.com>
2023-09-21 15:13:09 -06:00
539a971161 [Release-2.1]Add finfo properties for float8 dtypes (#109808)
Add float8 finfo checks to `test_type_info.py`
Fixes https://github.com/pytorch/pytorch/issues/109737
Cherry-pick of https://github.com/pytorch/pytorch/pull/109744 into release/2.1 branch
Approved by: https://github.com/drisspg

(cherry picked from commit cddd0db241a3b8df930284fd29523da9d28b1f2c)
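
For reference, a minimal sketch of what the added finfo support enables (float8 dtype names as in the linked PR):

```python
import torch

# Inspect the numeric limits of the float8 dtypes covered by this cherry-pick:
for dtype in (torch.float8_e4m3fn, torch.float8_e5m2):
    info = torch.finfo(dtype)
    print(dtype, info.max, info.min, info.eps)
```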
2023-09-21 11:51:09 -07:00
9287a0cf59 [Release/2.1][JIT] Fix typed enum handling in 3.11 (#109807)
In Python 3.11+, typed enums (such as `enum.IntEnum`) retain `__new__`, `__str__`, and other methods of the base class via the `__init_subclass__()` method (see https://docs.python.org/3/whatsnew/3.11.html#enum ), i.e. the following code
```python
import sys
import inspect
from enum import Enum

class IntColor(int, Enum):
    RED = 1
    GREEN = 2

class Color(Enum):
    RED = 1
    GREEN = 2

def get_methods(cls):
    def predicate(m):
        if not inspect.isfunction(m) and not inspect.ismethod(m):
            return False
        return m.__name__ in cls.__dict__
    return inspect.getmembers(cls, predicate=predicate)

if __name__ == "__main__":
    print(sys.version)
    print(f"IntColor methods {get_methods(IntColor)}")
    print(f"Color methods {get_methods(Color)}")
```

This prints an empty list for both cases on older Python, but on Python 3.11+ the typed enum's list contains inherited enum constructors and other methods:
```shell
% conda run -n py310 python bar.py
3.10.12 | packaged by conda-forge | (main, Jun 23 2023, 22:41:52) [Clang 15.0.7 ]
IntColor methods []
Color methods []
% conda run -n py311 python bar.py
3.11.0 | packaged by conda-forge | (main, Oct 25 2022, 06:21:25) [Clang 14.0.4 ]
IntColor methods [('__format__', <function Enum.__format__ at 0x105006ac0>), ('__new__', <function Enum.__new__ at 0x105006660>), ('__repr__', <function Enum.__repr__ at 0x1050068e0>)]
Color methods []
```

This change allows typed enums to be scriptable on 3.11 by explicitly marking several `enum.Enum` methods to be dropped by JIT script, and adds a test that typed enums are jit-scriptable.
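
For illustration, a minimal sketch of the case this enables on 3.11:

```python
import torch
from enum import IntEnum

class IntColor(IntEnum):
    RED = 1
    GREEN = 2

# Before this fix, scripting code that uses a typed enum failed on Python 3.11+:
@torch.jit.script
def to_value(color: IntColor) -> int:
    return color.value
```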

Fixes https://github.com/pytorch/pytorch/issues/108933

Cherry-pick of https://github.com/pytorch/pytorch/pull/109717 into release/2.1 branch.
Approved by: https://github.com/atalman, https://github.com/davidberard98

(cherry picked from commit 55685d57c004f250118fcccc4e99ae883e037e2d)
2023-09-21 11:49:54 -07:00
c464075d5d [release only] Docker build - Setup release specific variables (#109809) 2023-09-21 12:24:45 -06:00
1b4161c686 [Release/2.1] [Docs] Fix compiler.list_backends invocation (#109800)
`s/torch.compile.list_backends/torch.compiler.list_backends/`
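
The corrected call, for reference:

```python
import torch

# The documented API lives under torch.compiler, not torch.compile:
print(torch.compiler.list_backends())  # e.g. ['cudagraphs', 'inductor', 'onnxrt', ...]
```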

Fixes https://github.com/pytorch/pytorch/issues/109451

Cherry-pick of  https://github.com/pytorch/pytorch/pull/109568 into release/2.1 branch
Approved by: https://github.com/msaroufim, https://github.com/svekars

(cherry picked from commit af867c2d140c2ab071447219738e59df4ac927b9)
2023-09-21 10:54:37 -07:00
28220534de [Release/2.1] [Docs] Fix typo in torch.unflatten (#109801)
Fixes https://github.com/pytorch/pytorch/issues/109559

Cherry-pick of https://github.com/pytorch/pytorch/pull/109588 into release/2.1 branch
Approved by: https://github.com/lezcano

(cherry picked from commit 2f53bca0fc84d1829cc571d76c58ea411f4fc288)
2023-09-21 10:48:05 -07:00
da9639c752 Remove torchtext from Build Official Docker images (#109799) (#109803)
Fixes nightly official Docker image build.
Failures: https://hud.pytorch.org/hud/pytorch/pytorch/nightly/1?per_page=50&name_filter=Build%20Official

<!--
copilot:summary
-->
### <samp>🤖 Generated by Copilot at 8671bfc</samp>

Remove `torchtext` installation from `Dockerfile` for arm64. This fixes the arm64 build of the PyTorch Docker image.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/109799
Approved by: https://github.com/seemethere
2023-09-21 11:31:22 -06:00
e534243ec2 Add docs for torch.compile(numpy) (#109789)
ghstack-source-id: 3e29b38d0bc574ab5f35eee34ebb37fa6238de7e
Pull Request resolved: https://github.com/pytorch/pytorch/pull/109710
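
A minimal sketch of the documented feature (torch.compile tracing through plain NumPy code):

```python
import numpy as np
import torch

@torch.compile
def normalize(x):
    return (x - x.mean()) / x.std()  # NumPy ops, compiled via torch

print(normalize(np.random.randn(4, 4)).dtype)
```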
2023-09-21 10:31:36 -06:00
01fa8c140a Update dynamic shapes documentation (#109787)
Signed-off-by: Edward Z. Yang <ezyang@meta.com>

ghstack-source-id: 6da57e6a83233b9404734279df3883aeeb23feb7
Pull Request resolved: https://github.com/pytorch/pytorch/pull/109764
2023-09-21 09:16:04 -06:00
5aae979614 [release-2.1] Make numpy dependency optional for torch.compile (#109608)
Cherry-pick of 4ee179c952, a9bf1031d4, and fb58a72d96 into the release/2.1 branch

Test plan: `python3 -c "import torch;torch.compile(lambda x:print(x))('Hello World')"`

Fixes #109387 on the release branch

* Fix `ConstantVariable` init method if NumPy is missing

By adding an `np is not None` check before `isinstance(value, np.number)`

Partially addresses https://github.com/pytorch/pytorch/issues/109387

* [BE] Do not use `numpy` in `torch._inductor.codegen.cpp` (#109324)

`s/numpy.iinfo(numpy.int32)/torch.iinfo(torch.int32)/` as those two are interchangeable

Partially addresses https://github.com/pytorch/pytorch/issues/109387

Pull Request resolved: https://github.com/pytorch/pytorch/pull/109324
Approved by: https://github.com/albanD

* Use `torch.cumsum` instead of numpy one (#109400)

`s/list(numpy.cumsum(foo))/torch.cumsum(torch.tensor(foo), 0).tolist()/`
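
The two forms are equivalent for a list of numbers:

```python
import torch

foo = [1, 2, 3]
# Same result as list(numpy.cumsum(foo)), but without the NumPy dependency:
assert torch.cumsum(torch.tensor(foo), 0).tolist() == [1, 3, 6]
```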

Test plan: ` python3 ../test/inductor/test_split_cat_fx_passes.py -v`

Partially addresses https://github.com/pytorch/pytorch/issues/109387

Pull Request resolved: https://github.com/pytorch/pytorch/pull/109400
Approved by: https://github.com/ezyang

---------

Co-authored-by: Nikita Shulga <nshulga@meta.com>
2023-09-19 10:59:18 -07:00
ced78cc2a7 [fx][split] Copy node metadata for placeholders (#107981) (#109297)
- Follow-up to #107248 which copies metadata for placeholder nodes in the top-level FX graph
- Currently, top-level placeholders do not have their metadata copied over, causing loss of `TensorMetadata` in some `torch.compile` backends

Fixes https://github.com/pytorch/TensorRT/issues/2258
Pull Request resolved: https://github.com/pytorch/pytorch/pull/107981
Approved by: https://github.com/angelayi

Co-authored-by: gs-olive <113141689+gs-olive@users.noreply.github.com>
2023-09-14 14:03:48 -04:00
d8db5808ce Fix CUDA-12 wheel loading on AmazonLinux (#109291)
Or any other distro that has different purelib and platlib paths. A regression was introduced when the small-wheel base dependency was migrated from CUDA-11 to CUDA-12.
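
The distinction can be checked with `sysconfig`; on most distros the two paths match, but not on AmazonLinux:

```python
import sysconfig

# Pure-Python vs. platform-specific site-packages directories:
print(sysconfig.get_path("purelib"))
print(sysconfig.get_path("platlib"))
```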

Not sure why, but the minor version is no longer included in the libraries shipped with the following CUDA-12 packages:
 - nvidia_cuda_nvrtc_cu12-12.1.105
 - nvidia-cuda-cupti-cu12-12.1.105

But they were present in the CUDA-11 release, e.g.:
``` shell
bash-5.2# curl -OL 922c5996aa/nvidia_cuda_nvrtc_cu11-11.7.99-2-py3-none-manylinux1_x86_64.whl; unzip -t nvidia_cuda_nvrtc_cu11-11.7.99-2-py3-none-manylinux1_x86_64.whl |grep \.so
    testing: nvidia/cuda_nvrtc/lib/libnvrtc-builtins.so.11.7   OK
    testing: nvidia/cuda_nvrtc/lib/libnvrtc.so.11.2   OK
bash-5.2# curl -OL c64c03f49d/nvidia_cuda_nvrtc_cu12-12.1.105-py3-none-manylinux1_x86_64.whl; unzip -t nvidia_cuda_nvrtc_cu12-12.1.105-py3-none-manylinux1_x86_64.whl|grep \.so
    testing: nvidia/cuda_nvrtc/lib/libnvrtc-builtins.so.12.1   OK
    testing: nvidia/cuda_nvrtc/lib/libnvrtc.so.12   OK
```

Fixes https://github.com/pytorch/pytorch/issues/109221

This is a cherry-pick of  https://github.com/pytorch/pytorch/pull/109244 into release/2.1 branch
2023-09-14 07:16:17 -07:00
889811ab5b [ONNX] bump submodule to onnx==1.14.1 (#108895) (#109114)
Bump the pip and submodule ONNX dependencies to official stable 1.14.1; there were no code changes between 1.14.1rc2 and 1.14.1.

Also bump ORT to run tests against ort-nightly==1.16.0.dev20230908001.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/108895
Approved by: https://github.com/justinchuby, https://github.com/thiagocrepaldi

Co-authored-by: Aaron Bockover <abock@microsoft.com>
2023-09-12 12:09:59 -04:00
1191449343 Prerequisite of ATen/native/utils header for C++ extension (#109013) (#109106)
# Motivation
Without this PR, if we include a header file like `#include <ATen/native/ForeachUtils.h>` in a C++ extension, it raises an error: `/home/xxx/torch/include/ATen/native/ForeachUtils.h:7:10: fatal error: 'ATen/native/utils/ParamsHash.h' file not found`. We should fix it.

# Solution
Add the ATen/native/utils header files to the build.
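
As a quick check, a hedged sketch of the use case this unblocks (the module name and function are made up):

```python
from torch.utils.cpp_extension import load_inline

# Hypothetical inline extension that includes the previously missing header:
mod = load_inline(
    name="foreach_utils_check",
    cpp_sources="""
#include <ATen/native/ForeachUtils.h>
int answer() { return 42; }
""",
    functions=["answer"],
)
print(mod.answer())
```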

Pull Request resolved: https://github.com/pytorch/pytorch/pull/109013
Approved by: https://github.com/ezyang

Co-authored-by: Yu, Guangye <guangye.yu@intel.com>
2023-09-12 11:37:29 -04:00
6d9fad8474 [ONNX] Bump onnx submodule to 1.14.1; ONNX Runtime 1.16 (#106984) (#109045)
Bump dependencies:

- ort-nightly 1.16.0.dev20230824005
- onnx 1.14.1rc2
- onnxscript 0.1.0.dev20230825
Pull Request resolved: https://github.com/pytorch/pytorch/pull/106984
Approved by: https://github.com/BowenBao, https://github.com/thiagocrepaldi

Co-authored-by: Aaron Bockover <abock@microsoft.com>
2023-09-12 07:38:07 -04:00
ed62318bea [export] Fix export arg type declaration (#109060) (#109064)
Summary: It's an arbitrary-length tuple of anything; `Tuple[Any]` means exactly one element.
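
For reference, the typing distinction:

```python
from typing import Any, Tuple

one_element: Tuple[Any] = ("x",)              # Tuple[Any]: exactly one element
any_length: Tuple[Any, ...] = ("x", 1, None)  # Tuple[Any, ...]: arbitrary length
```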

Test Plan: ci

Differential Revision: D49161625

Pull Request resolved: https://github.com/pytorch/pytorch/pull/109060
Approved by: https://github.com/angelayi

Co-authored-by: Jacob Szwejbka <jakeszwe@fb.com>
2023-09-11 17:57:36 -07:00
ee67c4dd6a Refactor ios-build-test workflow to support binary release (#108322) (#109069)
This refactors the logic from CircleCI iOS [build](https://github.com/pytorch/pytorch/blob/main/.circleci/config.yml#L1323-L1344) and [upload](https://github.com/pytorch/pytorch/blob/main/.circleci/config.yml#L1369-L1377) jobs to GHA.

* Nightly artifacts will be available again on `ossci-ios-build` S3 bucket, for example `libtorch_lite_ios_nightly_2.1.0.20230517.zip`.  The last one there was s3://ossci-ios-build/libtorch_lite_ios_nightly_2.1.0.20230517.zip from May 17th
  * [LibTorch-Lite-Nightly](https://github.com/CocoaPods/Specs/blob/master/Specs/c/3/1/LibTorch-Lite-Nightly/1.14.0.20221109/LibTorch-Lite-Nightly.podspec.json) on cocoapods
* Release artifacts will be on `ossci-ios` S3 bucket, for example `s3://ossci-ios/libtorch_lite_ios_1.13.0.zip` from Nov 3rd 2022
  * [LibTorch-Lite](https://github.com/CocoaPods/Specs/blob/master/Specs/c/c/3/LibTorch-Lite/1.13.0.1/LibTorch-Lite.podspec.json) on cocoapods
  * [LibTorch](https://github.com/CocoaPods/Specs/blob/master/Specs/1/3/c/LibTorch/1.13.0.1/LibTorch.podspec.json) on cocoapods

I will clean up Circle CI code in another PR.

### Testing

Generated new release artifacts for testing from the main branch. Simulator tests have all passed.

* With lite interpreter https://github.com/pytorch/pytorch/actions/runs/6093860118
  * https://ossci-ios.s3.amazonaws.com/libtorch_lite_ios_2.1.0.zip
  * https://ossci-ios.s3.amazonaws.com/LibTorch-Lite-2.1.0.podspec

* The LibTorch binary can be built without the lite interpreter (https://github.com/pytorch/pytorch/actions/runs/6103616035) and uses TorchScript, but to my understanding that path has long been dead. The binary can still be built and tested, though.
  * https://ossci-ios.s3.amazonaws.com/libtorch_ios_2.1.0.zip
  * https://ossci-ios.s3.amazonaws.com/LibTorch-2.1.0.podspec

### Next step for release

* Once the PR is committed, I plan to use the workflow dispatch to build the binaries manually on the `release/2.1` branch. Once they look good, we can publish them on CocoaPods.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/108322
Approved by: https://github.com/atalman
2023-09-11 17:10:22 -07:00
5529b81631 Add torch_lazy_enable_device_data_cache to disable lazy device data cache (#109051)
* Add logic to enable and disable the lazy device tensor cache without modifying it

* Remove as yet unused compilation cache enable/disable global

* Lint fixes
2023-09-11 18:25:05 -04:00
7e23b4907d [quant][pt2] Fix and rename move_model_to_eval (#108891) (#109027)
Summary:
This commit fixes two silent correctness problems with
the current implementation of `move_model_to_eval`:

(1) Previously the user had to manually call `eliminate_dead_code`
before calling `move_model_to_eval`, otherwise the dropout pattern
wouldn't actually get eliminated. This is because the subgraph rewriter
complains that the match is not self-contained, and so silently does
not do the replacement.

(2) We wish to error when the user calls `model.train()` or
`model.eval()` on an exported model. This error is raised
correctly immediately after export today, but is no longer raised
after the user calls prepare or convert.

We fix (1) by moving the `eliminate_dead_code` call into
`move_model_to_eval`, and fix (2) by ensuring the respective
errors are thrown after prepare and convert as well.

Additionally, this commit renames `move_model_to_eval` to
`move_exported_model_to_eval` to be more explicit.

bypass-github-export-checks

Test Plan:
python test/test_quantization.py TestQuantizePT2E.test_disallow_eval_train
python test/test_quantization.py TestQuantizePT2E.test_move_exported_model_to_eval

Imported from OSS

Differential Revision: D49097293

Pull Request resolved: https://github.com/pytorch/pytorch/pull/108891
Approved by: https://github.com/jerryzh168
2023-09-11 18:14:49 -04:00
71c9d5c3a6 Refactor torch.onnx documentation (#109026)
* Refactor torch.onnx documentation (#108379)

* Distinguish between the TorchScript-based exporter (`torch.onnx.export`) and the TorchDynamo-based exporter (`torch.onnx.dynamo_export`)
* Merge ONNX diagnostics page with the exporter page
* Add initial version of a quick overview on the new exporter
* Updates `torch.compiler.html` with the right page for the ONNX Runtime backend for `torch.compile`
* Renamed doc files to clearly identify files belonging to the legacy and newer onnx exporters

Fixes #108274

https://docs-preview.pytorch.org/pytorch/pytorch/108379/index.html
Pull Request resolved: https://github.com/pytorch/pytorch/pull/108379
Approved by: https://github.com/justinchuby, https://github.com/wschin, https://github.com/malfet

* Follow-up #108379 (#108905)

Fixes #108379

Pull Request resolved: https://github.com/pytorch/pytorch/pull/108905
Approved by: https://github.com/abock
2023-09-11 14:27:26 -07:00
91e414957b fix documentation typo (#109054) 2023-09-11 17:04:48 -04:00
ce3ed7f293 [docs] Properly link register_post_accumulate_grad_hook docs (#108157) (#109047)
it shows up now

![image](https://github.com/pytorch/pytorch/assets/31798555/0aa86839-b9c5-4b4b-b1b1-aa1c0c0abbab)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/108157
Approved by: https://github.com/soulitzer, https://github.com/albanD
2023-09-11 17:03:39 -04:00
bd372d460b [ONNX] Add initial support for FP8 ONNX export (#107962) (#108939)
This PR resurrects @tcherckez-nvidia's #106379 with changes to resolve conflicts against newer `main` and defines our own constants for the new ONNX types to [avoid breaking Meta's internal usage of an old ONNX](https://github.com/pytorch/pytorch/pull/106379#issuecomment-1675189340).

- `::torch::onnx::TensorProto_DataType_FLOAT8E4M3FN=17`
- `::torch::onnx::TensorProto_DataType_FLOAT8E5M2=19`
Pull Request resolved: https://github.com/pytorch/pytorch/pull/107962
Approved by: https://github.com/justinchuby, https://github.com/titaiwangms

Co-authored-by: Aaron Bockover <abock@microsoft.com>
2023-09-11 14:58:24 -04:00
12b8c26f35 [export] torch.export landing page (#108783) (#108962)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/108783
Approved by: https://github.com/avikchaudhuri, https://github.com/gmagogsfm
2023-09-11 10:13:24 -04:00
7397cf324c Don't fastpath conj copy when conj/neg bit mismatch (#108881) (#108961)
Fixes https://github.com/pytorch/pytorch/issues/106051
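
A small illustration of the semantics the fix preserves (copying must respect the lazy conj bit):

```python
import torch

x = torch.randn(3, dtype=torch.complex64)
src = x.conj()            # lazy view: only the conj bit is set, data is shared
dst = torch.empty_like(x)
dst.copy_(src)            # must materialize the conjugate, not fastpath raw data
assert torch.equal(dst, x.conj().resolve_conj())
```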

Signed-off-by: Edward Z. Yang <ezyang@meta.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/108881
Approved by: https://github.com/soulitzer
2023-09-11 10:06:42 -04:00
fa8259db8d Revert and reland fix clang-tidy warnings in torch/csrc (#108825)
* Revert "[1/N] fix clang-tidy warnings in torch/csrc (#107648)"

This reverts commit 49eeca00d1e76dd0158758f2c29da6b1d06bf54a.

Reverted https://github.com/pytorch/pytorch/pull/107648 on behalf of https://github.com/osalpekar due to This causes breakages due to underspecified type ([comment](https://github.com/pytorch/pytorch/pull/107648#issuecomment-1696372588))

* [Reland] [1/N] fix clang-tidy warnings in torch/csrc (#108114)

Reland of PR #107648 with `auto` replaced by `Py_ssize_t` in eval_frame.c. This PR applies fixes to some issues found by clang-tidy in torch/csrc.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/108114
Approved by: https://github.com/Skylion007

---------

Co-authored-by: PyTorch MergeBot <pytorchmergebot@users.noreply.github.com>
Co-authored-by: cyy <cyyever@outlook.com>
2023-09-08 17:55:48 -04:00
d83c8287ea Use contiguous() to handle noncontiguous outputs during elementwise decomposition (#108140) (#108555)
Fixes https://github.com/pytorch/pytorch/issues/108218

Use contiguous() API to handle noncontiguous outputs during elementwise decomp

With this change, the op decomposes properly (test case from the bug):
```
graph():
    %arg0_1 : [#users=3] = placeholder[target=arg0_1]
    %abs_1 : [#users=1] = call_function[target=torch.ops.aten.abs.default](args = (%arg0_1,), kwargs = {})
    %floor : [#users=1] = call_function[target=torch.ops.aten.floor.default](args = (%abs_1,), kwargs = {})
    %sign : [#users=1] = call_function[target=torch.ops.aten.sign.default](args = (%arg0_1,), kwargs = {})
    %mul : [#users=1] = call_function[target=torch.ops.aten.mul.Tensor](args = (%floor, %sign), kwargs = {})
    %sub : [#users=1] = call_function[target=torch.ops.aten.sub.Tensor](args = (%arg0_1, %mul), kwargs = {})
    return (sub,)
```
Output:
```
tensor([[ 0.2871,  0.7189,  0.7297],
        [ 0.8782, -0.4899,  0.7055]], device='hpu:0')
```
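
For reference, the decomposed graph above computes the fractional part, which can be checked eagerly:

```python
import torch

x = torch.randn(2, 3)
# x - floor(|x|) * sign(x), i.e. the graph above, equals torch.frac(x):
ref = x - torch.floor(torch.abs(x)) * torch.sign(x)
assert torch.allclose(ref, torch.frac(x))
```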

Pull Request resolved: https://github.com/pytorch/pytorch/pull/108140
Approved by: https://github.com/ezyang
2023-09-07 13:44:04 -04:00
ba19c52e31 Fix multi output layout error in indexing dtype calculation (#108085) (#108693)
Differential Revision: [D48757829](https://our.internmc.facebook.com/intern/diff/D48757829)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/108085
Approved by: https://github.com/yanboliang, https://github.com/davidberard98, https://github.com/jansel, https://github.com/peterbell10
2023-09-07 13:29:06 -04:00
c5c9536aa7 move IPEX backend to training/inference category (#108737) 2023-09-07 13:24:20 -04:00
6b7a777661 [dtensor] fix two more requires_grad callsite (#108358) (#108738)
redistribute returns a new DTensor, and those returned DTensors should
follow the input DTensor's requires_grad instead of the input DTensor's
local tensor's requires_grad.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/108358
Approved by: https://github.com/fduwjj
2023-09-07 13:10:15 -04:00
ebd3224303 add torch_api (#108617) 2023-09-07 13:08:29 -04:00
6e4ae13657 Release only change, test against test channel (#108688) 2023-09-06 17:56:41 -04:00
265e46e193 Revert "docs: Match open bracket with close bracket in unsqueeze (#95215)" (#108680)
This reverts commit 9d04d376d81be2f01e5ea6b68943390346f2494c.

Reverted https://github.com/pytorch/pytorch/pull/95215 on behalf of https://github.com/kit1980 due to Incorrect assumptions ([comment](https://github.com/pytorch/pytorch/pull/95215#issuecomment-1708852420))

Co-authored-by: PyTorch MergeBot <pytorchmergebot@users.noreply.github.com>
2023-09-06 17:55:59 -04:00
da7290dfbd [ONNX] Show sarif_report_path (#108398) (#108679)
`sarif_report_path` was not formatted correctly in the error message

@BowenBao

Pull Request resolved: https://github.com/pytorch/pytorch/pull/108398
Approved by: https://github.com/thiagocrepaldi
2023-09-06 17:53:54 -04:00
828992cf13 Inductor cpp wrapper: fix codegen of positional args with default value (#108652)
* Inductor cpp wrapper: fix codegen of positional args with default value (#108552)

Fixes https://github.com/pytorch/pytorch/issues/108323.
Cpp wrapper has functionality regression on `llama` and `tnt_s_patch16_224` due to recent support of scaled dot product flash attention in inductor.

The schema of this OP is as follows:
```
- func: _scaled_dot_product_flash_attention(Tensor query, Tensor key, Tensor value, float dropout_p=0.0, bool is_causal=False, bool return_debug_mask=False, *, float? scale=None) -> (Tensor output, Tensor logsumexp, Tensor cum_seq_q, Tensor cum_seq_k, int max_q, int max_k, Tensor philox_seed, Tensor philox_offset, Tensor debug_attn_mask)
```

For `llama` and `tnt_s_patch16_224`, the op is called as below, where the three positional args with default values are not passed (`float dropout_p=0.0, bool is_causal=False, bool return_debug_mask=False`):
```python
y = torch.ops.aten._scaled_dot_product_flash_attention.default(x0, x1, x2, scale = 0.125)
```

This PR fixes the cpp wrapper support for this case.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/108552
Approved by: https://github.com/jgong5, https://github.com/desertfire, https://github.com/jansel

* ut: update function name on release branch
2023-09-06 13:41:25 -04:00
48246f3dfb Add check for out of range pointer. (#107510) (#108649)
### Summary

Hi! We've been fuzzing pytorch with [sydr-fuzz](https://github.com/ispras/oss-sydr-fuzz) and found an error in which an arbitrary address is accessed while parsing the flatbuffer format with the `torch::load` function.

pytorch version: 18bcf62bbcf7ffd47e3bcf2596f72aa07a07d65f (the last commit at the moment of reporting the issue)

### Details
The vulnerability appears while loading arbitrary user input using the `torch::load` function. To trigger the error, the input must correspond to FlatbufferFileFormat, so the flatbuffer-parsing path in the `import_ir_module` function must be executed.

First, the error can occur in `GetMutableRoot` in `module.h`, where we offset the input data buffer pointer by a value obtained from dereferencing that same pointer (whose data fully depends on the user input and can be arbitrary), so the resulting `flatbuffer_module` address can be corrupted.

Moreover, we can get an arbitrary address later at `flatbuffer_loader.cpp:305`, when we get the `ival` pointer with the `Get` method.
There, in the `IndirectHelper::Read` function, we offset a pointer by a value obtained from dereferencing that pointer, so the address can be corrupted again.

The corrupted `ival` pointer is dereferenced in `table.h` in the flatbuffers project, where it is used to get another address, which is later dereferenced again in `table.h`. The resulting corrupted address is written to the `func` pointer at `flatbuffer_loader.cpp:274`, which is then used in `parseFunction`, where a write access to that address occurs.

To fix the problem, we can compute the end of the memory area in the `parse_and_initialize_mobile_module` function like this:
```
auto* end = static_cast<char*>(data) + size;
```
And then pass it to all the callees and insert corresponding checks.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/107510
Approved by: https://github.com/albanD

Co-authored-by: Eli Kobrin <kobrineli@ispras.ru>
2023-09-06 13:21:04 -04:00
7d6971dcee [dtensor] fix new_empty_strided op (#107835) (#108600)
This PR fixes the new_empty_strided op to become replicate from sharding
when necessary; this is a quick fix to resolve https://github.com/pytorch/pytorch/issues/107661.

We'll need to think more about the behavior of this op when it comes to
sharding. One possibility is to follow the input sharding, but given that the
output shape of this op might not be the same as the input's, it's hard to
say we should follow the input sharding; further improvement is needed once
we figure out the op syntax.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/107835
Approved by: https://github.com/fduwjj
2023-09-06 09:28:20 -04:00
5417e23ba8 torch.compile-functorch interaction: update docs (#108130) (#108628)
Doc Preview: https://docs-preview.pytorch.org/pytorch/pytorch/108130/torch.compiler_faq.html#torch-func-works-with-torch-compile-for-grad-and-vmap-transforms

Will also cherry-pick this for release branch.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/108130
Approved by: https://github.com/zou3519
2023-09-06 08:25:19 -04:00
7a9101951d Improve docs for torch.unique dim argument (#108292) (#108596)
Fixes #103142

Pull Request resolved: https://github.com/pytorch/pytorch/pull/108292
Approved by: https://github.com/albanD

Co-authored-by: Kurt Mohler <kmohler@quansight.com>
2023-09-05 17:47:48 -04:00
03e7f0b99d [Inductor] Add fused_attention pattern matcher with additional clone (#108141) (#108327)
A previous PR (https://github.com/pytorch/pytorch/pull/106274) decomposes `aten.dropout` and creates a `clone()` in `eval()` mode or when `p=0`. This makes many SDPA-related models fail to match the fused_attention pattern matchers.

This PR adds new fused_attention pattern matchers with an additional clone to re-enable the SDPA op matching.
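
A hedged sketch of the kind of eager attention block the matchers target (shapes and scale are illustrative):

```python
import torch
import torch.nn.functional as F

def attention(q, k, v):
    attn = torch.softmax(q @ k.transpose(-2, -1) * 0.125, dim=-1)
    return F.dropout(attn, p=0.0) @ v  # p=0 dropout decomposes to a clone()

q, k, v = (torch.randn(1, 8, 16, 64) for _ in range(3))
out = torch.compile(attention)(q, k, v)
```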

Pull Request resolved: https://github.com/pytorch/pytorch/pull/108141
Approved by: https://github.com/jgong5, https://github.com/eellison
2023-09-05 17:09:39 -04:00
c0e7239f43 Pin pandas version for inductor Docker image (#108355) (#108593)
Building Docker images on trunk is failing at the moment (https://github.com/pytorch/pytorch/actions/runs/6033657019/job/16370683676) with the following error:

```
+ conda_reinstall numpy=1.24.4
+ as_jenkins conda install -q -n py_3.10 -y --force-reinstall numpy=1.24.4
+ sudo -E -H -u jenkins env -u SUDO_UID -u SUDO_GID -u SUDO_COMMAND -u SUDO_USER env PATH=/opt/conda/envs/py_3.10/bin:/opt/conda/bin:/usr/local/nvidia/bin:/usr/local/cuda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin LD_LIBRARY_PATH=/usr/local/nvidia/lib:/usr/local/nvidia/lib64 conda install -q -n py_3.10 -y --force-reinstall numpy=1.24.4
Collecting package metadata (current_repodata.json): ...working... done
Solving environment: ...working... unsuccessful initial attempt using frozen solve. Retrying with flexible solve.
Collecting package metadata (repodata.json): ...working... done
Solving environment: ...working... unsuccessful initial attempt using frozen solve. Retrying with flexible solve.

PackagesNotFoundError: The following packages are not available from current channels:

  - numpy=1.24.4

Current channels:

  - https://repo.anaconda.com/pkgs/main/linux-64
  - https://repo.anaconda.com/pkgs/main/noarch
  - https://repo.anaconda.com/pkgs/r/linux-64
  - https://repo.anaconda.com/pkgs/r/noarch
```

This was pulled in by pandas 2.1.0, released yesterday: https://pypi.org/project/pandas/2.1.0
Pull Request resolved: https://github.com/pytorch/pytorch/pull/108355
Approved by: https://github.com/kit1980, https://github.com/atalman, https://github.com/malfet
2023-09-05 17:05:54 -04:00
04c1e07fd7 [quant] Move dropout replacement to move_model_to_eval (#108184) (#108255)
Summary: This commit adds a public-facing
`torch.ao.quantization.move_model_to_eval` util function
for QAT users. Instead of calling model.eval() on an exported
model (which doesn't work; see
https://github.com/pytorch/pytorch/issues/103681), the user
would call this new util function. This ensures special
ops such as dropout and batchnorm (not supported yet) will have
the right behavior when the graph is later used for inference.

Note: Support for an equivalent `move_model_to_train` will be
added in the future. This is difficult to do for dropout
currently because the eval pattern of dropout is simply a clone
op, which we cannot just match and replace with a dropout op.
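
A small illustration of why the eval pattern is just a copy:

```python
import torch
import torch.nn.functional as F

x = torch.randn(4)
# In eval mode (or with p=0), dropout is the identity, traced as a clone op:
assert torch.equal(F.dropout(x, p=0.0, training=False), x)
```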

Test Plan:
python test/test_quantization.py TestQuantizePT2E.test_move_model_to_eval

Reviewers: jerryzh168, kimishpatel

Subscribers: jerryzh168, kimishpatel, supriyar

Differential Revision: [D48814735](https://our.internmc.facebook.com/intern/diff/D48814735)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/108184
Approved by: https://github.com/jerryzh168
2023-09-05 13:42:37 -07:00
cb4362ba5f Error when someone calls train/eval on pre_autograd graph (#108143) (#108258)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/108143
Approved by: https://github.com/andrewor14

Co-authored-by: Tugsbayasgalan Manlaibaatar <tmanlaibaatar@fb.com>
2023-09-05 13:41:16 -07:00
bddd30ca7a [inductor] Fix inputs with existing offsets (#108259)
Cherry pick of #108168
2023-09-05 16:24:48 -04:00
9cc99906e9 When byteorder record is missing load as little endian by default (#108523)
* When byteorder record is missing load as little endian by default

Fixes #101688

* Add test for warning

Also change warning type from DeprecationWarning
to UserWarning to make it visible by default.
2023-09-05 16:06:22 -04:00
a49fca4dd4 inductor change needed to update triton pin (#108129)
ghstack-source-id: 5d421f734d5d7d9428b5fed54388cc95e559cd95
Pull Request resolved: https://github.com/pytorch/pytorch/pull/107722
2023-09-05 14:40:23 -04:00
83964c761e [inductor] Add aten.multinomial to disallowed cudagraphs ops (#108122)
Cherry pick of #108105
2023-09-05 14:37:58 -04:00
085bd1da62 [dynamo] Fix setattr nn.Module with new attribute (#108121)
Cherry pick of #108098
2023-09-05 14:36:40 -04:00
90452f41e3 [dynamo] Graph break on pack_padded_sequence (#108120)
Release branch cherrypick of #108096
2023-09-05 14:34:47 -04:00
35c3d5a080 [inductor] Fix constant_to_device issue with ir.Constant (#108119)
Cherry pick of #108087
2023-09-05 14:33:32 -04:00
d07ac50e26 Only add triton dependency to CUDA and ROCm binaries if it hasn't been set as an installation requirement yet (#108424) (#108471)
The dependency was previously added twice in CUDA and ROCm binaries: once as an installation dependency from builder, and once more as an extra dependency for dynamo, for example:

```
Requires-Python: >=3.8.0
Description-Content-Type: text/markdown
License-File: LICENSE
License-File: NOTICE
Requires-Dist: filelock
Requires-Dist: typing-extensions
Requires-Dist: sympy
Requires-Dist: networkx
Requires-Dist: jinja2
Requires-Dist: fsspec
Requires-Dist: pytorch-triton (==2.1.0+e6216047b8)
Provides-Extra: dynamo
Requires-Dist: pytorch-triton (==2.1.0+e6216047b8) ; extra == 'dynamo'
Requires-Dist: jinja2 ; extra == 'dynamo'
Provides-Extra: opt-einsum
Requires-Dist: opt-einsum (>=3.3) ; extra == 'opt-einsum'
```

In the previous release, we needed to remove this part from `setup.py` to build release binaries https://github.com/pytorch/pytorch/pull/96010.  With this, that step isn't needed anymore because the dependency will come from builder.

### Testing

Using the draft https://github.com/pytorch/pytorch/pull/108374 for testing and manually inspect the wheels artifact at https://github.com/pytorch/pytorch/actions/runs/6045878399 (don't want to go through all `ciflow/binaries` again)

* torch-2.1.0.dev20230901+cu121-cp39-cp39-linux_x86_64
```
Requires-Python: >=3.8.0
Description-Content-Type: text/markdown
Requires-Dist: filelock
Requires-Dist: typing-extensions
Requires-Dist: sympy
Requires-Dist: networkx
Requires-Dist: jinja2
Requires-Dist: fsspec
Requires-Dist: pytorch-triton (==2.1.0+e6216047b8) <-- This will be 2.1.0 on the release branch after https://github.com/pytorch/builder/pull/1515
Provides-Extra: dynamo
Requires-Dist: jinja2 ; extra == 'dynamo'
Provides-Extra: opt-einsum
Requires-Dist: opt-einsum (>=3.3) ; extra == 'opt-einsum'
```

* torch-2.1.0.dev20230901+cu121.with.pypi.cudnn-cp39-cp39-linux_x86_64
```
Requires-Python: >=3.8.0
Description-Content-Type: text/markdown
Requires-Dist: filelock
Requires-Dist: typing-extensions
Requires-Dist: sympy
Requires-Dist: networkx
Requires-Dist: jinja2
Requires-Dist: fsspec
Requires-Dist: pytorch-triton (==2.1.0+e6216047b8)
Requires-Dist: nvidia-cuda-nvrtc-cu12 (==12.1.105) ; platform_system == "Linux" and platform_machine == "x86_64"
Requires-Dist: nvidia-cuda-runtime-cu12 (==12.1.105) ; platform_system == "Linux" and platform_machine == "x86_64"
Requires-Dist: nvidia-cuda-cupti-cu12 (==12.1.105) ; platform_system == "Linux" and platform_machine == "x86_64"
Requires-Dist: nvidia-cudnn-cu12 (==8.9.2.26) ; platform_system == "Linux" and platform_machine == "x86_64"
Requires-Dist: nvidia-cublas-cu12 (==12.1.3.1) ; platform_system == "Linux" and platform_machine == "x86_64"
Requires-Dist: nvidia-cufft-cu12 (==11.0.2.54) ; platform_system == "Linux" and platform_machine == "x86_64"
Requires-Dist: nvidia-curand-cu12 (==10.3.2.106) ; platform_system == "Linux" and platform_machine == "x86_64"
Requires-Dist: nvidia-cusolver-cu12 (==11.4.5.107) ; platform_system == "Linux" and platform_machine == "x86_64"
Requires-Dist: nvidia-cusparse-cu12 (==12.1.0.106) ; platform_system == "Linux" and platform_machine == "x86_64"
Requires-Dist: nvidia-nccl-cu12 (==2.18.1) ; platform_system == "Linux" and platform_machine == "x86_64"
Requires-Dist: nvidia-nvtx-cu12 (==12.1.105) ; platform_system == "Linux" and platform_machine == "x86_64"
Requires-Dist: triton (==2.1.0) ; platform_system == "Linux" and platform_machine == "x86_64" <--This is 2.1.0 because it already has https://github.com/pytorch/pytorch/pull/108423, but the package doesn't exist yet atm
Provides-Extra: dynamo
Requires-Dist: jinja2 ; extra == 'dynamo'
Provides-Extra: opt-einsum
Requires-Dist: opt-einsum (>=3.3) ; extra == 'opt-einsum'
```

* torch-2.1.0.dev20230901+rocm5.6-cp38-cp38-linux_x86_64
```
Requires-Python: >=3.8.0
Description-Content-Type: text/markdown
Requires-Dist: filelock
Requires-Dist: typing-extensions
Requires-Dist: sympy
Requires-Dist: networkx
Requires-Dist: jinja2
Requires-Dist: fsspec
Requires-Dist: pytorch-triton-rocm (==2.1.0+34f8189eae) <-- This will be 2.1.0 on the release branch after https://github.com/pytorch/builder/pull/1515
Provides-Extra: dynamo
Requires-Dist: jinja2 ; extra == 'dynamo'
Provides-Extra: opt-einsum
Requires-Dist: opt-einsum (>=3.3) ; extra == 'opt-einsum'
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/108424
Approved by: https://github.com/atalman
2023-09-05 09:22:31 -04:00
8a3b017769 Add triton dependency to PyPI PyTorch package (#108423) 2023-09-01 16:51:10 -04:00
a82894b0d3 Added info for each artifact option, added a help option to TORCH_LOGS, and changed the error message (#107758) (#108365)
New message when invalid option is provided
<img width="1551" alt="image" src="https://github.com/pytorch/pytorch/assets/6355099/8b61534a-ee55-431e-94fe-2ffa25b7fd5c">

TORCH_LOGS="help"
<img width="1558" alt="image" src="https://github.com/pytorch/pytorch/assets/6355099/72e8939c-92fa-4141-8114-79db71451d42">

TORCH_LOGS="+help"
<img width="1551" alt="image" src="https://github.com/pytorch/pytorch/assets/6355099/2cdc94ac-505a-478c-aa58-0175526075d2">

Pull Request resolved: https://github.com/pytorch/pytorch/pull/107758
Approved by: https://github.com/ezyang, https://github.com/mlazos
ghstack dependencies: #106192
2023-09-01 16:11:59 -04:00
050fc31538 [MPS] Fix .item() for multi-dim scalar (#107913) (#108410)
By refactoring `_local_scalar_dense_mps` to use `_empty_like` to allocate the CPU tensor.
Also, print a more reasonable error message when the dst dim is less than the src's in `mps_copy_`.

This fixes regression introduced by https://github.com/pytorch/pytorch/pull/105617 and adds regression test.

<!--
copilot:poem
-->
### <samp>🤖 Generated by Copilot at abd06e6</samp>

> _Sing, O Muse, of the valiant deeds of the PyTorch developers_
> _Who strive to improve the performance and usability of tensors_
> _And who, with skill and wisdom, fixed a bug in the MPS backend_
> _That caused confusion and dismay to many a user of `item()`_

Fixes https://github.com/pytorch/pytorch/issues/107867

Pull Request resolved: https://github.com/pytorch/pytorch/pull/107913
Approved by: https://github.com/albanD

Co-authored-by: Nikita Shulga <nikita.shulga@gmail.com>
2023-09-01 11:58:26 -04:00
b3cb05b396 Update to RNN documentation (issue #106085) (#106222) (#108385)
Addresses [issue #106085](https://github.com/pytorch/pytorch/issues/106085).

In `torch/nn/modules/rnn.py`:
- Adds documentation string to RNNBase class.
- Adds parameters to __init__ methods for RNN, LSTM, and GRU, classes.
- Adds type annotations to __init__ methods for RNN, LSTM, and GRU.

In `torch/ao/nn/quantized/dynamic/modules/rnn.py`:
- Adds type specifications to `_FLOAT_MODULE` attributes in RNNBase, RNN, LSTM, and GRU classes.
> This resolves a `mypy` assignment error `Incompatible types in assignment (expression has type "Type[LSTM]", base class "RNNBase" defined the type as "Type[RNNBase]")` that seemed to be a result of fully specified type annotations in `torch/nn/modules/rnn.py`).
Pull Request resolved: https://github.com/pytorch/pytorch/pull/106222
Approved by: https://github.com/mikaylagawarecki
2023-09-01 11:46:16 -04:00
fec68a2799 Add channels_last3d support for mkldnn conv and mkldnn deconv (#95271) (#108216)
### Motivation

- Add channels_last3d support for mkldnn conv and mkldnn deconv.
- Use `ideep::convolution_transpose_forward::compute_v3` instead of `ideep::convolution_transpose_forward::compute`. `compute_v3` uses `is_channels_last` to tell ideep whether to go channels-last or not, aligning with PyTorch's memory format check (see the sketch below).
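
A minimal sketch exercising the 3-D channels-last path (shapes reduced from the benchmark tables below):

```python
import torch

conv = torch.nn.Conv3d(8, 8, 3).to(memory_format=torch.channels_last_3d)
x = torch.randn(2, 8, 4, 16, 16).to(memory_format=torch.channels_last_3d)
y = conv(x)
print(y.is_contiguous(memory_format=torch.channels_last_3d))
```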

### Testing
1 socket (28 cores):

- memory format: torch.contiguous_format

module | shape | forward / ms | backward / ms
-- | -- | -- | --
conv3d | input size: (32, 32, 10, 100, 100), weight size: (32, 32, 3, 3, 3) | 64.56885 | 150.1796
conv3d | input size: (32, 16, 10, 200, 200), weight size: (16, 16, 3, 3, 3) | 100.6754 | 231.8883
conv3d | input size: (16, 4, 5, 300, 300), weight size: (4, 4, 3, 3, 3) | 19.31751 | 68.31131

module | shape | forward / ms | backward / ms
-- | -- | -- | --
ConvTranspose3d | input size: (32, 32, 10, 100, 100), weight size: (32, 32, 3, 3, 3) | 122.7646 | 207.5125
ConvTranspose3d | input size: (32, 16, 10, 200, 200), weight size: (16, 16, 3, 3, 3) | 202.4542 | 368.5492
ConvTranspose3d | input size: (16, 4, 5, 300, 300), weight size: (4, 4, 3, 3, 3) | 122.959 | 84.62577

- memory format: torch.channels_last_3d

module | shape | forward / ms | backward / ms
-- | -- | -- | --
conv3d | input size: (32, 32, 10, 100, 100), weight size: (32, 32, 3, 3, 3) | 40.06993 | 114.317
conv3d | input size: (32, 16, 10, 200, 200), weight size: (16, 16, 3, 3, 3) | 49.08249 | 133.4079
conv3d | input size: (16, 4, 5, 300, 300), weight size: (4, 4, 3, 3, 3) | 5.873911 | 17.58647

module | shape | forward / ms | backward / ms
-- | -- | -- | --
ConvTranspose3d | input size: (32, 32, 10, 100, 100), weight size: (32, 32, 3, 3, 3) | 88.4246 | 208.2269
ConvTranspose3d | input size: (32, 16, 10, 200, 200), weight size: (16, 16, 3, 3, 3) | 140.0725 | 270.4172
ConvTranspose3d | input size: (16, 4, 5, 300, 300), weight size: (4, 4, 3, 3, 3) | 23.0223 | 37.16972

Pull Request resolved: https://github.com/pytorch/pytorch/pull/95271
Approved by: https://github.com/jgong5, https://github.com/cpuhrsch
2023-09-01 11:44:48 -04:00
f139dda1cc [functorch] make torch.compile support opt-in (#108134) 2023-09-01 10:38:41 -04:00
5252dfb762 Fix triton upload channel detection (#108291) (#108311)
This should be nightly for nightly and test for release candidates.  There are 2 bugs:

* The shell needs to be set to `bash` explicitly; otherwise GHA uses `sh`, which doesn't recognize `[[`, as shown in https://github.com/pytorch/pytorch/actions/runs/6030476858/job/16362717792#step:6:10
* `${GITHUB_REF_NAME}` is unquoted. This is basically https://www.shellcheck.net/wiki/SC2248, but it wasn't caught by actionlint, and shellcheck doesn't work with workflow YAML files. I will think about how to add a lint rule for this later.

### Testing

https://github.com/pytorch/pytorch/actions/runs/6031330411 to confirm that setting the channel is performed correctly.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/108291
Approved by: https://github.com/osalpekar, https://github.com/atalman
2023-09-01 09:41:27 -04:00
da1ccca830 Remove commit hash when building triton wheel and conda in release mode (#108203) (#108251)
This is the follow-up of https://github.com/pytorch/pytorch/pull/108187 to set the correct release version without commit hash for triton wheel and conda binaries when building them in release mode.

### Testing

* With commit hash (nightly): https://github.com/pytorch/pytorch/actions/runs/6019021716
* Without commit hash https://github.com/pytorch/pytorch/actions/runs/6019378616 (by adding `--release` into the PR)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/108203
Approved by: https://github.com/atalman
2023-08-30 14:27:06 -04:00
c9cbdaf24f [ROCm] Update ROCm pin to fix triton wheel lib issue (#108229)
main PR already merged: https://github.com/pytorch/pytorch/pull/108137
2023-08-30 09:39:59 -04:00
f187e42a54 Fix various issues on build-triton-wheel workflow (#108187) (#108200)
There are more issues than I expected at the beginning:

* Triton was uploaded on `main` instead of `nightly` and release branch
* The environment `conda-aws-upload` wasn't used correctly in both wheel and conda upload
* Conda update wasn't run in a separate ephemeral runner
* Duplicated upload logic, should have just use `bash .circleci/scripts/binary_upload.sh` instead
* Handle `CONDA_PYTORCHBOT_TOKEN` and `CONDA_PYTORCHBOT_TOKEN_TEST` tokens in a similar way as https://github.com/pytorch/test-infra/pull/4530

Part of https://github.com/pytorch/pytorch/issues/108154
2023-08-30 09:37:49 -04:00
9175987fcc Fix the use of inputs.build_environment in #107868 (#108075) (#108177)
It should be `${{ inputs.build_environment }}`, although I wonder why not just clean up the artifacts directory for all builds instead of just `aarch64`.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/108075
Approved by: https://github.com/atalman, https://github.com/seemethere
2023-08-30 09:36:31 -04:00
d8e6594fb8 skip dynamic shape test for test_conv_bn_fuse (#108113) (#108139)
For the test_conv_bn_fuse dynamic case, we always fuse bn with convolution, so there is only an external convolution call and no loops; it therefore fails when we do a dynamic loop-vars check. This PR skips this case.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/108113
Approved by: https://github.com/huydhn
2023-08-30 09:35:08 -04:00
f82c027774 Fix LayerNorm(bias=False) error (#108078)
ghstack-source-id: 613c4f3608b1a375013fc9da64545c1084025650
Pull Request resolved: https://github.com/pytorch/pytorch/pull/108060
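
A minimal reproduction of the fixed case:

```python
import torch

ln = torch.nn.LayerNorm(8, bias=False)  # previously raised an error; fixed here
print(ln(torch.randn(2, 8)).shape)      # torch.Size([2, 8])
```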
2023-08-30 09:30:22 -04:00
6d20b39d3f [CI] Release only changes use anaconda token for test env (#108064) 2023-08-28 12:41:57 -04:00
17f400404f [CI] Release only changes for 2.1 release (#108053)
* [CI] Release only changes for 2.1 release

* include circle script

* release only changes for test-infra

* More test-infra related
2023-08-28 11:55:58 -04:00
194 changed files with 4381 additions and 2099 deletions

@@ -1 +1 @@
05d67b9418cacda0d356c2102d7c1a887948b013
34f8189eae57a23cc15b4b4f032fe25757e0db8e

@@ -7,18 +7,14 @@ source "$(dirname "${BASH_SOURCE[0]}")/common_utils.sh"
function install_huggingface() {
local version
version=$(get_pinned_commit huggingface)
pip_install pandas
pip_install scipy
pip_install z3-solver
pip_install pandas==2.0.3
pip_install "transformers==${version}"
}
function install_timm() {
local commit
commit=$(get_pinned_commit timm)
pip_install pandas
pip_install scipy
pip_install z3-solver
pip_install pandas==2.0.3
pip_install "git+https://github.com/rwightman/pytorch-image-models@${commit}"
}

.ci/docker/common/install_onnx.sh Normal file → Executable file

@@ -4,6 +4,10 @@ set -ex
source "$(dirname "${BASH_SOURCE[0]}")/common_utils.sh"
retry () {
"$@" || (sleep 10 && "$@") || (sleep 20 && "$@") || (sleep 40 && "$@")
}
# A bunch of custom pip dependencies for ONNX
pip_install \
beartype==0.10.4 \
@@ -18,22 +22,17 @@ pip_install \
# onnx-weekly. Otherwise, onnx-weekly could be
# overwritten by onnx.
pip_install \
onnxruntime==1.15.1 \
parameterized==0.8.1 \
pytest-cov==4.0.0 \
pytest-subtests==0.10.0 \
tabulate==0.9.0 \
transformers==4.31.0
# Using 1.15dev branch for the following not yet released features and fixes.
# - Segfault fix for shape inference.
# - Inliner to workaround ORT segfault.
pip_install onnx-weekly==1.15.0.dev20230717
pip_install coloredlogs packaging
retry pip_install -i https://aiinfra.pkgs.visualstudio.com/PublicPackages/_packaging/ORT-Nightly/pypi/simple/ --no-cache-dir --no-input ort-nightly==1.16.0.dev20230908001
# TODO: change this when onnx-script is on testPypi
# pip_install onnxscript-preview==0.1.0.dev20230809 --no-deps
# NOTE: temp change for CI to run on unpublished onnxscript PR.
pip_install "onnxscript@git+https://github.com/microsoft/onnxscript@f69be19ebd3f2e0d7efe64b0c7be3329cbab3822" --no-deps
pip_install onnx==1.14.1
pip_install onnxscript-preview==0.1.0.dev20230828 --no-deps
# Cache the transformers model to be used later by ONNX tests. We need to run the transformers
# package to download the model. By default, the model is cached at ~/.cache/huggingface/hub/

@@ -271,7 +271,12 @@ pytest-cpp==2.3.0
#Pinned versions: 2.3.0
#test that import:
z3-solver
z3-solver==4.12.2.0
#Description: The Z3 Theorem Prover Project
#Pinned versions:
#test that import:
tensorboard==2.13.0
#Description: Also included in .ci/docker/requirements-docs.txt
#Pinned versions:
#test that import: test_tensorboard

@@ -180,7 +180,7 @@ function install_numpy_pytorch_interop() {
function clone_pytorch_xla() {
if [[ ! -d ./xla ]]; then
git clone --recursive --quiet https://github.com/pytorch/xla.git
git clone --recursive -b r2.1 https://github.com/pytorch/xla.git
pushd xla
# pin the xla hash so that we don't get broken by changes to xla
git checkout "$(cat ../.github/ci_commit_pins/xla.txt)"

@@ -544,6 +544,10 @@ test_without_numpy() {
python -c "import sys;sys.path.insert(0, 'fake_numpy');from unittest import TestCase;import torch;x=torch.randn(3,3);TestCase().assertRaises(RuntimeError, lambda: x.numpy())"
# Regression test for https://github.com/pytorch/pytorch/issues/66353
python -c "import sys;sys.path.insert(0, 'fake_numpy');import torch;print(torch.tensor([torch.tensor(0.), torch.tensor(1.)]))"
# Regression test for https://github.com/pytorch/pytorch/issues/109387
if [[ "${TEST_CONFIG}" == *dynamo* ]]; then
python -c "import sys;sys.path.insert(0, 'fake_numpy');import torch;torch.compile(lambda x:print(x))('Hello World')"
fi
popd
}

@@ -35,7 +35,7 @@ if [[ "$BUILD_ENVIRONMENT" == *cuda* ]]; then
fi
# TODO: Move both of them to Windows AMI
python -m pip install pytest-rerunfailures==10.3 pytest-cpp==2.3.0
python -m pip install pytest-rerunfailures==10.3 pytest-cpp==2.3.0 tensorboard==2.13.0
# Install Z3 optional dependency for Windows builds.
python -m pip install z3-solver

@@ -62,7 +62,7 @@ git --no-pager log --max-count 1
popd
# Clone the Builder main repo
retry git clone -q https://github.com/pytorch/builder.git "$BUILDER_ROOT"
retry git clone -q https://github.com/pytorch/builder.git -b release/2.1 "$BUILDER_ROOT"
pushd "$BUILDER_ROOT"
echo "Using builder from "
git --no-pager log --max-count 1

@@ -90,7 +90,7 @@ if [[ "$PACKAGE_TYPE" == conda ]]; then
if [[ "\${TORCH_CONDA_BUILD_FOLDER}" == "pytorch-nightly" ]]; then
PYTORCH_CHANNEL="pytorch-nightly"
fi
retry conda install \${EXTRA_CONDA_FLAGS} -yq -c nvidia -c "\${PYTORCH_CHANNEL}" "pytorch-cuda=\${cu_ver}"
retry conda install \${EXTRA_CONDA_FLAGS} -yq -c nvidia -c pytorch-test "pytorch-cuda=\${cu_ver}"
fi
conda install \${EXTRA_CONDA_FLAGS} -y "\$pkg" --offline
)
@@ -98,9 +98,9 @@ elif [[ "$PACKAGE_TYPE" != libtorch ]]; then
if [[ "$(uname -m)" == aarch64 ]]; then
# Using "extra-index-url" until all needed aarch64 dependencies are
# added to "https://download.pytorch.org/whl/nightly/"
pip install "\$pkg" --extra-index-url "https://download.pytorch.org/whl/nightly/${DESIRED_CUDA}"
pip install "\$pkg" --extra-index-url "https://download.pytorch.org/whl/test/${DESIRED_CUDA}"
else
pip install "\$pkg" --index-url "https://download.pytorch.org/whl/nightly/${DESIRED_CUDA}"
pip install "\$pkg" --index-url "https://download.pytorch.org/whl/test/${DESIRED_CUDA}"
fi
retry pip install -q numpy protobuf typing-extensions
fi

@@ -11,7 +11,7 @@ PKG_DIR=${PKG_DIR:-/tmp/workspace/final_pkgs}
# currently set within `designate_upload_channel`
UPLOAD_CHANNEL=${UPLOAD_CHANNEL:-nightly}
# Designates what subfolder to put packages into
UPLOAD_SUBFOLDER=${UPLOAD_SUBFOLDER:-cpu}
UPLOAD_SUBFOLDER=${UPLOAD_SUBFOLDER:-}
UPLOAD_BUCKET="s3://pytorch"
BACKUP_BUCKET="s3://pytorch-backup"
BUILD_NAME=${BUILD_NAME:-}
@@ -64,12 +64,17 @@ s3_upload() {
local pkg_type
extension="$1"
pkg_type="$2"
s3_dir="${UPLOAD_BUCKET}/${pkg_type}/${UPLOAD_CHANNEL}/${UPLOAD_SUBFOLDER}/"
s3_root_dir="${UPLOAD_BUCKET}/${pkg_type}/${UPLOAD_CHANNEL}"
if [[ -z ${UPLOAD_SUBFOLDER:-} ]]; then
s3_upload_dir="${s3_root_dir}/"
else
s3_upload_dir="${s3_root_dir}/${UPLOAD_SUBFOLDER}/"
fi
(
for pkg in ${PKG_DIR}/*.${extension}; do
(
set -x
${AWS_S3_CP} --no-progress --acl public-read "${pkg}" "${s3_dir}"
${AWS_S3_CP} --no-progress --acl public-read "${pkg}" "${s3_upload_dir}"
)
done
)
@@ -82,15 +87,17 @@ pip install -q awscli
case "${PACKAGE_TYPE}" in
conda)
conda_upload
# Fetch platform (eg. win-64, linux-64, etc.) from index file
# Because there's no actual conda command to read this
subdir=$(\
tar -xOf ${PKG_DIR}/*.bz2 info/index.json \
| grep subdir \
| cut -d ':' -f2 \
| sed -e 's/[[:space:]]//' -e 's/"//g' -e 's/,//' \
)
BACKUP_DIR="conda/${subdir}"
for conda_archive in ${PKG_DIR}/*.tar.bz2; do
# Fetch platform (eg. win-64, linux-64, etc.) from index file because
# there's no actual conda command to read this
subdir=$(\
tar -xOf "${conda_archive}" info/index.json \
| grep subdir \
| cut -d ':' -f2 \
| sed -e 's/[[:space:]]//' -e 's/"//g' -e 's/,//' \
)
BACKUP_DIR="conda/${subdir}"
done
;;
libtorch)
s3_upload "zip" "libtorch"

@@ -1 +1 @@
e1ee592d9806216d7ac0bb711cae6307b0c5b68a
r2.1

@@ -7,6 +7,7 @@
- docs/source/onnx.rst
- docs/source/onnx*
- docs/source/scripts/onnx/**
- docs/source/_static/img/onnx/**
- scripts/onnx/**
- test/onnx/**
- tools/onnx/**

@@ -25,3 +25,4 @@ sympy==1.11.1
pytest-cpp==2.3.0
rockset==1.0.3
z3-solver==4.12.2.0
tensorboard==2.13.0

View File

@ -60,12 +60,18 @@ def build_triton(
build_conda: bool = False,
build_rocm: bool = False,
py_version: Optional[str] = None,
release: bool = False,
) -> Path:
env = os.environ.copy()
if "MAX_JOBS" not in env:
max_jobs = os.cpu_count() or 1
env["MAX_JOBS"] = str(max_jobs)
if not release:
# Nightly binaries include the triton commit hash, i.e. 2.1.0+e6216047b8
# while release build should only include the version, i.e. 2.1.0
version = f"{version}+{commit_hash[:10]}"
with TemporaryDirectory() as tmpdir:
triton_basedir = Path(tmpdir) / "triton"
triton_pythondir = triton_basedir / "python"
@ -80,7 +86,7 @@ def build_triton(
if build_conda:
with open(triton_basedir / "meta.yaml", "w") as meta:
print(
f"package:\n name: torchtriton\n version: {version}+{commit_hash[:10]}\n",
f"package:\n name: torchtriton\n version: {version}\n",
file=meta,
)
print("source:\n path: .\n", file=meta)
@ -103,7 +109,7 @@ def build_triton(
patch_init_py(
triton_pythondir / "triton" / "__init__.py",
version=f"{version}+{commit_hash[:10]}",
version=f"{version}",
)
if py_version is None:
py_version = f"{sys.version_info.major}.{sys.version_info.minor}"
@ -129,11 +135,11 @@ def build_triton(
patch_setup_py(
triton_pythondir / "setup.py",
name=triton_pkg_name,
version=f"{version}+{commit_hash[:10]}",
version=f"{version}",
)
patch_init_py(
triton_pythondir / "triton" / "__init__.py",
version=f"{version}+{commit_hash[:10]}",
version=f"{version}",
)
if build_rocm:
@ -157,12 +163,14 @@ def main() -> None:
from argparse import ArgumentParser
parser = ArgumentParser("Build Triton binaries")
parser.add_argument("--release", action="store_true")
parser.add_argument("--build-conda", action="store_true")
parser.add_argument("--build-rocm", action="store_true")
parser.add_argument("--py-version", type=str)
parser.add_argument("--commit-hash", type=str)
parser.add_argument("--triton-version", type=str, default=read_triton_version())
args = parser.parse_args()
build_triton(
build_rocm=args.build_rocm,
commit_hash=args.commit_hash
@ -171,6 +179,7 @@ def main() -> None:
version=args.triton_version,
build_conda=args.build_conda,
py_version=args.py_version,
release=args.release,
)
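
The `--release` plumbing above reduces to a single version-string rule; a minimal sketch reusing the names from the diff:

```python
# Minimal sketch of the nightly-vs-release version rule added above.
def triton_version(version: str, commit_hash: str, release: bool) -> str:
    if not release:
        # Nightly binaries carry the pinned commit, e.g. "2.1.0+e6216047b8".
        return f"{version}+{commit_hash[:10]}"
    # Release builds ship the bare version, e.g. "2.1.0".
    return version

assert triton_version("2.1.0", "e6216047b8abcdef", release=False) == "2.1.0+e6216047b8"
assert triton_version("2.1.0", "e6216047b8abcdef", release=True) == "2.1.0"
```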

View File

@ -248,7 +248,8 @@ def generate_wheels_matrix(
"nvidia-cusolver-cu12==11.4.5.107; platform_system == 'Linux' and platform_machine == 'x86_64' | "
"nvidia-cusparse-cu12==12.1.0.106; platform_system == 'Linux' and platform_machine == 'x86_64' | "
"nvidia-nccl-cu12==2.18.1; platform_system == 'Linux' and platform_machine == 'x86_64' | "
"nvidia-nvtx-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64'",
"nvidia-nvtx-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64' | "
"triton==2.1.0; platform_system == 'Linux' and platform_machine == 'x86_64'",
"build_name": f"{package_type}-py{python_version}-{gpu_arch_type}{gpu_arch_version}-with-pypi-cudnn".replace( # noqa: B950
".", "_"
),

View File

@ -8,7 +8,7 @@
# NOTE: If testing pytorch/builder changes you can change this variable to change what pytorch/builder reference
# the binary builds will check out
{%- set builder_repo = "pytorch/builder" -%}
{%- set builder_branch = "main" -%}
{%- set builder_branch = "release/2.1" -%}
{%- macro concurrency(build_environment) -%}
concurrency:

View File

@ -97,13 +97,13 @@ jobs:
with:
name: !{{ config["build_name"] }}
path: "${{ runner.temp }}/artifacts/"
!{{ common.checkout(deep_clone=False, directory="pytorch") }}
!{{ common.checkout(deep_clone=False, directory="builder", repository=common.builder_repo, branch=common.builder_branch) }}
!{{ common.checkout(deep_clone=False, directory="pytorch", checkout_pr_head=False) }}
!{{ common.checkout(deep_clone=False, directory="builder", repository=common.builder_repo, branch=common.builder_branch, checkout_pr_head=False) }}
- name: ROCm set GPU_FLAG
run: |
echo "GPU_FLAG=--device=/dev/mem --device=/dev/kfd --device=/dev/dri --group-add video --group-add daemon" >> "${GITHUB_ENV}"
- name: Pull Docker image
uses: pytorch/test-infra/.github/actions/pull-docker-image@main
uses: pytorch/test-infra/.github/actions/pull-docker-image@release/2.1
with:
docker-image: !{{ config["container_image"] }}
- name: Test Pytorch binary

View File

@ -74,8 +74,8 @@ jobs:
/bin/bash "${RUNNER_TEMP}/conda.sh" -b -p "${RUNNER_TEMP}/anaconda"
echo "${RUNNER_TEMP}/anaconda/bin" >> "${GITHUB_PATH}"
echo "DEVELOPER_DIR=/Applications/Xcode_13.3.1.app/Contents/Developer" >> "${GITHUB_ENV}"
!{{ common.checkout(deep_clone=False, directory="pytorch") }}
!{{ common.checkout(deep_clone=False, directory="builder", repository=common.builder_repo, branch=common.builder_branch) }}
!{{ common.checkout(deep_clone=False, directory="pytorch", checkout_pr_head=False) }}
!{{ common.checkout(deep_clone=False, directory="builder", repository=common.builder_repo, branch=common.builder_branch, checkout_pr_head=False) }}
- name: Install sccache (only for non-forked PRs, and pushes to trunk)
uses: nick-fields/retry@v2.8.2
if: ${{ github.event_name == 'push' || github.event.pull_request.head.repo.full_name == github.repository }}

View File

@ -67,6 +67,6 @@
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
{%- endmacro %}

View File

@ -62,8 +62,8 @@ jobs:
steps:
!{{ common.setup_ec2_windows() }}
!{{ set_runner_specific_vars() }}
!{{ common.checkout(deep_clone=False, directory="pytorch") }}
!{{ common.checkout(deep_clone=False, directory="builder", repository=common.builder_repo, branch=common.builder_branch) }}
!{{ common.checkout(deep_clone=False, directory="pytorch", checkout_pr_head=False) }}
!{{ common.checkout(deep_clone=False, directory="builder", repository=common.builder_repo, branch=common.builder_branch, checkout_pr_head=False) }}
- name: Populate binary env
shell: bash
run: |
@ -102,8 +102,8 @@ jobs:
with:
name: !{{ config["build_name"] }}
path: "${{ env.PYTORCH_FINAL_PACKAGE_DIR }}"
!{{ common.checkout(deep_clone=False, directory="pytorch") }}
!{{ common.checkout(deep_clone=False, directory="builder", repository=common.builder_repo, branch=common.builder_branch) }}
!{{ common.checkout(deep_clone=False, directory="pytorch", checkout_pr_head=False) }}
!{{ common.checkout(deep_clone=False, directory="builder", repository=common.builder_repo, branch=common.builder_branch, checkout_pr_head=False) }}
- name: Populate binary env
shell: bash
run: |

View File

@ -36,7 +36,7 @@ jobs:
keep-going: ${{ steps.filter.outputs.keep-going }}
steps:
- name: Checkout PyTorch
uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.1
with:
fetch-depth: 1
submodules: false
@ -58,25 +58,25 @@ jobs:
runs-on: ${{ matrix.runner }}
steps:
- name: Setup SSH (Click me for login details)
uses: pytorch/test-infra/.github/actions/setup-ssh@main
uses: pytorch/test-infra/.github/actions/setup-ssh@release/2.1
with:
github-secret: ${{ secrets.GITHUB_TOKEN }}
# [see note: pytorch repo ref]
- name: Checkout PyTorch
uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.1
- name: Setup Linux
uses: ./.github/actions/setup-linux
- name: Calculate docker image
id: calculate-docker-image
uses: pytorch/test-infra/.github/actions/calculate-docker-image@main
uses: pytorch/test-infra/.github/actions/calculate-docker-image@release/2.1
with:
docker-image-name: ${{ inputs.docker-image-name }}
- name: Pull docker image
uses: pytorch/test-infra/.github/actions/pull-docker-image@main
uses: pytorch/test-infra/.github/actions/pull-docker-image@release/2.1
with:
docker-image: ${{ steps.calculate-docker-image.outputs.docker-image }}
@ -140,5 +140,5 @@ jobs:
if: always()
- name: Teardown Linux
uses: pytorch/test-infra/.github/actions/teardown-linux@main
uses: pytorch/test-infra/.github/actions/teardown-linux@release/2.1
if: always()

View File

@ -36,7 +36,7 @@ jobs:
keep-going: ${{ steps.filter.outputs.keep-going }}
steps:
- name: Checkout PyTorch
uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.1
with:
fetch-depth: 1
submodules: false
@ -58,25 +58,25 @@ jobs:
runs-on: ${{ matrix.runner }}
steps:
- name: Setup SSH (Click me for login details)
uses: pytorch/test-infra/.github/actions/setup-ssh@main
uses: pytorch/test-infra/.github/actions/setup-ssh@release/2.1
with:
github-secret: ${{ secrets.GITHUB_TOKEN }}
# [see note: pytorch repo ref]
- name: Checkout PyTorch
uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.1
- name: Setup Linux
uses: ./.github/actions/setup-linux
- name: Calculate docker image
id: calculate-docker-image
uses: pytorch/test-infra/.github/actions/calculate-docker-image@main
uses: pytorch/test-infra/.github/actions/calculate-docker-image@release/2.1
with:
docker-image-name: ${{ inputs.docker-image-name }}
- name: Pull docker image
uses: pytorch/test-infra/.github/actions/pull-docker-image@main
uses: pytorch/test-infra/.github/actions/pull-docker-image@release/2.1
with:
docker-image: ${{ steps.calculate-docker-image.outputs.docker-image }}
@ -185,5 +185,5 @@ jobs:
if: always()
- name: Teardown Linux
uses: pytorch/test-infra/.github/actions/teardown-linux@main
uses: pytorch/test-infra/.github/actions/teardown-linux@release/2.1
if: always()

View File

@ -41,7 +41,7 @@ jobs:
reenabled-issues: ${{ steps.filter.outputs.reenabled-issues }}
steps:
- name: Checkout PyTorch
uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.1
with:
fetch-depth: 1
submodules: false
@ -63,30 +63,30 @@ jobs:
runs-on: ${{ matrix.runner }}
steps:
- name: Setup SSH (Click me for login details)
uses: pytorch/test-infra/.github/actions/setup-ssh@main
uses: pytorch/test-infra/.github/actions/setup-ssh@release/2.1
with:
github-secret: ${{ secrets.GITHUB_TOKEN }}
# [see note: pytorch repo ref]
- name: Checkout PyTorch
uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.1
- name: Setup Linux
uses: ./.github/actions/setup-linux
- name: Calculate docker image
id: calculate-docker-image
uses: pytorch/test-infra/.github/actions/calculate-docker-image@main
uses: pytorch/test-infra/.github/actions/calculate-docker-image@release/2.1
with:
docker-image-name: ${{ inputs.docker-image-name }}
- name: Pull docker image
uses: pytorch/test-infra/.github/actions/pull-docker-image@main
uses: pytorch/test-infra/.github/actions/pull-docker-image@release/2.1
with:
docker-image: ${{ steps.calculate-docker-image.outputs.docker-image }}
- name: Install nvidia driver, nvidia-docker runtime, set GPU_FLAG
uses: pytorch/test-infra/.github/actions/setup-nvidia@main
uses: pytorch/test-infra/.github/actions/setup-nvidia@release/2.1
if: ${{ inputs.cuda-version != 'cpu' }}
- name: Output disk space left
@ -197,5 +197,5 @@ jobs:
file-suffix: bazel-${{ github.job }}_${{ steps.get-job-id.outputs.job-id }}
- name: Teardown Linux
uses: pytorch/test-infra/.github/actions/teardown-linux@main
uses: pytorch/test-infra/.github/actions/teardown-linux@release/2.1
if: always()

View File

@ -139,12 +139,12 @@ jobs:
run: env
- name: "[FB EMPLOYEES] Enable SSH (Click me for login details)"
uses: pytorch/test-infra/.github/actions/setup-ssh@main
uses: pytorch/test-infra/.github/actions/setup-ssh@release/2.1
with:
github-secret: ${{ secrets.github-token }}
- name: Checkout PyTorch
uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.1
with:
no-sudo: ${{ inputs.build_environment == 'linux-aarch64-binary-manywheel' }}
@ -159,10 +159,12 @@ jobs:
- name: Clean workspace
shell: bash
run: |
set -eux
rm -rf "${GITHUB_WORKSPACE}"
mkdir "${GITHUB_WORKSPACE}"
if [[ inputs.build_environment == 'linux-aarch64-binary-manywheel' ]]; then
if [[ ${{ inputs.build_environment }} == 'linux-aarch64-binary-manywheel' ]]; then
rm -rf "${RUNNER_TEMP}/artifacts"
mkdir "${RUNNER_TEMP}/artifacts"
fi
@ -170,7 +172,6 @@ jobs:
- name: Checkout PyTorch to pytorch dir
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -184,7 +185,7 @@ jobs:
- name: Checkout pytorch/builder to builder dir
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -210,7 +211,7 @@ jobs:
- name: Pull Docker image
if: ${{ steps.filter.outputs.is-test-matrix-empty == 'False' }}
uses: pytorch/test-infra/.github/actions/pull-docker-image@main
uses: pytorch/test-infra/.github/actions/pull-docker-image@release/2.1
with:
docker-image: ${{ inputs.DOCKER_IMAGE }}
@ -267,7 +268,7 @@ jobs:
- name: Teardown Linux
if: always()
uses: pytorch/test-infra/.github/actions/teardown-linux@main
uses: pytorch/test-infra/.github/actions/teardown-linux@release/2.1
- name: Chown workspace
if: always()

View File

@ -127,13 +127,13 @@ jobs:
} >> "${GITHUB_ENV} }}"
- name: "[FB EMPLOYEES] Enable SSH (Click me for login details)"
uses: pytorch/test-infra/.github/actions/setup-ssh@main
uses: pytorch/test-infra/.github/actions/setup-ssh@release/2.1
with:
github-secret: ${{ secrets.github-token }}
# Setup the environment
- name: Checkout PyTorch
uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.1
with:
no-sudo: ${{ inputs.build_environment == 'linux-aarch64-binary-manywheel' }}
@ -154,7 +154,6 @@ jobs:
- name: Checkout PyTorch to pytorch dir
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
@ -167,7 +166,7 @@ jobs:
- name: Checkout pytorch/builder to builder dir
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -198,12 +197,12 @@ jobs:
path: "${{ runner.temp }}/artifacts/"
- name: Install nvidia driver, nvidia-docker runtime, set GPU_FLAG
uses: pytorch/test-infra/.github/actions/setup-nvidia@main
uses: pytorch/test-infra/.github/actions/setup-nvidia@release/2.1
if: ${{ inputs.GPU_ARCH_TYPE == 'cuda' && steps.filter.outputs.is-test-matrix-empty == 'False' }}
- name: Pull Docker image
if: ${{ steps.filter.outputs.is-test-matrix-empty == 'False' }}
uses: pytorch/test-infra/.github/actions/pull-docker-image@main
uses: pytorch/test-infra/.github/actions/pull-docker-image@release/2.1
with:
docker-image: ${{ inputs.DOCKER_IMAGE }}
@ -213,7 +212,7 @@ jobs:
- name: Teardown Linux
if: always()
uses: pytorch/test-infra/.github/actions/teardown-linux@main
uses: pytorch/test-infra/.github/actions/teardown-linux@release/2.1
- name: Chown workspace
if: always()

View File

@ -97,7 +97,7 @@ jobs:
SHA1: ${{ github.event.pull_request.head.sha || github.sha }}
steps:
- name: Checkout PyTorch
uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.1
with:
no-sudo: true
@ -121,7 +121,7 @@ jobs:
shell: bash -e -l {0}
run: |
# reference ends with an RC suffix
if [[ ${GITHUB_REF_NAME} = *-rc[0-9]* ]]; then
if [[ "${GITHUB_REF_NAME}" = *-rc[0-9]* ]]; then
echo "UPLOAD_CHANNEL=test" >> "$GITHUB_ENV"
fi
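
The quoting fix above hardens the RC-tag test that selects the upload channel; sketched below with a hypothetical helper (`UPLOAD_CHANNEL` defaults to `nightly` elsewhere in these scripts):

```python
# Hypothetical helper illustrating the RC-tag check above.
import fnmatch

def upload_channel(ref_name: str, default: str = "nightly") -> str:
    # A ref like "v2.1.0-rc3" matches *-rc[0-9]* and goes to the test channel.
    return "test" if fnmatch.fnmatch(ref_name, "*-rc[0-9]*") else default

assert upload_channel("v2.1.0-rc3") == "test"
assert upload_channel("nightly") == "nightly"
```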

View File

@ -22,7 +22,7 @@ jobs:
keep-going: ${{ steps.filter.outputs.keep-going }}
steps:
- name: Checkout PyTorch
uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.1
with:
fetch-depth: 1
submodules: false
@ -43,7 +43,7 @@ jobs:
runs-on: ${{ matrix.runner }}
steps:
- name: Checkout PyTorch
uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.1
- name: Set up JDK 8
uses: actions/setup-java@v3
@ -52,7 +52,7 @@ jobs:
distribution: 'temurin'
- name: Setup miniconda
uses: pytorch/test-infra/.github/actions/setup-miniconda@main
uses: pytorch/test-infra/.github/actions/setup-miniconda@release/2.1
with:
python-version: 3.8
environment-file: .github/requirements/conda-env-${{ runner.os }}-${{ runner.arch }}

View File

@ -66,7 +66,7 @@ jobs:
name: build-docs-${{ matrix.docs_type }}-${{ inputs.push }}
steps:
- name: Setup SSH (Click me for login details)
uses: pytorch/test-infra/.github/actions/setup-ssh@main
uses: pytorch/test-infra/.github/actions/setup-ssh@release/2.1
with:
github-secret: ${{ secrets.GITHUB_TOKEN }}
instructions: |
@ -77,19 +77,19 @@ jobs:
# [see note: pytorch repo ref]
- name: Checkout PyTorch
uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.1
- name: Setup Linux
uses: ./.github/actions/setup-linux
- name: Calculate docker image
id: calculate-docker-image
uses: pytorch/test-infra/.github/actions/calculate-docker-image@main
uses: pytorch/test-infra/.github/actions/calculate-docker-image@release/2.1
with:
docker-image-name: ${{ inputs.docker-image }}
- name: Pull docker image
uses: pytorch/test-infra/.github/actions/pull-docker-image@main
uses: pytorch/test-infra/.github/actions/pull-docker-image@release/2.1
with:
docker-image: ${{ steps.calculate-docker-image.outputs.docker-image }}
@ -187,5 +187,5 @@ jobs:
s3-prefix: pytorch/pytorch/${{ github.event.pull_request.number }}/functorchdocs
- name: Teardown Linux
uses: pytorch/test-infra/.github/actions/teardown-linux@main
uses: pytorch/test-infra/.github/actions/teardown-linux@release/2.1
if: always()

View File

@ -7,14 +7,6 @@ on:
required: true
type: string
description: Top-level label for what's being built/tested.
ios-platform:
required: true
type: string
description: Which iOS platform to build for.
ios-arch:
required: true
type: string
description: Which iOS arch to build for.
sync-tag:
required: false
type: string
@ -31,8 +23,6 @@ on:
env:
GIT_DEFAULT_BRANCH: ${{ github.event.repository.default_branch }}
BUILD_ENVIRONMENT: ${{ inputs.build-environment }}
IOS_PLATFORM: ${{ inputs.ios-platform }}
IOS_ARCH: ${{ inputs.ios-arch }}
jobs:
filter:
@ -43,7 +33,7 @@ jobs:
keep-going: ${{ steps.filter.outputs.keep-going }}
steps:
- name: Checkout PyTorch
uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.1
with:
fetch-depth: 1
submodules: false
@ -63,33 +53,30 @@ jobs:
matrix: ${{ fromJSON(needs.filter.outputs.test-matrix) }}
fail-fast: false
runs-on: ${{ matrix.runner }}
env:
IOS_PLATFORM: ${{ matrix.ios_platform }}
IOS_ARCH: ${{ matrix.ios_arch }}
BUILD_LITE_INTERPRETER: ${{ matrix.use_lite_interpreter }}
USE_PYTORCH_METAL: ${{ matrix.use_metal }}
USE_COREML_DELEGATE: ${{ matrix.use_coreml }}
CUSTOM_OP_LIST: ${{ matrix.use_custom_op_list }}
# TODO: Bump it to 2.2.0 after cherry-picking this, or figure out a better way
# to get this version instead of hard coding it here
PYTORCH_VERSION: 2.1.0
timeout-minutes: 240
steps:
# [see note: pytorch repo ref]
- name: Checkout PyTorch
uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.1
- name: Populate CI build options
shell: bash
run: |
# Most builds use the lite interpreter, if certain builds shouldn't
# build the lite interpreter this env variable should get over-written
# in the following case statement
echo "BUILD_LITE_INTERPRETER=1" >> "${GITHUB_ENV}"
set -ex
case ${BUILD_ENVIRONMENT} in
*metal*)
echo "USE_PYTORCH_METAL=1" >> "${GITHUB_ENV}"
;;
*full_jit*)
echo "BUILD_LITE_INTERPRETER=0" >> "${GITHUB_ENV}"
;;
*custom*)
echo "SELECTED_OP_LIST=${GITHUB_WORKSPACE}/ios/TestApp/custom_build/mobilenetv2.yaml" >> "${GITHUB_ENV}"
;;
*coreml*)
echo "USE_COREML_DELEGATE=1" >> "${GITHUB_ENV}"
;;
esac
if [ -n "${CUSTOM_OP_LIST:-}" ]; then
echo "SELECTED_OP_LIST=${GITHUB_WORKSPACE}/ios/TestApp/custom_build/${CUSTOM_OP_LIST}" >> "${GITHUB_ENV}"
fi
- name: Install brew dependencies
uses: nick-fields/retry@v2.8.2
@ -102,7 +89,7 @@ jobs:
brew install libtool
- name: Setup miniconda for iOS
uses: pytorch/test-infra/.github/actions/setup-miniconda@main
uses: pytorch/test-infra/.github/actions/setup-miniconda@release/2.1
with:
python-version: "3.9"
environment-file: .github/requirements/conda-env-iOS
@ -116,54 +103,67 @@ jobs:
retry_wait_seconds: 90
command: |
set -x
cd ios/TestApp
# install fastlane
pushd ios/TestApp
# Install fastlane
sudo gem install bundler && bundle install
bundle update fastlane
popd
- name: Build PyTorch Mobile Runtime
- name: Build PyTorch mobile runtime
shell: bash
run: |
set -eux
# shellcheck disable=SC1091
export TCLLIBPATH="/usr/local/lib"
python -VV
${CONDA_RUN} scripts/build_ios.sh
- name: Build TestApp
if: inputs.ios-platform == 'SIMULATOR'
if: matrix.ios_platform == 'SIMULATOR'
timeout-minutes: 15
run: |
# run the ruby build script
# Run the ruby build script
if ! [ -x "$(command -v xcodebuild)" ]; then
echo 'Error: xcodebuild is not installed.'
exit 1
fi
ruby scripts/xcode_build.rb -i build_ios/install -x ios/TestApp/TestApp.xcodeproj -p "${IOS_PLATFORM}"
- name: Run Simulator Tests
if: inputs.ios-platform == 'SIMULATOR'
- name: Run simulator tests
if: matrix.ios_platform == 'SIMULATOR'
shell: bash
run: |
set -eux
# shellcheck disable=SC1091
# use the pytorch nightly build to generate models
${CONDA_RUN} pip3 install --pre torch torchvision torchaudio -f https://download.pytorch.org/whl/nightly/cpu/torch_nightly.html
# generate models for different backends
cd "${GITHUB_WORKSPACE}/ios/TestApp/benchmark"
# Use the pytorch nightly build to generate models
${CONDA_RUN} pip install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cpu
# Generate models for different backends
pushd "${GITHUB_WORKSPACE}/ios/TestApp/benchmark"
mkdir -p ../models
# NB: Both of the following scripts only export models with lite interpreter
if [ "${USE_COREML_DELEGATE}" == 1 ]; then
${CONDA_RUN} python coreml_backend.py
else
cd "${GITHUB_WORKSPACE}"
pushd "${GITHUB_WORKSPACE}"
${CONDA_RUN} python test/mobile/model_test/gen_test_model.py ios-test
popd
fi
cd "${GITHUB_WORKSPACE}/ios/TestApp/benchmark"
if [ "${BUILD_LITE_INTERPRETER}" == 1 ]; then
echo "Setting up the TestApp for LiteInterpreter"
ruby setup.rb --lite 1
else
# Generate some models for JIT without lite interpreter
${CONDA_RUN} python trace_model.py
echo "Setting up the TestApp for Full JIT"
ruby setup.rb
fi
cd "${GITHUB_WORKSPACE}/ios/TestApp"
# instruments -s -devices
popd
pushd "${GITHUB_WORKSPACE}/ios/TestApp"
# Instruments -s -devices
if [ "${BUILD_LITE_INTERPRETER}" == 1 ]; then
if [ "${USE_COREML_DELEGATE}" == 1 ]; then
bundle exec fastlane scan --only_testing TestAppTests/TestAppTests/testCoreML
@ -173,9 +173,282 @@ jobs:
else
bundle exec fastlane scan --only_testing TestAppTests/TestAppTests/testFullJIT
fi
popd
- name: Dump Simulator Tests On a Failure
if: failure() && inputs.ios-platform == 'SIMULATOR'
- name: Dump simulator tests on failure
if: failure() && matrix.ios_platform == 'SIMULATOR'
run: |
echo "Simulator Tests Logs:"
cat /Users/runner/Library/Logs/scan/*.log
- name: Prepare the build artifacts for upload
shell: bash
run: |
set -eux
# The structure of the folder is as follows:
#
# RUNNER_TEMP/
# └── IOS_ARCH/
# ├── LICENSE
# ├── install
# │ ├── include
# │ │ └── headers
# │ └── lib
# │ ├── libXNNPACK.a
# │ ├── libc10.a
# │ ├── libclog.a
# │ ├── libcpuinfo.a
# │ ├── libeigen_blas.a
# │ ├── libpthreadpool.a
# │ ├── libpytorch_qnnpack.a
# │ ├── libtorch.a
# │ └── libtorch_cpu.a
# ├── src
# │ └── LibTorch-Lite.h
# └── version.txt
SETUP_DIR="${RUNNER_TEMP}/${IOS_ARCH}"
mkdir -p "${SETUP_DIR}/src"
cp -R "${GITHUB_WORKSPACE}/build_ios/install" "${SETUP_DIR}"
# Copy the umbrella header and license
if [ "${BUILD_LITE_INTERPRETER}" == 1 ]; then
cp "${GITHUB_WORKSPACE}/ios/LibTorch-Lite.h" "${SETUP_DIR}/src"
else
cp "${GITHUB_WORKSPACE}/ios/LibTorch.h" "${SETUP_DIR}/src"
fi
# Copy license and version
cp "${GITHUB_WORKSPACE}/LICENSE" "${SETUP_DIR}"
echo "${PYTORCH_VERSION}" > "${SETUP_DIR}"/version.txt
# Save the podspec for the upload job later
if [ "${BUILD_LITE_INTERPRETER}" == "1" ]; then
DATE=$(date -u +%Y%m%d)
cp "${GITHUB_WORKSPACE}"/ios/LibTorch-Lite-Nightly.podspec.template "${SETUP_DIR}"/LibTorch-Lite-Nightly.podspec
sed -i '' -e "s/IOS_NIGHTLY_BUILD_VERSION/${PYTORCH_VERSION}.${DATE}/g" "${SETUP_DIR}"/LibTorch-Lite-Nightly.podspec
cp "${GITHUB_WORKSPACE}"/ios/LibTorch-Lite.podspec.template "${SETUP_DIR}"/LibTorch-Lite.podspec
sed -i '' -e "s/IOS_BUILD_VERSION/${PYTORCH_VERSION}/g" "${SETUP_DIR}"/LibTorch-Lite.podspec
else
# NB: There is no nightly build without lite interpreter atm
cp "${GITHUB_WORKSPACE}"/ios/LibTorch.podspec.template "${SETUP_DIR}"/LibTorch.podspec
sed -i '' -e "s/IOS_BUILD_VERSION/${PYTORCH_VERSION}/g" "${SETUP_DIR}"/LibTorch.podspec
fi
pushd "${SETUP_DIR}"
# NB: It's important to zip all the files before uploading because GHA uploads
# all files sequentially, which is slow and generates too many requests. More info is at
# https://github.com/actions/upload-artifact#too-many-uploads-resulting-in-429-responses
zip -r "${IOS_ARCH}.zip" install src version.txt LICENSE ./*.podspec
popd
- uses: actions/upload-artifact@v3
with:
name: pytorch-ios-build-artifacts-${{ matrix.ios_arch }}
if-no-files-found: error
path: ${{ runner.temp }}/${{ matrix.ios_arch }}/${{ matrix.ios_arch }}.zip
upload-ios-artifacts:
# NB: this job runs on a GitHub macOS ephemeral runner so that it can use lipo
# to create the fat iOS binaries for both x86_64 and arm64
runs-on: macos-12
needs: build
# NB: Only upload release builds; if we need it, we could also turn on nightly here
environment: ${{ (github.event_name == 'push' && (github.event.ref == 'refs/heads/nightly' || startsWith(github.event.ref, 'refs/tags/v'))) && 'ios-upload' || '' }}
steps:
- uses: actions/checkout@v3
# For awscli S3 upload
- uses: actions/setup-python@v4
with:
python-version: '3.10'
cache: pip
# For cocoapods pod upload
- uses: ruby/setup-ruby@v1
with:
ruby-version: '3.2'
bundler-cache: true
- name: Download arm64 artifacts
uses: actions/download-artifact@v3
with:
name: pytorch-ios-build-artifacts-arm64
- name: Download x86_64 artifacts
uses: actions/download-artifact@v3
with:
name: pytorch-ios-build-artifacts-x86_64
- name: Unzip arm64 and x86_64 artifacts
shell: bash
run: |
set -eux
for ARCH in "arm64" "x86_64"; do
TMP_DIR="${RUNNER_TEMP}/${ARCH}"
mkdir -p "${TMP_DIR}"
cp "${ARCH}.zip" "${TMP_DIR}"
pushd "${TMP_DIR}"
unzip -o "${ARCH}.zip"
popd
done
- name: Prepare the artifact
env:
IS_NIGHTLY: ${{ github.event.ref == 'refs/heads/nightly' }}
shell: bash
working-directory: ${{ runner.temp }}/arm64
run: |
set -eux
DEST_DIR="${RUNNER_TEMP}"/ios
echo "DEST_DIR=${DEST_DIR}" >> "$GITHUB_ENV"
# Prepare all the subdirectories
mkdir -p "${DEST_DIR}"/install/lib
# Copy header and share files; arm64 and x86_64 both work
cp -R install/include "${DEST_DIR}"/install
cp -R install/share "${DEST_DIR}"/install
# The last dash is important to copy only files under src
cp -R src "${DEST_DIR}"
cp LICENSE "${DEST_DIR}"
if [ "${IS_NIGHTLY}" == true ]; then
PYTORCH_VERSION=$(cat version.txt)
DATE=$(date -u +%Y%m%d)
echo "${PYTORCH_VERSION}.${DATE}" > "${DEST_DIR}"/version.txt
else
cp version.txt "${DEST_DIR}"
fi
PYTORCH_VERSION=$(cat "${DEST_DIR}"/version.txt)
echo "PYTORCH_VERSION=${PYTORCH_VERSION}" >> "$GITHUB_ENV"
pushd install/lib
# shellcheck disable=SC2207
LIBRARIES=($(ls ./*.a))
popd
for LIB in "${LIBRARIES[@]}"; do
FROM_LIBS=("${RUNNER_TEMP}"/arm64/install/lib/"${LIB}" "${RUNNER_TEMP}"/x86_64/install/lib/"${LIB}")
# Create a fat binary for both arm64 and x86_64
lipo -create "${FROM_LIBS[@]}" -o "${DEST_DIR}"/install/lib/"${LIB}"
# Print the info
lipo -i "${DEST_DIR}"/install/lib/"${LIB}"
done
BUILD_LITE_INTERPRETER=1
if [ -f "${RUNNER_TEMP}"/arm64/LibTorch.podspec ]; then
# If LibTorch.podspec is used instead of LibTorch-Lite.podspec, the artifact is built
# without lite interpreter
BUILD_LITE_INTERPRETER=0
fi
echo "BUILD_LITE_INTERPRETER=${BUILD_LITE_INTERPRETER}" >> "$GITHUB_ENV"
- name: Prepare the podspec
env:
IS_NIGHTLY: ${{ github.event.ref == 'refs/heads/nightly' }}
shell: bash
working-directory: ${{ env.DEST_DIR }}
run: |
set -eux
ARTIFACT_NAME=libtorch
SPEC_NAME=LibTorch
if [ "${BUILD_LITE_INTERPRETER}" == "1" ]; then
ARTIFACT_NAME="${ARTIFACT_NAME}_lite_ios"
SPEC_NAME="${SPEC_NAME}-Lite"
else
ARTIFACT_NAME="${ARTIFACT_NAME}_ios"
fi
if [ "${IS_NIGHTLY}" == true ]; then
ARTIFACT_NAME="${ARTIFACT_NAME}_nightly_${PYTORCH_VERSION}.zip"
SPEC_NAME="${SPEC_NAME}-Nightly"
else
ARTIFACT_NAME="${ARTIFACT_NAME}_${PYTORCH_VERSION}.zip"
fi
SPEC_NAME_WITH_VERSION="${SPEC_NAME}-${PYTORCH_VERSION}.podspec"
SPEC_NAME="${SPEC_NAME}.podspec"
# Also copy the spec file
cp "${RUNNER_TEMP}"/arm64/"${SPEC_NAME}" "${SPEC_NAME_WITH_VERSION}"
# NB: It's important to zip all the files before uploading because GHA uploads
# all files sequentially, which is slow and generates too many requests. More info is at
# https://github.com/actions/upload-artifact#too-many-uploads-resulting-in-429-responses
zip -r "${ARTIFACT_NAME}" install src version.txt LICENSE
{
echo "ARTIFACT_NAME=${ARTIFACT_NAME}"
echo "SPEC_NAME_WITH_VERSION=${SPEC_NAME_WITH_VERSION}"
echo "SPEC_NAME=${SPEC_NAME}"
} >> "$GITHUB_ENV"
- uses: actions/upload-artifact@v3
with:
name: pytorch-ios-artifacts
if-no-files-found: error
path: ${{ env.DEST_DIR }}/${{ env.ARTIFACT_NAME }}
- uses: actions/upload-artifact@v3
with:
name: pytorch-ios-podspec
if-no-files-found: error
path: ${{ env.DEST_DIR }}/${{ env.SPEC_NAME_WITH_VERSION }}
- name: Set DRY_RUN
if: ${{ github.event_name == 'push' && (github.event.ref == 'refs/heads/nightly' || (startsWith(github.event.ref, 'refs/tags/v'))) }}
shell: bash
run: |
echo "DRY_RUN=disabled" >> "$GITHUB_ENV"
- name: Upload the artifact to S3
env:
AWS_ACCESS_KEY_ID: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
IS_NIGHTLY: ${{ github.event.ref == 'refs/heads/nightly' }}
shell: bash
working-directory: ${{ env.DEST_DIR }}
run: |
set -eux
pip install -q awscli==1.29.40
DRY_RUN=${DRY_RUN:-enabled}
AWS_S3_CP="aws s3 cp --dryrun"
if [ "${DRY_RUN}" == "disabled" ]; then
AWS_S3_CP="aws s3 cp"
fi
if [ "${IS_NIGHTLY}" == true ]; then
BUCKET_NAME="ossci-ios-build"
else
BUCKET_NAME="ossci-ios"
fi
${AWS_S3_CP} "${ARTIFACT_NAME}" "s3://${BUCKET_NAME}/" --acl public-read
${AWS_S3_CP} "${SPEC_NAME_WITH_VERSION}" "s3://${BUCKET_NAME}/" --acl public-read
- name: Upload the artifact to cocoapods (nightly only)
env:
# We need to set this secret to upload to cocoapods. However, we might want
# to NOT set this for PROD release so that we can upload the artifacts manually
COCOAPODS_TRUNK_TOKEN: ${{ secrets.COCOAPODS_TRUNK_TOKEN || '' }}
if: ${{ github.event_name == 'push' && github.event.ref == 'refs/heads/nightly' && env.COCOAPODS_TRUNK_TOKEN != '' }}
shell: bash
working-directory: ${{ runner.temp }}/arm64
run: |
set -eux
gem install cocoapods
pod trunk me
# Upload the spec to cocoapods
pod trunk push --verbose --allow-warnings --use-libraries --skip-import-validation "${SPEC_NAME}"

View File

@ -73,7 +73,7 @@ jobs:
test-matrix: ${{ steps.filter.outputs.test-matrix }}
steps:
- name: Setup SSH (Click me for login details)
uses: pytorch/test-infra/.github/actions/setup-ssh@main
uses: pytorch/test-infra/.github/actions/setup-ssh@release/2.1
with:
github-secret: ${{ secrets.GITHUB_TOKEN }}
@ -82,19 +82,19 @@ jobs:
# checkout because when we run this action we don't *have* a local
# checkout. In other cases you should prefer a local checkout.
- name: Checkout PyTorch
uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.1
- name: Setup Linux
uses: ./.github/actions/setup-linux
- name: Calculate docker image
id: calculate-docker-image
uses: pytorch/test-infra/.github/actions/calculate-docker-image@main
uses: pytorch/test-infra/.github/actions/calculate-docker-image@release/2.1
with:
docker-image-name: ${{ inputs.docker-image-name }}
- name: Pull docker image
uses: pytorch/test-infra/.github/actions/pull-docker-image@main
uses: pytorch/test-infra/.github/actions/pull-docker-image@release/2.1
with:
docker-image: ${{ steps.calculate-docker-image.outputs.docker-image }}
@ -192,5 +192,5 @@ jobs:
path: sccache-stats-*.json
- name: Teardown Linux
uses: pytorch/test-infra/.github/actions/teardown-linux@main
uses: pytorch/test-infra/.github/actions/teardown-linux@release/2.1
if: always()

View File

@ -57,7 +57,7 @@ jobs:
timeout-minutes: ${{ inputs.timeout-minutes }}
steps:
- name: Setup SSH (Click me for login details)
uses: pytorch/test-infra/.github/actions/setup-ssh@main
uses: pytorch/test-infra/.github/actions/setup-ssh@release/2.1
if: ${{ !contains(matrix.runner, 'gcp.a100') }}
with:
github-secret: ${{ secrets.GITHUB_TOKEN }}
@ -66,25 +66,25 @@ jobs:
docker exec -it $(docker container ps --format '{{.ID}}') bash
- name: Checkout PyTorch
uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.1
- name: Setup Linux
uses: ./.github/actions/setup-linux
- name: Calculate docker image
id: calculate-docker-image
uses: pytorch/test-infra/.github/actions/calculate-docker-image@main
uses: pytorch/test-infra/.github/actions/calculate-docker-image@release/2.1
with:
docker-image-name: ${{ inputs.docker-image }}
- name: Pull docker image
uses: pytorch/test-infra/.github/actions/pull-docker-image@main
uses: pytorch/test-infra/.github/actions/pull-docker-image@release/2.1
with:
docker-image: ${{ steps.calculate-docker-image.outputs.docker-image }}
- name: Install nvidia driver, nvidia-docker runtime, set GPU_FLAG
id: install-nvidia-driver
uses: pytorch/test-infra/.github/actions/setup-nvidia@main
uses: pytorch/test-infra/.github/actions/setup-nvidia@release/2.1
if: contains(inputs.build-environment, 'cuda') && !contains(matrix.config, 'nogpu')
- name: Lock NVIDIA A100 40GB Frequency
@ -292,7 +292,7 @@ jobs:
path: ./**/core.[1-9]*
- name: Teardown Linux
uses: pytorch/test-infra/.github/actions/teardown-linux@main
uses: pytorch/test-infra/.github/actions/teardown-linux@release/2.1
if: always()
# NB: We are currently having an intermittent GPU-related issue on G5 runners with

View File

@ -71,11 +71,11 @@ jobs:
test-matrix: ${{ steps.filter.outputs.test-matrix }}
steps:
- name: Clean up disk space before running MacOS workflow
uses: pytorch/test-infra/.github/actions/check-disk-space@main
uses: pytorch/test-infra/.github/actions/check-disk-space@release/2.1
# [see note: pytorch repo ref]
- name: Checkout PyTorch
uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.1
- name: Set xcode version
env:
@ -87,7 +87,7 @@ jobs:
- name: Setup miniconda
if: inputs.environment-file == ''
uses: pytorch/test-infra/.github/actions/setup-miniconda@main
uses: pytorch/test-infra/.github/actions/setup-miniconda@release/2.1
with:
python-version: ${{ inputs.python-version }}
environment-file: .github/requirements/conda-env-${{ runner.os }}-${{ runner.arch }}
@ -97,7 +97,7 @@ jobs:
# environment even though the arch is x86-64
- name: Setup miniconda using the provided environment file
if: inputs.environment-file != ''
uses: pytorch/test-infra/.github/actions/setup-miniconda@main
uses: pytorch/test-infra/.github/actions/setup-miniconda@release/2.1
with:
python-version: ${{ inputs.python-version }}
environment-file: ${{ inputs.environment-file }}
@ -206,4 +206,4 @@ jobs:
- name: Clean up disk space
if: always()
continue-on-error: true
uses: pytorch/test-infra/.github/actions/check-disk-space@main
uses: pytorch/test-infra/.github/actions/check-disk-space@release/2.1

View File

@ -41,7 +41,7 @@ jobs:
reenabled-issues: ${{ steps.filter.outputs.reenabled-issues }}
steps:
- name: Checkout PyTorch
uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.1
with:
fetch-depth: 1
submodules: false
@ -85,7 +85,7 @@ jobs:
use-gha: true
- name: Setup miniconda
uses: pytorch/test-infra/.github/actions/setup-miniconda@main
uses: pytorch/test-infra/.github/actions/setup-miniconda@release/2.1
with:
python-version: ${{ inputs.python-version }}
environment-file: .github/requirements/conda-env-${{ runner.os }}-${{ runner.arch }}

View File

@ -69,11 +69,11 @@ jobs:
done
- name: Clean up disk space before running MacOS workflow
uses: pytorch/test-infra/.github/actions/check-disk-space@main
uses: pytorch/test-infra/.github/actions/check-disk-space@release/2.1
# [see note: pytorch repo ref]
- name: Checkout PyTorch
uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.1
- name: Download build artifacts
uses: ./.github/actions/download-build-artifacts
@ -82,7 +82,7 @@ jobs:
use-gha: true
- name: Setup miniconda
uses: pytorch/test-infra/.github/actions/setup-miniconda@main
uses: pytorch/test-infra/.github/actions/setup-miniconda@release/2.1
with:
python-version: ${{ inputs.python-version }}
environment-file: .github/requirements/conda-env-${{ runner.os }}-${{ runner.arch }}
@ -205,4 +205,4 @@ jobs:
- name: Clean up disk space
if: always()
continue-on-error: true
uses: pytorch/test-infra/.github/actions/check-disk-space@main
uses: pytorch/test-infra/.github/actions/check-disk-space@release/2.1

View File

@ -48,7 +48,7 @@ jobs:
steps:
# [see note: pytorch repo ref]
- name: Checkout PyTorch
uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.1
with:
no-sudo: true
@ -57,12 +57,12 @@ jobs:
- name: Calculate docker image
id: calculate-docker-image
uses: pytorch/test-infra/.github/actions/calculate-docker-image@main
uses: pytorch/test-infra/.github/actions/calculate-docker-image@release/2.1
with:
docker-image-name: ${{ inputs.docker-image }}
- name: Pull docker image
uses: pytorch/test-infra/.github/actions/pull-docker-image@main
uses: pytorch/test-infra/.github/actions/pull-docker-image@release/2.1
with:
docker-image: ${{ steps.calculate-docker-image.outputs.docker-image }}

View File

@ -22,7 +22,7 @@ jobs:
keep-going: ${{ steps.filter.outputs.keep-going }}
steps:
- name: Checkout PyTorch
uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.1
with:
fetch-depth: 1
submodules: false
@ -45,10 +45,10 @@ jobs:
steps:
# [see note: pytorch repo ref]
- name: Checkout PyTorch
uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.1
- name: Setup miniconda
uses: pytorch/test-infra/.github/actions/setup-miniconda@main
uses: pytorch/test-infra/.github/actions/setup-miniconda@release/2.1
with:
python-version: 3.8
environment-file: .github/requirements/conda-env-${{ runner.os }}-${{ runner.arch }}

View File

@ -60,10 +60,10 @@ jobs:
git config --global core.fsmonitor false
- name: Clean up leftover processes on non-ephemeral Windows runner
uses: pytorch/test-infra/.github/actions/cleanup-runner@main
uses: pytorch/test-infra/.github/actions/cleanup-runner@release/2.1
- name: Setup SSH (Click me for login details)
uses: pytorch/test-infra/.github/actions/setup-ssh@main
uses: pytorch/test-infra/.github/actions/setup-ssh@release/2.1
with:
github-secret: ${{ secrets.GITHUB_TOKEN }}
instructions: |
@ -78,7 +78,7 @@ jobs:
# [see note: pytorch repo ref]
- name: Checkout PyTorch
uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.1
with:
no-sudo: true

View File

@ -48,10 +48,10 @@ jobs:
git config --global core.fsmonitor false
- name: Clean up leftover processes on non-ephemeral Windows runner
uses: pytorch/test-infra/.github/actions/cleanup-runner@main
uses: pytorch/test-infra/.github/actions/cleanup-runner@release/2.1
- name: Setup SSH (Click me for login details)
uses: pytorch/test-infra/.github/actions/setup-ssh@main
uses: pytorch/test-infra/.github/actions/setup-ssh@release/2.1
with:
github-secret: ${{ secrets.GITHUB_TOKEN }}
instructions: |
@ -67,7 +67,7 @@ jobs:
# [see note: pytorch repo ref]
- name: Checkout PyTorch
uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.1
with:
no-sudo: true

View File

@ -0,0 +1,70 @@
name: Build iOS binaries
on:
push:
branches:
- nightly
tags:
# NOTE: Binary build pipelines should only get triggered on release candidate builds
# Release candidate tags look like: v1.11.0-rc1
- v[0-9]+.[0-9]+.[0-9]+-rc[0-9]+
paths:
- .github/workflows/build-ios-binaries.yml
- .github/workflows/_ios-build-test.yml
pull_request:
paths:
- .github/workflows/build-ios-binaries.yml
- .github/workflows/_ios-build-test.yml
# NB: We can use this workflow dispatch to test and build iOS binaries manually
workflow_dispatch:
inputs:
use_lite_interpreter:
description: "Use PyTorch lite interpreter?"
type: string
default: 1
use_coreml:
description: "Use Apple Core ML?"
type: string
default: 1
use_custom_op_list:
description: "Specify the custom ops list to include in the binaries"
type: string
default: ""
concurrency:
group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.sha }}-${{ github.event_name == 'workflow_dispatch' }}
cancel-in-progress: true
jobs:
# TODO: Figure out how to migrate this job to M1 runner
ios-build-test:
name: ios-build-test
uses: ./.github/workflows/_ios-build-test.yml
with:
build-environment: ios-build-test
sync-tag: ios-build-test
test-matrix: |
{ include: [
{ config: "default",
shard: 1,
num_shards: 1,
runner: "macos-12",
ios_platform: "SIMULATOR",
ios_arch: "x86_64",
use_lite_interpreter: ${{ inputs.use_lite_interpreter || 1 }},
use_metal: 0,
use_coreml: ${{ inputs.use_coreml || 1 }},
use_custom_op_list: ${{ inputs.use_custom_op_list || '' }}
},
{ config: "default",
shard: 1,
num_shards: 1,
runner: "macos-12",
ios_platform: "OS",
ios_arch: "arm64",
use_lite_interpreter: ${{ inputs.use_lite_interpreter || 1 }},
use_metal: 1,
use_coreml: ${{ inputs.use_coreml || 1 }},
use_custom_op_list: ${{ inputs.use_custom_op_list || '' }}
}
]}

View File

@ -3,7 +3,11 @@ name: Build Triton wheels
on:
push:
branches:
- main
- release/2.1
tags:
# NOTE: Binary build pipelines should only get triggered on release candidate builds
# Release candidate tags look like: v1.11.0-rc1
- v[0-9]+.[0-9]+.[0-9]+-rc[0-9]+
paths:
- .github/workflows/build-triton-wheel.yml
- .github/scripts/build_triton_wheel.py
@ -43,12 +47,12 @@ jobs:
BUILD_DEVICE: ${{ matrix.device }}
steps:
- name: Setup SSH (Click me for login details)
uses: pytorch/test-infra/.github/actions/setup-ssh@main
uses: pytorch/test-infra/.github/actions/setup-ssh@release/2.1
with:
github-secret: ${{ secrets.GITHUB_TOKEN }}
- name: Checkout PyTorch
uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.1
with:
submodules: false
@ -56,11 +60,13 @@ jobs:
uses: ./.github/actions/setup-linux
- name: Pull Docker image
uses: pytorch/test-infra/.github/actions/pull-docker-image@main
uses: pytorch/test-infra/.github/actions/pull-docker-image@release/2.1
with:
docker-image: ${{ env.DOCKER_IMAGE }}
- name: Build Triton wheel
env:
IS_RELEASE_TAG: ${{ startsWith(github.event.ref, 'refs/tags/v') }}
run: |
set -x
mkdir -p "${RUNNER_TEMP}/artifacts/"
@ -98,64 +104,75 @@ jobs:
BUILD_ROCM="--build-rocm"
fi
RELEASE=""
if [[ "${IS_RELEASE_TAG}" == true ]]; then
RELEASE="--release"
fi
docker exec -t "${container_name}" yum install -y zlib-devel zip
docker exec -t "${container_name}" "${PYTHON_EXECUTABLE}" -m pip install -U setuptools==67.4.0
docker exec -t "${container_name}" "${PYTHON_EXECUTABLE}" /pytorch/.github/scripts/build_triton_wheel.py $BUILD_ROCM
docker exec -t "${container_name}" "${PYTHON_EXECUTABLE}" /pytorch/.github/scripts/build_triton_wheel.py $BUILD_ROCM $RELEASE
docker exec -t "${container_name}" chown -R 1000.1000 /artifacts
- uses: actions/upload-artifact@v3
with:
name: "pytorch-triton-wheel-${{ matrix.py_vers }}"
# NB: Use the same name here so that all wheels can be downloaded by referring to the same artifact
name: pytorch-triton-wheel
if-no-files-found: error
path:
${{ runner.temp }}/artifacts/*
path: ${{ runner.temp }}/artifacts/*
- name: Teardown Linux
uses: pytorch/test-infra/.github/actions/teardown-linux@main
uses: pytorch/test-infra/.github/actions/teardown-linux@release/2.1
if: always()
upload-wheel:
runs-on: linux.20_04.4x
runs-on: ubuntu-22.04
needs: build-wheel
container:
image: continuumio/miniconda3:4.12.0
env:
GITHUB_TOKEN: ${{ secrets.github-token }}
environment: ${{ (github.event_name == 'push' && (github.event.ref == 'refs/heads/nightly' || startsWith(github.event.ref, 'refs/tags/v'))) && 'conda-aws-upload' || '' }}
steps:
- name: Download Build Artifacts (3.8)
- uses: actions/checkout@v3
- name: Download Build Artifacts
uses: actions/download-artifact@v3
with:
name: "pytorch-triton-wheel-3.8"
path: "${{ runner.temp }}/artifacts/"
- name: Download Build Artifacts (3.9)
uses: actions/download-artifact@v3
with:
name: "pytorch-triton-wheel-3.9"
path: "${{ runner.temp }}/artifacts/"
- name: Download Build Artifacts (3.10)
uses: actions/download-artifact@v3
with:
name: "pytorch-triton-wheel-3.10"
path: "${{ runner.temp }}/artifacts/"
- name: Download Build Artifacts (3.11)
uses: actions/download-artifact@v3
with:
name: "pytorch-triton-wheel-3.11"
path: "${{ runner.temp }}/artifacts/"
- name: Upload binaries
if: ${{ github.event_name == 'push' && github.event.ref == 'refs/heads/main' }}
env:
PKG_DIR: "${{ runner.temp }}/artifacts"
# When running these on pull_request events these should be blank
AWS_ACCESS_KEY_ID: ${{ secrets.AWS_S3_UPDATE_ACCESS_KEY_ID }}
AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_S3_UPDATE_SECRET_ACCESS_KEY }}
UPLOAD_BUCKET: "s3://pytorch"
name: pytorch-triton-wheel
path: ${{ runner.temp }}/artifacts/
- name: Set DRY_RUN (only for tagged pushes)
if: ${{ github.event_name == 'push' && (github.event.ref == 'refs/heads/nightly' || (startsWith(github.event.ref, 'refs/tags/v'))) }}
shell: bash
run: |
set -ex
pip install -q awscli
s3_dir="${UPLOAD_BUCKET}/whl/nightly/"
for pkg in "${PKG_DIR}/"*.whl; do
aws s3 cp --no-progress --acl public-read "${pkg}" "${s3_dir}"
done
echo "DRY_RUN=disabled" >> "$GITHUB_ENV"
- name: Set UPLOAD_CHANNEL (only for tagged pushes)
if: ${{ github.event_name == 'push' && startsWith(github.event.ref, 'refs/tags/v') }}
shell: bash
run: |
set -ex
# reference ends with an RC suffix
if [[ "${GITHUB_REF_NAME}" = *-rc[0-9]* ]]; then
echo "UPLOAD_CHANNEL=test" >> "$GITHUB_ENV"
fi
# NB: This step is gated by DRY_RUN, which is enabled everywhere except nightly and release branches
- name: Upload binaries
env:
PACKAGE_TYPE: wheel
# The UPLOAD_SUBFOLDER needs to be empty here so that triton wheels are uploaded
# to nightly or test
UPLOAD_SUBFOLDER: ""
PKG_DIR: ${{ runner.temp }}/artifacts
# When running these on pull_request events these should be blank
AWS_ACCESS_KEY_ID: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
shell: bash
run: |
set -ex
bash .circleci/scripts/binary_upload.sh
build-conda:
name: "Build Triton Conda"
runs-on: [self-hosted, linux.2xlarge]
@ -164,19 +181,17 @@ jobs:
matrix:
py_vers: [ "3.8", "3.9", "3.10", "3.11" ]
timeout-minutes: 40
environment: ${{ (github.event_name == 'push' && github.event.ref == 'refs/heads/main') && 'conda-aws-upload' || '' }}
env:
DOCKER_IMAGE: pytorch/conda-builder:cpu
PY_VERS: ${{ matrix.py_vers }}
ANACONDA_API_TOKEN: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
steps:
- name: Setup SSH (Click me for login details)
uses: pytorch/test-infra/.github/actions/setup-ssh@main
uses: pytorch/test-infra/.github/actions/setup-ssh@release/2.1
with:
github-secret: ${{ secrets.GITHUB_TOKEN }}
- name: Checkout PyTorch
uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.1
with:
submodules: false
@ -184,11 +199,13 @@ jobs:
uses: ./.github/actions/setup-linux
- name: Pull Docker image
uses: pytorch/test-infra/.github/actions/pull-docker-image@main
uses: pytorch/test-infra/.github/actions/pull-docker-image@release/2.1
with:
docker-image: ${{ env.DOCKER_IMAGE }}
- name: Build Triton conda package
env:
IS_RELEASE_TAG: ${{ startsWith(github.event.ref, 'refs/tags/v') }}
run: |
set -x
mkdir -p "${RUNNER_TEMP}/artifacts/"
@ -198,31 +215,76 @@ jobs:
-v "${GITHUB_WORKSPACE}:/pytorch" \
-v "${RUNNER_TEMP}/artifacts:/artifacts" \
-w /artifacts/ \
-e ANACONDA_API_TOKEN \
"${DOCKER_IMAGE}" \
)
RELEASE=""
if [[ "${IS_RELEASE_TAG}" == true ]]; then
RELEASE="--release"
fi
docker exec -t "${container_name}" yum install -y llvm11 llvm11-devel llvm11-static llvm11-libs zlib-devel
docker exec -t "${container_name}" python /pytorch/.github/scripts/build_triton_wheel.py --build-conda --py-version="${PY_VERS}"
- name: Upload artifacts to Anaconda
if: ${{ github.event_name == 'push' && github.event.ref == 'refs/heads/main' }}
run: |
container_name=$(docker container ps --format '{{.ID}}')
docker exec -t "${container_name}" sh -c "anaconda upload /artifacts/torch*.tar.bz2 -u pytorch-nightly --label main --no-progress --force"
- name: Chown artifacts
run: |
container_name=$(docker container ps --format '{{.ID}}')
docker exec -t "${container_name}" python /pytorch/.github/scripts/build_triton_wheel.py --build-conda --py-version="${PY_VERS}" $RELEASE
docker exec -t "${container_name}" chown -R 1000.1000 /artifacts
- uses: actions/upload-artifact@v3
with:
name: "pytorch-triton-conda-${{ matrix.py_vers }}"
# NB: Use the same name here so that all conda packages can be downloaded by referring to the same artifact
name: pytorch-triton-conda
if-no-files-found: error
path:
${{ runner.temp }}/artifacts/*
path: ${{ runner.temp }}/artifacts/*
- name: Teardown Linux
uses: pytorch/test-infra/.github/actions/teardown-linux@main
uses: pytorch/test-infra/.github/actions/teardown-linux@release/2.1
if: always()
upload-conda:
runs-on: ubuntu-22.04
needs: build-conda
container:
image: continuumio/miniconda3:4.12.0
environment: ${{ (github.event_name == 'push' && (github.event.ref == 'refs/heads/nightly' || startsWith(github.event.ref, 'refs/tags/v'))) && 'conda-aws-upload' || '' }}
steps:
- uses: actions/checkout@v3
- name: Download Build Artifacts
uses: actions/download-artifact@v3
with:
name: pytorch-triton-conda
path: ${{ runner.temp }}/artifacts/
- name: Set DRY_RUN (only for tagged pushes)
if: ${{ github.event_name == 'push' && (github.event.ref == 'refs/heads/nightly' || (startsWith(github.event.ref, 'refs/tags/v'))) }}
shell: bash
run: |
echo "DRY_RUN=disabled" >> "$GITHUB_ENV"
- name: Set UPLOAD_CHANNEL (only for tagged pushes)
if: ${{ github.event_name == 'push' && startsWith(github.event.ref, 'refs/tags/v') }}
shell: bash
run: |
set -ex
# reference ends with an RC suffix
if [[ "${GITHUB_REF_NAME}" = *-rc[0-9]* ]]; then
echo "UPLOAD_CHANNEL=test" >> "$GITHUB_ENV"
fi
# NB: This step is gated by DRY_RUN, which is enabled everywhere except nightly and release branches
- name: Upload binaries to Anaconda
env:
PACKAGE_TYPE: conda
PKG_DIR: ${{ runner.temp }}/artifacts
# When running these on pull_request events these should be blank
CONDA_PYTORCHBOT_TOKEN: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
CONDA_PYTORCHBOT_TOKEN_TEST: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
shell: bash
run: |
set -ex
if [[ "${UPLOAD_CHANNEL}" = "nightly" ]]; then
export ANACONDA_API_TOKEN="${CONDA_PYTORCHBOT_TOKEN}"
else
export ANACONDA_API_TOKEN="${CONDA_PYTORCHBOT_TOKEN_TEST}"
fi
bash .circleci/scripts/binary_upload.sh

View File

@ -29,7 +29,7 @@ jobs:
runs-on: linux.20_04.4x
steps:
- name: Checkout PyTorch
uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.1
with:
submodules: false
fetch-depth: 1

View File

@ -10,7 +10,7 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Checkout PyTorch
uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.1
- name: Run close_nonexistent_disable_issues.py
env:

View File

@ -61,21 +61,21 @@ jobs:
# [see note: pytorch repo ref]
# deep clone (fetch-depth 0) required for git merge-base
- name: Checkout PyTorch
uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.1
- name: Setup Linux
uses: ./.github/actions/setup-linux
- name: Build docker image
id: build-docker-image
uses: pytorch/test-infra/.github/actions/calculate-docker-image@main
uses: pytorch/test-infra/.github/actions/calculate-docker-image@release/2.1
with:
docker-image-name: ${{ matrix.docker-image-name }}
always-rebuild: true
push: true
- name: Pull docker image
uses: pytorch/test-infra/.github/actions/pull-docker-image@main
uses: pytorch/test-infra/.github/actions/pull-docker-image@release/2.1
with:
docker-image: ${{ steps.build-docker-image.outputs.docker-image }}
@ -105,5 +105,5 @@ jobs:
if: always()
- name: Teardown Linux
uses: pytorch/test-infra/.github/actions/teardown-linux@main
uses: pytorch/test-infra/.github/actions/teardown-linux@release/2.1
if: always()

View File

@ -47,7 +47,7 @@ jobs:
BUILD_PLATFORMS: ${{ matrix.platform }}
steps:
- name: Setup SSH (Click me for login details)
uses: pytorch/test-infra/.github/actions/setup-ssh@main
uses: pytorch/test-infra/.github/actions/setup-ssh@release/2.1
with:
github-secret: ${{ secrets.GITHUB_TOKEN }}
# [see note: pytorch repo ref]
@ -82,12 +82,10 @@ jobs:
echo "${RUNNER_TEMP}/bin" >> "${GITHUB_PATH}"
# Generate PyTorch version to use
echo "PYTORCH_VERSION=$(python3 .github/scripts/generate_pytorch_version.py)" >> "${GITHUB_ENV}"
- name: Setup nightly specific variables
if: ${{ github.event.ref == 'refs/heads/nightly' || startsWith(github.event.ref, 'refs/tags/ciflow/nightly/') }}
- name: Setup release specific variables
run: |
{
echo "DOCKER_IMAGE=pytorch-nightly";
echo "INSTALL_CHANNEL=pytorch-nightly";
echo "INSTALL_CHANNEL=pytorch-test";
echo "TRITON_VERSION=$(cut -f 1 .ci/docker/triton_version.txt)+$(cut -c -10 .ci/docker/ci_commit_pins/triton.txt)";
} >> "${GITHUB_ENV}"
- name: Run docker build / push
@ -109,5 +107,5 @@ jobs:
ghcr.io/pytorch/pytorch-nightly:latest
docker push ghcr.io/pytorch/pytorch-nightly:latest
- name: Teardown Linux
uses: pytorch/test-infra/.github/actions/teardown-linux@main
uses: pytorch/test-infra/.github/actions/teardown-linux@release/2.1
if: always()

View File

@ -93,7 +93,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
manywheel-py3_9-cpu-aarch64-build:
@ -153,7 +153,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
manywheel-py3_10-cpu-aarch64-build:
@ -213,7 +213,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
manywheel-py3_11-cpu-aarch64-build:
@ -273,5 +273,5 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
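
From here on, every upload job swaps `CONDA_PYTORCHBOT_TOKEN` for `CONDA_PYTORCHBOT_TOKEN_TEST`, routing conda uploads through the staging credentials while the release is validated. A sketch of the secret plumbing this implies on the callee side (`_binary-upload.yml` is not part of this compare, so the names and upload command below are assumptions):

```yaml
on:
  workflow_call:
    secrets:
      conda-pytorchbot-token:
        required: true
jobs:
  upload:
    runs-on: ubuntu-22.04
    steps:
      - name: Upload binary to conda
        env:
          # The caller decides which bot token flows in here; with
          # CONDA_PYTORCHBOT_TOKEN_TEST the package lands in the test channel.
          ANACONDA_API_TOKEN: ${{ secrets.conda-pytorchbot-token }}
        run: anaconda upload dist/*.tar.bz2
```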


@@ -90,7 +90,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
conda-py3_8-cuda11_8-build:
@@ -150,7 +150,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
conda-py3_8-cuda12_1-build:
@@ -210,7 +210,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
conda-py3_9-cpu-build:
@@ -267,7 +267,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
conda-py3_9-cuda11_8-build:
@@ -327,7 +327,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
conda-py3_9-cuda12_1-build:
@@ -387,7 +387,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
conda-py3_10-cpu-build:
@@ -444,7 +444,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
conda-py3_10-cuda11_8-build:
@@ -504,7 +504,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
conda-py3_10-cuda12_1-build:
@@ -564,7 +564,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
conda-py3_11-cpu-build:
@@ -621,7 +621,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
conda-py3_11-cuda11_8-build:
@@ -681,7 +681,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
conda-py3_11-cuda12_1-build:
@@ -741,5 +741,5 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml


@@ -93,7 +93,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
libtorch-cpu-shared-without-deps-cxx11-abi-build:
@@ -153,7 +153,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
libtorch-cpu-static-with-deps-cxx11-abi-build:
@@ -213,7 +213,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
libtorch-cpu-static-without-deps-cxx11-abi-build:
@@ -273,7 +273,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
libtorch-cuda11_8-shared-with-deps-cxx11-abi-build:
@@ -336,7 +336,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
libtorch-cuda11_8-shared-without-deps-cxx11-abi-build:
@@ -399,7 +399,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
libtorch-cuda11_8-static-with-deps-cxx11-abi-build:
@@ -462,7 +462,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
libtorch-cuda11_8-static-without-deps-cxx11-abi-build:
@@ -525,7 +525,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
libtorch-cuda12_1-shared-with-deps-cxx11-abi-build:
@@ -588,7 +588,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
libtorch-cuda12_1-shared-without-deps-cxx11-abi-build:
@@ -651,7 +651,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
libtorch-cuda12_1-static-with-deps-cxx11-abi-build:
@@ -714,7 +714,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
libtorch-cuda12_1-static-without-deps-cxx11-abi-build:
@@ -777,7 +777,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
libtorch-rocm5_5-shared-with-deps-cxx11-abi-build:
@@ -828,7 +828,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@@ -840,7 +839,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@@ -854,7 +853,7 @@ jobs:
run: |
echo "GPU_FLAG=--device=/dev/mem --device=/dev/kfd --device=/dev/dri --group-add video --group-add daemon" >> "${GITHUB_ENV}"
- name: Pull Docker image
uses: pytorch/test-infra/.github/actions/pull-docker-image@main
uses: pytorch/test-infra/.github/actions/pull-docker-image@release/2.1
with:
docker-image: pytorch/libtorch-cxx11-builder:rocm5.5
- name: Test Pytorch binary
@@ -881,7 +880,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
libtorch-rocm5_5-static-with-deps-cxx11-abi-build:
@@ -932,7 +931,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@@ -944,7 +942,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@@ -958,7 +956,7 @@ jobs:
run: |
echo "GPU_FLAG=--device=/dev/mem --device=/dev/kfd --device=/dev/dri --group-add video --group-add daemon" >> "${GITHUB_ENV}"
- name: Pull Docker image
uses: pytorch/test-infra/.github/actions/pull-docker-image@main
uses: pytorch/test-infra/.github/actions/pull-docker-image@release/2.1
with:
docker-image: pytorch/libtorch-cxx11-builder:rocm5.5
- name: Test Pytorch binary
@@ -985,7 +983,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
libtorch-rocm5_6-shared-with-deps-cxx11-abi-build:
@@ -1036,7 +1034,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@@ -1048,7 +1045,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@@ -1062,7 +1059,7 @@ jobs:
run: |
echo "GPU_FLAG=--device=/dev/mem --device=/dev/kfd --device=/dev/dri --group-add video --group-add daemon" >> "${GITHUB_ENV}"
- name: Pull Docker image
uses: pytorch/test-infra/.github/actions/pull-docker-image@main
uses: pytorch/test-infra/.github/actions/pull-docker-image@release/2.1
with:
docker-image: pytorch/libtorch-cxx11-builder:rocm5.6
- name: Test Pytorch binary
@@ -1089,7 +1086,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
libtorch-rocm5_6-static-with-deps-cxx11-abi-build:
@@ -1140,7 +1137,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@@ -1152,7 +1148,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@@ -1166,7 +1162,7 @@ jobs:
run: |
echo "GPU_FLAG=--device=/dev/mem --device=/dev/kfd --device=/dev/dri --group-add video --group-add daemon" >> "${GITHUB_ENV}"
- name: Pull Docker image
uses: pytorch/test-infra/.github/actions/pull-docker-image@main
uses: pytorch/test-infra/.github/actions/pull-docker-image@release/2.1
with:
docker-image: pytorch/libtorch-cxx11-builder:rocm5.6
- name: Test Pytorch binary
@@ -1193,5 +1189,5 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
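
Two checkout changes repeat across the ROCm jobs above: the explicit `ref:` on the PyTorch checkout is dropped (letting the action fall back to its default ref on the release branch), and the `pytorch/builder` checkout is pinned from `main` to `release/2.1`. The removed line used GitHub's `&&`/`||` expression idiom, which stands in for a ternary; for reference:

```yaml
- uses: malfet/checkout@silent-checkout
  with:
    # Reads as: if this run was triggered by a pull request, check out the PR
    # head SHA, otherwise the pushed SHA. `a && b || c` is GitHub's ternary
    # substitute (safe here because head.sha is never falsy when `a` is true).
    ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
```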


@@ -93,7 +93,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
libtorch-cpu-shared-without-deps-pre-cxx11-build:
@@ -153,7 +153,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
libtorch-cpu-static-with-deps-pre-cxx11-build:
@@ -213,7 +213,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
libtorch-cpu-static-without-deps-pre-cxx11-build:
@@ -273,7 +273,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
libtorch-cuda11_8-shared-with-deps-pre-cxx11-build:
@@ -336,7 +336,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
libtorch-cuda11_8-shared-without-deps-pre-cxx11-build:
@@ -399,7 +399,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
libtorch-cuda11_8-static-with-deps-pre-cxx11-build:
@@ -462,7 +462,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
libtorch-cuda11_8-static-without-deps-pre-cxx11-build:
@@ -525,7 +525,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
libtorch-cuda12_1-shared-with-deps-pre-cxx11-build:
@@ -588,7 +588,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
libtorch-cuda12_1-shared-without-deps-pre-cxx11-build:
@@ -651,7 +651,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
libtorch-cuda12_1-static-with-deps-pre-cxx11-build:
@@ -714,7 +714,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
libtorch-cuda12_1-static-without-deps-pre-cxx11-build:
@@ -777,7 +777,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
libtorch-rocm5_5-shared-with-deps-pre-cxx11-build:
@@ -828,7 +828,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@@ -840,7 +839,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@@ -854,7 +853,7 @@ jobs:
run: |
echo "GPU_FLAG=--device=/dev/mem --device=/dev/kfd --device=/dev/dri --group-add video --group-add daemon" >> "${GITHUB_ENV}"
- name: Pull Docker image
uses: pytorch/test-infra/.github/actions/pull-docker-image@main
uses: pytorch/test-infra/.github/actions/pull-docker-image@release/2.1
with:
docker-image: pytorch/manylinux-builder:rocm5.5
- name: Test Pytorch binary
@@ -881,7 +880,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
libtorch-rocm5_5-static-with-deps-pre-cxx11-build:
@@ -932,7 +931,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@@ -944,7 +942,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@@ -958,7 +956,7 @@ jobs:
run: |
echo "GPU_FLAG=--device=/dev/mem --device=/dev/kfd --device=/dev/dri --group-add video --group-add daemon" >> "${GITHUB_ENV}"
- name: Pull Docker image
uses: pytorch/test-infra/.github/actions/pull-docker-image@main
uses: pytorch/test-infra/.github/actions/pull-docker-image@release/2.1
with:
docker-image: pytorch/manylinux-builder:rocm5.5
- name: Test Pytorch binary
@@ -985,7 +983,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
libtorch-rocm5_6-shared-with-deps-pre-cxx11-build:
@@ -1036,7 +1034,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@@ -1048,7 +1045,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@@ -1062,7 +1059,7 @@ jobs:
run: |
echo "GPU_FLAG=--device=/dev/mem --device=/dev/kfd --device=/dev/dri --group-add video --group-add daemon" >> "${GITHUB_ENV}"
- name: Pull Docker image
uses: pytorch/test-infra/.github/actions/pull-docker-image@main
uses: pytorch/test-infra/.github/actions/pull-docker-image@release/2.1
with:
docker-image: pytorch/manylinux-builder:rocm5.6
- name: Test Pytorch binary
@@ -1089,7 +1086,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
libtorch-rocm5_6-static-with-deps-pre-cxx11-build:
@@ -1140,7 +1137,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@@ -1152,7 +1148,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@@ -1166,7 +1162,7 @@ jobs:
run: |
echo "GPU_FLAG=--device=/dev/mem --device=/dev/kfd --device=/dev/dri --group-add video --group-add daemon" >> "${GITHUB_ENV}"
- name: Pull Docker image
uses: pytorch/test-infra/.github/actions/pull-docker-image@main
uses: pytorch/test-infra/.github/actions/pull-docker-image@release/2.1
with:
docker-image: pytorch/manylinux-builder:rocm5.6
- name: Test Pytorch binary
@@ -1193,5 +1189,5 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
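
The ROCm test jobs above export `GPU_FLAG` so the test container can see the GPU device nodes, but the `Test Pytorch binary` step that consumes it is collapsed in this view. A hypothetical sketch of that consumption (the image tag mirrors the `Pull Docker image` step above; the mounted script is illustrative):

```yaml
- name: Test Pytorch binary
  run: |
    # GPU_FLAG intentionally word-splits into several --device/--group-add args
    # shellcheck disable=SC2086
    docker run ${GPU_FLAG} --rm -v "${GITHUB_WORKSPACE}:/work" \
      pytorch/manylinux-builder:rocm5.6 /work/run_tests.sh
```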


@@ -86,7 +86,7 @@ jobs:
DESIRED_PYTHON: "3.8"
build_name: manywheel-py3_8-cuda12_1-with-pypi-cudnn
build_environment: linux-binary-manywheel
PYTORCH_EXTRA_INSTALL_REQUIREMENTS: nvidia-cuda-nvrtc-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cuda-runtime-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cuda-cupti-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cudnn-cu12==8.9.2.26; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cublas-cu12==12.1.3.1; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cufft-cu12==11.0.2.54; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-curand-cu12==10.3.2.106; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusolver-cu12==11.4.5.107; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusparse-cu12==12.1.0.106; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nccl-cu12==2.18.1; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nvtx-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64'
PYTORCH_EXTRA_INSTALL_REQUIREMENTS: nvidia-cuda-nvrtc-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cuda-runtime-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cuda-cupti-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cudnn-cu12==8.9.2.26; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cublas-cu12==12.1.3.1; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cufft-cu12==11.0.2.54; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-curand-cu12==10.3.2.106; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusolver-cu12==11.4.5.107; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusparse-cu12==12.1.0.106; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nccl-cu12==2.18.1; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nvtx-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64' | triton==2.1.0; platform_system == 'Linux' and platform_machine == 'x86_64'
secrets:
github-token: ${{ secrets.GITHUB_TOKEN }}
manywheel-py3_8-cuda12_1-with-pypi-cudnn-test: # Testing
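
Besides the branch and secret swaps, the `-with-pypi-cudnn` wheels gain a pinned `triton==2.1.0` entry in `PYTORCH_EXTRA_INSTALL_REQUIREMENTS`. The value is one `|`-separated string of PEP 508 requirements, each guarded by `platform_system`/`platform_machine` markers so pip applies them only to linux/x86_64 installs. A sketch of how a build script could expand the string (the real consumer lives in pytorch/builder; this step is an assumption):

```yaml
- name: Render extra install requirements
  run: |
    # One PEP 508 requirement per line; the environment markers after each ';'
    # keep the CUDA and triton pins from leaking onto other platforms.
    echo "${PYTORCH_EXTRA_INSTALL_REQUIREMENTS}" | tr '|' '\n' > extra-requirements.txt
    cat extra-requirements.txt
```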


@@ -90,7 +90,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
manywheel-py3_8-cpu-cxx11-abi-build:
@@ -150,7 +150,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
manywheel-py3_8-cuda11_8-build:
@@ -210,7 +210,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
manywheel-py3_8-cuda12_1-with-pypi-cudnn-build:
@@ -229,7 +229,7 @@ jobs:
DESIRED_PYTHON: "3.8"
build_name: manywheel-py3_8-cuda12_1-with-pypi-cudnn
build_environment: linux-binary-manywheel
PYTORCH_EXTRA_INSTALL_REQUIREMENTS: nvidia-cuda-nvrtc-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cuda-runtime-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cuda-cupti-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cudnn-cu12==8.9.2.26; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cublas-cu12==12.1.3.1; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cufft-cu12==11.0.2.54; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-curand-cu12==10.3.2.106; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusolver-cu12==11.4.5.107; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusparse-cu12==12.1.0.106; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nccl-cu12==2.18.1; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nvtx-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64'
PYTORCH_EXTRA_INSTALL_REQUIREMENTS: nvidia-cuda-nvrtc-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cuda-runtime-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cuda-cupti-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cudnn-cu12==8.9.2.26; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cublas-cu12==12.1.3.1; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cufft-cu12==11.0.2.54; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-curand-cu12==10.3.2.106; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusolver-cu12==11.4.5.107; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusparse-cu12==12.1.0.106; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nccl-cu12==2.18.1; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nvtx-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64' | triton==2.1.0; platform_system == 'Linux' and platform_machine == 'x86_64'
secrets:
github-token: ${{ secrets.GITHUB_TOKEN }}
manywheel-py3_8-cuda12_1-with-pypi-cudnn-test: # Testing
@@ -271,7 +271,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
manywheel-py3_8-cuda12_1-build:
@@ -331,7 +331,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
manywheel-py3_8-rocm5_5-build:
@@ -380,7 +380,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@@ -392,7 +391,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@@ -406,7 +405,7 @@ jobs:
run: |
echo "GPU_FLAG=--device=/dev/mem --device=/dev/kfd --device=/dev/dri --group-add video --group-add daemon" >> "${GITHUB_ENV}"
- name: Pull Docker image
uses: pytorch/test-infra/.github/actions/pull-docker-image@main
uses: pytorch/test-infra/.github/actions/pull-docker-image@release/2.1
with:
docker-image: pytorch/manylinux-builder:rocm5.5
- name: Test Pytorch binary
@@ -432,7 +431,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
manywheel-py3_8-rocm5_6-build:
@@ -481,7 +480,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@@ -493,7 +491,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@@ -507,7 +505,7 @@ jobs:
run: |
echo "GPU_FLAG=--device=/dev/mem --device=/dev/kfd --device=/dev/dri --group-add video --group-add daemon" >> "${GITHUB_ENV}"
- name: Pull Docker image
uses: pytorch/test-infra/.github/actions/pull-docker-image@main
uses: pytorch/test-infra/.github/actions/pull-docker-image@release/2.1
with:
docker-image: pytorch/manylinux-builder:rocm5.6
- name: Test Pytorch binary
@@ -533,7 +531,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
manywheel-py3_9-cpu-build:
@@ -590,7 +588,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
manywheel-py3_9-cpu-cxx11-abi-build:
@@ -650,7 +648,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
manywheel-py3_9-cuda11_8-build:
@@ -710,7 +708,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
manywheel-py3_9-cuda12_1-with-pypi-cudnn-build:
@@ -729,7 +727,7 @@ jobs:
DESIRED_PYTHON: "3.9"
build_name: manywheel-py3_9-cuda12_1-with-pypi-cudnn
build_environment: linux-binary-manywheel
PYTORCH_EXTRA_INSTALL_REQUIREMENTS: nvidia-cuda-nvrtc-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cuda-runtime-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cuda-cupti-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cudnn-cu12==8.9.2.26; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cublas-cu12==12.1.3.1; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cufft-cu12==11.0.2.54; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-curand-cu12==10.3.2.106; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusolver-cu12==11.4.5.107; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusparse-cu12==12.1.0.106; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nccl-cu12==2.18.1; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nvtx-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64'
PYTORCH_EXTRA_INSTALL_REQUIREMENTS: nvidia-cuda-nvrtc-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cuda-runtime-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cuda-cupti-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cudnn-cu12==8.9.2.26; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cublas-cu12==12.1.3.1; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cufft-cu12==11.0.2.54; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-curand-cu12==10.3.2.106; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusolver-cu12==11.4.5.107; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusparse-cu12==12.1.0.106; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nccl-cu12==2.18.1; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nvtx-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64' | triton==2.1.0; platform_system == 'Linux' and platform_machine == 'x86_64'
secrets:
github-token: ${{ secrets.GITHUB_TOKEN }}
manywheel-py3_9-cuda12_1-with-pypi-cudnn-test: # Testing
@@ -771,7 +769,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
manywheel-py3_9-cuda12_1-build:
@@ -831,7 +829,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
manywheel-py3_9-rocm5_5-build:
@@ -880,7 +878,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@@ -892,7 +889,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@@ -906,7 +903,7 @@ jobs:
run: |
echo "GPU_FLAG=--device=/dev/mem --device=/dev/kfd --device=/dev/dri --group-add video --group-add daemon" >> "${GITHUB_ENV}"
- name: Pull Docker image
uses: pytorch/test-infra/.github/actions/pull-docker-image@main
uses: pytorch/test-infra/.github/actions/pull-docker-image@release/2.1
with:
docker-image: pytorch/manylinux-builder:rocm5.5
- name: Test Pytorch binary
@@ -932,7 +929,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
manywheel-py3_9-rocm5_6-build:
@@ -981,7 +978,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@@ -993,7 +989,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@@ -1007,7 +1003,7 @@ jobs:
run: |
echo "GPU_FLAG=--device=/dev/mem --device=/dev/kfd --device=/dev/dri --group-add video --group-add daemon" >> "${GITHUB_ENV}"
- name: Pull Docker image
uses: pytorch/test-infra/.github/actions/pull-docker-image@main
uses: pytorch/test-infra/.github/actions/pull-docker-image@release/2.1
with:
docker-image: pytorch/manylinux-builder:rocm5.6
- name: Test Pytorch binary
@@ -1033,7 +1029,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
manywheel-py3_10-cpu-build:
@@ -1090,7 +1086,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
manywheel-py3_10-cpu-cxx11-abi-build:
@@ -1150,7 +1146,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
manywheel-py3_10-cuda11_8-build:
@@ -1210,7 +1206,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
manywheel-py3_10-cuda12_1-with-pypi-cudnn-build:
@@ -1229,7 +1225,7 @@ jobs:
DESIRED_PYTHON: "3.10"
build_name: manywheel-py3_10-cuda12_1-with-pypi-cudnn
build_environment: linux-binary-manywheel
PYTORCH_EXTRA_INSTALL_REQUIREMENTS: nvidia-cuda-nvrtc-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cuda-runtime-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cuda-cupti-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cudnn-cu12==8.9.2.26; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cublas-cu12==12.1.3.1; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cufft-cu12==11.0.2.54; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-curand-cu12==10.3.2.106; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusolver-cu12==11.4.5.107; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusparse-cu12==12.1.0.106; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nccl-cu12==2.18.1; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nvtx-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64'
PYTORCH_EXTRA_INSTALL_REQUIREMENTS: nvidia-cuda-nvrtc-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cuda-runtime-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cuda-cupti-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cudnn-cu12==8.9.2.26; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cublas-cu12==12.1.3.1; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cufft-cu12==11.0.2.54; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-curand-cu12==10.3.2.106; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusolver-cu12==11.4.5.107; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusparse-cu12==12.1.0.106; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nccl-cu12==2.18.1; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nvtx-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64' | triton==2.1.0; platform_system == 'Linux' and platform_machine == 'x86_64'
secrets:
github-token: ${{ secrets.GITHUB_TOKEN }}
manywheel-py3_10-cuda12_1-with-pypi-cudnn-test: # Testing
@@ -1271,7 +1267,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
manywheel-py3_10-cuda12_1-build:
@@ -1331,7 +1327,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
manywheel-py3_10-rocm5_5-build:
@@ -1380,7 +1376,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@@ -1392,7 +1387,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@@ -1406,7 +1401,7 @@ jobs:
run: |
echo "GPU_FLAG=--device=/dev/mem --device=/dev/kfd --device=/dev/dri --group-add video --group-add daemon" >> "${GITHUB_ENV}"
- name: Pull Docker image
uses: pytorch/test-infra/.github/actions/pull-docker-image@main
uses: pytorch/test-infra/.github/actions/pull-docker-image@release/2.1
with:
docker-image: pytorch/manylinux-builder:rocm5.5
- name: Test Pytorch binary
@@ -1432,7 +1427,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
manywheel-py3_10-rocm5_6-build:
@@ -1481,7 +1476,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@@ -1493,7 +1487,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@@ -1507,7 +1501,7 @@ jobs:
run: |
echo "GPU_FLAG=--device=/dev/mem --device=/dev/kfd --device=/dev/dri --group-add video --group-add daemon" >> "${GITHUB_ENV}"
- name: Pull Docker image
uses: pytorch/test-infra/.github/actions/pull-docker-image@main
uses: pytorch/test-infra/.github/actions/pull-docker-image@release/2.1
with:
docker-image: pytorch/manylinux-builder:rocm5.6
- name: Test Pytorch binary
@@ -1533,7 +1527,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
manywheel-py3_11-cpu-build:
@@ -1590,7 +1584,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
manywheel-py3_11-cpu-cxx11-abi-build:
@@ -1650,7 +1644,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
manywheel-py3_11-cuda11_8-build:
@@ -1710,7 +1704,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
manywheel-py3_11-cuda12_1-with-pypi-cudnn-build:
@@ -1729,7 +1723,7 @@ jobs:
DESIRED_PYTHON: "3.11"
build_name: manywheel-py3_11-cuda12_1-with-pypi-cudnn
build_environment: linux-binary-manywheel
PYTORCH_EXTRA_INSTALL_REQUIREMENTS: nvidia-cuda-nvrtc-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cuda-runtime-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cuda-cupti-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cudnn-cu12==8.9.2.26; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cublas-cu12==12.1.3.1; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cufft-cu12==11.0.2.54; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-curand-cu12==10.3.2.106; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusolver-cu12==11.4.5.107; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusparse-cu12==12.1.0.106; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nccl-cu12==2.18.1; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nvtx-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64'
PYTORCH_EXTRA_INSTALL_REQUIREMENTS: nvidia-cuda-nvrtc-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cuda-runtime-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cuda-cupti-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cudnn-cu12==8.9.2.26; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cublas-cu12==12.1.3.1; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cufft-cu12==11.0.2.54; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-curand-cu12==10.3.2.106; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusolver-cu12==11.4.5.107; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusparse-cu12==12.1.0.106; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nccl-cu12==2.18.1; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nvtx-cu12==12.1.105; platform_system == 'Linux' and platform_machine == 'x86_64' | triton==2.1.0; platform_system == 'Linux' and platform_machine == 'x86_64'
secrets:
github-token: ${{ secrets.GITHUB_TOKEN }}
manywheel-py3_11-cuda12_1-with-pypi-cudnn-test: # Testing
@@ -1771,7 +1765,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
manywheel-py3_11-cuda12_1-build:
@@ -1831,7 +1825,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
manywheel-py3_11-rocm5_5-build:
@@ -1880,7 +1874,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@@ -1892,7 +1885,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
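
Here the pytorch/builder checkout is repinned from the moving `main` branch to `release/2.1`, so release binaries build against a fixed builder line. Outside of Actions, the pinned checkout amounts to a shallow branch clone with submodules; a rough sketch using the standard library (depth and URL form are illustrative):

```
# Rough local equivalent of the pinned builder checkout; illustrative only.
import subprocess

subprocess.run(
    [
        "git", "clone",
        "--branch", "release/2.1",   # the ref the workflow now pins
        "--depth", "1",              # shallow, CI-style checkout
        "--recurse-submodules",      # workflow uses submodules: recursive
        "https://github.com/pytorch/builder",
        "builder",                   # matches the workflow's path: builder
    ],
    check=True,
)
```
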
@ -1906,7 +1899,7 @@ jobs:
run: |
echo "GPU_FLAG=--device=/dev/mem --device=/dev/kfd --device=/dev/dri --group-add video --group-add daemon" >> "${GITHUB_ENV}"
- name: Pull Docker image
uses: pytorch/test-infra/.github/actions/pull-docker-image@main
uses: pytorch/test-infra/.github/actions/pull-docker-image@release/2.1
with:
docker-image: pytorch/manylinux-builder:rocm5.5
- name: Test Pytorch binary
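
The `echo ... >> "${GITHUB_ENV}"` step a few lines up exports `GPU_FLAG` to subsequent steps by appending a `NAME=value` line to the file the runner re-reads between steps. A sketch of the same write from Python, assuming it runs inside an Actions job where `GITHUB_ENV` is set:

```
# Sketch: append an env entry exactly as the shell step does.
import os

gpu_flag = (
    "--device=/dev/mem --device=/dev/kfd --device=/dev/dri "
    "--group-add video --group-add daemon"
)
with open(os.environ["GITHUB_ENV"], "a", encoding="utf-8") as env_file:
    env_file.write(f"GPU_FLAG={gpu_flag}\n")
```
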
@ -1932,7 +1925,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
manywheel-py3_11-rocm5_6-build:
@ -1981,7 +1974,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -1993,7 +1985,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -2007,7 +1999,7 @@ jobs:
run: |
echo "GPU_FLAG=--device=/dev/mem --device=/dev/kfd --device=/dev/dri --group-add video --group-add daemon" >> "${GITHUB_ENV}"
- name: Pull Docker image
uses: pytorch/test-infra/.github/actions/pull-docker-image@main
uses: pytorch/test-infra/.github/actions/pull-docker-image@release/2.1
with:
docker-image: pytorch/manylinux-builder:rocm5.6
- name: Test Pytorch binary
@ -2033,5 +2025,5 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml

(next file in diff)

@ -75,7 +75,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -87,7 +86,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -144,7 +143,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
conda-py3_9-cpu-build:
if: ${{ github.repository_owner == 'pytorch' }}
@ -187,7 +186,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -199,7 +197,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -256,7 +254,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
conda-py3_10-cpu-build:
if: ${{ github.repository_owner == 'pytorch' }}
@ -299,7 +297,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -311,7 +308,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -368,7 +365,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
conda-py3_11-cpu-build:
if: ${{ github.repository_owner == 'pytorch' }}
@ -411,7 +408,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -423,7 +419,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -480,5 +476,5 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml

(next file in diff)

@ -75,7 +75,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -87,7 +86,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -144,7 +143,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
wheel-py3_9-cpu-build:
if: ${{ github.repository_owner == 'pytorch' }}
@ -187,7 +186,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -199,7 +197,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -256,7 +254,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
wheel-py3_10-cpu-build:
if: ${{ github.repository_owner == 'pytorch' }}
@ -299,7 +297,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -311,7 +308,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -368,7 +365,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
wheel-py3_11-cpu-build:
if: ${{ github.repository_owner == 'pytorch' }}
@ -411,7 +408,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -423,7 +419,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -480,5 +476,5 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml

(next file in diff)

@ -73,7 +73,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -85,7 +84,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -142,7 +141,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
conda-py3_9-cpu-build:
if: ${{ github.repository_owner == 'pytorch' }}
@ -185,7 +184,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -197,7 +195,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -254,7 +252,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
conda-py3_10-cpu-build:
if: ${{ github.repository_owner == 'pytorch' }}
@ -297,7 +295,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -309,7 +306,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -366,7 +363,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
conda-py3_11-cpu-build:
if: ${{ github.repository_owner == 'pytorch' }}
@ -409,7 +406,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -421,7 +417,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -478,5 +474,5 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml

(next file in diff)

@ -77,7 +77,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -89,7 +88,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -147,7 +146,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
libtorch-cpu-shared-without-deps-cxx11-abi-build:
if: ${{ github.repository_owner == 'pytorch' }}
@ -194,7 +193,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -206,7 +204,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -264,7 +262,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
libtorch-cpu-static-with-deps-cxx11-abi-build:
if: ${{ github.repository_owner == 'pytorch' }}
@ -311,7 +309,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -323,7 +320,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -381,7 +378,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
libtorch-cpu-static-without-deps-cxx11-abi-build:
if: ${{ github.repository_owner == 'pytorch' }}
@ -428,7 +425,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -440,7 +436,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -498,5 +494,5 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml

(next file in diff)

@ -73,7 +73,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -85,7 +84,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -142,7 +141,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
wheel-py3_9-cpu-build:
if: ${{ github.repository_owner == 'pytorch' }}
@ -185,7 +184,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -197,7 +195,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -254,7 +252,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
wheel-py3_10-cpu-build:
if: ${{ github.repository_owner == 'pytorch' }}
@ -297,7 +295,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -309,7 +306,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -366,7 +363,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
wheel-py3_11-cpu-build:
if: ${{ github.repository_owner == 'pytorch' }}
@ -409,7 +406,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -421,7 +417,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -478,5 +474,5 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml

(next file in diff)

@ -92,7 +92,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -104,7 +103,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -208,7 +207,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -220,7 +218,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -268,7 +266,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
conda-py3_8-cuda11_8-build:
if: ${{ github.repository_owner == 'pytorch' }}
@ -331,7 +329,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -343,7 +340,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -448,7 +445,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -460,7 +456,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -509,7 +505,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
conda-py3_8-cuda12_1-build:
if: ${{ github.repository_owner == 'pytorch' }}
@ -572,7 +568,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -584,7 +579,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -689,7 +684,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -701,7 +695,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -750,7 +744,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
conda-py3_9-cpu-build:
if: ${{ github.repository_owner == 'pytorch' }}
@ -812,7 +806,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -824,7 +817,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -928,7 +921,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -940,7 +932,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -988,7 +980,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
conda-py3_9-cuda11_8-build:
if: ${{ github.repository_owner == 'pytorch' }}
@ -1051,7 +1043,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -1063,7 +1054,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -1168,7 +1159,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -1180,7 +1170,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -1229,7 +1219,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
conda-py3_9-cuda12_1-build:
if: ${{ github.repository_owner == 'pytorch' }}
@ -1292,7 +1282,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -1304,7 +1293,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -1409,7 +1398,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -1421,7 +1409,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -1470,7 +1458,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
conda-py3_10-cpu-build:
if: ${{ github.repository_owner == 'pytorch' }}
@ -1532,7 +1520,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -1544,7 +1531,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -1648,7 +1635,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -1660,7 +1646,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -1708,7 +1694,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
conda-py3_10-cuda11_8-build:
if: ${{ github.repository_owner == 'pytorch' }}
@ -1771,7 +1757,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -1783,7 +1768,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -1888,7 +1873,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -1900,7 +1884,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -1949,7 +1933,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
conda-py3_10-cuda12_1-build:
if: ${{ github.repository_owner == 'pytorch' }}
@ -2012,7 +1996,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -2024,7 +2007,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -2129,7 +2112,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -2141,7 +2123,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -2190,7 +2172,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
conda-py3_11-cpu-build:
if: ${{ github.repository_owner == 'pytorch' }}
@ -2252,7 +2234,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -2264,7 +2245,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -2368,7 +2349,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -2380,7 +2360,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -2428,7 +2408,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
conda-py3_11-cuda11_8-build:
if: ${{ github.repository_owner == 'pytorch' }}
@ -2491,7 +2471,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -2503,7 +2482,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -2608,7 +2587,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -2620,7 +2598,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -2669,7 +2647,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
conda-py3_11-cuda12_1-build:
if: ${{ github.repository_owner == 'pytorch' }}
@ -2732,7 +2710,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -2744,7 +2721,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -2849,7 +2826,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -2861,7 +2837,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -2910,5 +2886,5 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml

(next file in diff)

@ -89,7 +89,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -101,7 +100,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -209,7 +208,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -221,7 +219,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder

(next file in diff)

@ -96,7 +96,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -108,7 +107,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -216,7 +215,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -228,7 +226,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -280,7 +278,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
libtorch-cpu-shared-without-deps-debug-build:
if: ${{ github.repository_owner == 'pytorch' }}
@ -346,7 +344,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -358,7 +355,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -466,7 +463,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -478,7 +474,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -530,7 +526,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
libtorch-cpu-static-with-deps-debug-build:
if: ${{ github.repository_owner == 'pytorch' }}
@ -596,7 +592,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -608,7 +603,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -716,7 +711,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -728,7 +722,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -780,7 +774,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
libtorch-cpu-static-without-deps-debug-build:
if: ${{ github.repository_owner == 'pytorch' }}
@ -846,7 +840,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -858,7 +851,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -966,7 +959,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -978,7 +970,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -1030,7 +1022,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
libtorch-cuda11_8-shared-with-deps-debug-build:
if: ${{ github.repository_owner == 'pytorch' }}
@ -1097,7 +1089,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -1109,7 +1100,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -1218,7 +1209,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -1230,7 +1220,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -1283,7 +1273,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
libtorch-cuda11_8-shared-without-deps-debug-build:
if: ${{ github.repository_owner == 'pytorch' }}
@ -1350,7 +1340,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -1362,7 +1351,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -1471,7 +1460,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -1483,7 +1471,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -1536,7 +1524,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
libtorch-cuda11_8-static-with-deps-debug-build:
if: ${{ github.repository_owner == 'pytorch' }}
@ -1603,7 +1591,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -1615,7 +1602,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -1724,7 +1711,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -1736,7 +1722,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -1789,7 +1775,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
libtorch-cuda11_8-static-without-deps-debug-build:
if: ${{ github.repository_owner == 'pytorch' }}
@ -1856,7 +1842,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -1868,7 +1853,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -1977,7 +1962,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -1989,7 +1973,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -2042,7 +2026,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
libtorch-cuda12_1-shared-with-deps-debug-build:
if: ${{ github.repository_owner == 'pytorch' }}
@ -2109,7 +2093,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -2121,7 +2104,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -2230,7 +2213,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -2242,7 +2224,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -2295,7 +2277,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
libtorch-cuda12_1-shared-without-deps-debug-build:
if: ${{ github.repository_owner == 'pytorch' }}
@ -2362,7 +2344,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -2374,7 +2355,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -2483,7 +2464,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -2495,7 +2475,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -2548,7 +2528,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
libtorch-cuda12_1-static-with-deps-debug-build:
if: ${{ github.repository_owner == 'pytorch' }}
@ -2615,7 +2595,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -2627,7 +2606,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -2736,7 +2715,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -2748,7 +2726,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -2801,7 +2779,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
libtorch-cuda12_1-static-without-deps-debug-build:
if: ${{ github.repository_owner == 'pytorch' }}
@ -2868,7 +2846,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -2880,7 +2857,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -2989,7 +2966,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -3001,7 +2977,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -3054,5 +3030,5 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml

(next file in diff)

@ -89,7 +89,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -101,7 +100,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -209,7 +208,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -221,7 +219,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder

(next file in diff)

@ -96,7 +96,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -108,7 +107,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -216,7 +215,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -228,7 +226,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -280,7 +278,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
libtorch-cpu-shared-without-deps-release-build:
if: ${{ github.repository_owner == 'pytorch' }}
@ -346,7 +344,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -358,7 +355,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -466,7 +463,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -478,7 +474,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -530,7 +526,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
libtorch-cpu-static-with-deps-release-build:
if: ${{ github.repository_owner == 'pytorch' }}
@ -596,7 +592,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -608,7 +603,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -716,7 +711,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -728,7 +722,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -780,7 +774,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
libtorch-cpu-static-without-deps-release-build:
if: ${{ github.repository_owner == 'pytorch' }}
@ -846,7 +840,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -858,7 +851,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -966,7 +959,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -978,7 +970,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -1030,7 +1022,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
libtorch-cuda11_8-shared-with-deps-release-build:
if: ${{ github.repository_owner == 'pytorch' }}
@ -1097,7 +1089,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -1109,7 +1100,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -1218,7 +1209,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -1230,7 +1220,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -1283,7 +1273,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
libtorch-cuda11_8-shared-without-deps-release-build:
if: ${{ github.repository_owner == 'pytorch' }}
@ -1350,7 +1340,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -1362,7 +1351,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -1471,7 +1460,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -1483,7 +1471,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -1536,7 +1524,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
libtorch-cuda11_8-static-with-deps-release-build:
if: ${{ github.repository_owner == 'pytorch' }}
@ -1603,7 +1591,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -1615,7 +1602,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -1724,7 +1711,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -1736,7 +1722,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -1789,7 +1775,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
libtorch-cuda11_8-static-without-deps-release-build:
if: ${{ github.repository_owner == 'pytorch' }}
@ -1856,7 +1842,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -1868,7 +1853,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -1977,7 +1962,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -1989,7 +1973,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -2042,7 +2026,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
libtorch-cuda12_1-shared-with-deps-release-build:
if: ${{ github.repository_owner == 'pytorch' }}
@ -2109,7 +2093,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -2121,7 +2104,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -2230,7 +2213,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -2242,7 +2224,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -2295,7 +2277,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
libtorch-cuda12_1-shared-without-deps-release-build:
if: ${{ github.repository_owner == 'pytorch' }}
@ -2362,7 +2344,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -2374,7 +2355,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -2483,7 +2464,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -2495,7 +2475,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -2548,7 +2528,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
libtorch-cuda12_1-static-with-deps-release-build:
if: ${{ github.repository_owner == 'pytorch' }}
@ -2615,7 +2595,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -2627,7 +2606,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -2736,7 +2715,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -2748,7 +2726,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -2801,7 +2779,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
libtorch-cuda12_1-static-without-deps-release-build:
if: ${{ github.repository_owner == 'pytorch' }}
@ -2868,7 +2846,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -2880,7 +2857,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -2989,7 +2966,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -3001,7 +2977,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -3054,5 +3030,5 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml

View File

@ -92,7 +92,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -104,7 +103,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -208,7 +207,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -220,7 +218,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -268,7 +266,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
wheel-py3_8-cuda11_8-build:
if: ${{ github.repository_owner == 'pytorch' }}
@ -331,7 +329,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -343,7 +340,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -448,7 +445,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -460,7 +456,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -509,7 +505,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
wheel-py3_8-cuda12_1-build:
if: ${{ github.repository_owner == 'pytorch' }}
@ -572,7 +568,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -584,7 +579,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -689,7 +684,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -701,7 +695,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -750,7 +744,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
wheel-py3_9-cpu-build:
if: ${{ github.repository_owner == 'pytorch' }}
@ -812,7 +806,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -824,7 +817,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -928,7 +921,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -940,7 +932,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -988,7 +980,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
wheel-py3_9-cuda11_8-build:
if: ${{ github.repository_owner == 'pytorch' }}
@ -1051,7 +1043,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -1063,7 +1054,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -1168,7 +1159,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -1180,7 +1170,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -1229,7 +1219,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
wheel-py3_9-cuda12_1-build:
if: ${{ github.repository_owner == 'pytorch' }}
@ -1292,7 +1282,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -1304,7 +1293,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -1409,7 +1398,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -1421,7 +1409,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -1470,7 +1458,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
wheel-py3_10-cpu-build:
if: ${{ github.repository_owner == 'pytorch' }}
@ -1532,7 +1520,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -1544,7 +1531,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -1648,7 +1635,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -1660,7 +1646,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -1708,7 +1694,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
wheel-py3_10-cuda11_8-build:
if: ${{ github.repository_owner == 'pytorch' }}
@ -1771,7 +1757,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -1783,7 +1768,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -1888,7 +1873,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -1900,7 +1884,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -1949,7 +1933,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
wheel-py3_10-cuda12_1-build:
if: ${{ github.repository_owner == 'pytorch' }}
@ -2012,7 +1996,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -2024,7 +2007,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -2129,7 +2112,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -2141,7 +2123,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -2190,7 +2172,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
wheel-py3_11-cpu-build:
if: ${{ github.repository_owner == 'pytorch' }}
@ -2252,7 +2234,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -2264,7 +2245,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -2368,7 +2349,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -2380,7 +2360,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -2428,7 +2408,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
wheel-py3_11-cuda11_8-build:
if: ${{ github.repository_owner == 'pytorch' }}
@ -2491,7 +2471,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -2503,7 +2482,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -2608,7 +2587,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -2620,7 +2598,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -2669,7 +2647,7 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml
wheel-py3_11-cuda12_1-build:
if: ${{ github.repository_owner == 'pytorch' }}
@ -2732,7 +2710,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -2744,7 +2721,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -2849,7 +2826,6 @@ jobs:
- name: Checkout PyTorch
uses: malfet/checkout@silent-checkout
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
quiet-checkout: true
@ -2861,7 +2837,7 @@ jobs:
- name: Checkout pytorch/builder
uses: malfet/checkout@silent-checkout
with:
ref: main
ref: release/2.1
submodules: recursive
repository: pytorch/builder
path: builder
@ -2910,5 +2886,5 @@ jobs:
github-token: ${{ secrets.GITHUB_TOKEN }}
aws-pytorch-uploader-access-key-id: ${{ secrets.AWS_PYTORCH_UPLOADER_ACCESS_KEY_ID }}
aws-pytorch-uploader-secret-access-key: ${{ secrets.AWS_PYTORCH_UPLOADER_SECRET_ACCESS_KEY }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN }}
conda-pytorchbot-token: ${{ secrets.CONDA_PYTORCHBOT_TOKEN_TEST }}
uses: ./.github/workflows/_binary-upload.yml

View File

@ -26,7 +26,7 @@ jobs:
runs-on: ubuntu-latest
steps:
- name: Run BC Lint Action
uses: pytorch/test-infra/.github/actions/bc-lint@main
uses: pytorch/test-infra/.github/actions/bc-lint@release/2.1
with:
repo: ${{ github.event.pull_request.head.repo.full_name }}
base_sha: ${{ github.event.pull_request.base.sha }}

View File

@ -15,7 +15,7 @@ on:
# When any other step fails, its job will be retried once by retryBot.
jobs:
lintrunner:
uses: pytorch/test-infra/.github/workflows/linux_job.yml@main
uses: pytorch/test-infra/.github/workflows/linux_job.yml@release/2.1
with:
runner: linux.2xlarge
docker-image: pytorch-linux-focal-linter
@ -62,7 +62,7 @@ jobs:
exit $RC
quick-checks:
uses: pytorch/test-infra/.github/workflows/linux_job.yml@main
uses: pytorch/test-infra/.github/workflows/linux_job.yml@release/2.1
with:
runner: linux.2xlarge
docker-image: pytorch-linux-focal-linter
@ -103,7 +103,7 @@ jobs:
if: github.event_name == 'pull_request' && !contains(github.event.pull_request.labels.*.name, 'skip-pr-sanity-checks')
steps:
- name: Checkout PyTorch
uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.1
with:
submodules: false
fetch-depth: -1
@ -116,7 +116,7 @@ jobs:
bash .github/scripts/pr-sanity-check.sh
workflow-checks:
uses: pytorch/test-infra/.github/workflows/linux_job.yml@main
uses: pytorch/test-infra/.github/workflows/linux_job.yml@release/2.1
with:
runner: linux.2xlarge
docker-image: pytorch-linux-focal-linter
@ -151,7 +151,7 @@ jobs:
exit $RC
toc:
uses: pytorch/test-infra/.github/workflows/linux_job.yml@main
uses: pytorch/test-infra/.github/workflows/linux_job.yml@release/2.1
with:
runner: linux.2xlarge
docker-image: pytorch-linux-focal-linter
@ -189,7 +189,7 @@ jobs:
test-tools:
name: Test tools
if: ${{ github.repository == 'pytorch/pytorch' }}
uses: pytorch/test-infra/.github/workflows/linux_job.yml@main
uses: pytorch/test-infra/.github/workflows/linux_job.yml@release/2.1
with:
runner: linux.2xlarge
docker-image: pytorch-linux-focal-linter
@ -210,7 +210,7 @@ jobs:
runs-on: linux.20_04.4x
steps:
- name: Checkout PyTorch
uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.1
with:
submodules: false
fetch-depth: 1
@ -240,7 +240,7 @@ jobs:
# [see note: pytorch repo ref]
# deep clone (fetch-depth 0) required, to allow us to use git log
- name: Checkout PyTorch
uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.1
with:
submodules: false
fetch-depth: 1

View File

@ -21,7 +21,7 @@ jobs:
environment: upload-stats
steps:
- name: Checkout PyTorch
uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.1
with:
fetch-depth: 1
submodules: false

View File

@ -112,30 +112,38 @@ jobs:
cuda-version: "11.8"
test-matrix: ${{ needs.win-vs2019-cuda11_8-py3-build.outputs.test-matrix }}
ios-12-5-1-x86-64-coreml:
name: ios-12-5-1-x86-64-coreml
# TODO: Figure out how to migrate this job to M1 runner
ios-build-test:
name: ios-build-test
if: github.event_name != 'schedule' || github.event.schedule == '45 0,8,16 * * 1-5' || github.event.schedule == '45 4 * * 0,6'
uses: ./.github/workflows/_ios-build-test.yml
with:
build-environment: ios-12-5-1-x86-64-coreml
ios-platform: SIMULATOR
ios-arch: x86_64
build-environment: ios-build-test
sync-tag: ios-build-test
test-matrix: |
{ include: [
{ config: "default", shard: 1, num_shards: 1, runner: "macos-12" },
]}
ios-12-5-1-arm64-custom-ops:
name: ios-12-5-1-arm64-custom-ops
if: github.event_name != 'schedule' || github.event.schedule == '45 0,8,16 * * 1-5' || github.event.schedule == '45 4 * * 0,6'
uses: ./.github/workflows/_ios-build-test.yml
with:
build-environment: ios-12-5-1-arm64-custom-ops
ios-platform: OS
ios-arch: arm64
test-matrix: |
{ include: [
{ config: "default", shard: 1, num_shards: 1, runner: "macos-12" },
{ config: "default",
shard: 1,
num_shards: 1,
runner: "macos-12",
ios_platform: "SIMULATOR",
ios_arch: "x86_64",
use_lite_interpreter: 1,
use_metal: 0,
use_coreml: 1,
use_custom_op_list: ""
},
{ config: "default",
shard: 1,
num_shards: 1,
runner: "macos-12",
ios_platform: "OS",
ios_arch: "arm64",
use_lite_interpreter: 1,
use_metal: 1,
use_coreml: 1,
use_custom_op_list: "mobilenetv2.yaml"
}
]}
buck-build-test:

View File

@ -14,7 +14,7 @@ jobs:
if: ${{ github.repository == 'pytorch/pytorch' }}
steps:
- name: Checkout PyTorch
uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.1
with:
fetch-depth: 1
submodules: false

View File

@ -44,7 +44,7 @@ jobs:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
uses: pytorch/test-infra/.github/actions/upload-alerts@main
uses: pytorch/test-infra/.github/actions/upload-alerts@release/2.1
with:
alerts: '${{ steps.alert_creation_step.outputs.script-output }}'
organization: "pytorch"

View File

@ -37,7 +37,7 @@ jobs:
run: echo "${TRIGGERING_WORKFLOW}"
- name: Checkout PyTorch
uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.1
- uses: actions/setup-python@v4
with:

View File

@ -29,7 +29,7 @@ jobs:
name: Upload dynamo performance stats for ${{ github.event.workflow_run.id }}, attempt ${{ github.event.workflow_run.run_attempt }}
steps:
- name: Checkout PyTorch
uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
uses: pytorch/pytorch/.github/actions/checkout-pytorch@release/2.1
with:
submodules: false
fetch-depth: 1

View File

@ -73,8 +73,8 @@ ARG TARGETPLATFORM
# On arm64 we can only install wheel packages.
RUN case ${TARGETPLATFORM} in \
"linux/arm64") pip install --extra-index-url https://download.pytorch.org/whl/cpu/ torch torchvision torchaudio torchtext ;; \
*) /opt/conda/bin/conda install -c "${INSTALL_CHANNEL}" -c "${CUDA_CHANNEL}" -y "python=${PYTHON_VERSION}" pytorch torchvision torchaudio torchtext "pytorch-cuda=$(echo $CUDA_VERSION | cut -d'.' -f 1-2)" ;; \
"linux/arm64") pip install --extra-index-url https://download.pytorch.org/whl/cpu/ torch torchvision torchaudio ;; \
*) /opt/conda/bin/conda install -c "${INSTALL_CHANNEL}" -c "${CUDA_CHANNEL}" -y "python=${PYTHON_VERSION}" pytorch torchvision torchaudio "pytorch-cuda=$(echo $CUDA_VERSION | cut -d'.' -f 1-2)" ;; \
esac && \
/opt/conda/bin/conda clean -ya
RUN /opt/conda/bin/pip install torchelastic

View File

@ -125,6 +125,7 @@ file(GLOB native_ao_sparse_h
"native/ao_sparse/quantized/cpu/*.h")
file(GLOB native_quantized_h "native/quantized/*.h" "native/quantized/cpu/*.h" "native/quantized/cudnn/*.h")
file(GLOB native_cpu_h "native/cpu/*.h")
file(GLOB native_utils_h "native/utils/*.h")
file(GLOB native_cuda_cu "native/cuda/*.cu")
file(GLOB native_cuda_cpp "native/cuda/*.cpp")
@ -540,7 +541,7 @@ install(FILES "${CMAKE_CURRENT_BINARY_DIR}/cmake-exports/ATenConfig.cmake"
set(INSTALL_HEADERS ${base_h} ${ATen_CORE_HEADERS})
if(NOT INTERN_BUILD_MOBILE)
list(APPEND INSTALL_HEADERS ${native_h} ${native_cpu_h} ${native_ao_sparse_h} ${native_quantized_h} ${cuda_h} ${native_cuda_h} ${native_hip_h} ${cudnn_h} ${hip_h} ${mps_h} ${native_mps_h} ${miopen_h})
list(APPEND INSTALL_HEADERS ${native_h} ${native_cpu_h} ${native_ao_sparse_h} ${native_quantized_h} ${cuda_h} ${native_cuda_h} ${native_hip_h} ${cudnn_h} ${hip_h} ${mps_h} ${native_mps_h} ${native_utils_h} ${miopen_h})
# Metal
if(USE_PYTORCH_METAL_EXPORT)
# Add files needed from exporting metal models(optimized_for_mobile)

View File

@ -371,6 +371,22 @@ inline void deprecated_AT_DISPATCH_ALL_TYPES_AND_HALF_AND_COMPLEX() {}
AT_DISPATCH_CASE_FLOATING_AND_COMPLEX_TYPES_AND3( \
SCALARTYPE1, SCALARTYPE2, SCALARTYPE3, __VA_ARGS__))
#define AT_DISPATCH_CASE_FLOATING_AND_COMPLEX_TYPES_AND4( \
SCALARTYPE1, SCALARTYPE2, SCALARTYPE3, SCALARTYPE4, ...) \
AT_DISPATCH_CASE_FLOATING_AND_COMPLEX_TYPES(__VA_ARGS__) \
AT_DISPATCH_CASE(SCALARTYPE1, __VA_ARGS__) \
AT_DISPATCH_CASE(SCALARTYPE2, __VA_ARGS__) \
AT_DISPATCH_CASE(SCALARTYPE3, __VA_ARGS__) \
AT_DISPATCH_CASE(SCALARTYPE4, __VA_ARGS__)
#define AT_DISPATCH_FLOATING_AND_COMPLEX_TYPES_AND4( \
SCALARTYPE1, SCALARTYPE2, SCALARTYPE3, SCALARTYPE4, TYPE, NAME, ...) \
AT_DISPATCH_SWITCH( \
TYPE, \
NAME, \
AT_DISPATCH_CASE_FLOATING_AND_COMPLEX_TYPES_AND4( \
SCALARTYPE1, SCALARTYPE2, SCALARTYPE3, SCALARTYPE4, __VA_ARGS__))
#define AT_DISPATCH_CASE_INTEGRAL_TYPES(...) \
AT_DISPATCH_CASE(at::ScalarType::Byte, __VA_ARGS__) \
AT_DISPATCH_CASE(at::ScalarType::Char, __VA_ARGS__) \

View File

@ -389,8 +389,9 @@ static inline bool mkldnn_conv_use_channels_last(const at::Tensor& input, const
(input_memory_format == at::MemoryFormat::ChannelsLast) ||
(weight_memory_format == at::MemoryFormat::ChannelsLast);
// TODO: add channels last 3d support
bool can_use_mkldnn_channels_last_3d = false;
bool can_use_mkldnn_channels_last_3d =
(input_memory_format == at::MemoryFormat::ChannelsLast3d) ||
(weight_memory_format == at::MemoryFormat::ChannelsLast3d);
return can_use_mkldnn_channels_last_2d || can_use_mkldnn_channels_last_3d;
}
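
This hunk extends the oneDNN (mkldnn) channels-last check to 3-D tensors. A minimal sketch of the user-visible effect, assuming a CPU build with oneDNN enabled (the layout propagation shown is the expected behavior, not a guarantee):

import torch

# Conv3d inputs/weights in channels-last-3d layout can now be routed to the
# oneDNN channels-last kernels instead of being converted to contiguous first.
conv = torch.nn.Conv3d(8, 16, kernel_size=3)
conv = conv.to(memory_format=torch.channels_last_3d)
x = torch.randn(2, 8, 12, 24, 24).to(memory_format=torch.channels_last_3d)
out = conv(x)
print(out.is_contiguous(memory_format=torch.channels_last_3d))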

View File

@ -508,9 +508,6 @@ struct ConvParams {
if (transposed && is_output_padding_big()) {
return false;
}
if (transposed && groups > 1 && at::symint::size<T>(input, 1) == groups) {
return false;
}
if (input.device().is_cpu() && input.scalar_type() == kBFloat16 && mkldnn_bf16_device_check()) {
return true;
}
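
Dropping this early return removes the blanket exclusion of transposed convolutions whose input channels equal ``groups`` from the mkldnn path. A hypothetical repro of the newly eligible case (CPU with bf16 oneDNN support assumed):

import torch

# A depthwise-style transposed convolution: channels == groups.
conv = torch.nn.ConvTranspose2d(8, 8, kernel_size=3, groups=8).to(torch.bfloat16)
x = torch.randn(1, 8, 16, 16, dtype=torch.bfloat16)
out = conv(x)  # previously forced off the oneDNN path by the removed check
print(out.shape)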

View File

@ -253,7 +253,9 @@ static Tensor & copy_impl(Tensor & self, const Tensor & src, bool non_blocking)
self.storage_offset() == src.storage_offset() &&
self.strides().equals(src.strides()) &&
self.sizes().equals(src.sizes()) &&
self.scalar_type() == src.scalar_type()
self.scalar_type() == src.scalar_type() &&
self.is_conj() == src.is_conj() &&
self.is_neg() == src.is_neg()
);
if (is_same_data) {
return self;
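
The added ``is_conj``/``is_neg`` comparisons stop ``copy_`` from treating a tensor and its lazy conjugate (or negative) view as already-identical data. A minimal sketch of the behavior the check fixes:

import torch

a = torch.tensor([1 + 2j])
b = a.conj()   # lazy view: same storage and strides, conjugate bit set
a.copy_(b)     # must materialize the conjugation rather than short-circuit
print(a)       # tensor([1.-2.j])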

View File

@ -727,7 +727,7 @@ Tensor _mkldnn_convolution_transpose(
if (bias.defined()) {
const ideep::tensor b = itensor_from_tensor(bias);
ideep::convolution_transpose_forward::compute(
ideep::convolution_transpose_forward::compute_v3(
x,
w,
b,
@ -738,9 +738,10 @@ Tensor _mkldnn_convolution_transpose(
padding_r(padding_expanded, output_padding_expanded),
dilation.vec(),
groups,
use_channels_last,
op_attr);
} else {
ideep::convolution_transpose_forward::compute(
ideep::convolution_transpose_forward::compute_v3(
x,
w,
output_sizes,
@ -750,6 +751,7 @@ Tensor _mkldnn_convolution_transpose(
padding_r(padding_expanded, output_padding_expanded),
dilation.vec(),
groups,
use_channels_last,
op_attr);
}
if (input.is_mkldnn()) {
@ -988,7 +990,7 @@ Tensor mkldnn_convolution_transpose_backward_input(
grad_input.resize_(input_size, memory_format);
grad_x = itensor_from_tensor(grad_input);
}
ideep::convolution_transpose_backward_data::compute(
ideep::convolution_transpose_backward_data::compute_v3(
grad_y,
w,
input_size.vec(),
@ -997,7 +999,8 @@ Tensor mkldnn_convolution_transpose_backward_input(
padding.vec(),
padding_r(padding, output_padding),
dilation.vec(),
groups);
groups,
is_channels_last);
if (grad_output.is_mkldnn()) {
return MKLDNNTensor(grad_x, grad_output.options());
@ -1024,7 +1027,7 @@ std::tuple<Tensor,Tensor> mkldnn_convolution_transpose_backward_weights(
ideep::tensor grad_w, grad_b;
if (bias_defined) {
ideep::convolution_transpose_backward_weights::compute(
ideep::convolution_transpose_backward_weights::compute_v3(
x,
grad_y,
weight_size.vec(),
@ -1034,9 +1037,10 @@ std::tuple<Tensor,Tensor> mkldnn_convolution_transpose_backward_weights(
padding.vec(),
padding_r(padding, output_padding),
dilation.vec(),
groups);
groups,
is_channels_last);
} else {
ideep::convolution_transpose_backward_weights::compute(
ideep::convolution_transpose_backward_weights::compute_v3(
x,
grad_y,
weight_size.vec(),
@ -1045,7 +1049,8 @@ std::tuple<Tensor,Tensor> mkldnn_convolution_transpose_backward_weights(
padding.vec(),
padding_r(padding, output_padding),
dilation.vec(),
groups);
groups,
is_channels_last);
}
if (!is_channels_last) {
@ -1061,18 +1066,21 @@ std::tuple<Tensor,Tensor> mkldnn_convolution_transpose_backward_weights(
}
std::tuple<Tensor, Tensor, Tensor> mkldnn_convolution_transpose_backward(
const Tensor& input, const Tensor& grad_output_t, const Tensor& weight,
const Tensor& input_t, const Tensor& grad_output_t, const Tensor& weight_t,
IntArrayRef padding, IntArrayRef output_padding, IntArrayRef stride, IntArrayRef dilation, int64_t groups,
std::array<bool,3> output_mask)
{
bool is_channels_last = mkldnn_conv_use_channels_last(input, weight);
auto memory_format = mkldnn_convolution_memory_format(input.ndimension(), is_channels_last);
bool is_channels_last = mkldnn_conv_use_channels_last(input_t, weight_t);
auto memory_format = mkldnn_convolution_memory_format(input_t.ndimension(), is_channels_last);
Tensor grad_output = grad_output_t.is_mkldnn() ? grad_output_t : grad_output_t.contiguous(memory_format);
auto input = input_t.is_mkldnn() ? input_t : input_t.contiguous(memory_format);
auto weight = weight_t.is_mkldnn() ? weight_t : weight_t.contiguous(memory_format);
int64_t dim = input.ndimension() - 2;
const auto padding_expanded = expand_param_if_needed(padding, "padding", dim);
const auto stride_expanded = expand_param_if_needed(stride, "stride", dim);
const auto dilation_expanded = expand_param_if_needed(dilation, "dilation", dim);
const auto output_padding_expanded = expand_param_if_needed(output_padding, "output_padding", dim);
Tensor grad_input, grad_weight, grad_bias;
if (output_mask[0]) {
grad_input = mkldnn_convolution_transpose_backward_input(

View File

@ -293,7 +293,8 @@ at::Tensor& mps_copy_(at::Tensor& dst, const at::Tensor& src, bool non_blocking)
dst.resize_as_(src);
}
TORCH_CHECK(dst.dim() >= src.dim());
TORCH_CHECK(
dst.dim() >= src.dim(), "Destination ", dst.sym_sizes(), " doesn't match the broadcast shape ", src.sym_sizes());
if (dst.dim() > src.dim()) {
needs_broadcasting = true;
} else {

View File

@ -16,15 +16,14 @@ namespace at::native {
Scalar _local_scalar_dense_mps(const Tensor& self) {
Scalar r;
auto output = at::empty_like(self, TensorOptions(kCPU));
mps::mps_copy_(output, self, false);
AT_DISPATCH_ALL_TYPES_AND_COMPLEX_AND3(at::ScalarType::Half,
at::ScalarType::Bool,
at::ScalarType::BFloat16,
self.scalar_type(),
"_local_scalar_dense_mps",
[&] {
Tensor output = at::empty({1}, TensorOptions(at::CPU(self.scalar_type())));
mps::mps_copy_(output, self, false);
scalar_t value = *output.data_ptr<scalar_t>();
r = Scalar(value);
});
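
With this change the CPU staging tensor is created inside the dispatch with the element type actually being read, instead of via ``empty_like`` up front. A sketch of the path it serves (requires an MPS-capable build; values are illustrative):

import torch

if torch.backends.mps.is_available():
    t = torch.tensor(3.5, device="mps", dtype=torch.half)
    # .item() lands in _local_scalar_dense_mps; the scalar is staged through
    # a one-element CPU tensor of the matching dtype before being returned.
    print(t.item())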

View File

@ -53,9 +53,9 @@ set(CMAKE_RANLIB ranlib CACHE FILEPATH "" FORCE)
set(PKG_CONFIG_EXECUTABLE pkg-config CACHE FILEPATH "" FORCE)
# Setup iOS platform unless specified manually with IOS_PLATFORM
if(NOT DEFINED IOS_PLATFORM)
if(NOT IOS_PLATFORM)
set(IOS_PLATFORM "OS")
endif(NOT DEFINED IOS_PLATFORM)
endif(NOT IOS_PLATFORM)
set(IOS_PLATFORM ${IOS_PLATFORM} CACHE STRING "Type of iOS Platform")
# Check the platform selection and setup for developer root
@ -118,9 +118,9 @@ set(CMAKE_FIND_LIBRARY_SUFFIXES ".dylib" ".so" ".a")
# (where install_name_tool was hardcoded) and where CMAKE_INSTALL_NAME_TOOL isn't in the cache
# and still cmake didn't fail in CMakeFindBinUtils.cmake (because it isn't rerun)
# hardcode CMAKE_INSTALL_NAME_TOOL here to install_name_tool, so it behaves as it did before, Alex
if(NOT DEFINED CMAKE_INSTALL_NAME_TOOL)
if(NOT CMAKE_INSTALL_NAME_TOOL)
find_program(CMAKE_INSTALL_NAME_TOOL install_name_tool)
endif(NOT DEFINED CMAKE_INSTALL_NAME_TOOL)
endif(NOT CMAKE_INSTALL_NAME_TOOL)
# Setup iOS deployment target
set(IOS_DEPLOYMENT_TARGET ${IOS_DEPLOYMENT_TARGET} CACHE STRING "Minimum iOS version")
@ -130,17 +130,17 @@ set(IOS_DEPLOYMENT_TARGET ${IOS_DEPLOYMENT_TARGET} CACHE STRING "Minimum iOS ver
exec_program(/usr/bin/xcode-select ARGS -print-path OUTPUT_VARIABLE CMAKE_XCODE_DEVELOPER_DIR)
set(XCODE_POST_43_ROOT "${CMAKE_XCODE_DEVELOPER_DIR}/Platforms/${IOS_PLATFORM_LOCATION}/Developer")
set(XCODE_PRE_43_ROOT "/Developer/Platforms/${IOS_PLATFORM_LOCATION}/Developer")
if(NOT DEFINED CMAKE_IOS_DEVELOPER_ROOT)
if(NOT CMAKE_IOS_DEVELOPER_ROOT)
if(EXISTS ${XCODE_POST_43_ROOT})
set(CMAKE_IOS_DEVELOPER_ROOT ${XCODE_POST_43_ROOT})
elseif(EXISTS ${XCODE_PRE_43_ROOT})
set(CMAKE_IOS_DEVELOPER_ROOT ${XCODE_PRE_43_ROOT})
endif(EXISTS ${XCODE_POST_43_ROOT})
endif(NOT DEFINED CMAKE_IOS_DEVELOPER_ROOT)
endif(NOT CMAKE_IOS_DEVELOPER_ROOT)
set(CMAKE_IOS_DEVELOPER_ROOT ${CMAKE_IOS_DEVELOPER_ROOT} CACHE PATH "Location of iOS Platform")
# Find and use the most recent iOS sdk unless specified manually with CMAKE_IOS_SDK_ROOT
if(NOT DEFINED CMAKE_IOS_SDK_ROOT)
if(NOT CMAKE_IOS_SDK_ROOT)
file(GLOB _CMAKE_IOS_SDKS "${CMAKE_IOS_DEVELOPER_ROOT}/SDKs/*")
if(_CMAKE_IOS_SDKS)
list(SORT _CMAKE_IOS_SDKS)
@ -150,7 +150,7 @@ if(NOT DEFINED CMAKE_IOS_SDK_ROOT)
message(FATAL_ERROR "No iOS SDK's found in default search path ${CMAKE_IOS_DEVELOPER_ROOT}. Manually set CMAKE_IOS_SDK_ROOT or install the iOS SDK.")
endif(_CMAKE_IOS_SDKS)
message(STATUS "Toolchain using default iOS SDK: ${CMAKE_IOS_SDK_ROOT}")
endif(NOT DEFINED CMAKE_IOS_SDK_ROOT)
endif(NOT CMAKE_IOS_SDK_ROOT)
set(CMAKE_IOS_SDK_ROOT ${CMAKE_IOS_SDK_ROOT} CACHE PATH "Location of the selected iOS SDK")
# Set the sysroot default to the most recent SDK

View File

@ -18,8 +18,8 @@ figures:
@$(PYCMD) source/scripts/build_quantization_configs.py
onnx:
@$(PYCMD) source/scripts/onnx/build_onnx_supported_aten_op_csv_table.py
@$(PYCMD) source/scripts/onnx/build_onnx_diagnostics_rules_md.py $(SOURCEDIR)/generated/onnx_diagnostics_rules
@$(PYCMD) source/scripts/onnx/build_onnx_torchscript_supported_aten_op_csv_table.py
@$(PYCMD) source/scripts/onnx/build_onnx_dynamo_diagnostics_rules_md.py $(SOURCEDIR)/generated/onnx_dynamo_diagnostics_rules
opset:
@$(PYCMD) source/scripts/build_opsets.py

Binary file not shown (image added; 36 KiB).

Binary file not shown (image added; 11 KiB).

Binary file not shown (image added; 5.7 KiB).

View File

@ -179,6 +179,7 @@ Tensor autograd functions
torch.Tensor.detach
torch.Tensor.detach_
torch.Tensor.register_hook
torch.Tensor.register_post_accumulate_grad_hook
torch.Tensor.retain_grad
:hidden:`Function`
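
The new entry documents ``register_post_accumulate_grad_hook``, which fires after a tensor's ``.grad`` has been fully accumulated during backward. A minimal sketch of one intended use, fusing the optimizer step into the backward pass (assuming the hook receives the tensor itself, per the 2.1 API):

import torch

p = torch.nn.Parameter(torch.randn(4))
opt = torch.optim.SGD([p], lr=0.1)

def apply_update(param):
    # Runs once param.grad is fully accumulated; step and free the grad
    # immediately instead of waiting for the end of backward.
    opt.step()
    opt.zero_grad()

p.register_post_accumulate_grad_hook(apply_update)
(p * 2).sum().backward()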

View File

@ -1,10 +1,551 @@
.. _torch.export:
torch.export
=====================
.. TODO: Add torch.export() tutorial here.
.. warning::
This feature is a prototype and may have compatibility breaking changes in the future.
This feature is a prototype under active development and there WILL BE
BREAKING CHANGES in the future.
Overview
--------
:func:`torch.export.export` takes an arbitrary Python callable (a
:class:`torch.nn.Module`, a function or a method) and produces a traced graph
representing only the Tensor computation of the function in an Ahead-of-Time
(AOT) fashion, which can subsequently be executed with different outputs or
serialized.
::
import torch
from torch.export import export
def f(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
a = torch.sin(x)
b = torch.cos(y)
return a + b
example_args = (torch.randn(10, 10), torch.randn(10, 10))
exported_program: torch.export.ExportedProgram = export(
f, args=example_args
)
print(exported_program)
.. code-block::
ExportedProgram:
class GraphModule(torch.nn.Module):
def forward(self, arg0_1: f32[10, 10], arg1_1: f32[10, 10]):
# code: a = torch.sin(x)
sin: f32[10, 10] = torch.ops.aten.sin.default(arg0_1);
# code: b = torch.cos(y)
cos: f32[10, 10] = torch.ops.aten.cos.default(arg1_1);
# code: return a + b
add: f32[10, 10] = torch.ops.aten.add.Tensor(sin, cos);
return (add,)
Graph signature: ExportGraphSignature(
parameters=[],
buffers=[],
user_inputs=['arg0_1', 'arg1_1'],
user_outputs=['add'],
inputs_to_parameters={},
inputs_to_buffers={},
buffers_to_mutate={},
backward_signature=None,
assertion_dep_token=None,
)
Range constraints: {}
Equality constraints: []
``torch.export`` produces a clean intermediate representation (IR) with the
following invariants. More specifications about the IR can be found here (coming
soon!).
* **Soundness**: It is guaranteed to be a sound representation of the original
program, and maintains the same calling conventions of the original program.
* **Normalized**: There are no Python semantics within the graph. Submodules
from the original programs are inlined to form one fully flattened
computational graph.
* **Defined Operator Set**: The graph produced contains only a small defined
:ref:`Core ATen IR <torch.compiler_ir>` opset and registered custom
operators.
* **Graph properties**: The graph is purely functional, meaning it does not
contain operations with side effects such as mutations or aliasing. It does
not mutate any intermediate values, parameters, or buffers.
* **Metadata**: The graph contains metadata captured during tracing, such as a
stacktrace from the user's code (see the sketch just below).
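Each node in the exported graph carries this metadata. A quick sketch of
reading it (assuming the ``graph`` attribute shown by the 2.1 API):
::
import torch
from torch.export import export
ep = export(lambda x: x.sin(), args=(torch.randn(3),))
for node in ep.graph.nodes:
    print(node.op, node.target, node.meta.get("stack_trace", ""))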
Under the hood, ``torch.export`` leverages the following latest technologies:
* **TorchDynamo (torch._dynamo)** is an internal API that uses a CPython feature
called the Frame Evaluation API to safely trace PyTorch graphs. This
provides a massively improved graph capturing experience, with much fewer
rewrites needed in order to fully trace the PyTorch code.
* **AOT Autograd** provides a functionalized PyTorch graph and ensures the graph
is decomposed/lowered to the small defined Core ATen operator set.
* **Torch FX (torch.fx)** is the underlying representation of the graph,
allowing flexible Python-based transformations.
Existing frameworks
^^^^^^^^^^^^^^^^^^^
:func:`torch.compile` also utilizes the same PT2 stack as ``torch.export``, but
is slightly different:
* **JIT vs. AOT**: :func:`torch.compile` is a JIT compiler and is not
intended to be used to produce compiled artifacts outside of deployment;
``torch.export``, by contrast, traces ahead of time (AOT) and produces an
artifact that can be saved and deployed.
* **Partial vs. Full Graph Capture**: When :func:`torch.compile` runs into an
untraceable part of a model, it will "graph break" and fall back to running
the program in the eager Python runtime. In comparison, ``torch.export`` aims
to get a full graph representation of a PyTorch model, so it will error out
when something untraceable is reached. Since ``torch.export`` produces a full
graph disjoint from any Python features or runtime, this graph can then be
saved, loaded, and run in different environments and languages.
* **Usability tradeoff**: Since :func:`torch.compile` is able to fall back to
the Python runtime whenever it reaches something untraceable, it is a lot more
flexible. ``torch.export`` will instead require users to provide more
information or rewrite their code to make it traceable, as the sketch below
illustrates.
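For instance, a branch on a tensor's value separates the two:
:func:`torch.compile` graph-breaks and keeps running, while ``torch.export``
is expected to raise unless the code is rewritten (e.g. with control-flow
operators). A sketch:
::
import torch
def f(x):
    if x.sum() > 0:          # data-dependent Python branch
        return x.sin()
    return x.cos()
torch.compile(f)(torch.randn(3))   # graph break, falls back to eager
# torch.export.export(f, args=(torch.randn(3),))  # raises on the branch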
Compared to :func:`torch.fx.symbolic_trace`, ``torch.export`` traces using
TorchDynamo which operates at the Python bytecode level, giving it the ability
to trace arbitrary Python constructs not limited by what Python operator
overloading supports. Additionally, ``torch.export`` keeps fine-grained track of
tensor metadata, so that conditionals on things like tensor shapes do not
fail tracing. In general, ``torch.export`` is expected to work on more user
programs, and produce lower-level graphs (at the ``torch.ops.aten`` operator
level). Note that users can still use :func:`torch.fx.symbolic_trace` as a
preprocessing step before ``torch.export``.
Compared to :func:`torch.jit.script`, ``torch.export`` does not capture Python
control flow or data structures, but it supports more Python language features
than TorchScript (as it is easier to have comprehensive coverage over Python
bytecodes). The resulting graphs are simpler and only have straight line control
flow (except for explicit control flow operators).
Compared to :func:`torch.jit.trace`, ``torch.export`` is sound: it is able to
trace code that performs integer computation on sizes and records all of the
side-conditions necessary to show that a particular trace is valid for other
inputs.
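A sketch of that difference: :func:`torch.jit.trace` bakes the sizes it
observed into the graph silently, while ``torch.export`` captures the size
arithmetic together with the guards that make the trace valid:
::
import torch
def f(x):
    return x.reshape(x.shape[0] * 2, -1)
traced = torch.jit.trace(f, (torch.randn(4, 6),))       # 8 is burned in
ep = torch.export.export(f, args=(torch.randn(4, 6),))  # size math recorded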
Exporting a PyTorch Model
-------------------------
An Example
^^^^^^^^^^
The main entrypoint is through :func:`torch.export.export`, which takes a
callable (:class:`torch.nn.Module`, function, or method) and sample inputs, and
captures the computation graph into an :class:`torch.export.ExportedProgram`. An
example:
::
import torch
from torch.export import export
# Simple module for demonstration
class M(torch.nn.Module):
def __init__(self) -> None:
super().__init__()
self.conv = torch.nn.Conv2d(
in_channels=3, out_channels=16, kernel_size=3, padding=1
)
self.relu = torch.nn.ReLU()
self.maxpool = torch.nn.MaxPool2d(kernel_size=3)
def forward(self, x: torch.Tensor, *, constant=None) -> torch.Tensor:
a = self.conv(x)
a.add_(constant)
return self.maxpool(self.relu(a))
example_args = (torch.randn(1, 3, 256, 256),)
example_kwargs = {"constant": torch.ones(1, 16, 256, 256)}
exported_program: torch.export.ExportedProgram = export(
M(), args=example_args, kwargs=example_kwargs
)
print(exported_program)
.. code-block::
ExportedProgram:
class GraphModule(torch.nn.Module):
def forward(self, arg0_1: f32[16, 3, 3, 3], arg1_1: f32[16], arg2_1: f32[1, 3, 256, 256], arg3_1: f32[1, 16, 256, 256]):
# code: a = self.conv(x)
convolution: f32[1, 16, 256, 256] = torch.ops.aten.convolution.default(
arg2_1, arg0_1, arg1_1, [1, 1], [1, 1], [1, 1], False, [0, 0], 1
);
# code: a.add_(constant)
add: f32[1, 16, 256, 256] = torch.ops.aten.add.Tensor(convolution, arg3_1);
# code: return self.maxpool(self.relu(a))
relu: f32[1, 16, 256, 256] = torch.ops.aten.relu.default(add);
max_pool2d_with_indices = torch.ops.aten.max_pool2d_with_indices.default(
relu, [3, 3], [3, 3]
);
getitem: f32[1, 16, 85, 85] = max_pool2d_with_indices[0];
return (getitem,)
Graph signature: ExportGraphSignature(
parameters=['L__self___conv.weight', 'L__self___conv.bias'],
buffers=[],
user_inputs=['arg2_1', 'arg3_1'],
user_outputs=['getitem'],
inputs_to_parameters={
'arg0_1': 'L__self___conv.weight',
'arg1_1': 'L__self___conv.bias',
},
inputs_to_buffers={},
buffers_to_mutate={},
backward_signature=None,
assertion_dep_token=None,
)
Range constraints: {}
Equality constraints: []
Inspecting the ``ExportedProgram``, we can note the following:
* The :class:`torch.fx.Graph` contains the computation graph of the original
program, along with records of the original code for easy debugging.
* The graph contains only ``torch.ops.aten`` operators found in the
:ref:`Core ATen IR <torch.compiler_ir>` opset and custom operators, and is
fully functional, without any inplace operators such as ``torch.add_``.
* The parameters (weight and bias to conv) are lifted as inputs to the graph,
resulting in no ``get_attr`` nodes in the graph, which previously existed in
the result of :func:`torch.fx.symbolic_trace`.
* The :class:`torch.export.ExportGraphSignature` models the input and output
signature, along with specifying which inputs are parameters.
* The resulting shape and dtype of tensors produced by each node in the graph
  are noted. For example, the ``convolution`` node will result in a tensor of dtype
``torch.float32`` and shape (1, 16, 256, 256).
Expressing Dynamism
^^^^^^^^^^^^^^^^^^^
By default ``torch.export`` will trace the program assuming all input shapes are
**static**, and will specialize the exported program to those dimensions. However,
some dimensions, such as a batch dimension, can be dynamic and vary from run to
run. Such dimensions must be marked dynamic using the
:func:`torch.export.dynamic_dim` API, and passed into
:func:`torch.export.export` through the ``constraints`` argument. An example:
::
import torch
from torch.export import export, dynamic_dim
class M(torch.nn.Module):
def __init__(self):
super().__init__()
self.branch1 = torch.nn.Sequential(
torch.nn.Linear(64, 32), torch.nn.ReLU()
)
self.branch2 = torch.nn.Sequential(
torch.nn.Linear(128, 64), torch.nn.ReLU()
)
self.buffer = torch.ones(32)
def forward(self, x1, x2):
out1 = self.branch1(x1)
out2 = self.branch2(x2)
return (out1 + self.buffer, out2)
example_args = (torch.randn(32, 64), torch.randn(32, 128))
constraints = [
# First dimension of each input is a dynamic batch size
dynamic_dim(example_args[0], 0),
dynamic_dim(example_args[1], 0),
        # The dynamic batch sizes of the two inputs must be equal
dynamic_dim(example_args[0], 0) == dynamic_dim(example_args[1], 0),
]
exported_program: torch.export.ExportedProgram = export(
M(), args=example_args, constraints=constraints
)
print(exported_program)
.. code-block::
ExportedProgram:
class GraphModule(torch.nn.Module):
def forward(self, arg0_1: f32[32, 64], arg1_1: f32[32], arg2_1: f32[64, 128], arg3_1: f32[64], arg4_1: f32[32], arg5_1: f32[s0, 64], arg6_1: f32[s0, 128]):
# code: out1 = self.branch1(x1)
permute: f32[64, 32] = torch.ops.aten.permute.default(arg0_1, [1, 0]);
addmm: f32[s0, 32] = torch.ops.aten.addmm.default(arg1_1, arg5_1, permute);
relu: f32[s0, 32] = torch.ops.aten.relu.default(addmm);
# code: out2 = self.branch2(x2)
permute_1: f32[128, 64] = torch.ops.aten.permute.default(arg2_1, [1, 0]);
addmm_1: f32[s0, 64] = torch.ops.aten.addmm.default(arg3_1, arg6_1, permute_1);
relu_1: f32[s0, 64] = torch.ops.aten.relu.default(addmm_1); addmm_1 = None
# code: return (out1 + self.buffer, out2)
add: f32[s0, 32] = torch.ops.aten.add.Tensor(relu, arg4_1);
return (add, relu_1)
Graph signature: ExportGraphSignature(
parameters=[
'branch1.0.weight',
'branch1.0.bias',
'branch2.0.weight',
'branch2.0.bias',
],
buffers=['L__self___buffer'],
user_inputs=['arg5_1', 'arg6_1'],
user_outputs=['add', 'relu_1'],
inputs_to_parameters={
'arg0_1': 'branch1.0.weight',
'arg1_1': 'branch1.0.bias',
'arg2_1': 'branch2.0.weight',
'arg3_1': 'branch2.0.bias',
},
inputs_to_buffers={'arg4_1': 'L__self___buffer'},
buffers_to_mutate={},
backward_signature=None,
assertion_dep_token=None,
)
Range constraints: {s0: RangeConstraint(min_val=2, max_val=9223372036854775806)}
Equality constraints: [(InputDim(input_name='arg5_1', dim=0), InputDim(input_name='arg6_1', dim=0))]
Some additional things to note:
* Through the :func:`torch.export.dynamic_dim` API, we specified the first
dimension of each input to be dynamic. Looking at the inputs ``arg5_1`` and
``arg6_1``, they have a symbolic shape of (s0, 64) and (s0, 128), instead of
the (32, 64) and (32, 128) shaped tensors that we passed in as example inputs.
  ``s0`` is a symbol indicating that this dimension can take on a range
  of values.
* ``exported_program.range_constraints`` describes the ranges of each symbol
appearing in the graph. In this case, we see that ``s0`` has the range
  [2, inf]. For technical reasons, dynamic dimensions are assumed not to be
  0 or 1. This is not a bug, and does not necessarily mean
that the exported program will not work for dimensions 0 or 1. See
`The 0/1 Specialization Problem <https://docs.google.com/document/d/16VPOa3d-Liikf48teAOmxLc92rgvJdfosIy-yoT38Io/edit?fbclid=IwAR3HNwmmexcitV0pbZm_x1a4ykdXZ9th_eJWK-3hBtVgKnrkmemz6Pm5jRQ#heading=h.ez923tomjvyk>`_
for an in-depth discussion of this topic.
* ``exported_program.equality_constraints`` describes which dimensions are
required to be equal. Since we specified in the constraints that the first
dimension of each argument is equivalent,
(``dynamic_dim(example_args[0], 0) == dynamic_dim(example_args[1], 0)``),
we see in the equality constraints the tuple specifying that ``arg5_1``
dimension 0 and ``arg6_1`` dimension 0 are equal.
Serialization
^^^^^^^^^^^^^
To save the ``ExportedProgram``, users can use the :func:`torch.export.save` and
:func:`torch.export.load` APIs. A convention is to save the ``ExportedProgram``
using a ``.pt2`` file extension.
An example:
::
import torch
import io
class MyModule(torch.nn.Module):
def forward(self, x):
return x + 10
    exported_program = torch.export.export(MyModule(), (torch.randn(5),))
torch.export.save(exported_program, 'exported_program.pt2')
saved_exported_program = torch.export.load('exported_program.pt2')
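Since :func:`torch.export.save` also accepts file-like objects (which is why
``io`` is imported above), the same round trip can be done in memory; a small
sketch, assuming file-object support::

    # Save/load through an in-memory buffer instead of a file on disk.
    buffer = io.BytesIO()
    torch.export.save(exported_program, buffer)
    buffer.seek(0)
    loaded_exported_program = torch.export.load(buffer)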
Specialization
^^^^^^^^^^^^^^
Input shapes
~~~~~~~~~~~~
As mentioned before, by default, ``torch.export`` will trace the program
specializing on the input tensors' shapes, unless a dimension is specified as
dynamic via the :func:`torch.export.dynamic_dim` API. This means that if there
exists shape-dependent control flow, ``torch.export`` will specialize on the
branch that is being taken with the given sample inputs. For example:
::
import torch
from torch.export import export
def fn(x):
if x.shape[0] > 5:
return x + 1
else:
return x - 1
example_inputs = (torch.rand(10, 2),)
exported_program = export(fn, example_inputs)
print(exported_program)
.. code-block::
ExportedProgram:
class GraphModule(torch.nn.Module):
def forward(self, arg0_1: f32[10, 2]):
add: f32[10, 2] = torch.ops.aten.add.Tensor(arg0_1, 1);
return (add,)
The conditional (``x.shape[0] > 5``) does not appear in the
``ExportedProgram`` because the example inputs have the static
shape of (10, 2). Since ``torch.export`` specializes on the inputs' static
shapes, the else branch (``x - 1``) will never be reached. To preserve the dynamic
branching behavior based on the shape of a tensor in the traced graph,
:func:`torch.export.dynamic_dim` will need to be used to specify the dimension
of the input tensor (``x.shape[0]``) to be dynamic, and the source code will
need to be :ref:`rewritten <Data/Shape-Dependent Control Flow>`.
Non-tensor inputs
~~~~~~~~~~~~~~~~~
``torch.export`` also specializes the traced graph based on the values of inputs
that are not ``torch.Tensor``, such as ``int``, ``float``, ``bool``, and ``str``.
However, we will likely change this in the near future to not specialize on
inputs of primitive types.
For example:
::
import torch
from torch.export import export
def fn(x: torch.Tensor, const: int, times: int):
for i in range(times):
x = x + const
return x
example_inputs = (torch.rand(2, 2), 1, 3)
exported_program = export(fn, example_inputs)
print(exported_program)
.. code-block::
ExportedProgram:
class GraphModule(torch.nn.Module):
def forward(self, arg0_1: f32[2, 2], arg1_1, arg2_1):
add: f32[2, 2] = torch.ops.aten.add.Tensor(arg0_1, 1);
add_1: f32[2, 2] = torch.ops.aten.add.Tensor(add, 1);
add_2: f32[2, 2] = torch.ops.aten.add.Tensor(add_1, 1);
return (add_2,)
Because integers are specialized, the ``torch.ops.aten.add.Tensor`` operations
are all computed with the inlined constant ``1``, rather than ``arg1_1``.
Additionally, the ``times`` iterator used in the ``for`` loop is also "inlined"
in the graph through the 3 repeated ``torch.ops.aten.add.Tensor`` calls, and the
input ``arg2_1`` is never used.
Limitations of torch.export
---------------------------
Graph Breaks
^^^^^^^^^^^^
As ``torch.export`` is a one-shot process for capturing a computation graph from
a PyTorch program, it might ultimately run into untraceable parts of programs as
it is nearly impossible to support tracing all PyTorch and Python features. In
the case of ``torch.compile``, an unsupported operation will cause a "graph
break", and that operation will instead be run with default Python evaluation.
In contrast, ``torch.export`` will require users to provide additional
information or rewrite parts of their code to make it traceable. As the
tracing is based on TorchDynamo, which evaluates at the Python
bytecode level, there will be significantly fewer rewrites required compared to
previous tracing frameworks.
When a graph break is encountered, :ref:`ExportDB <torch.export_db>` is a great
resource for learning about the kinds of programs that are supported and
unsupported, along with ways to rewrite programs to make them traceable.
.. _Data/Shape-Dependent Control Flow:
Data/Shape-Dependent Control Flow
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Graph breaks can also be encountered on data-dependent control flow (``if
x.shape[0] > 2``) when shapes are not being specialized, as a tracing compiler
cannot possibly deal with such cases without generating code for a
combinatorially exploding number of paths. In such cases, users will need to
rewrite their code using special control flow operators (coming soon!), as
sketched below.
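For a flavor of what such a rewrite may look like, here is a minimal sketch
using the experimental ``cond`` operator (assuming the
``functorch.experimental.control_flow.cond`` API, which is subject to change).
Both branches are traced, and the predicate is evaluated at runtime::

    import torch
    from functorch.experimental.control_flow import cond  # experimental API

    def fn(x):
        def true_fn(x):
            return x + 1

        def false_fn(x):
            return x - 1

        # Both branches are captured in the graph; the shape predicate is
        # kept as a runtime condition instead of being specialized away.
        return cond(x.shape[0] > 5, true_fn, false_fn, [x])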
Data-Dependent Accesses
^^^^^^^^^^^^^^^^^^^^^^^
Data dependent behavior such as using the value inside of a tensor to construct
another tensor, or using the value of a tensor to slice into another tensor, is
also something the tracer cannot fully determine. Users will need to rewrite
their code using the inline constraint APIs
:func:`torch.export.constrain_as_size` and
:func:`torch.export.constrain_as_value`.
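For example, a minimal sketch of such a rewrite, assuming the
``constrain_as_size(value, min=..., max=...)`` signature::

    import torch
    from torch.export import export, constrain_as_size

    def fn(x):
        a = x.item()  # data-dependent: the tracer cannot know this value
        # Promise the tracer a bounded range so `a` can be used as a size.
        constrain_as_size(a, min=2, max=5)
        return torch.zeros(a)

    exported_program = export(fn, (torch.tensor(4),))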
Missing Meta Kernels for Operators
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
When tracing, a META implementation (or "meta kernel") is required for all
operators. It is used to reason about the input/output shapes of that
operator.
Note that the official API for registering custom meta kernels for custom ops is
currently undergoing development. While the final API is being refined, you can
refer to the documentation `here <https://docs.google.com/document/d/1GgvOe7C8_NVOMLOCwDaYV1mXXyHMXY7ExoewHqooxrs/edit#heading=h.64r4npvq0w0>`_.
In the unfortunate case where your model uses an ATen operator that does not
have a meta kernel implementation yet, please file an issue.
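For illustration only (the operator name below is hypothetical, and the
registration API is, as noted above, still in flux), a meta kernel computes
output metadata without touching real data; a sketch assuming a
``torch.library.impl_abstract``-style decorator::

    import torch

    # Hypothetical custom op "mylib::add_one", assumed to be defined elsewhere.
    @torch.library.impl_abstract("mylib::add_one")
    def add_one_meta(x):
        # Compute only the output's shape/dtype; no real data is produced.
        return torch.empty_like(x)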
Read More
---------
.. toctree::
:caption: Additional Links for Export Users
:maxdepth: 1
torch.compiler_transformations
torch.compiler_ir
generated/exportdb/index
.. toctree::
:caption: Deep Dive for PyTorch Developers
:maxdepth: 1
torch.compiler_deepdive
torch.compiler_dynamic_shapes
torch.compiler_fake_tensor
API Reference
-------------
.. automodule:: torch.export
.. autofunction:: export
@ -24,10 +565,3 @@ torch.export
.. autoclass:: ArgumentSpec
.. autoclass:: ModuleCallSignature
.. autoclass:: ModuleCallEntry
.. toctree::
:glob:
:maxdepth: 1
generated/exportdb/index


@ -94,7 +94,6 @@ Features described in this documentation are classified by release status:
profiler
nn.init
onnx
onnx_diagnostics
optim
complex_numbers
ddp_comm_hooks


@ -185,6 +185,7 @@ If you don't see an operation listed here, but it would help your use case, plea
:meth:`Tensor.reciprocal_`,None
:meth:`Tensor.refine_names`,See documentation
:meth:`Tensor.register_hook`,None
:meth:`Tensor.register_post_accumulate_grad_hook`,None
:meth:`Tensor.rename`,See documentation
:meth:`Tensor.rename_`,See documentation
:attr:`Tensor.requires_grad`,None


@ -1,745 +1,64 @@
torch.onnx
==========
.. contents:: :local:
.. automodule:: torch.onnx
Overview
--------
`Open Neural Network eXchange (ONNX) <https://onnx.ai/>`_ is an open standard
format for representing machine learning models. The torch.onnx module can export
PyTorch models to ONNX. The model can then be consumed by any of the many
`runtimes that support ONNX <https://onnx.ai/supported-tools.html#deployModel>`_.
format for representing machine learning models. The ``torch.onnx`` module captures the computation graph from a
native PyTorch :class:`torch.nn.Module` model and converts it into an
`ONNX graph <https://github.com/onnx/onnx/blob/main/docs/IR.md>`_.
Example: AlexNet from PyTorch to ONNX
-------------------------------------
The exported model can be consumed by any of the many
`runtimes that support ONNX <https://onnx.ai/supported-tools.html#deployModel>`_, including
Microsoft's `ONNX Runtime <https://www.onnxruntime.ai>`_.
Here is a simple script which exports a pretrained AlexNet to an ONNX file named ``alexnet.onnx``.
The call to ``torch.onnx.export`` runs the model once to trace its execution and then exports the
traced model to the specified file::
**There are two flavors of the ONNX exporter API that you can use, as listed below:**
import torch
import torchvision
TorchDynamo-based ONNX Exporter
-------------------------------
dummy_input = torch.randn(10, 3, 224, 224, device="cuda")
model = torchvision.models.alexnet(pretrained=True).cuda()
*The TorchDynamo-based ONNX exporter is the newest (and Beta) exporter for PyTorch 2.0 and newer*
# Providing input and output names sets the display names for values
# within the model's graph. Setting these does not change the semantics
# of the graph; it is only for readability.
#
# The inputs to the network consist of the flat list of inputs (i.e.
# the values you would pass to the forward() method) followed by the
# flat list of parameters. You can partially specify names, i.e. provide
# a list here shorter than the number of inputs to the model, and we will
# only set that subset of names, starting from the beginning.
input_names = [ "actual_input_1" ] + [ "learned_%d" % i for i in range(16) ]
output_names = [ "output1" ]
The TorchDynamo engine is leveraged to hook into Python's frame evaluation API and dynamically rewrite
Python bytecode into an FX Graph. The resulting FX Graph is then polished before it is finally translated into an
ONNX graph.
torch.onnx.export(model, dummy_input, "alexnet.onnx", verbose=True, input_names=input_names, output_names=output_names)
The main advantage of this approach is that the `FX graph <https://pytorch.org/docs/stable/fx.html>`_ is captured using
bytecode analysis that preserves the dynamic nature of the model instead of using traditional static tracing techniques.
The resulting ``alexnet.onnx`` file contains a binary `protocol buffer <https://developers.google.com/protocol-buffers/>`_
which contains both the network structure and parameters of the model you exported
(in this case, AlexNet). The argument ``verbose=True`` causes the
exporter to print out a human-readable representation of the model::
:doc:`Learn more about the TorchDynamo-based ONNX Exporter <onnx_dynamo>`
# These are the inputs and parameters to the network, which have taken on
# the names we specified earlier.
graph(%actual_input_1 : Float(10, 3, 224, 224)
%learned_0 : Float(64, 3, 11, 11)
%learned_1 : Float(64)
%learned_2 : Float(192, 64, 5, 5)
%learned_3 : Float(192)
# ---- omitted for brevity ----
%learned_14 : Float(1000, 4096)
%learned_15 : Float(1000)) {
# Every statement consists of some output tensors (and their types),
# the operator to be run (with its attributes, e.g., kernels, strides,
# etc.), its input tensors (%actual_input_1, %learned_0, %learned_1)
%17 : Float(10, 64, 55, 55) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[11, 11], pads=[2, 2, 2, 2], strides=[4, 4]](%actual_input_1, %learned_0, %learned_1), scope: AlexNet/Sequential[features]/Conv2d[0]
%18 : Float(10, 64, 55, 55) = onnx::Relu(%17), scope: AlexNet/Sequential[features]/ReLU[1]
%19 : Float(10, 64, 27, 27) = onnx::MaxPool[kernel_shape=[3, 3], pads=[0, 0, 0, 0], strides=[2, 2]](%18), scope: AlexNet/Sequential[features]/MaxPool2d[2]
# ---- omitted for brevity ----
%29 : Float(10, 256, 6, 6) = onnx::MaxPool[kernel_shape=[3, 3], pads=[0, 0, 0, 0], strides=[2, 2]](%28), scope: AlexNet/Sequential[features]/MaxPool2d[12]
# Dynamic means that the shape is not known. This may be because of a
# limitation of our implementation (which we would like to fix in a
# future release) or shapes which are truly dynamic.
%30 : Dynamic = onnx::Shape(%29), scope: AlexNet
%31 : Dynamic = onnx::Slice[axes=[0], ends=[1], starts=[0]](%30), scope: AlexNet
%32 : Long() = onnx::Squeeze[axes=[0]](%31), scope: AlexNet
%33 : Long() = onnx::Constant[value={9216}](), scope: AlexNet
# ---- omitted for brevity ----
%output1 : Float(10, 1000) = onnx::Gemm[alpha=1, beta=1, broadcast=1, transB=1](%45, %learned_14, %learned_15), scope: AlexNet/Sequential[classifier]/Linear[6]
return (%output1);
}
TorchScript-based ONNX Exporter
-------------------------------
You can also verify the output using the `ONNX <https://github.com/onnx/onnx/>`_ library,
which you can install using ``pip``::
*The TorchScript-based ONNX exporter is available since PyTorch 1.2.0*
pip install onnx
`TorchScript <https://pytorch.org/docs/stable/jit.html>`_ is leveraged to trace (through :func:`torch.jit.trace`)
the model and capture a static computation graph.
Then, you can run::
As a consequence, the resulting graph has a couple of limitations:
import onnx
* It does not record any control-flow, like if-statements or loops;
* It does not handle the nuances between ``training`` and ``eval`` mode;
* It does not truly handle dynamic inputs
# Load the ONNX model
model = onnx.load("alexnet.onnx")
To work around the limitations of static tracing, the exporter also supports TorchScript scripting
(through :func:`torch.jit.script`), which adds support for data-dependent control-flow, for example. However, TorchScript
itself is a subset of the Python language, so not all Python features are supported, such as in-place operations.
# Check that the model is well formed
onnx.checker.check_model(model)
:doc:`Learn more about the TorchScript-based ONNX Exporter <onnx_torchscript>`
# Print a human readable representation of the graph
print(onnx.helper.printable_graph(model.graph))
You can also run the exported model with one of the many
`runtimes that support ONNX <https://onnx.ai/supported-tools.html#deployModel>`_.
For example after installing `ONNX Runtime <https://www.onnxruntime.ai>`_, you can
load and run the model::
import onnxruntime as ort
import numpy as np
ort_session = ort.InferenceSession("alexnet.onnx")
outputs = ort_session.run(
None,
{"actual_input_1": np.random.randn(10, 3, 224, 224).astype(np.float32)},
)
print(outputs[0])
Here is a more involved `tutorial on exporting a model and running it with ONNX Runtime <https://pytorch.org/tutorials/advanced/super_resolution_with_onnxruntime.html>`_.
.. _tracing-vs-scripting:
Tracing vs Scripting
--------------------
Internally, :func:`torch.onnx.export()` requires a :class:`torch.jit.ScriptModule` rather than
a :class:`torch.nn.Module`. If the passed-in model is not already a ``ScriptModule``,
``export()`` will use *tracing* to convert it to one:
.. TODO(justinchuby): Add a word on recommending tracing over scripting for most use cases.
* **Tracing**: If ``torch.onnx.export()`` is called with a Module that is not already a
``ScriptModule``, it first does the equivalent of :func:`torch.jit.trace`, which executes the model
once with the given ``args`` and records all operations that happen during that execution. This
means that if your model is dynamic, e.g., changes behavior depending on input data, the exported
model will *not* capture this dynamic behavior.
We recommend examining the exported model and making sure the operators look
reasonable. Tracing will unroll loops and if statements, exporting a static graph that is exactly
the same as the traced run. If you want to export your model with dynamic control flow, you will
need to use *scripting*.
* **Scripting**: Compiling a model via scripting preserves dynamic control flow and is valid for inputs
of different sizes. To use scripting:
* Use :func:`torch.jit.script` to produce a ``ScriptModule``.
* Call ``torch.onnx.export()`` with the ``ScriptModule`` as the model. The ``args`` are still required,
but they will be used internally only to produce example outputs, so that the types and shapes of the
outputs can be captured. No tracing will be performed.
See `Introduction to TorchScript <https://pytorch.org/tutorials/beginner/Intro_to_TorchScript_tutorial.html>`_
and `TorchScript <jit.html>`_ for more details, including how to compose tracing and scripting to suit the
particular requirements of different models.
Avoiding Pitfalls
-----------------
Avoid NumPy and built-in Python types
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
PyTorch models can be written using NumPy or Python types and functions, but
during :ref:`tracing<tracing-vs-scripting>`, any variables of NumPy or Python
types (rather than torch.Tensor) are converted to constants, which will produce
the wrong result if those values should change depending on the inputs.
For example, rather than using numpy functions on numpy.ndarrays: ::
# Bad! Will be replaced with constants during tracing.
x, y = np.random.rand(1, 2), np.random.rand(1, 2)
np.concatenate((x, y), axis=1)
Use torch operators on torch.Tensors: ::
# Good! Tensor operations will be captured during tracing.
x, y = torch.randn(1, 2), torch.randn(1, 2)
torch.cat((x, y), dim=1)
And rather than use :func:`torch.Tensor.item` (which converts a Tensor to a Python
built-in number): ::
# Bad! y.item() will be replaced with a constant during tracing.
def forward(self, x, y):
return x.reshape(y.item(), -1)
Use torch's support for implicit casting of single-element tensors: ::
# Good! y will be preserved as a variable during tracing.
def forward(self, x, y):
return x.reshape(y, -1)
Avoid Tensor.data
^^^^^^^^^^^^^^^^^
Using the Tensor.data field can produce an incorrect trace and therefore an incorrect ONNX graph.
Use :func:`torch.Tensor.detach` instead. (Work is ongoing to
`remove Tensor.data entirely <https://github.com/pytorch/pytorch/issues/30987>`_).
Avoid in-place operations when using tensor.shape in tracing mode
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
In tracing mode, shapes obtained from ``tensor.shape`` are traced as tensors,
and share the same memory. This might cause a mismatch in the final output values.
As a workaround, avoid the use of in-place operations in these scenarios.
For example, in the model::
class Model(torch.nn.Module):
def forward(self, states):
batch_size, seq_length = states.shape[:2]
real_seq_length = seq_length
real_seq_length += 2
return real_seq_length + seq_length
``real_seq_length`` and ``seq_length`` share the same memory in tracing mode.
This could be avoided by rewriting the inplace operation::
real_seq_length = real_seq_length + 2
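Applied to the model above, the full rewrite is::

    class Model(torch.nn.Module):
        def forward(self, states):
            batch_size, seq_length = states.shape[:2]
            # Out-of-place add creates a new value instead of mutating the
            # traced shape tensor that aliases seq_length.
            real_seq_length = seq_length + 2
            return real_seq_length + seq_length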
Limitations
-----------
Types
^^^^^
* Only :class:`torch.Tensors`, numeric types that can be trivially converted to torch.Tensors (e.g. float, int),
and tuples and lists of those types are supported as model inputs or outputs. Dict and str inputs and
outputs are accepted in :ref:`tracing<tracing-vs-scripting>` mode, but:
* Any computation that depends on the value of a dict or a str input **will be replaced with the
constant value** seen during the one traced execution.
* Any output that is a dict will be silently replaced with a **flattened sequence of its values
(keys will be removed)**. E.g. ``{"foo": 1, "bar": 2}`` becomes ``(1, 2)``.
* Any output that is a str will be silently removed.
* Certain operations involving tuples and lists are not supported in
:ref:`scripting<tracing-vs-scripting>` mode due to limited support in ONNX for nested sequences.
In particular appending a tuple to a list is not supported. In tracing mode, the nested sequences
will be flattened automatically during the tracing.
Differences in Operator Implementations
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Due to differences in implementations of operators, running the exported model on different runtimes
may produce different results from each other or from PyTorch. Normally these differences are
numerically small, so this should only be a concern if your application is sensitive to these
small differences.
.. _tensor-indexing:
Unsupported Tensor Indexing Patterns
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Tensor indexing patterns that cannot be exported are listed below.
If you are experiencing issues exporting a model that does not include any of
the unsupported patterns below, please double check that you are exporting with
the latest ``opset_version``.
Reads / Gets
~~~~~~~~~~~~
When indexing into a tensor for reading, the following patterns are not supported: ::
# Tensor indices that includes negative values.
data[torch.tensor([[1, 2], [2, -3]]), torch.tensor([-2, 3])]
# Workarounds: use positive index values.
Writes / Sets
~~~~~~~~~~~~~
When indexing into a Tensor for writing, the following patterns are not supported: ::
# Multiple tensor indices if any has rank >= 2
data[torch.tensor([[1, 2], [2, 3]]), torch.tensor([2, 3])] = new_data
# Workarounds: use single tensor index with rank >= 2,
# or multiple consecutive tensor indices with rank == 1.
# Multiple tensor indices that are not consecutive
data[torch.tensor([2, 3]), :, torch.tensor([1, 2])] = new_data
# Workarounds: transpose `data` such that tensor indices are consecutive.
# Tensor indices that includes negative values.
data[torch.tensor([1, -2]), torch.tensor([-2, 3])] = new_data
# Workarounds: use positive index values.
# Implicit broadcasting required for new_data.
data[torch.tensor([[0, 2], [1, 1]]), 1:3] = new_data
# Workarounds: expand new_data explicitly.
# Example:
# data shape: [3, 4, 5]
# new_data shape: [5]
# expected new_data shape after broadcasting: [2, 2, 2, 5]
Adding support for operators
----------------------------
When exporting a model that includes unsupported operators, you'll see an error message like:
.. code-block:: text
RuntimeError: ONNX export failed: Couldn't export operator foo
When that happens, there are a few things you can do:
#. Change the model to not use that operator.
#. Create a symbolic function to convert the operator and register it as a custom symbolic function.
#. Contribute to PyTorch to add the same symbolic function to :mod:`torch.onnx` itself.
If you decided to implement a symbolic function (we hope you will contribute it back to PyTorch!), here is how you can get started:
ONNX exporter internals
^^^^^^^^^^^^^^^^^^^^^^^
A "symbolic function" is a function that decomposes a PyTorch operator into a
composition of a series of ONNX operators.
During export, each node (which contains a PyTorch operator) in the TorchScript
graph is visited by the exporter in topological order.
Upon visiting a node, the exporter looks for a registered symbolic function for
that operator. Symbolic functions are implemented in Python. A symbolic function for
an op named ``foo`` would look something like::
def foo(
g,
input_0: torch._C.Value,
input_1: torch._C.Value) -> Union[None, torch._C.Value, List[torch._C.Value]]:
"""
Adds the ONNX operations representing this PyTorch function by updating the
graph g with `g.op()` calls.
Args:
g (Graph): graph to write the ONNX representation into.
input_0 (Value): value representing the variables which contain
the first input for this operator.
input_1 (Value): value representing the variables which contain
the second input for this operator.
Returns:
A Value or List of Values specifying the ONNX nodes that compute something
equivalent to the original PyTorch operator with the given inputs.
None if it cannot be converted to ONNX.
"""
...
The ``torch._C`` types are Python wrappers around the types defined in C++ in
`ir.h <https://github.com/pytorch/pytorch/blob/main/torch/csrc/jit/ir/ir.h>`_.
The process for adding a symbolic function depends on the type of operator.
.. _adding-support-aten:
ATen operators
^^^^^^^^^^^^^^
`ATen <https://pytorch.org/cppdocs/#aten>`_ is PyTorch's built-in tensor library.
If the operator is an ATen operator (shows up in the TorchScript graph with the prefix
``aten::``), make sure it is not supported already.
List of supported operators
~~~~~~~~~~~~~~~~~~~~~~~~~~~
Visit the auto generated :doc:`list of supported TorchScript operators <../onnx_supported_aten_ops>`
for details on which operators are supported in each ``opset_version``.
Adding support for an aten or quantized operator
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
If the operator is not in the list above:
* Define the symbolic function in ``torch/onnx/symbolic_opset<version>.py``, for example
`torch/onnx/symbolic_opset9.py <https://github.com/pytorch/pytorch/blob/main/torch/onnx/symbolic_opset9.py>`_.
Make sure the function has the same name as the ATen function, which may be declared in
``torch/_C/_VariableFunctions.pyi`` or ``torch/nn/functional.pyi`` (these files are generated at
build time, so will not appear in your checkout until you build PyTorch).
* By default, the first arg is the ONNX graph.
Other arg names must EXACTLY match the names in the ``.pyi`` file,
because dispatch is done with keyword arguments.
* In the symbolic function, if the operator is in the
`ONNX standard operator set <https://github.com/onnx/onnx/blob/master/docs/Operators.md>`_,
we only need to create a node to represent the ONNX operator in the graph.
If not, we can compose several standard operators that have the
equivalent semantics to the ATen operator.
Here is an example of handling a missing symbolic function for the ``ELU`` operator.
If we run the following code::
print(
torch.jit.trace(
torch.nn.ELU(), # module
torch.ones(1) # example input
).graph
)
We see something like::
graph(%self : __torch__.torch.nn.modules.activation.___torch_mangle_0.ELU,
%input : Float(1, strides=[1], requires_grad=0, device=cpu)):
%4 : float = prim::Constant[value=1.]()
%5 : int = prim::Constant[value=1]()
%6 : int = prim::Constant[value=1]()
%7 : Float(1, strides=[1], requires_grad=0, device=cpu) = aten::elu(%input, %4, %5, %6)
return (%7)
Since we see ``aten::elu`` in the graph, we know this is an ATen operator.
We check the `ONNX operator list <https://github.com/onnx/onnx/blob/master/docs/Operators.md>`_,
and confirm that ``Elu`` is standardized in ONNX.
We find a signature for ``elu`` in ``torch/nn/functional.pyi``::
def elu(input: Tensor, alpha: float = ..., inplace: bool = ...) -> Tensor: ...
We add the following lines to ``symbolic_opset9.py``::
def elu(g, input: torch.Value, alpha: torch.Value, inplace: bool = False):
return g.op("Elu", input, alpha_f=alpha)
Now PyTorch is able to export models containing the ``aten::elu`` operator!
See the ``torch/onnx/symbolic_opset*.py`` files for more examples.
torch.autograd.Functions
^^^^^^^^^^^^^^^^^^^^^^^^
If the operator is a sub-class of :class:`torch.autograd.Function`, there are three ways
to export it.
Static Symbolic Method
~~~~~~~~~~~~~~~~~~~~~~
You can add a static method named ``symbolic`` to your function class. It should return
ONNX operators that represent the function's behavior in ONNX. For example::
class MyRelu(torch.autograd.Function):
@staticmethod
def forward(ctx, input: torch.Tensor) -> torch.Tensor:
ctx.save_for_backward(input)
return input.clamp(min=0)
@staticmethod
def symbolic(g: torch.Graph, input: torch.Value) -> torch.Value:
return g.op("Clip", input, g.op("Constant", value_t=torch.tensor(0, dtype=torch.float)))
.. FIXME(justinchuby): PythonOps are too complicated and the example below
.. uses private methods we do not expose. We are looking to
.. improve the experience. Since SymbolicContext is deprecated, we think
.. defining a symbolic staticmethod is a better way to go for now.
.. PythonOp Symbolic
.. ~~~~~~~~~~~~~~~~~
.. Alternatively, you can register a custom symbolic function.
.. This gives the symbolic function access to more info through the
.. ``torch.onnx.SymbolicContext`` object, which gets passed in as the first
.. argument (before the ``Graph`` object).
.. All autograd ``Function``\ s appear in the TorchScript graph as ``prim::PythonOp`` nodes.
.. In order to differentiate between different ``Function`` subclasses, the
.. symbolic function should use the ``name`` kwarg which gets set to the name of the class.
.. Custom symbolic functions should add type and shape information by calling ``setType(...)``
.. on Value objects before returning them (implemented in C++ by
.. . ``torch::jit::Value::setType``). This is not required, but it can help the exporter's
.. shape and type inference for down-stream nodes. For a non-trivial example of ``setType``, see
.. ``test_aten_embedding_2`` in
.. `test_operators.py <https://github.com/pytorch/pytorch/blob/main/test/onnx/test_operators.py>`_.
.. The example below shows how you can access ``requires_grad`` via the ``Node`` object:
.. class MyClip(torch.autograd.Function):
.. @staticmethod
.. def forward(ctx, input, min):
.. ctx.save_for_backward(input)
.. return input.clamp(min=min)
.. class MyRelu(torch.autograd.Function):
.. @staticmethod
.. def forward(ctx, input):
.. ctx.save_for_backward(input)
.. return input.clamp(min=0)
.. def symbolic_python_op(g: "GraphContext", *args, **kwargs):
.. n = ctx.cur_node
.. print("original node: ", n)
.. for i, out in enumerate(n.outputs()):
.. print("original output {}: {}, requires grad: {}".format(i, out, out.requiresGrad()))
.. import torch.onnx.symbolic_helper as sym_helper
.. for i, arg in enumerate(args):
.. requires_grad = arg.requiresGrad() if sym_helper._is_value(arg) else False
.. print("arg {}: {}, requires grad: {}".format(i, arg, requires_grad))
.. name = kwargs["name"]
.. ret = None
.. if name == "MyClip":
.. ret = g.op("Clip", args[0], args[1])
.. elif name == "MyRelu":
.. ret = g.op("Relu", args[0])
.. else:
.. # Logs a warning and returns None
.. return _unimplemented("prim::PythonOp", "unknown node kind: " + name)
.. # Copy type and shape from original node.
.. ret.setType(n.type())
.. return ret
.. from torch.onnx import register_custom_op_symbolic
.. . register_custom_op_symbolic("prim::PythonOp", symbolic_python_op, 1)
Inline Autograd Function
~~~~~~~~~~~~~~~~~~~~~~~~
In cases where a :class:`torch.autograd.Function` subclass does not provide a static symbolic method,
and where no custom symbolic function has been registered for ``prim::PythonOp``,
:func:`torch.onnx.export` tries to inline the graph that corresponds to that :class:`torch.autograd.Function` such that
this function is broken down into the individual operators that were used within it.
The export should be successful as long as these individual operators are supported. For example::
class MyLogExp(torch.autograd.Function):
@staticmethod
def forward(ctx, input: torch.Tensor) -> torch.Tensor:
ctx.save_for_backward(input)
h = input.exp()
return h.log().log()
There is no static symbolic method present for this model, yet it is exported as follows::
graph(%input : Float(1, strides=[1], requires_grad=0, device=cpu)):
%1 : float = onnx::Exp[](%input)
%2 : float = onnx::Log[](%1)
%3 : float = onnx::Log[](%2)
return (%3)
If you need to avoid inlining of :class:`torch.autograd.Function`, you should export models with
``operator_export_type`` set to ``ONNX_FALLTHROUGH`` or ``ONNX_ATEN_FALLBACK``.
Custom operators
^^^^^^^^^^^^^^^^
You can export your model with custom operators that include a combination of many standard ONNX ops,
or that are driven by a self-defined C++ backend.
ONNX-script functions
~~~~~~~~~~~~~~~~~~~~~
If an operator is not a standard ONNX op, but can be composed of multiple existing ONNX ops, you can utilize
`ONNX-script <https://github.com/microsoft/onnx-script>`_ to create an external ONNX function to support the operator.
You can export it by following this example::
    import onnxscript
    import torch

    # GraphContext lives in an internal module; this import path is an assumption
    from torch.onnx._internal import jit_utils
    # There are three opset versions that need to be aligned.
    # This is (1) the opset version in the ONNX function.
from onnxscript.onnx_opset import opset15 as op
opset_version = 15
x = torch.randn(1, 2, 3, 4, requires_grad=True)
model = torch.nn.SELU()
custom_opset = onnxscript.values.Opset(domain="onnx-script", version=1)
@onnxscript.script(custom_opset)
def Selu(X):
alpha = 1.67326 # auto wrapped as Constants
gamma = 1.0507
alphaX = op.CastLike(alpha, X)
gammaX = op.CastLike(gamma, X)
neg = gammaX * (alphaX * op.Exp(X) - alphaX)
pos = gammaX * X
zero = op.CastLike(0, X)
return op.Where(X <= zero, neg, pos)
# setType API provides shape/type to ONNX shape/type inference
def custom_selu(g: jit_utils.GraphContext, X):
return g.onnxscript_op(Selu, X).setType(X.type())
# Register custom symbolic function
    # There are three opset versions that need to be aligned.
    # This is (2) the opset version in the registry.
torch.onnx.register_custom_op_symbolic(
symbolic_name="aten::selu",
symbolic_fn=custom_selu,
opset_version=opset_version,
)
    # There are three opset versions that need to be aligned.
    # This is (3) the opset version in the exporter.
torch.onnx.export(
model,
x,
"model.onnx",
opset_version=opset_version,
# only needed if you want to specify an opset version > 1.
custom_opsets={"onnx-script": 2}
)
The example above exports it as a custom operator in the "onnx-script" opset.
When exporting a custom operator, you can specify the custom domain version using the
``custom_opsets`` dictionary at export. If not specified, the custom opset version defaults to 1.
NOTE: Be careful to align the three opset versions mentioned in the example above, and make sure they are all consumed in the export step.
The example usage of how to write an onnx-script function is a beta version, as onnx-script is under active development.
Please follow the latest `ONNX-script <https://github.com/microsoft/onnx-script>`_ documentation.
C++ Operators
~~~~~~~~~~~~~
If a model uses a custom operator implemented in C++ as described in
`Extending TorchScript with Custom C++ Operators <https://pytorch.org/tutorials/advanced/torch_script_custom_ops.html>`_,
you can export it by following this example::
    import torch
    from torch.onnx import symbolic_helper
# Define custom symbolic function
@symbolic_helper.parse_args("v", "v", "f", "i")
def symbolic_foo_forward(g, input1, input2, attr1, attr2):
return g.op("custom_domain::Foo", input1, input2, attr1_f=attr1, attr2_i=attr2)
# Register custom symbolic function
torch.onnx.register_custom_op_symbolic("custom_ops::foo_forward", symbolic_foo_forward, 9)
class FooModel(torch.nn.Module):
def __init__(self, attr1, attr2):
super().__init__()
self.attr1 = attr1
self.attr2 = attr2
def forward(self, input1, input2):
# Calling custom op
return torch.ops.custom_ops.foo_forward(input1, input2, self.attr1, self.attr2)
model = FooModel(attr1, attr2)
torch.onnx.export(
model,
        (example_input1, example_input2),
"model.onnx",
# only needed if you want to specify an opset version > 1.
custom_opsets={"custom_domain": 2}
)
The example above exports it as a custom operator in the "custom_domain" opset.
When exporting a custom operator, you can specify the custom domain version using the
``custom_opsets`` dictionary at export. If not specified, the custom opset version defaults to 1.
The runtime that consumes the model needs to support the custom op. See
`Caffe2 custom ops <https://caffe2.ai/docs/custom-operators.html>`_,
`ONNX Runtime custom ops <https://onnxruntime.ai/docs/reference/operators/add-custom-op.html>`_,
or your runtime of choice's documentation.
Discovering all unconvertible ATen ops at once
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
When export fails due to an unconvertible ATen op, there may in fact be more
than one such op but the error message only mentions the first. To discover
all of the unconvertible ops in one go you can::
# prepare model, args, opset_version
...
torch_script_graph, unconvertible_ops = torch.onnx.utils.unconvertible_ops(
model, args, opset_version=opset_version
)
print(set(unconvertible_ops))
The set is approximated because some ops may be removed during the conversion
process and don't need to be converted. Some other ops may have partial support
that will fail conversion with particular inputs, but this should give you a
general idea of what ops are not supported. Please feel free to open GitHub Issues
for op support requests.
Frequently Asked Questions
--------------------------
Q: I have exported my LSTM model, but its input size seems to be fixed?
The tracer records the shapes of the example inputs. If the model should accept
inputs of dynamic shapes, set ``dynamic_axes`` when calling :func:`torch.onnx.export`.
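For example, a sketch with illustrative input/output names, marking the batch
and sequence dimensions of an input as dynamic::

    torch.onnx.export(
        model,
        (dummy_input,),
        "lstm.onnx",
        input_names=["input"],
        output_names=["output"],
        # Dimensions named here are exported as dynamic instead of fixed:
        dynamic_axes={
            "input": {0: "batch", 1: "sequence"},
            "output": {0: "batch"},
        },
    )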
Q: How to export models containing loops?
See `Tracing vs Scripting`_.
Q: How to export models with primitive type inputs (e.g. int, float)?
Support for primitive numeric type inputs was added in PyTorch 1.9.
However, the exporter does not support models with str inputs.
Q: Does ONNX support implicit scalar datatype casting?
The ONNX standard does not, but the exporter will try to handle that part.
Scalars are exported as constant tensors.
The exporter will figure out the right data type for scalars. In rare cases when it is unable
to do so, you will need to manually specify the datatype with e.g. ``dtype=torch.float32``.
If you see any errors, please `create a GitHub issue <https://github.com/pytorch/pytorch/issues>`_.
Q: Are lists of Tensors exportable to ONNX?
Yes, for ``opset_version`` >= 11, since ONNX introduced the Sequence type in opset 11.
Contributing / developing
Contributing / Developing
-------------------------
`Developer docs <https://github.com/pytorch/pytorch/wiki/PyTorch-ONNX-exporter>`_.
Functions
---------
.. autofunction:: export
.. autofunction:: export_to_pretty_string
.. autofunction:: register_custom_op_symbolic
.. autofunction:: unregister_custom_op_symbolic
.. autofunction:: select_model_mode_for_export
.. autofunction:: is_in_onnx_export
.. autofunction:: enable_log
.. autofunction:: disable_log
.. autofunction:: torch.onnx.verification.find_mismatch
The ONNX exporter is a community project and we welcome contributions. We follow the
`PyTorch guidelines for contributions <https://github.com/pytorch/pytorch/blob/main/CONTRIBUTING.md>`_, but you might
also be interested in reading our `development wiki <https://github.com/pytorch/pytorch/wiki/PyTorch-ONNX-exporter>`_.
Classes
-------
.. toctree::
:hidden:
.. autosummary::
:toctree: generated
:nosignatures:
:template: classtemplate.rst
JitScalarType
torch.onnx.verification.GraphInfo
torch.onnx.verification.VerificationOptions
Preview: torch.onnx TorchDynamo Exporter
----------------------------------------
.. warning::
The ONNX exporter for TorchDynamo is under active development and is
subject to rapid change.
.. autofunction:: torch.onnx.dynamo_export
.. autofunction:: torch.onnx.enable_fake_mode
.. autofunction:: torch.onnx.is_onnxrt_backend_supported
.. autosummary::
:toctree: generated
:nosignatures:
:template: classtemplate.rst
torch.onnx.DiagnosticOptions
torch.onnx.ExportOptions
torch.onnx.ExportOutput
torch.onnx.ExportOutputSerializer
torch.onnx.OnnxExporterError
torch.onnx.OnnxRegistry
onnx_dynamo
onnx_dynamo_onnxruntime_backend
onnx_torchscript


@ -1,26 +0,0 @@
torch.onnx diagnostics
======================
.. contents:: :local:
.. automodule:: torch.onnx._internal.diagnostics
.. currentmodule:: torch.onnx._internal.diagnostics
Overview
--------
NOTE: This feature is under development and is subject to change.
The goal is to improve the diagnostics to help users debug and improve their model export to ONNX.
- The diagnostics are emitted in machine parsable `Static Analysis Results Interchange Format (SARIF) <https://docs.oasis-open.org/sarif/sarif/v2.1.0/sarif-v2.1.0.html>`__.
- A new, clearer, structured way to add and keep track of diagnostic rules.
- Serve as foundation for more future improvements consuming the diagnostics.
Diagnostic Rules
----------------
.. toctree::
:glob:
generated/onnx_diagnostics_rules/*

docs/source/onnx_dynamo.rst Normal file

@ -0,0 +1,156 @@
TorchDynamo-based ONNX Exporter
===============================
.. automodule:: torch.onnx
:noindex:
.. contents:: :local:
:depth: 3
.. warning::
The ONNX exporter for TorchDynamo is a rapidly evolving beta technology.
Overview
--------
The ONNX exporter leverages the TorchDynamo engine to hook into Python's frame evaluation API
and dynamically rewrite Python bytecode into an FX Graph.
The resulting FX Graph is then polished before it is finally translated into an ONNX graph.
The main advantage of this approach is that the `FX graph <https://pytorch.org/docs/stable/fx.html>`_ is captured using
bytecode analysis that preserves the dynamic nature of the model instead of using traditional static tracing techniques.
The exporter is designed to be modular and extensible. It is composed of the following components:
- **ONNX Exporter**: :class:`Exporter` main class that orchestrates the export process.
- **ONNX Export Options**: :class:`ExportOptions` has a set of options that control the export process.
- **ONNX Registry**: :class:`OnnxRegistry` is the registry of ONNX operators and functions.
- **FX Graph Extractor**: :class:`FXGraphExtractor` extracts the FX graph from the PyTorch model.
- **Fake Mode**: :class:`ONNXFakeContext` is a context manager that enables fake mode for large scale models.
- **ONNX Export Output**: :class:`ExportOutput` is the output of the exporter that contains the exported ONNX graph and diagnostics.
- **ONNX Export Output Serializer**: :class:`ExportOutputSerializer` serializes the exported model to a file.
- **ONNX Diagnostic Options**: :class:`DiagnosticOptions` has a set of options that control the diagnostics emitted by the exporter.
Dependencies
------------
The ONNX exporter depends on extra Python packages:
- `ONNX <https://onnx.ai>`_
- `ONNX Script <https://onnxscript.ai>`_
They can be installed through `pip <https://pypi.org/project/pip/>`_:
.. code-block:: bash
pip install --upgrade onnx onnxscript
A simple example
----------------
Below is a demonstration of the exporter API in action with a simple Multilayer Perceptron (MLP) as the example:
.. code-block:: python
    import torch
    import torch.nn as nn

    class MLPModel(nn.Module):
def __init__(self):
super().__init__()
self.fc0 = nn.Linear(8, 8, bias=True)
self.fc1 = nn.Linear(8, 4, bias=True)
self.fc2 = nn.Linear(4, 2, bias=True)
self.fc3 = nn.Linear(2, 2, bias=True)
def forward(self, tensor_x: torch.Tensor):
tensor_x = self.fc0(tensor_x)
tensor_x = torch.sigmoid(tensor_x)
tensor_x = self.fc1(tensor_x)
tensor_x = torch.sigmoid(tensor_x)
tensor_x = self.fc2(tensor_x)
tensor_x = torch.sigmoid(tensor_x)
output = self.fc3(tensor_x)
return output
model = MLPModel()
tensor_x = torch.rand((97, 8), dtype=torch.float32)
export_output = torch.onnx.dynamo_export(model, tensor_x)
As the code above shows, all you need to do is provide :func:`torch.onnx.dynamo_export` with an instance of the model and its input.
The exporter will then return an instance of :class:`torch.onnx.ExportOutput` that contains the exported ONNX graph along with extra information.
The in-memory model available through ``export_output.model_proto`` is an ``onnx.ModelProto`` object in compliance with the `ONNX IR spec <https://github.com/onnx/onnx/blob/main/docs/IR.md>`_.
The ONNX model may then be serialized into a `Protobuf file <https://protobuf.dev/>`_ using the :meth:`torch.onnx.ExportOutput.save` API.
.. code-block:: python
export_output.save("mlp.onnx")
Inspecting the ONNX model using GUI
-----------------------------------
You can view the exported model using `Netron <https://netron.app/>`__.
.. image:: _static/img/onnx/onnx_dynamo_mlp_model.png
:width: 40%
:alt: MLP model as viewed using Netron
Note that each layer is represented in a rectangular box with an *f* icon in the top right corner.
.. image:: _static/img/onnx/onnx_dynamo_mlp_model_function_highlight.png
:width: 40%
:alt: ONNX function highlighted on MLP model
Expanding it shows the function body.
.. image:: _static/img/onnx/onnx_dynamo_mlp_model_function_body.png
:width: 50%
:alt: ONNX function body
The function body is a sequence of ONNX operators or other functions.
Diagnosing issues with SARIF
----------------------------
ONNX diagnostics goes beyond regular logs through the adoption of
`Static Analysis Results Interchange Format (aka SARIF) <https://docs.oasis-open.org/sarif/sarif/v2.1.0/sarif-v2.1.0.html>`__
to help users debug and improve their model using a GUI, such as
Visual Studio Code's `SARIF Viewer <https://marketplace.visualstudio.com/items?itemName=MS-SarifVSCode.sarif-viewer>`_.
The main advantages are:
- The diagnostics are emitted in machine parseable `Static Analysis Results Interchange Format (SARIF) <https://docs.oasis-open.org/sarif/sarif/v2.1.0/sarif-v2.1.0.html>`__.
- A new, clearer, structured way to add and keep track of diagnostic rules.
- Serve as foundation for more future improvements consuming the diagnostics.
.. toctree::
:maxdepth: 1
:caption: ONNX Diagnostic SARIF Rules
:glob:
generated/onnx_dynamo_diagnostics_rules/*
API Reference
-------------
.. autofunction:: torch.onnx.dynamo_export
.. autoclass:: torch.onnx.ExportOptions
:members:
.. autofunction:: torch.onnx.enable_fake_mode
.. autoclass:: torch.onnx.ExportOutput
:members:
.. autoclass:: torch.onnx.ExportOutputSerializer
:members:
.. autoclass:: torch.onnx.OnnxExporterError
:members:
.. autoclass:: torch.onnx.OnnxRegistry
:members:
.. autoclass:: torch.onnx.DiagnosticOptions
:members:


@ -0,0 +1,9 @@
ONNX Backend for TorchDynamo
============================
For a quick overview of ``torch.compiler``, see :ref:`torch.compiler_overview`.
.. warning::
The ONNX backend for torch.compile is a rapidly evolving beta technology.
.. autofunction:: torch.onnx.is_onnxrt_backend_supported


@ -0,0 +1,719 @@
TorchScript-based ONNX Exporter
===============================
.. note::
To export an ONNX model using TorchDynamo instead of TorchScript, see :func:`torch.onnx.dynamo_export`.
.. contents:: :local:
Example: AlexNet from PyTorch to ONNX
-------------------------------------
Here is a simple script which exports a pretrained AlexNet to an ONNX file named ``alexnet.onnx``.
The call to ``torch.onnx.export`` runs the model once to trace its execution and then exports the
traced model to the specified file::
import torch
import torchvision
dummy_input = torch.randn(10, 3, 224, 224, device="cuda")
model = torchvision.models.alexnet(pretrained=True).cuda()
# Providing input and output names sets the display names for values
# within the model's graph. Setting these does not change the semantics
# of the graph; it is only for readability.
#
# The inputs to the network consist of the flat list of inputs (i.e.
# the values you would pass to the forward() method) followed by the
# flat list of parameters. You can partially specify names, i.e. provide
# a list here shorter than the number of inputs to the model, and we will
# only set that subset of names, starting from the beginning.
input_names = [ "actual_input_1" ] + [ "learned_%d" % i for i in range(16) ]
output_names = [ "output1" ]
torch.onnx.export(model, dummy_input, "alexnet.onnx", verbose=True, input_names=input_names, output_names=output_names)
The resulting ``alexnet.onnx`` file contains a binary `protocol buffer <https://developers.google.com/protocol-buffers/>`_
which contains both the network structure and parameters of the model you exported
(in this case, AlexNet). The argument ``verbose=True`` causes the
exporter to print out a human-readable representation of the model::
# These are the inputs and parameters to the network, which have taken on
# the names we specified earlier.
graph(%actual_input_1 : Float(10, 3, 224, 224)
%learned_0 : Float(64, 3, 11, 11)
%learned_1 : Float(64)
%learned_2 : Float(192, 64, 5, 5)
%learned_3 : Float(192)
# ---- omitted for brevity ----
%learned_14 : Float(1000, 4096)
%learned_15 : Float(1000)) {
# Every statement consists of some output tensors (and their types),
# the operator to be run (with its attributes, e.g., kernels, strides,
# etc.), its input tensors (%actual_input_1, %learned_0, %learned_1)
%17 : Float(10, 64, 55, 55) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[11, 11], pads=[2, 2, 2, 2], strides=[4, 4]](%actual_input_1, %learned_0, %learned_1), scope: AlexNet/Sequential[features]/Conv2d[0]
%18 : Float(10, 64, 55, 55) = onnx::Relu(%17), scope: AlexNet/Sequential[features]/ReLU[1]
%19 : Float(10, 64, 27, 27) = onnx::MaxPool[kernel_shape=[3, 3], pads=[0, 0, 0, 0], strides=[2, 2]](%18), scope: AlexNet/Sequential[features]/MaxPool2d[2]
# ---- omitted for brevity ----
%29 : Float(10, 256, 6, 6) = onnx::MaxPool[kernel_shape=[3, 3], pads=[0, 0, 0, 0], strides=[2, 2]](%28), scope: AlexNet/Sequential[features]/MaxPool2d[12]
# Dynamic means that the shape is not known. This may be because of a
# limitation of our implementation (which we would like to fix in a
# future release) or shapes which are truly dynamic.
%30 : Dynamic = onnx::Shape(%29), scope: AlexNet
%31 : Dynamic = onnx::Slice[axes=[0], ends=[1], starts=[0]](%30), scope: AlexNet
%32 : Long() = onnx::Squeeze[axes=[0]](%31), scope: AlexNet
%33 : Long() = onnx::Constant[value={9216}](), scope: AlexNet
# ---- omitted for brevity ----
%output1 : Float(10, 1000) = onnx::Gemm[alpha=1, beta=1, broadcast=1, transB=1](%45, %learned_14, %learned_15), scope: AlexNet/Sequential[classifier]/Linear[6]
return (%output1);
}
You can also verify the output using the `ONNX <https://github.com/onnx/onnx/>`_ library,
which you can install using ``pip``::
pip install onnx
Then, you can run::
import onnx
# Load the ONNX model
model = onnx.load("alexnet.onnx")
# Check that the model is well formed
onnx.checker.check_model(model)
# Print a human readable representation of the graph
print(onnx.helper.printable_graph(model.graph))
You can also run the exported model with one of the many
`runtimes that support ONNX <https://onnx.ai/supported-tools.html#deployModel>`_.
For example after installing `ONNX Runtime <https://www.onnxruntime.ai>`_, you can
load and run the model::
import onnxruntime as ort
import numpy as np
ort_session = ort.InferenceSession("alexnet.onnx")
outputs = ort_session.run(
None,
{"actual_input_1": np.random.randn(10, 3, 224, 224).astype(np.float32)},
)
print(outputs[0])
Here is a more involved `tutorial on exporting a model and running it with ONNX Runtime <https://pytorch.org/tutorials/advanced/super_resolution_with_onnxruntime.html>`_.
.. _tracing-vs-scripting:
Tracing vs Scripting
--------------------
Internally, :func:`torch.onnx.export()` requires a :class:`torch.jit.ScriptModule` rather than
a :class:`torch.nn.Module`. If the passed-in model is not already a ``ScriptModule``,
``export()`` will use *tracing* to convert it to one:
.. TODO(justinchuby): Add a word on recommending tracing over scripting for most use cases.
* **Tracing**: If ``torch.onnx.export()`` is called with a Module that is not already a
``ScriptModule``, it first does the equivalent of :func:`torch.jit.trace`, which executes the model
once with the given ``args`` and records all operations that happen during that execution. This
means that if your model is dynamic, e.g., changes behavior depending on input data, the exported
model will *not* capture this dynamic behavior.
We recommend examining the exported model and making sure the operators look
reasonable. Tracing will unroll loops and if statements, exporting a static graph that is exactly
the same as the traced run. If you want to export your model with dynamic control flow, you will
need to use *scripting*.
* **Scripting**: Compiling a model via scripting preserves dynamic control flow and is valid for inputs
of different sizes. To use scripting (a minimal sketch follows this list):
* Use :func:`torch.jit.script` to produce a ``ScriptModule``.
* Call ``torch.onnx.export()`` with the ``ScriptModule`` as the model. The ``args`` are still required,
but they will be used internally only to produce example outputs, so that the types and shapes of the
outputs can be captured. No tracing will be performed.
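Here is a hedged sketch of the scripting flow described above; the module, example input, and
output file name are illustrative, not mandated by the exporter::

import torch

class DynamicModel(torch.nn.Module):
    def forward(self, x):
        # Data-dependent control flow: preserved by scripting, frozen by tracing.
        if x.sum() > 0:
            return x * 2
        return x - 1

script_module = torch.jit.script(DynamicModel())
# args are used only to compute example outputs; no tracing is performed.
torch.onnx.export(script_module, (torch.randn(2, 3),), "dynamic_model.onnx")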
See `Introduction to TorchScript <https://pytorch.org/tutorials/beginner/Intro_to_TorchScript_tutorial.html>`_
and `TorchScript <jit.html>`_ for more details, including how to compose tracing and scripting to suit the
particular requirements of different models.
Avoiding Pitfalls
-----------------
Avoid NumPy and built-in Python types
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
PyTorch models can be written using NumPy or Python types and functions, but
during :ref:`tracing<tracing-vs-scripting>`, any variables of NumPy or Python
types (rather than torch.Tensor) are converted to constants, which will produce
the wrong result if those values should change depending on the inputs.
For example, rather than using numpy functions on numpy.ndarrays: ::
# Bad! Will be replaced with constants during tracing.
x, y = np.random.rand(1, 2), np.random.rand(1, 2)
np.concatenate((x, y), axis=1)
Use torch operators on torch.Tensors: ::
# Good! Tensor operations will be captured during tracing.
x, y = torch.randn(1, 2), torch.randn(1, 2)
torch.cat((x, y), dim=1)
And rather than use :func:`torch.Tensor.item` (which converts a Tensor to a Python
built-in number): ::
# Bad! y.item() will be replaced with a constant during tracing.
def forward(self, x, y):
    return x.reshape(y.item(), -1)
Use torch's support for implicit casting of single-element tensors: ::
# Good! y will be preserved as a variable during tracing.
def forward(self, x, y):
    return x.reshape(y, -1)
Avoid Tensor.data
^^^^^^^^^^^^^^^^^
Using the Tensor.data field can produce an incorrect trace and therefore an incorrect ONNX graph.
Use :func:`torch.Tensor.detach` instead. (Work is ongoing to
`remove Tensor.data entirely <https://github.com/pytorch/pytorch/issues/30987>`_).
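As a small sketch of the substitution (the variable names are illustrative)::

import torch

x = torch.randn(3, requires_grad=True)

# Bad! Tensor.data bypasses autograd, which can make the trace unsound.
y = x.data.clamp(min=0)

# Good! detach() is recorded correctly during tracing.
y = x.detach().clamp(min=0)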
Avoid in-place operations when using tensor.shape in tracing mode
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
In tracing mode, shapes obtained from ``tensor.shape`` are traced as tensors,
and share the same memory. This might cause a mismatch in the final output values.
As a workaround, avoid the use of in-place operations in these scenarios.
For example, in the model::
class Model(torch.nn.Module):
    def forward(self, states):
        batch_size, seq_length = states.shape[:2]
        real_seq_length = seq_length
        real_seq_length += 2
        return real_seq_length + seq_length
``real_seq_length`` and ``seq_length`` share the same memory in tracing mode.
This could be avoided by rewriting the in-place operation::
real_seq_length = real_seq_length + 2
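Putting the fix into the model above, a corrected version might look like::

class Model(torch.nn.Module):
    def forward(self, states):
        batch_size, seq_length = states.shape[:2]
        # Out-of-place add: avoids sharing memory with ``seq_length`` during tracing.
        real_seq_length = seq_length + 2
        return real_seq_length + seq_length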
Limitations
-----------
Types
^^^^^
* Only :class:`torch.Tensors`, numeric types that can be trivially converted to torch.Tensors (e.g. float, int),
and tuples and lists of those types are supported as model inputs or outputs. Dict and str inputs and
outputs are accepted in :ref:`tracing<tracing-vs-scripting>` mode, but:
* Any computation that depends on the value of a dict or a str input **will be replaced with the
constant value** seen during the one traced execution.
* Any output that is a dict will be silently replaced with a **flattened sequence of its values
(keys will be removed)**. E.g. ``{"foo": 1, "bar": 2}`` becomes ``(1, 2)``.
* Any output that is a str will be silently removed.
* Certain operations involving tuples and lists are not supported in
:ref:`scripting<tracing-vs-scripting>` mode due to limited support in ONNX for nested sequences.
In particular, appending a tuple to a list is not supported. In tracing mode, nested sequences
are flattened automatically during tracing.
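For instance, the following scripted function (hypothetical, for illustration) type-checks in
TorchScript but hits this export limitation::

from typing import List, Tuple

import torch

@torch.jit.script
def collect_pairs(x: torch.Tensor) -> List[Tuple[torch.Tensor, torch.Tensor]]:
    pairs: List[Tuple[torch.Tensor, torch.Tensor]] = []
    # Valid TorchScript, but not exportable to ONNX: appending a tuple to a
    # list creates a nested sequence, which ONNX has limited support for.
    pairs.append((x, x + 1))
    return pairs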
Differences in Operator Implementations
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Due to differences in implementations of operators, running the exported model on different runtimes
may produce different results from each other or from PyTorch. Normally these differences are
numerically small, so this should only be a concern if your application is sensitive to these
small differences.
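If your application needs to bound these differences, one approach is to compare outputs under an
explicit tolerance instead of exact equality. This sketch reuses ``model``, ``dummy_input``, and the
input name ``actual_input_1`` from the AlexNet example above::

import numpy as np
import onnxruntime as ort

torch_out = model(dummy_input)
ort_session = ort.InferenceSession("alexnet.onnx")
(ort_out,) = ort_session.run(
    None, {"actual_input_1": dummy_input.numpy()}
)
# Small numeric differences across runtimes are expected; compare with a
# tolerance appropriate for your application.
np.testing.assert_allclose(
    torch_out.detach().numpy(), ort_out, rtol=1e-3, atol=1e-5
)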
.. _tensor-indexing:
Unsupported Tensor Indexing Patterns
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Tensor indexing patterns that cannot be exported are listed below.
If you are experiencing issues exporting a model that does not include any of
the unsupported patterns below, please double check that you are exporting with
the latest ``opset_version``.
Reads / Gets
~~~~~~~~~~~~
When indexing into a tensor for reading, the following patterns are not supported: ::
# Tensor indices that include negative values.
data[torch.tensor([[1, 2], [2, -3]]), torch.tensor([-2, 3])]
# Workarounds: use positive index values.
Writes / Sets
~~~~~~~~~~~~~
When indexing into a Tensor for writing, the following patterns are not supported: ::
# Multiple tensor indices if any has rank >= 2
data[torch.tensor([[1, 2], [2, 3]]), torch.tensor([2, 3])] = new_data
# Workarounds: use single tensor index with rank >= 2,
# or multiple consecutive tensor indices with rank == 1.
# Multiple tensor indices that are not consecutive
data[torch.tensor([2, 3]), :, torch.tensor([1, 2])] = new_data
# Workarounds: transpose `data` such that tensor indices are consecutive.
# Tensor indices that include negative values.
data[torch.tensor([1, -2]), torch.tensor([-2, 3])] = new_data
# Workarounds: use positive index values.
# Implicit broadcasting required for new_data.
data[torch.tensor([[0, 2], [1, 1]]), 1:3] = new_data
# Workarounds: expand new_data explicitly.
# Example:
# data shape: [3, 4, 5]
# new_data shape: [5]
# expected new_data shape after broadcasting: [2, 2, 2, 5]
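As a sketch of the "use positive index values" workaround (shapes and values are illustrative),
negative indices can be normalized before the write::

import torch

data = torch.zeros(4, 5)
new_data = torch.tensor([1.0, 2.0])
rows = torch.tensor([1, -2])
cols = torch.tensor([-2, 3])

# Normalize negative indices to their non-negative equivalents so the
# exported graph only ever sees positive index values.
rows = torch.where(rows < 0, rows + data.shape[0], rows)
cols = torch.where(cols < 0, cols + data.shape[1], cols)
data[rows, cols] = new_data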
Adding support for operators
----------------------------
When exporting a model that includes unsupported operators, you'll see an error message like:
.. code-block:: text
RuntimeError: ONNX export failed: Couldn't export operator foo
When that happens, there are a few things you can do:
#. Change the model to not use that operator.
#. Create a symbolic function to convert the operator and register it as a custom symbolic function.
#. Contribute to PyTorch to add the same symbolic function to :mod:`torch.onnx` itself.
If you decided to implement a symbolic function (we hope you will contribute it back to PyTorch!), here is how you can get started:
ONNX exporter internals
^^^^^^^^^^^^^^^^^^^^^^^
A "symbolic function" is a function that decomposes a PyTorch operator into a
composition of a series of ONNX operators.
During export, each node (which contains a PyTorch operator) in the TorchScript
graph is visited by the exporter in topological order.
Upon visiting a node, the exporter looks for a registered symbolic function for
that operator. Symbolic functions are implemented in Python. A symbolic function for
an op named ``foo`` would look something like::
def foo(
    g,
    input_0: torch._C.Value,
    input_1: torch._C.Value) -> Union[None, torch._C.Value, List[torch._C.Value]]:
    """
    Adds the ONNX operations representing this PyTorch function by updating the
    graph g with `g.op()` calls.

    Args:
        g (Graph): graph to write the ONNX representation into.
        input_0 (Value): value representing the variables which contain
            the first input for this operator.
        input_1 (Value): value representing the variables which contain
            the second input for this operator.

    Returns:
        A Value or List of Values specifying the ONNX nodes that compute something
        equivalent to the original PyTorch operator with the given inputs.

        None if it cannot be converted to ONNX.
    """
    ...
The ``torch._C`` types are Python wrappers around the types defined in C++ in
`ir.h <https://github.com/pytorch/pytorch/blob/main/torch/csrc/jit/ir/ir.h>`_.
The process for adding a symbolic function depends on the type of operator.
.. _adding-support-aten:
ATen operators
^^^^^^^^^^^^^^
`ATen <https://pytorch.org/cppdocs/#aten>`_ is PyTorch's built-in tensor library.
If the operator is an ATen operator (shows up in the TorchScript graph with the prefix
``aten::``), make sure it is not supported already.
List of supported operators
~~~~~~~~~~~~~~~~~~~~~~~~~~~
Visit the auto generated :doc:`list of supported TorchScript operators <../onnx_torchscript_supported_aten_ops>`
for details on which operators are supported in each ``opset_version``.
Adding support for an aten or quantized operator
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
If the operator is not in the list above:
* Define the symbolic function in ``torch/onnx/symbolic_opset<version>.py``, for example
`torch/onnx/symbolic_opset9.py <https://github.com/pytorch/pytorch/blob/main/torch/onnx/symbolic_opset9.py>`_.
Make sure the function has the same name as the ATen function, which may be declared in
``torch/_C/_VariableFunctions.pyi`` or ``torch/nn/functional.pyi`` (these files are generated at
build time, so will not appear in your checkout until you build PyTorch).
* By default, the first arg is the ONNX graph.
Other arg names must EXACTLY match the names in the ``.pyi`` file,
because dispatch is done with keyword arguments.
* In the symbolic function, if the operator is in the
`ONNX standard operator set <https://github.com/onnx/onnx/blob/master/docs/Operators.md>`_,
we only need to create a node to represent the ONNX operator in the graph.
If not, we can compose several standard operators that have the
equivalent semantics to the ATen operator.
Here is an example of handling a missing symbolic function for the ``ELU`` operator.
If we run the following code::
print(
    torch.jit.trace(
        torch.nn.ELU(),  # module
        torch.ones(1),   # example input
    ).graph
)
We see something like::
graph(%self : __torch__.torch.nn.modules.activation.___torch_mangle_0.ELU,
      %input : Float(1, strides=[1], requires_grad=0, device=cpu)):
  %4 : float = prim::Constant[value=1.]()
  %5 : int = prim::Constant[value=1]()
  %6 : int = prim::Constant[value=1]()
  %7 : Float(1, strides=[1], requires_grad=0, device=cpu) = aten::elu(%input, %4, %5, %6)
  return (%7)
Since we see ``aten::elu`` in the graph, we know this is an ATen operator.
We check the `ONNX operator list <https://github.com/onnx/onnx/blob/master/docs/Operators.md>`_,
and confirm that ``Elu`` is standardized in ONNX.
We find a signature for ``elu`` in ``torch/nn/functional.pyi``::
def elu(input: Tensor, alpha: float = ..., inplace: bool = ...) -> Tensor: ...
We add the following lines to ``symbolic_opset9.py``::
def elu(g, input: torch.Value, alpha: torch.Value, inplace: bool = False):
    return g.op("Elu", input, alpha_f=alpha)
Now PyTorch is able to export models containing the ``aten::elu`` operator!
See the ``torch/onnx/symbolic_opset*.py`` files for more examples.
torch.autograd.Functions
^^^^^^^^^^^^^^^^^^^^^^^^
If the operator is a sub-class of :class:`torch.autograd.Function`, there are three ways
to export it.
Static Symbolic Method
~~~~~~~~~~~~~~~~~~~~~~
You can add a static method named ``symbolic`` to your function class. It should return
ONNX operators that represent the function's behavior in ONNX. For example::
class MyRelu(torch.autograd.Function):
    @staticmethod
    def forward(ctx, input: torch.Tensor) -> torch.Tensor:
        ctx.save_for_backward(input)
        return input.clamp(min=0)

    @staticmethod
    def symbolic(g: torch.Graph, input: torch.Value) -> torch.Value:
        return g.op("Clip", input, g.op("Constant", value_t=torch.tensor(0, dtype=torch.float)))
.. FIXME(justinchuby): PythonOps are too complicated and the example below
.. uses private methods we do not expose. We are looking to
.. improve the experience. Since SymbolicContext is deprecated, we think
.. defining a symbolic staticmethod is a better way to go for now.
.. PythonOp Symbolic
.. ~~~~~~~~~~~~~~~~~
.. Alternatively, you can register a custom symbolic function.
.. This gives the symbolic function access to more info through the
.. ``torch.onnx.SymbolicContext`` object, which gets passed in as the first
.. argument (before the ``Graph`` object).
.. All autograd ``Function``\ s appear in the TorchScript graph as ``prim::PythonOp`` nodes.
.. In order to differentiate between different ``Function`` subclasses, the
.. symbolic function should use the ``name`` kwarg which gets set to the name of the class.
.. Custom symbolic functions should add type and shape information by calling ``setType(...)``
.. on Value objects before returning them (implemented in C++ by
.. ``torch::jit::Value::setType``). This is not required, but it can help the exporter's
.. shape and type inference for down-stream nodes. For a non-trivial example of ``setType``, see
.. ``test_aten_embedding_2`` in
.. `test_operators.py <https://github.com/pytorch/pytorch/blob/main/test/onnx/test_operators.py>`_.
.. The example below shows how you can access ``requires_grad`` via the ``Node`` object:
.. class MyClip(torch.autograd.Function):
.. @staticmethod
.. def forward(ctx, input, min):
.. ctx.save_for_backward(input)
.. return input.clamp(min=min)
.. class MyRelu(torch.autograd.Function):
.. @staticmethod
.. def forward(ctx, input):
.. ctx.save_for_backward(input)
.. return input.clamp(min=0)
.. def symbolic_python_op(g: "GraphContext", *args, **kwargs):
.. n = ctx.cur_node
.. print("original node: ", n)
.. for i, out in enumerate(n.outputs()):
.. print("original output {}: {}, requires grad: {}".format(i, out, out.requiresGrad()))
.. import torch.onnx.symbolic_helper as sym_helper
.. for i, arg in enumerate(args):
.. requires_grad = arg.requiresGrad() if sym_helper._is_value(arg) else False
.. print("arg {}: {}, requires grad: {}".format(i, arg, requires_grad))
.. name = kwargs["name"]
.. ret = None
.. if name == "MyClip":
.. ret = g.op("Clip", args[0], args[1])
.. elif name == "MyRelu":
.. ret = g.op("Relu", args[0])
.. else:
.. # Logs a warning and returns None
.. return _unimplemented("prim::PythonOp", "unknown node kind: " + name)
.. # Copy type and shape from original node.
.. ret.setType(n.type())
.. return ret
.. from torch.onnx import register_custom_op_symbolic
.. register_custom_op_symbolic("prim::PythonOp", symbolic_python_op, 1)
Inline Autograd Function
~~~~~~~~~~~~~~~~~~~~~~~~
If a subclass of :class:`torch.autograd.Function` does not provide a static ``symbolic`` method,
and no custom symbolic function is registered for its ``prim::PythonOp`` node,
:func:`torch.onnx.export` tries to inline the graph that corresponds to that :class:`torch.autograd.Function`,
breaking it down into the individual operators that were used within the function.
The export should be successful as long as these individual operators are supported. For example::
class MyLogExp(torch.autograd.Function):
    @staticmethod
    def forward(ctx, input: torch.Tensor) -> torch.Tensor:
        ctx.save_for_backward(input)
        h = input.exp()
        return h.log().log()
No static ``symbolic`` method is present for this ``Function``, yet it is exported as follows::
graph(%input : Float(1, strides=[1], requires_grad=0, device=cpu)):
  %1 : float = onnx::Exp[](%input)
  %2 : float = onnx::Log[](%1)
  %3 : float = onnx::Log[](%2)
  return (%3)
If you need to avoid inlining of :class:`torch.autograd.Function`, you should export models with
``operator_export_type`` set to ``ONNX_FALLTHROUGH`` or ``ONNX_ATEN_FALLBACK``.
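For example, a sketch of opting out of inlining, assuming ``model`` is a module that uses
``MyLogExp`` and ``x`` is an example input::

torch.onnx.export(
    model,
    x,
    "model.onnx",
    operator_export_type=torch.onnx.OperatorExportTypes.ONNX_FALLTHROUGH,
)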
Custom operators
^^^^^^^^^^^^^^^^
You can export your model with custom operators that are composed of many standard ONNX ops,
or that are backed by a self-defined C++ implementation.
ONNX-script functions
~~~~~~~~~~~~~~~~~~~~~
If an operator is not a standard ONNX op, but can be composed of multiple existing ONNX ops, you can utilize
`ONNX-script <https://github.com/microsoft/onnx-script>`_ to create an external ONNX function to support the operator.
You can export it by following this example::
import onnxscript
import torch
from torch.onnx._internal import jit_utils  # for the GraphContext type hint

# There are three opset versions that need to be aligned.
# This is (1) the opset version of the ONNX function.
from onnxscript.onnx_opset import opset15 as op
opset_version = 15

x = torch.randn(1, 2, 3, 4, requires_grad=True)
model = torch.nn.SELU()

custom_opset = onnxscript.values.Opset(domain="onnx-script", version=1)

@onnxscript.script(custom_opset)
def Selu(X):
    alpha = 1.67326  # auto wrapped as Constants
    gamma = 1.0507
    alphaX = op.CastLike(alpha, X)
    gammaX = op.CastLike(gamma, X)
    neg = gammaX * (alphaX * op.Exp(X) - alphaX)
    pos = gammaX * X
    zero = op.CastLike(0, X)
    return op.Where(X <= zero, neg, pos)

# setType API provides shape/type to ONNX shape/type inference.
def custom_selu(g: jit_utils.GraphContext, X):
    return g.onnxscript_op(Selu, X).setType(X.type())

# Register the custom symbolic function.
# There are three opset versions that need to be aligned.
# This is (2) the opset version in the registry.
torch.onnx.register_custom_op_symbolic(
    symbolic_name="aten::selu",
    symbolic_fn=custom_selu,
    opset_version=opset_version,
)

# There are three opset versions that need to be aligned.
# This is (3) the opset version passed to the exporter.
torch.onnx.export(
    model,
    x,
    "model.onnx",
    opset_version=opset_version,
    # only needed if you want to specify an opset version > 1
    custom_opsets={"onnx-script": 2},
)
The example above exports it as a custom operator in the "onnx-script" opset.
When exporting a custom operator, you can specify the custom domain version using the
``custom_opsets`` dictionary at export. If not specified, the custom opset version defaults to 1.
NOTE: Be careful to align the three opset versions mentioned in the example above,
and make sure they stay consistent at export time.
The example of how to write an onnx-script function is in beta, as onnx-script is under active development.
Please follow the latest `ONNX-script <https://github.com/microsoft/onnx-script>`_ documentation.
C++ Operators
~~~~~~~~~~~~~
If a model uses a custom operator implemented in C++ as described in
`Extending TorchScript with Custom C++ Operators <https://pytorch.org/tutorials/advanced/torch_script_custom_ops.html>`_,
you can export it by following this example::
from torch.onnx import symbolic_helper
# Define custom symbolic function
@symbolic_helper.parse_args("v", "v", "f", "i")
def symbolic_foo_forward(g, input1, input2, attr1, attr2):
    return g.op("custom_domain::Foo", input1, input2, attr1_f=attr1, attr2_i=attr2)
# Register custom symbolic function
torch.onnx.register_custom_op_symbolic("custom_ops::foo_forward", symbolic_foo_forward, 9)
class FooModel(torch.nn.Module):
    def __init__(self, attr1, attr2):
        super().__init__()
        self.attr1 = attr1
        self.attr2 = attr2

    def forward(self, input1, input2):
        # Calling custom op
        return torch.ops.custom_ops.foo_forward(input1, input2, self.attr1, self.attr2)
model = FooModel(attr1, attr2)
torch.onnx.export(
    model,
    (example_input1, example_input2),
    "model.onnx",
    # only needed if you want to specify an opset version > 1
    custom_opsets={"custom_domain": 2},
)
The example above exports it as a custom operator in the "custom_domain" opset.
When exporting a custom operator, you can specify the custom domain version using the
``custom_opsets`` dictionary at export. If not specified, the custom opset version defaults to 1.
The runtime that consumes the model needs to support the custom op. See
`Caffe2 custom ops <https://caffe2.ai/docs/custom-operators.html>`_,
`ONNX Runtime custom ops <https://onnxruntime.ai/docs/reference/operators/add-custom-op.html>`_,
or your runtime of choice's documentation.
Discovering all unconvertible ATen ops at once
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
When export fails due to an unconvertible ATen op, there may in fact be more
than one such op but the error message only mentions the first. To discover
all of the unconvertible ops in one go you can::
# prepare model, args, opset_version
...
torch_script_graph, unconvertible_ops = torch.onnx.utils.unconvertible_ops(
    model, args, opset_version=opset_version
)
print(set(unconvertible_ops))
The set is approximated because some ops may be removed during the conversion
process and don't need to be converted. Some other ops may have partial support
that will fail conversion with particular inputs, but this should give you a
general idea of what ops are not supported. Please feel free to open GitHub Issues
for op support requests.
Frequently Asked Questions
--------------------------
Q: I have exported my LSTM model, but its input size seems to be fixed?
The tracer records the shapes of the example inputs. If the model should accept
inputs of dynamic shapes, set ``dynamic_axes`` when calling :func:`torch.onnx.export`.
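For example, a hedged sketch (the model, input, and axis names are illustrative)::

torch.onnx.export(
    model,
    (dummy_input,),
    "lstm.onnx",
    input_names=["input"],
    output_names=["output"],
    dynamic_axes={
        "input": {0: "batch", 1: "sequence"},
        "output": {0: "batch"},
    },
)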
Q: How to export models containing loops?
See `Tracing vs Scripting`_.
Q: How to export models with primitive type inputs (e.g. int, float)?
Support for primitive numeric type inputs was added in PyTorch 1.9.
However, the exporter does not support models with str inputs.
Q: Does ONNX support implicit scalar datatype casting?
The ONNX standard does not, but the exporter will try to handle that part.
Scalars are exported as constant tensors.
The exporter will figure out the right data type for scalars. In rare cases when it is unable
to do so, you will need to manually specify the datatype with e.g. ``dtype=torch.float32``.
If you see any errors, please `create a GitHub issue <https://github.com/pytorch/pytorch/issues>`_.
Q: Are lists of Tensors exportable to ONNX?
Yes, for ``opset_version`` >= 11, since ONNX introduced the Sequence type in opset 11.
Python API
----------
.. automodule:: torch.onnx
Functions
^^^^^^^^^
.. autofunction:: export
.. autofunction:: export_to_pretty_string
.. autofunction:: register_custom_op_symbolic
.. autofunction:: unregister_custom_op_symbolic
.. autofunction:: select_model_mode_for_export
.. autofunction:: is_in_onnx_export
.. autofunction:: enable_log
.. autofunction:: disable_log
.. autofunction:: torch.onnx.verification.find_mismatch
Classes
^^^^^^^
.. autosummary::
:toctree: generated
:nosignatures:
:template: classtemplate.rst
JitScalarType
torch.onnx.verification.GraphInfo
torch.onnx.verification.VerificationOptions


@ -5,7 +5,7 @@ ONNX supported TorchScript operators
.. This file is automatically generated during the documentation build
.. by cross referencing ONNX operator symbolics with TorchScript operators via
.. ``docs/source/scripts/build_onnx_torchscript_supported_aten_op_csv_table.py``.
.. Do not modify directly and instead `rebuild the docs <https://github.com/pytorch/pytorch#building-the-documentation>`_.
This page lists the TorchScript operators that are supported/unsupported by ONNX export.


@ -119,7 +119,9 @@ def generate_index_rst(example_cases, tag_to_modules, support_level_to_modules):
blurb = file.read()
# Generate contents of the .rst file
doc_contents = f"""ExportDB
doc_contents = f""".. _torch.export_db:
ExportDB
========
{blurb}


@ -575,6 +575,7 @@ Tensor class reference
Tensor.reciprocal_
Tensor.record_stream
Tensor.register_hook
Tensor.register_post_accumulate_grad_hook
Tensor.remainder
Tensor.remainder_
Tensor.renorm


@ -37,7 +37,7 @@ TorchDynamo requires a backend that converts the captured graphs into fast
machine code. Different backends can result in various optimization gains.
The default backend is called TorchInductor, also known as *inductor*.
TorchDynamo has a list of supported backends developed by our partners,
which can be seen by running ``torch.compiler.list_backends()``, each of which
comes with its own optional dependencies.
Some of the most commonly used backends include:
@ -54,6 +54,10 @@ Some of the most commonly used backends include:
- Uses the TorchInductor backend. `Read more <https://dev-discuss.pytorch.org/t/torchinductor-a-pytorch-native-compiler-with-define-by-run-ir-and-symbolic-shapes/747>`__
* - ``torch.compile(m, backend="cudagraphs")``
- CUDA graphs with AOT Autograd. `Read more <https://github.com/pytorch/torchdynamo/pull/757>`__
* - ``torch.compile(m, backend="ipex")``
- Uses IPEX on CPU. `Read more <https://github.com/intel/intel-extension-for-pytorch>`__
* - ``torch.compile(m, backend="onnxrt")``
- Uses ONNX Runtime for training on CPU/GPU. :doc:`Read more <onnx_dynamo_onnxruntime_backend>`
**Inference-only backends**
@ -63,10 +67,8 @@ Some of the most commonly used backends include:
* - Backend
- Description
* - ``torch.compile(m, backend="onnxrt")``
- Uses ONNXRT for inference on CPU/GPU. `Read more <https://onnxruntime.ai/>`__
* - ``torch.compile(m, backend="tensorrt")``
- Uses ONNX Runtime to run TensorRT for inference optimizations. `Read more <https://github.com/onnx/onnx-tensorrt>`__
* - ``torch.compile(m, backend="ipex")``
- Uses IPEX for inference on CPU. `Read more <https://github.com/intel/intel-extension-for-pytorch>`__
* - ``torch.compile(m, backend="tvm")``


@ -19,25 +19,27 @@ In supporting dynamic shapes, we chose not to support dynamic rank programs, e.g
Abridged public API
-------------------
The default dynamic behavior in PyTorch 2.1 is:

- PT2 assumes everything is static by default
- If we recompile because a size changed, we will instead attempt to recompile
  that size as being dynamic (sizes that have changed are likely to change in
  the future). This generalization may fail (e.g., because user code does a
  conditional branch on the size in question or missing dynamic shapes support
  in PT2). If you are trying to understand why PT2 has overspecialized some
  code, run with ``TORCH_LOGS=dynamic`` and look for "eval" entries that say
  when guards are added and why.
- If you know ahead of time something will be dynamic, you can skip the first
  recompile with ``torch._dynamo.mark_dynamic(tensor, dim)``.
- If you say ``torch.compile(dynamic=False)``, we will turn off automatic
  dynamic shapes on recompiles and always recompile for each distinct size.
  Conversely, if you say ``torch.compile(dynamic=True)``, we will try to make
  everything as dynamic as possible. This is mostly useful for small
  operators; if you try it on a big model it will (1) probably crash PT2 and
  (2) run slow for no good reason.
The Guard Model
---------------
@ -114,3 +116,10 @@ Naively implemented, this is too restrictive: most PyTorch programs will immedia
- On tensor creation, PyTorch precomputes a lot of data about a tensor; for example, if you use ``empty_strided`` to create a tensor, we will eagerly sort the strides and determine if the tensor is non-overlapping and dense. Sorts produce a lot of guards. However, it is more common to produce a tensor directly with a higher-level API like ``empty``, which is guaranteed to produce a non-overlapping and dense tensor. We modified PyTorch to avoid needlessly recomputing these properties.
- Even if nontrivial compute is needed, sometimes a property is never actually queried at all. Making these precomputed properties lazy allows us to avoid guarding on an unbacked symbolic integer unless it is actually needed.
- The data in an integer tensor is generally not known to be non-negative. However, we provide an API ``constrain_range`` whereby a user can specify that a size is bounded above and below by known limits.
In future versions of PT2 (beyond PT2.1), we will extend our reasoning system
to infer that an unbacked symbolic integer is size-like based on usage. For
example, if you pass the result of an ``.item()`` call to a factory function
like ``torch.empty``, we will automatically infer that the result is a size
(because if it was not, it would fail.) This assumption would get validated
at runtime, raising an error if it was not fulfilled.


@ -317,47 +317,12 @@ them by default: ``env TORCHDYNAMO_DYNAMIC_SHAPES=0 python model.py`` 2.
CUDA graphs with Triton are enabled by default in inductor but removing
them may alleviate some OOM issues: ``torch._inductor.config.triton.cudagraphs = False``.
Does ``torch.func`` work with ``torch.compile`` (for `grad` and `vmap` transforms)?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Applying a ``torch.func`` transform to a function that uses ``torch.compile``
does not work:
.. code-block:: python
import torch

@torch.compile
def f(x):
    return torch.sin(x)

def g(x):
    return torch.func.grad(f)(x)

x = torch.randn(2, 3)
g(x)
As a workaround, use ``torch.compile`` outside of the ``torch.func`` function:
.. code-block:: python
import torch

def f(x):
    return torch.sin(x)

@torch.compile
def g(x):
    return torch.vmap(f)(x)

x = torch.randn(2, 3)
g(x)
Applying a ``torch.func`` transform to a function handled with ``torch.compile``
--------------------------------------------------------------------------------
For example, you have the following code:
.. code-block:: python
import torch
@ -374,12 +339,18 @@ For example, you have the following code:
This code will not work. There is an `issue <https://github.com/pytorch/pytorch/issues/100320>`__
that you can track for this.
As a workaround, use ``torch.compile`` outside of the ``torch.func`` function:
.. note::
This is an experimental feature and can be used by setting ``torch._dynamo.config.capture_func_transforms=True``.
.. code-block:: python
import torch
torch._dynamo.config.capture_func_transforms=True
def f(x):
    return torch.sin(x)
@ -393,18 +364,137 @@ As a workaround, please put the ``torch.compile`` outside of ``torch.func`` tran
Compiling ``torch.func.grad`` with ``torch.compile``
----------------------------------------------------
.. code-block:: python
import torch
torch._dynamo.config.capture_func_transforms=True

def wrapper_fn(x):
    return torch.func.grad(lambda x: x.sin().sum())(x)

x = torch.randn(3, 3, 3)
grad_x = torch.compile(wrapper_fn)(x)
Compiling ``torch.vmap`` with ``torch.compile``
-----------------------------------------------
.. code-block:: python
import torch
torch._dynamo.config.capture_func_transforms=True
def my_fn(x):
    return torch.vmap(lambda x: x.sum(1))(x)
x = torch.randn(3, 3, 3)
output = torch.compile(my_fn)(x)
Limitations
-----------
There are currently a few cases which are not supported and lead to graph breaks
(that is, ``torch.compile`` falls back to eager-mode PyTorch on these). We are working
on improving the situation for the next release (PyTorch 2.2).
1. The inputs and outputs of the function being transformed over must be tensors.
We do not yet support things like tuple of Tensors.
.. code-block:: python
import torch
torch._dynamo.config.capture_func_transforms=True
def fn(x):
    x1, x2 = x
    return x1 + x2

def my_fn(x):
    return torch.func.vmap(fn)(x)
x1 = torch.randn(3, 3, 3)
x2 = torch.randn(3, 3, 3)
# Unsupported, falls back to eager-mode PyTorch
output = torch.compile(my_fn)((x1, x2))
2. Keyword arguments are not supported.
.. code-block:: python
import torch
torch._dynamo.config.capture_func_transforms=True
def fn(x, y):
    return (x + y).sum()

def my_fn(x, y):
    return torch.func.grad(fn)(x, y=y)
x = torch.randn(3, 3)
y = torch.randn(3, 3)
# Unsupported, falls back to eager-mode PyTorch
output = torch.compile(my_fn)(x, y)
3. Functions with observable side effects. For example, it is OK to mutate a list created in the function,
but not OK to mutate a list created outside of the function.
.. code-block:: python
import torch
torch._dynamo.config.capture_func_transforms=True
some_list = []
def f(x, y):
    some_list.append(1)
    return x + y

def my_fn(x, y):
    return torch.func.vmap(f)(x, y)
x = torch.ones(2, 3)
y = torch.randn(2, 3)
# Unsupported, falls back to eager-mode PyTorch
output = torch.compile(my_fn)(x, y)
4. ``torch.vmap`` over a function that calls one or more operators in the following list.
.. note::
'stride', 'requires_grad', 'storage_offset', 'layout', 'data', 'is_coalesced', 'is_complex',
'is_conj', 'is_contiguous', 'is_cpu', 'is_cuda', 'is_distributed', 'is_floating_point',
'is_inference', 'is_ipu', 'is_leaf', 'is_meta', 'is_mkldnn', 'is_mps', 'is_neg', 'is_nested',
'is_nonzero', 'is_ort', 'is_pinned', 'is_quantized', 'is_same_size', 'is_set_to', 'is_shared',
'is_signed', 'is_sparse', 'is_sparse_csr', 'is_vulkan', 'is_xla', 'is_xpu'
.. code-block:: python
import torch
torch._dynamo.config.capture_func_transforms=True
def bad_fn(x):
    x.stride()
    return x

def my_fn(x):
    return torch.func.vmap(bad_fn)(x)
x = torch.randn(3, 3, 3)
# Unsupported, falls back to eager-mode PyTorch
output = torch.compile(my_fn)(x)
Compiling functions besides the ones which are supported (escape hatch)
-----------------------------------------------------------------------
For other transforms, as a workaround, use ``torch._dynamo.allow_in_graph``
``allow_in_graph`` is an escape hatch. If your code does not work with
``torch.compile``, which introspects Python bytecode, but you believe it
@ -438,6 +528,160 @@ invokes an ``nn.Module``. This is because the outputs now depend on the
parameters of the ``nn.Module``. To get this to work, use
``torch.func.functional_call`` to extract the module state.
Does NumPy work with ``torch.compile``?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Starting in 2.1, ``torch.compile`` understands native NumPy programs that
work on NumPy arrays, and mixed PyTorch-NumPy programs that convert from PyTorch
to NumPy and back via ``x.numpy()``, ``torch.from_numpy``, and related functions.
.. _nonsupported-numpy-feats:
Which NumPy features does ``torch.compile`` support?
----------------------------------------------------
NumPy within ``torch.compile`` follows NumPy 2.0 pre-release.
Generally, ``torch.compile`` is able to trace through most NumPy constructions,
and when it cannot, it falls back to eager and lets NumPy execute that piece of
code. Even then, there are a few features where ``torch.compile`` semantics
slightly deviate from those of NumPy:
- NumPy scalars: We model them as 0-D arrays. That is, ``np.float32(3)`` returns
a 0-D array under ``torch.compile``. To avoid a graph break, it is best to use this 0-D
array. If this breaks your code, you can work around it by casting the NumPy scalar
to the relevant Python scalar type ``bool/int/float`` (see the sketch after this list).
- Negative strides: ``np.flip`` and slicing with a negative step return a copy.
- Type promotion: NumPy's type promotion will change in NumPy 2.0. The new rules
are described in `NEP 50 <https://numpy.org/neps/nep-0050-scalar-promotion.html)>`__.
``torch.compile`` implements NEP 50 rather than the current soon-to-be deprecated rules.
- ``{tril,triu}_indices_from/{tril,triu}_indices`` return arrays rather than a tuple of arrays.
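As a hedged sketch of the scalar workaround above (the function is hypothetical):

.. code-block:: python

import numpy as np
import torch

@torch.compile
def fn(x):
    # Under torch.compile, np.float32(2.0) acts like a 0-D array; casting to
    # a Python float recovers plain scalar semantics when needed.
    scale = float(np.float32(2.0))
    return x * scale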
There are other features for which we do not support tracing and we gracefully
fallback to NumPy for their execution:
- Non-numeric dtypes like datetimes, strings, chars, void, structured dtypes and recarrays.
- Long dtypes ``np.float128/np.complex256`` and some unsigned dtypes ``np.uint16/np.uint32/np.uint64``.
- ``ndarray`` subclasses.
- Masked arrays.
- Esoteric ufunc machinery like ``axes=[(n,k),(k,m)->(n,m)]`` and ufunc methods (e.g., ``np.add.reduce``).
- Sorting / ordering ``complex64/complex128`` arrays.
- NumPy ``np.poly1d`` and ``np.polynomial``.
- Positional ``out1, out2`` args in functions with 2 or more returns (``out=tuple`` does work).
- ``__array_function__``, ``__array_interface__`` and ``__array_wrap__``.
- ``ndarray.ctypes`` attribute.
Can I execute NumPy code on CUDA via ``torch.compile``?
-------------------------------------------------------
Yes you can! To do so, you may simply execute your code within a ``torch.device("cuda")``
context. Consider the example
.. code-block:: python
import torch
import numpy as np
@torch.compile
def numpy_fn(X: np.ndarray, Y: np.ndarray) -> np.ndarray:
    return np.sum(X[:, :, None] * Y[:, None, :], axis=(-2, -1))

X = np.random.randn(1024, 64)
Y = np.random.randn(1024, 64)
with torch.device("cuda"):
    Z = numpy_fn(X, Y)
In this example, ``numpy_fn`` will be executed on CUDA. For this to be
possible, ``torch.compile`` automatically moves ``X`` and ``Y`` from CPU
to CUDA, and then it moves the result ``Z`` from CUDA to CPU. If we are
executing this function several times in the same program run, we may want
to avoid all these rather expensive memory copies. To do so, we just need
to tweak ``numpy_fn`` so that it accepts CUDA tensors and returns tensors:
.. code-block:: python
@torch.compile
def numpy_fn(X: torch.Tensor, Y: torch.Tensor) -> torch.Tensor:
    X, Y = X.numpy(), Y.numpy()
    Z = np.sum(X[:, :, None] * Y[:, None, :], axis=(-2, -1))
    return torch.from_numpy(Z)

X = torch.randn(1024, 64, device="cuda")
Y = torch.randn(1024, 64, device="cuda")
with torch.device("cuda"):
    Z = numpy_fn(X, Y)
By doing this, we explicitly create the tensors in CUDA memory, and we keep
them there. In this case ``X.numpy()`` and ``from_numpy()`` are hints to the compiler
but no real data movement happens. Note that the original program would no longer run
in eager mode. If you want to run it in eager mode, you would need to call
``.numpy(force=True)`` and ``Z = Z.cuda()`` before returning
``Z``. Of course, doing this would execute the program in eager-mode NumPy, and
on CPU.
How do I debug NumPy code under ``torch.compile``?
--------------------------------------------------
Debugging JIT compiled code is challenging, given the complexity of modern
compilers and the daunting errors that they raise.
`The tutorial on how to diagnose runtime errors within torch.compile <https://pytorch.org/docs/main/torch.compiler_troubleshooting.html#diagnosing-runtime-errors>`__
contains a few tips and tricks on how to tackle this task.
If the above is not enough to pinpoint the origin of the issue, there are still
a few other NumPy-specific tools we can use. We can discern whether the bug
is entirely in the PyTorch code by disabling tracing through NumPy functions:
.. code-block:: python
from torch._dynamo import config
config.trace_numpy = False
If the bug lies in the traced NumPy code, we can execute the NumPy code eagerly (without ``torch.compile``)
using PyTorch as a backend by importing ``import torch._numpy as np``.
This should just be used for **debugging purposes** and is in no way a
replacement for the PyTorch API, as it is **much less performant** and, as a
private API, **may change without notice**. At any rate, ``torch._numpy`` is a
Python implementation of NumPy in terms of PyTorch and it is used internally by ``torch.compile`` to
transform NumPy code into PyTorch code. It is rather easy to read and modify,
so if you find any bug in it feel free to submit a PR fixing it or simply open
an issue.
If the program does work when importing ``torch._numpy as np``, chances are
that the bug is in TorchDynamo. If this is the case, please feel free to open an issue
with a `minimal reproducer <https://pytorch.org/docs/2.1/torch.compiler_troubleshooting.html>`__.
I ``torch.compile`` some NumPy code and I did not see any speed-up.
-------------------------------------------------------------------
The best place to start is the
`tutorial with general advice for how to debug these sort of torch.compile issues <https://pytorch.org/docs/main/torch.compiler_faq.html#why-am-i-not-seeing-speedups>`__.
Some graph breaks may happen because of the use of unsupported features. See
:ref:`nonsupported-numpy-feats`. More generally, it is useful to keep in mind
that some widely used NumPy features do not play well with compilers. For
example, in-place modifications make reasoning difficult within the compiler and
often yield worse performance than their out-of-place counterparts. As such, it is best to avoid
them. The same goes for the use of the ``out=`` parameter. Instead, prefer
out-of-place ops and let ``torch.compile`` optimize the memory use. Same goes
for data-dependent ops like masked indexing through boolean masks, or
data-dependent control flow like ``if`` or ``while`` constructions.
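As a small, hedged illustration of the in-place versus out-of-place point (function names are ours):

.. code-block:: python

import numpy as np

def inplace_fn(x: np.ndarray) -> np.ndarray:
    x += 1  # in-place: mutation is harder for the compiler to reason about
    return x

def out_of_place_fn(x: np.ndarray) -> np.ndarray:
    return x + 1  # out-of-place: lets torch.compile optimize memory use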
Which API to use for fine grain tracing?
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
