f2ae7084eb
[BE] Use linux.2xlarge.memory for ASAN builds ( #165164 )
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/165164
Approved by: https://github.com/janeyx99
2025-10-10 18:13:42 +00:00
10a9fb641b
Switch build jobs from linux.4xlarge to c7i ( #165057 )
...
Switch build jobs that use linux.4xlarge (which is backed by c5 instance types) to the c7i variant. This should improve performance by ~15-20% while cutting costs by ~10-15%.
Relates to pytorch/test-infra#7175
Pull Request resolved: https://github.com/pytorch/pytorch/pull/165057
Approved by: https://github.com/huydhn
2025-10-10 15:13:40 +00:00
44b1ff54e9
[CD] Do not propagate download.pytorch.org IP into container ( #165075 )
...
Followup after https://github.com/pytorch/pytorch/pull/164969
Should fix binary build test failures
Pull Request resolved: https://github.com/pytorch/pytorch/pull/165075
Approved by: https://github.com/seemethere , https://github.com/huydhn
ghstack dependencies: #164968 , #164969
2025-10-10 04:27:29 +00:00
daea35df5c
Revert "[CD] Do not propagate download.pytorch.org IP into container ( #165075 )"
...
This reverts commit 6d27a8e5093ee2a21d44dceeeffcb272e6e0f655.
Reverted https://github.com/pytorch/pytorch/pull/165075 on behalf of https://github.com/pytorch-auto-revert due to Reverted automatically by pytorch's autorevert, to avoid this behaviour add the tag autorevert: disable ([comment](https://github.com/pytorch/pytorch/pull/165075#issuecomment-3388228013 ))
2025-10-10 04:20:51 +00:00
6d27a8e509
[CD] Do not propagate download.pytorch.org IP into container ( #165075 )
...
Followup after https://github.com/pytorch/pytorch/pull/164969
Should fix binary build test failures
Pull Request resolved: https://github.com/pytorch/pytorch/pull/165075
Approved by: https://github.com/seemethere , https://github.com/huydhn
ghstack dependencies: #164968 , #164969
2025-10-09 21:59:31 +00:00
e7fd296930
[CI] Add full debug build to trunk ( #164974 )
...
Build only, no test job, just import torch, as a regression test for https://github.com/pytorch/pytorch/issues/164297
Test plan: Re-apply #164974 on top of this change and observe the failure in the workflows: https://github.com/pytorch/pytorch/actions/runs/18383302153/job/52375282838
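A minimal sketch of the kind of import-only smoke check described above; the exact command used in the workflow is an assumption:
```
# Hypothetical import-only check: after installing the debug wheel, just verify torch imports
python -c "import torch; print(torch.version.debug, torch.__version__)"
```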
Pull Request resolved: https://github.com/pytorch/pytorch/pull/164974
Approved by: https://github.com/seemethere , https://github.com/clee2000 , https://github.com/atalman
ghstack dependencies: #164968 , #164969
2025-10-09 20:12:16 +00:00
ee6a1ecb0a
[ROCm] Enable MI355 CI on PRs, and run full set of UTs on PRs ( #160215 )
...
Useful to have PR testing for PRs such as https://github.com/pytorch/pytorch/pull/151360
Pull Request resolved: https://github.com/pytorch/pytorch/pull/160215
Approved by: https://github.com/malfet , https://github.com/atalman
Co-authored-by: Jeff Daily <jeff.daily@amd.com >
2025-10-09 18:03:12 +00:00
b28b24a9fc
Switch build jobs that use linux.12xlarge to c7i ( #164941 )
...
This PR updates build jobs that currently use linux.12xlarge to the
c7i variant, which should reduce build times by 15% - 20% depending
on the job and cut the cost of these jobs by 10% - 15%.
Signed-off-by: Thanh Ha <thanh.ha@linuxfoundation.org >
2025-10-09 09:58:52 -04:00
5b8174bc28
Revert "[vllm hash update] update the pinned vllm hash ( #164628 )"
...
This reverts commit 7b691546d2949790ffc8f6bd3c674faa6a46ff7c.
Reverted https://github.com/pytorch/pytorch/pull/164628 on behalf of https://github.com/huydhn due to There are some broken vLLM tests ([comment](https://github.com/pytorch/pytorch/pull/164628#issuecomment-3384560957 ))
2025-10-09 07:43:02 +00:00
a753ffa9af
Revert "Use runner with more memory for ASAN builds ( #165000 )"
...
This reverts commit f5fd18f7e24378bd9eb91404f697f1c81a8187d5.
Reverted https://github.com/pytorch/pytorch/pull/165000 on behalf of https://github.com/izaitsevfb due to not sure how, but this broke lint ([comment](https://github.com/pytorch/pytorch/pull/165000#issuecomment-3384286412 ))
2025-10-09 06:22:28 +00:00
7b691546d2
[vllm hash update] update the pinned vllm hash ( #164628 )
...
This PR is auto-generated nightly by [this action](https://github.com/pytorch/pytorch/blob/main/.github/workflows/nightly.yml ).
Update the pinned vllm hash.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/164628
Approved by: https://github.com/pytorchbot
2025-10-09 04:35:36 +00:00
f5fd18f7e2
Use runner with more memory for ASAN builds ( #165000 )
...
An attempt to [address OOM here](aed5ed1076/1).
Pull Request resolved: https://github.com/pytorch/pytorch/pull/165000
Approved by: https://github.com/seemethere , https://github.com/malfet , https://github.com/huydhn
2025-10-09 01:09:28 +00:00
f1229b6db9
[BE] Remove manual IP address resolution ( #164969 )
...
As https://github.com/pytorch/pytorch/issues/100400 was closed a while back, the manual resolution is no longer needed
Pull Request resolved: https://github.com/pytorch/pytorch/pull/164969
Approved by: https://github.com/seemethere
ghstack dependencies: #164968
2025-10-08 21:22:34 +00:00
15800888b6
[CI] Print GPU info during setup linux ( #164968 )
...
I.e. run `nvidia-smi` if it is present
Helps detect which driver version the runner is on, which would have helped in debugging some recent issues
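A minimal sketch of such a conditional step, assuming a plain bash setup script (the actual setup-linux action may differ):
```
# Print GPU and driver info only when an NVIDIA driver is installed on the runner
if command -v nvidia-smi >/dev/null 2>&1; then
  nvidia-smi
else
  echo "nvidia-smi not found, skipping GPU info"
fi
```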
Pull Request resolved: https://github.com/pytorch/pytorch/pull/164968
Approved by: https://github.com/ngimel
2025-10-08 20:58:33 +00:00
e7ed1a00eb
Run inductor-perf-test-nightly-h100 once per day ( #164967 )
...
To reduce inductor costs, though I'm not sure how much this one matters specifically, since the H100s are reserved
Pull Request resolved: https://github.com/pytorch/pytorch/pull/164967
Approved by: https://github.com/BoyuanFeng
2025-10-08 20:58:19 +00:00
90c0825e2d
[GHF] Allow reverts from pytorch-auto-revert app ( #164911 )
...
This is a bit weird: author_login is not a unique field, but author_url is.
Explicitly allow https://github.com/apps/pytorch-auto-revert to issue revert commands
Update mocks by running
```
sed -i -e s/8e262b0495bd934d39dda198d4c09144311c5ddd6cca6a227194bd48dbfe7201/47860a8f57a214a426d1150c29893cbc2aa49507f12b731483b1a1254bca3428/ gql_mocks.json
```
Test plan: Run
```python
from trymerge import GitHubPR
pr=GitHubPR("pytorch", "pytorch", 164660)
print(pr.get_last_comment().author_url, pr.get_comment_by_id(3375785595).author_url)
```
that should produce
```
https://github.com/pytorch-auto-revert https://github.com/apps/pytorch-auto-revert
```
Plus added a regression test that checks two particular comments for revert validity
`pytorch-auto-revert` user is my alter ego :)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/164911
Approved by: https://github.com/jeanschmidt
2025-10-08 15:15:45 +00:00
1927783aa3
Revert "Reland vision pinned commit hash update ( #164492 )"
...
This reverts commit 6861a270624b44954826688f8dad668eb0154452.
Reverted https://github.com/pytorch/pytorch/pull/164492 on behalf of https://github.com/izaitsevfb due to see autorevert msg above, inductor breakage is legit ([comment](https://github.com/pytorch/pytorch/pull/164492#issuecomment-3379537888 ))
2025-10-08 04:38:26 +00:00
6861a27062
Reland vision pinned commit hash update ( #164492 )
...
Redo https://github.com/pytorch/pytorch/pull/154694
Pull Request resolved: https://github.com/pytorch/pytorch/pull/164492
Approved by: https://github.com/yangw-dev
2025-10-07 22:45:05 +00:00
955f21dc2c
[ROCm][CI] Add support for gfx1100 in rocm workflow + test skips ( #148355 )
...
This PR adds infrastructure support for gfx1100 in the rocm workflow. Nodes have been allocated for this effort.
@dnikolaev-amd contributed all the test skips.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/148355
Approved by: https://github.com/jeffdaily
Co-authored-by: Dmitry Nikolaev <dmitry.nikolaev@amd.com >
Co-authored-by: Jeff Daily <jeff.daily@amd.com >
2025-10-07 22:36:25 +00:00
68350660ee
Increase timeout for nightly macOS performance tests to 300 minutes ( #164793 )
...
The Test step time recently went up slightly.
Hopefully this fixes https://github.com/pytorch/alerting-infra/issues/263
Pull Request resolved: https://github.com/pytorch/pytorch/pull/164793
Approved by: https://github.com/seemethere
2025-10-07 08:44:07 +00:00
1f9614cef8
[ROCm][CI] Change rocm periodic workflow label to linux.rocm.gpu.mi250.4 ( #164616 )
...
Testing done on this PR: https://github.com/pytorch/pytorch/pull/156491
Pull Request resolved: https://github.com/pytorch/pytorch/pull/164616
Approved by: https://github.com/jeffdaily , https://github.com/huydhn
2025-10-06 15:51:07 +00:00
ea42517e45
[xla hash update] update the pinned xla hash ( #164727 )
...
This PR is auto-generated nightly by [this action](https://github.com/pytorch/pytorch/blob/main/.github/workflows/nightly.yml ).
Update the pinned xla hash.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/164727
Approved by: https://github.com/pytorchbot
2025-10-06 11:54:10 +00:00
331191ce4b
Revert "[BE] Make PyObjectSlot use a global PyInterpreter ( #162659 )"
...
This reverts commit 29cbcbac4215e0d9070a1b7a07ddaec9a36bbd08.
Reverted https://github.com/pytorch/pytorch/pull/162659 on behalf of https://github.com/izaitsevfb due to reverted internally, see [D83214133](https://www.internalfb.com/diff/D83214133 ) ([comment](https://github.com/pytorch/pytorch/pull/162659#issuecomment-3369348172 ))
2025-10-05 21:39:57 +00:00
412c6d28ec
[ROCm][CI] additional dynamo benchmarks for inductor-periodic ( #164279 )
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/164279
Approved by: https://github.com/jeffdaily
Co-authored-by: Jeff Daily <jeff.daily@amd.com >
2025-10-04 00:55:17 +00:00
fac6f20ae3
[CI] Add another win shard ( #164605 )
...
Since it's timing out (0b4f2b46d9/1).
The first shard is disproportionately long because of the cpp tests; I'm trying to figure that out, but for now we can either do this or increase the timeout
Pull Request resolved: https://github.com/pytorch/pytorch/pull/164605
Approved by: https://github.com/seemethere , https://github.com/malfet
2025-10-03 22:51:09 +00:00
8d53d788fe
lint: add .pyi to changed files on .pyi.in changes ( #164603 )
...
We were observing issues where the lint on trunk vs. PRs would be different
due to missing .pyi files. This change adds the .pyi files to the changed files
list when .pyi.in files are changed.
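A rough sketch of the mapping this implies; the diff range and the example path are illustrative assumptions, not the actual lintrunner wiring:
```
# Treat the generated .pyi as changed whenever its .pyi.in template changes,
# so PR lint sees the same file set as trunk lint.
git diff --name-only origin/main...HEAD -- '*.pyi.in' | while read -r f; do
  echo "${f%.in}"  # e.g. torch/_C/_VariableFunctions.pyi.in -> torch/_C/_VariableFunctions.pyi
done
```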
Signed-off-by: Eli Uriegas <eliuriegas@meta.com >
Pull Request resolved: https://github.com/pytorch/pytorch/pull/164603
Approved by: https://github.com/atalman , https://github.com/malfet , https://github.com/Skylion007
2025-10-03 21:30:54 +00:00
7eb1eb4313
ci: Removing ROCm tests from trunk. ( #164585 )
...
Had a conversation with the AMD team today and I think we are all in
agreement that the current state of queueing for AMD is beyond where
we'd like to be for there to be blocking CI for ROCm.
Moving the representative testing jobs for this into the ciflow/rocm
workflow.
I'd love for these to be back in trunk if we can get to a state where
our queueing metrics are below an hour for ROCm infrastructure.
Dashboards:
* ROCm Queueing (>60mins) ([link](https://hud.pytorch.org/queue_time_analysis?dateRange=30&startDate=2025-09-03T16%3A06%3A45.025Z&endDate=2025-10-03T16%3A06%3A45.025Z&granularity=week&chartType=bar&repos=pytorch%2Fpytorch&category=machine_type&machineTypes=linux.rocm.gpu.2&machineTypes=linux.rocm.gpu.4&machineTypes=linux.rocm.gpu.mi250&machineTypes=linux.rocm.gpu.gfx942.1&machineTypes=linux.rocm.gpu.mi250.4&machineTypes=linux.rocm.gpu.gfx942.4&machineTypes=linux.rocm.gpu.mi355.2&machineTypes=linux.rocm.gpu.gfx942.4.test&machineTypes=linux.rocm.gpu.mi250.1&machineTypes=linux.rocm.gpu.gfx942.1.test&machineTypes=linux.rocm.gpu.gfx90a.1&machineTypes=linux.rocm.gpu.gfx90a.4&items=linux.rocm.gpu.2&items=linux.rocm.gpu.4&items=linux.rocm.gpu.mi250&items=linux.rocm.gpu.gfx942.1&items=linux.rocm.gpu.mi250.4&items=linux.rocm.gpu.gfx942.4&items=linux.rocm.gpu.mi355.2&items=linux.rocm.gpu.gfx942.4.test&items=linux.rocm.gpu.mi250.1&items=linux.rocm.gpu.gfx942.1.test&items=linux.rocm.gpu.gfx90a.1&items=linux.rocm.gpu.gfx90a.4 ))
* NVIDIA queueing (<5mins) ([link](https://hud.pytorch.org/queue_time_analysis?dateRange=30&startDate=2025-09-03T16%3A05%3A08.000Z&endDate=2025-10-03T16%3A05%3A08.000Z&granularity=week&chartType=bar&repos=pytorch%2Fpytorch&category=machine_type&machineTypes=lf.linux.g4dn.4xlarge.nvidia.gpu&machineTypes=linux.g4dn.12xlarge.nvidia.gpu&machineTypes=linux.g4dn.metal.nvidia.gpu&machineTypes=linux.g5.4xlarge.nvidia.gpu&machineTypes=lf.linux.g4dn.12xlarge.nvidia.gpu&machineTypes=lf.linux.g5.12xlarge.nvidia.gpu&machineTypes=lf.linux.g5.4xlarge.nvidia.gpu&machineTypes=lf.linux.g6.4xlarge.experimental.nvidia.gpu&machineTypes=linux.g6.4xlarge.experimental.nvidia.gpu&machineTypes=linux.4xlarge.nvidia.gpu&machineTypes=linux.g5.12xlarge.nvidia.gpu&machineTypes=linux.g4dn.4xlarge.nvidia.gpu&machineTypes=lf.linux.4xlarge.nvidia.gpu&machineTypes=linux.g6.12xlarge.nvidia.gpu&items=lf.linux.g4dn.4xlarge.nvidia.gpu&items=linux.g4dn.12xlarge.nvidia.gpu&items=linux.g4dn.metal.nvidia.gpu&items=linux.g5.4xlarge.nvidia.gpu&items=lf.linux.g4dn.12xlarge.nvidia.gpu&items=lf.linux.g5.12xlarge.nvidia.gpu&items=lf.linux.g5.4xlarge.nvidia.gpu&items=lf.linux.g6.4xlarge.experimental.nvidia.gpu&items=linux.g6.4xlarge.experimental.nvidia.gpu&items=linux.4xlarge.nvidia.gpu&items=linux.g5.12xlarge.nvidia.gpu&items=linux.g4dn.4xlarge.nvidia.gpu&items=lf.linux.4xlarge.nvidia.gpu&items=linux.g6.12xlarge.nvidia.gpu ))
Signed-off-by: Eli Uriegas <eliuriegas@meta.com >
Pull Request resolved: https://github.com/pytorch/pytorch/pull/164585
Approved by: https://github.com/malfet , https://github.com/yangw-dev , https://github.com/atalman , https://github.com/jeffdaily
2025-10-03 18:19:24 +00:00
aed66248a0
[vllm hash update] update the pinned vllm hash ( #164319 )
...
This PR is auto-generated nightly by [this action](https://github.com/pytorch/pytorch/blob/main/.github/workflows/nightly.yml ).
Update the pinned vllm hash.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/164319
Approved by: https://github.com/pytorchbot
Co-authored-by: Huy Do <huydhn@gmail.com >
2025-10-03 12:30:33 +00:00
ddf8de28c2
Add Rocm to Operator Microbenchmark CI ( #164173 )
...
This pull request adds support for running operator microbenchmarks on ROCm (AMD GPU) environments in the CI workflow. The main changes involve introducing new build and test jobs for ROCm in the `.github/workflows/operator_microbenchmark.yml` file.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/164173
Approved by: https://github.com/huydhn
2025-10-03 07:35:32 +00:00
95a053284c
Fix vllm build issue ( #164361 )
...
unstable https://github.com/pytorch/pytorch/issues/164362
Pull Request resolved: https://github.com/pytorch/pytorch/pull/164361
Approved by: https://github.com/huydhn
Co-authored-by: Huy Do <huydhn@gmail.com >
2025-10-02 23:34:21 +00:00
0319556a35
Revert "[vision hash update] update the pinned vision hash ( #154694 )"
...
This reverts commit bcafea5c92ca2ee1b0dc8f6d8b62ecabb6f40228.
Reverted https://github.com/pytorch/pytorch/pull/154694 on behalf of https://github.com/yangw-dev due to break the unittest for inductor with improved, update benchmarks/dynamo/ci_expected_accuracy/inductor_torchbench_inference.csv, see failure example https://github.com/pytorch/pytorch/actions/runs/18185852421/job/51776537817 ([comment](https://github.com/pytorch/pytorch/pull/154694#issuecomment-3362285901 ))
2025-10-02 17:32:04 +00:00
bcafea5c92
[vision hash update] update the pinned vision hash ( #154694 )
...
This PR is auto-generated nightly by [this action](https://github.com/pytorch/pytorch/blob/main/.github/workflows/nightly.yml ).
Update the pinned vision hash.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/154694
Approved by: https://github.com/pytorchbot
Co-authored-by: Huy Do <huydhn@gmail.com >
2025-10-02 07:02:40 +00:00
1a5d023a5b
Add B200 to Operator Microbenchmark CI ( #164288 )
...
Add B200 to operator microbenchmarks nightly run
Pull Request resolved: https://github.com/pytorch/pytorch/pull/164288
Approved by: https://github.com/huydhn
2025-10-01 23:56:34 +00:00
773c6762b8
[CD][CUDA13][NCCL] Fix nccl version typo for cu13 ( #164383 )
...
https://pypi.org/project/nvidia-nccl-cu13/#history does not have 2.27.5 but 2.27.7+.
Companion PR: https://github.com/pytorch/pytorch/pull/164352
Fixes a potential binary breakage due to the non-existence of the referenced NCCL cu13 version.
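One way to sanity-check which versions actually exist on PyPI before pinning (note that pip's `index versions` subcommand is still marked experimental):
```
# List the NCCL cu13 versions published on PyPI
python -m pip index versions nvidia-nccl-cu13
```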
Pull Request resolved: https://github.com/pytorch/pytorch/pull/164383
Approved by: https://github.com/tinglvv , https://github.com/Skylion007 , https://github.com/atalman
2025-10-01 21:32:25 +00:00
f63d16c6a9
Make viable/strict updatable again ( #164374 )
...
To allow viable/strict to move forward, after https://github.com/pytorch/pytorch/pull/164260 was landed
Pull Request resolved: https://github.com/pytorch/pytorch/pull/164374
Approved by: https://github.com/seemethere
2025-10-01 18:09:07 +00:00
1288c6d8bb
Enable keep-going for trunk tags ( #164307 )
...
Tags like `trunk/{sha}` are used by the [autorevert project](https://github.com/pytorch/test-infra/blob/main/aws/lambda/pytorch-auto-revert/README.md ) to re-run signals.
We need to have `keep-going` enabled for those reruns, so that they surface all test failures, not just the first one.
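A hedged sketch of the condition involved; the `CONTINUE_THROUGH_ERROR` variable name and the exact place it is set are assumptions:
```
# Keep running tests past the first failure when the run was triggered by a trunk/<sha> tag,
# so reruns surface every failure rather than only the first one.
if [[ "${GITHUB_REF}" == refs/tags/trunk/* ]]; then
  export CONTINUE_THROUGH_ERROR=1
fi
```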
Pull Request resolved: https://github.com/pytorch/pytorch/pull/164307
Approved by: https://github.com/clee2000
2025-10-01 17:21:43 +00:00
2610746375
Revert nccl upgrade back to 2.27.5 ( #164352 )
...
Revert https://github.com/pytorch/pytorch/pull/162351 as it breaks H100
Pull Request resolved: https://github.com/pytorch/pytorch/pull/164352
Approved by: https://github.com/atalman , https://github.com/malfet
2025-10-01 15:27:40 +00:00
9ddfc59b9b
[BE] Delete stale non-ephemeral runners workarounds ( #164285 )
...
As all Windows runners are now ephemeral, there is no need to clean up leftover
processes or uninstall PyTorch at the end of the test
Pull Request resolved: https://github.com/pytorch/pytorch/pull/164285
Approved by: https://github.com/Skylion007
2025-10-01 03:47:36 +00:00
6d4dfa0878
[CI] Push viable/strict/${time} tags ( #164183 )
...
Every time viable/strict is updated
Pull Request resolved: https://github.com/pytorch/pytorch/pull/164183
Approved by: https://github.com/seemethere
2025-10-01 03:41:10 +00:00
bd0907dc4c
[BE][CI] Unify requirements ( #163396 )
...
Linux, Windows, and macOS CI workflows should all use `.ci/docker/requirements-ci.txt`
TODOs:
- Investigate why `choco install cmake` is needed to successfully detect MKL
- Move `psutil` installation from specific scripts into requirements-ci.txt
Pull Request resolved: https://github.com/pytorch/pytorch/pull/163396
Approved by: https://github.com/Skylion007
2025-10-01 03:28:48 +00:00
5b1c39f5a1
Add smoke tests to verify that stable ABI FA3 wheel runs w/ newer torch ( #163782 )
...
Passing CI: https://github.com/pytorch/pytorch/actions/runs/18141589975/job/51635340255?pr=163782
Pull Request resolved: https://github.com/pytorch/pytorch/pull/163782
Approved by: https://github.com/huydhn , https://github.com/mikaylagawarecki
2025-10-01 02:30:38 +00:00
ff715366aa
[vllm hash update] update the pinned vllm hash ( #164190 )
...
This PR is auto-generated nightly by [this action](https://github.com/pytorch/pytorch/blob/main/.github/workflows/nightly.yml ).
Update the pinned vllm hash.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/164190
Approved by: https://github.com/pytorchbot
2025-09-30 22:43:49 +00:00
5a93f00c79
[CI] Delete binary smoke workflows ( #164260 )
...
Those were very useful in the past, because:
- CI builder jobs did not generate wheels, but rather ran `python setup.py develop` and shared docker layers; this is no longer the case, as all CI jobs now produce wheels
- CD jobs were targeting pre-CXX11 ABI, but this is no longer the case after manylinux2_28 migration
Existing, but acceptable gaps:
- Windows libtorch debug builds might sometimes fail, but IMO it's ok not to be able to produce those for a few days, as the number of libtorch users is somewhat small
- All CD jobs are based on AlmaLinux, while CI jobs are based on Ubuntu, but this could be adjusted if needed; besides, AlmaLinux-9 and Ubuntu-22.04 are pretty close in terms of glibc and gcc versions
- CD jobs build for all GPU architectures, while CI builds only for the one being tested, but there are now periodic H100 and B200 jobs, and not a lot of development happens for Voltas or Pascals
Besides, there are better tools to alert about nightly failures
Pull Request resolved: https://github.com/pytorch/pytorch/pull/164260
Approved by: https://github.com/seemethere , https://github.com/atalman
2025-09-30 20:00:07 +00:00
906fe7b120
[ROCm][CI] no longer build almalinux image for ROCm 6.3 ( #164201 )
...
Missed during ROCm 7 upgrades. We only build N and N-1.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/164201
Approved by: https://github.com/jeffdaily
Co-authored-by: Jeff Daily <jeff.daily@amd.com >
2025-09-30 16:59:31 +00:00
79fcfd49d6
Revert "[CI] Push viable/strict/${time}
tags ( #164183 )"
...
This reverts commit 9f27b0c24515d9cf319d9a728d5009bf9ed035cf.
Reverted https://github.com/pytorch/pytorch/pull/164183 on behalf of https://github.com/malfet due to Hmm, didn't work that way ([comment](https://github.com/pytorch/pytorch/pull/164183#issuecomment-3352494098 ))
2025-09-30 14:32:46 +00:00
9f27b0c245
[CI] Push viable/strict/${time} tags ( #164183 )
...
Every time viable/strict is updated
Pull Request resolved: https://github.com/pytorch/pytorch/pull/164183
Approved by: https://github.com/seemethere
2025-09-30 04:00:22 +00:00
b7419b920d
[ROCm][CI] Upgrade ROCm to 7.0 ( #163140 )
...
Upgrade all the ROCm docker image to ROCm 7.0 release version.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/163140
Approved by: https://github.com/jeffdaily
Co-authored-by: Jeff Daily <jeff.daily@amd.com >
2025-09-30 02:23:26 +00:00
cee4e36f9a
[BE] remove manylinuxcxx11-abi-builder:cpu-cxx11-abi docker image ( #164187 )
...
I believe this image is not used anywhere anymore.
Test:
```
git grep manylinuxcxx11-abi-builder
git grep manylinuxcxx11
```
Both return no results.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/164187
Approved by: https://github.com/izaitsevfb , https://github.com/malfet , https://github.com/seemethere
2025-09-30 00:26:20 +00:00
50d418f69f
Replace setup.py bdist_wheel with python -m build --wheel ( #156712 )
...
We previously replaced most uses of `python setup.py develop/install`.
This PR also replaces the use of `setup.py bdist_wheel` with the modern `python -m build --wheel` alternative.
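For reference, a minimal sketch of the swap; the actual CI scripts likely pass additional flags:
```
# Before: legacy setuptools entry point
#   python setup.py bdist_wheel
# After: PEP 517 build front end (requires the 'build' package)
python -m pip install build
python -m build --wheel
```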
Pull Request resolved: https://github.com/pytorch/pytorch/pull/156712
Approved by: https://github.com/atalman
ghstack dependencies: #156711
2025-09-29 21:51:32 +00:00
349c960970
Use linux.g4dn.4xlarge.nvidia.gpu for cuda 12.4 legacy driver tests ( #163956 )
...
Workaround for https://github.com/pytorch/pytorch/issues/163658
Looks like the workflow passes on 12.8 builds that use linux.g4dn.4xlarge.nvidia.gpu but it's failing on 12.6 builds that use linux.4xlarge.nvidia.gpu: https://github.com/pytorch/pytorch/actions/runs/17953843505/job/51080623612#step:13:470
Pull Request resolved: https://github.com/pytorch/pytorch/pull/163956
Approved by: https://github.com/malfet
Co-authored-by: Mark Saroufim <marksaroufim@meta.com >
2025-09-29 19:38:17 +00:00