5d62b63a76
[BE] Use Python-3.14 GE build ( #165804 )
...
Python 3.14 reached general availability on Oct 7th, 2025, so we can remove all pre-release workarounds.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/165804
Approved by: https://github.com/yangw-dev , https://github.com/Skylion007 , https://github.com/cyyever
2025-10-19 11:45:10 +00:00
9095a9dfae
[CD] Apply the fix from #162455 to aarch64+cu129 build ( #165794 )
...
When trying to bring cu129 back in https://github.com/pytorch/pytorch/pull/163029 , I mainly looked at https://github.com/pytorch/pytorch/pull/163029 and missed another tweak coming from https://github.com/pytorch/pytorch/pull/162455
I discovered this issue when testing aarch64+cu129 builds in https://github.com/pytorch/test-infra/actions/runs/18603342105/job/53046883322?pr=7373 . Surprisingly, there is no test running for aarch64 CUDA builds from what I see in 79a37055e7.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/165794
Approved by: https://github.com/malfet
2025-10-18 04:16:24 +00:00
6ece527fc5
[CI] Add aarch64 operator benchmark ( #165585 )
...
Running on Graviton4
Skip ConvTranspose1d benchmarks if PyTorch is compiled with ACL, due to https://github.com/pytorch/pytorch/issues/165654
Pull Request resolved: https://github.com/pytorch/pytorch/pull/165585
Approved by: https://github.com/huydhn
2025-10-17 14:42:14 +00:00
d82527b32a
[Windows] Add AOTI cross-compilation CI ( #165573 )
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/165573
Approved by: https://github.com/malfet
ghstack dependencies: #165560
2025-10-17 01:05:35 +00:00
d7e275d4b4
[CI][CUDA] Add periodic b200 distributed job ( #159323 )
...
1. Run distributed job with B200 runner, periodically.
2. Discovered a generic distributed test issue: certain unit tests hard-code ranks, calling for a `require_exact_world_size(world_size)` API instead of `require_world_size(world_size)`.
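The distinction called for above can be sketched as follows. This is a hypothetical minimal version for illustration only, not PyTorch's actual test utilities: a test that hard-codes ranks 0..n-1 breaks when run with more processes, so a minimum-size check is not enough and an exact-size guard is needed.

```python
import functools
import unittest


def require_exact_world_size(n):
    """Skip the test unless the world size is exactly n (illustrative sketch).

    A minimum-size check (require_world_size) admits larger worlds, which
    breaks tests that hard-code specific ranks; this guard does not.
    """
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(self, *args, **kwargs):
            if self.world_size != n:
                raise unittest.SkipTest(
                    f"requires world_size == {n}, got {self.world_size}")
            return fn(self, *args, **kwargs)
        return wrapper
    return decorator
```

A test decorated this way runs only when the launcher spawned exactly `n` ranks and is skipped (rather than failed) otherwise.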
Pull Request resolved: https://github.com/pytorch/pytorch/pull/159323
Approved by: https://github.com/eqy
Co-authored-by: Aidyn-A <aidyn.b.aitzhan@gmail.com >
2025-10-16 21:54:04 +00:00
d5db3aee0d
[CI] Use 1-GPU runners for rocm-mi355.yml ( #165658 )
...
Should only need 1-GPU runners for rocm-mi355.yml since it runs `default` test config which only needs 1 GPU
Pull Request resolved: https://github.com/pytorch/pytorch/pull/165658
Approved by: https://github.com/jeffdaily
2025-10-16 21:53:22 +00:00
d795fb225a
[RFC] Add pyrefly to lintrunner ( #165179 )
...
This will add pyrefly to lintrunner as warning-only, allowing us to collect feedback about the tool before switching to pyrefly as the main type checker.
References the steps outlined in https://github.com/pytorch/pytorch/issues/163283
Test plan:
`lintrunner init`
`lintrunner`
confirm when pyrefly errors are present results look like: https://gist.github.com/maggiemoss/e6cb2d015dd1ded560ae1329098cf33f
Pull Request resolved: https://github.com/pytorch/pytorch/pull/165179
Approved by: https://github.com/ezyang
2025-10-16 20:07:09 +00:00
6dedd34c31
[CD] Skip 12.9 build on Windows ( #165665 )
...
Per title
Pull Request resolved: https://github.com/pytorch/pytorch/pull/165665
Approved by: https://github.com/Camyll , https://github.com/malfet
2025-10-16 19:11:27 +00:00
85586d7efc
Make c7i the default for _linux-build.yml ( #164747 )
...
Use linux.c7i.2xlarge as the default runner for the _linux-build.yml workflow. In testing we found that switching from c5 to c7i yields 15-20% faster build times despite c7i costing 5% more. This should reduce costs of jobs using _linux-build.yml.
Relates to pytorch/test-infra#7175 .
Pull Request resolved: https://github.com/pytorch/pytorch/pull/164747
Approved by: https://github.com/atalman
2025-10-16 17:37:51 +00:00
23fb7e9f4b
[CI] Add arch prefix in front of op benchmark results ( #165584 )
...
To be able to run x86 and aarch64 benchmarks later on
Pull Request resolved: https://github.com/pytorch/pytorch/pull/165584
Approved by: https://github.com/huydhn
ghstack dependencies: #165583
2025-10-16 01:50:52 +00:00
c2bd41ac9f
Build vLLM nightly wheels for CUDA 13.0 ( #163239 )
...
Now that https://github.com/vllm-project/vllm/pull/24599 has been merged
Pull Request resolved: https://github.com/pytorch/pytorch/pull/163239
Approved by: https://github.com/malfet , https://github.com/atalman
2025-10-16 01:03:26 +00:00
7e6721fb0a
[BE] Remove confusing opbenchmark-on-demand-build
( #165583 )
...
As it doesn't have a test shard, what's the point of running the build? It was added in https://github.com/pytorch/pytorch/pull/143733 , and it looks like a test shard never existed for it.
Moreover, allow specifying the benchmark size as an argument, so one can technically trigger a workflow dispatch with different opbenchmark sizes.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/165583
Approved by: https://github.com/huydhn
2025-10-15 23:48:28 +00:00
d7e3f493d9
[ROCm][CI] add mi355 to inductor perf test nightly ( #165326 )
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/165326
Approved by: https://github.com/jeffdaily
Co-authored-by: Jeff Daily <jeff.daily@amd.com >
2025-10-14 20:03:21 +00:00
09a4187b8e
Update windows cuda build to use 12.8 ( #165345 )
...
As title
Motivation: The rest of the pytorch and inductor build is using 12.8 and we're deprecating cuda 12.6 builds soon per https://github.com/pytorch/pytorch/issues/165111
Pull Request resolved: https://github.com/pytorch/pytorch/pull/165345
Approved by: https://github.com/atalman , https://github.com/malfet
2025-10-14 13:58:20 +00:00
c5972ebdfb
Revert "Update windows cuda build to use 12.8 ( #165345 )"
...
This reverts commit ca96c675001fa87b9d9c648972415ab8b1591f11.
Reverted https://github.com/pytorch/pytorch/pull/165345 on behalf of https://github.com/pytorch-auto-revert due to Reverted automatically by pytorch's autorevert, to avoid this behaviour add the tag autorevert: disable ([comment](https://github.com/pytorch/pytorch/pull/165345#issuecomment-3400344079 ))
2025-10-14 06:46:33 +00:00
ca96c67500
Update windows cuda build to use 12.8 ( #165345 )
...
As title
Motivation: The rest of the pytorch and inductor build is using 12.8 and we're deprecating cuda 12.6 builds soon per https://github.com/pytorch/pytorch/issues/165111
Pull Request resolved: https://github.com/pytorch/pytorch/pull/165345
Approved by: https://github.com/atalman
2025-10-14 02:33:44 +00:00
a2601630cd
[vllm hash update] update the pinned vllm hash ( #164628 )
...
This PR is auto-generated nightly by [this action](https://github.com/pytorch/pytorch/blob/main/.github/workflows/nightly.yml ).
Update the pinned vllm hash.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/164628
Approved by: https://github.com/pytorchbot
Co-authored-by: Huy Do <huydhn@gmail.com >
2025-10-12 18:26:07 +00:00
79a33e2db2
Switch docs build from c5 to c7i ( #165082 )
...
Switch docs build from c5 to c7i which should increase build
performance by roughly 15-20% while reducing costs by 10-15%.
Signed-off-by: Thanh Ha <thanh.ha@linuxfoundation.org >
2025-10-11 10:59:18 -04:00
4400c5d31e
Continue to build nightly CUDA 12.9 for internal ( #163029 )
...
Revert part of https://github.com/pytorch/pytorch/pull/161916 to continue building CUDA 12.9 nightly
Pull Request resolved: https://github.com/pytorch/pytorch/pull/163029
Approved by: https://github.com/malfet
2025-10-11 08:26:47 +00:00
cafca357fb
Fix h100 daily inductor running dispatch ( #165185 )
...
Caused by merged PR: e7ed1a00eb
The if condition should also be updated.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/165185
Approved by: https://github.com/malfet , https://github.com/huydhn
2025-10-10 21:28:58 +00:00
0ec0120b19
Move aws OIDC credentials steps into setup-rocm.yml ( #164769 )
...
The AWS ECR login step needs `id-token: write` permissions. We move the steps to get OIDC-based credentials from `_rocm-test.yml` to `setup-rocm.yml`. This lays the groundwork to enable access to AWS ECR in workflows in other repos such as torchtitan that use [linux_job_v2.yml](https://github.com/pytorch/test-infra/blob/main/.github/workflows/linux_job_v2.yml ), which also uses [setup-rocm.yml](335f4f80a0/.github/workflows/linux_job_v2.yml (L168)).
Any caller workflows that eventually execute `setup-rocm` action will thus need to provide the `id-token: write` permission.
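A caller workflow granting that permission might look like the following. This is an illustrative sketch only; the job name and trigger are hypothetical, and only the reusable-workflow path is taken from the commit message above:

```yaml
# Hypothetical caller workflow: the id-token permission must be granted
# here so the OIDC-based AWS credentials step inside setup-rocm can run.
name: example-rocm-caller
on: pull_request

permissions:
  id-token: write   # required for OIDC-based AWS ECR credentials
  contents: read

jobs:
  rocm-test:
    uses: pytorch/test-infra/.github/workflows/linux_job_v2.yml@main
```

Without `id-token: write`, the AWS credentials exchange fails even though the rest of the job is unchanged.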
Pull Request resolved: https://github.com/pytorch/pytorch/pull/164769
Approved by: https://github.com/huydhn
2025-10-10 21:24:29 +00:00
370b1c12d2
[CI] Put the no gpu tests on machines that don't have gpus ( #165183 )
...
I think this is just a copy-paste error?
NS: Introduced by https://github.com/pytorch/pytorch/pull/161013
Not sure where it got copied from though; the other set of no-GPU tests for the other CUDA version already has CPU runners.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/165183
Approved by: https://github.com/malfet
2025-10-10 20:59:09 +00:00
6fd1ca28e1
[lint] Run full lint on ciflow/trunk ( #165169 )
...
Add some naming to differentiate between full and partial lint.
If we find that partial always equals full, then we can get rid of it.
https://github.com/pytorch/pytorch/issues/165168
Pull Request resolved: https://github.com/pytorch/pytorch/pull/165169
Approved by: https://github.com/Skylion007 , https://github.com/malfet
2025-10-10 20:38:51 +00:00
7cddda1234
Update asan in slow to linux.2xlarge.memory
...
Followup after f2ae7084eb
2025-10-10 12:02:29 -07:00
f2ae7084eb
[BE] Use linux.2xlarge.memory
for ASAN builds ( #165164 )
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/165164
Approved by: https://github.com/janeyx99
2025-10-10 18:13:42 +00:00
10a9fb641b
Switch build jobs from linux.4xlarge to c7i ( #165057 )
...
Switch build jobs that use linux.4xlarge (c5 instance types) to the c7i variant. This should improve performance by ~15-20% while cutting costs by ~10-15%.
Relates to pytorch/test-infra#7175
Pull Request resolved: https://github.com/pytorch/pytorch/pull/165057
Approved by: https://github.com/huydhn
2025-10-10 15:13:40 +00:00
44b1ff54e9
[CD] Do not propagate download.pytorch.org IP into container ( #165075 )
...
Followup after https://github.com/pytorch/pytorch/pull/164969
Should fix binary build test failures
Pull Request resolved: https://github.com/pytorch/pytorch/pull/165075
Approved by: https://github.com/seemethere , https://github.com/huydhn
ghstack dependencies: #164968 , #164969
2025-10-10 04:27:29 +00:00
daea35df5c
Revert "[CD] Do not propagate download.pytorch.org IP into container ( #165075 )"
...
This reverts commit 6d27a8e5093ee2a21d44dceeeffcb272e6e0f655.
Reverted https://github.com/pytorch/pytorch/pull/165075 on behalf of https://github.com/pytorch-auto-revert due to Reverted automatically by pytorch's autorevert, to avoid this behaviour add the tag autorevert: disable ([comment](https://github.com/pytorch/pytorch/pull/165075#issuecomment-3388228013 ))
2025-10-10 04:20:51 +00:00
6d27a8e509
[CD] Do not propagate download.pytorch.org IP into container ( #165075 )
...
Followup after https://github.com/pytorch/pytorch/pull/164969
Should fix binary build test failures
Pull Request resolved: https://github.com/pytorch/pytorch/pull/165075
Approved by: https://github.com/seemethere , https://github.com/huydhn
ghstack dependencies: #164968 , #164969
2025-10-09 21:59:31 +00:00
e7fd296930
[CI] Add full debug build to trunk ( #164974 )
...
But do not run the tests, just import torch, as a regression test for https://github.com/pytorch/pytorch/issues/164297
Test plan: Re-apply #164974 on top of this change and observe the failure in the workflows: https://github.com/pytorch/pytorch/actions/runs/18383302153/job/52375282838
Pull Request resolved: https://github.com/pytorch/pytorch/pull/164974
Approved by: https://github.com/seemethere , https://github.com/clee2000 , https://github.com/atalman
ghstack dependencies: #164968 , #164969
2025-10-09 20:12:16 +00:00
ee6a1ecb0a
[ROCm] Enable MI355 CI on PRs, and run full set of UTs on PRs ( #160215 )
...
Useful to have PR testing for PRs such as https://github.com/pytorch/pytorch/pull/151360
Pull Request resolved: https://github.com/pytorch/pytorch/pull/160215
Approved by: https://github.com/malfet , https://github.com/atalman
Co-authored-by: Jeff Daily <jeff.daily@amd.com >
2025-10-09 18:03:12 +00:00
b28b24a9fc
Switch build jobs that use linux.12xlarge to c7i ( #164941 )
...
This PR updates build jobs that currently use linux.12xlarge to the
c7i variant, which should improve build times by 15-20% depending
on the job and reduce costs of these jobs by 10-15%.
Signed-off-by: Thanh Ha <thanh.ha@linuxfoundation.org >
2025-10-09 09:58:52 -04:00
a753ffa9af
Revert "Use runner with more memory for ASAN builds ( #165000 )"
...
This reverts commit f5fd18f7e24378bd9eb91404f697f1c81a8187d5.
Reverted https://github.com/pytorch/pytorch/pull/165000 on behalf of https://github.com/izaitsevfb due to not sure how, but this broke lint ([comment](https://github.com/pytorch/pytorch/pull/165000#issuecomment-3384286412 ))
2025-10-09 06:22:28 +00:00
f5fd18f7e2
Use runner with more memory for ASAN builds ( #165000 )
...
An attempt to [address OOM here](aed5ed1076/1).
Pull Request resolved: https://github.com/pytorch/pytorch/pull/165000
Approved by: https://github.com/seemethere , https://github.com/malfet , https://github.com/huydhn
2025-10-09 01:09:28 +00:00
e7ed1a00eb
Run inductor-perf-test-nightly-h100 once per day ( #164967 )
...
To reduce inductor costs, though I'm not sure how much this one matters specifically since h100s are reserved
Pull Request resolved: https://github.com/pytorch/pytorch/pull/164967
Approved by: https://github.com/BoyuanFeng
2025-10-08 20:58:19 +00:00
955f21dc2c
[ROCm][CI] Add support for gfx1100 in rocm workflow + test skips ( #148355 )
...
This PR adds infrastructure support for gfx1100 in the rocm workflow. Nodes have been allocated for this effort.
@dnikolaev-amd contributed all the test skips.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/148355
Approved by: https://github.com/jeffdaily
Co-authored-by: Dmitry Nikolaev <dmitry.nikolaev@amd.com >
Co-authored-by: Jeff Daily <jeff.daily@amd.com >
2025-10-07 22:36:25 +00:00
68350660ee
Increase timeout for nightly macOS performance tests to 300 minutes ( #164793 )
...
The Test step time recently went slightly up.
Hopefully this fixes https://github.com/pytorch/alerting-infra/issues/263
Pull Request resolved: https://github.com/pytorch/pytorch/pull/164793
Approved by: https://github.com/seemethere
2025-10-07 08:44:07 +00:00
1f9614cef8
[ROCm][CI] Change rocm periodic workflow label to linux.rocm.gpu.mi250.4 ( #164616 )
...
Testing done on this PR: https://github.com/pytorch/pytorch/pull/156491
Pull Request resolved: https://github.com/pytorch/pytorch/pull/164616
Approved by: https://github.com/jeffdaily , https://github.com/huydhn
2025-10-06 15:51:07 +00:00
331191ce4b
Revert "[BE] Make PyObjectSlot use a global PyInterpreter ( #162659 )"
...
This reverts commit 29cbcbac4215e0d9070a1b7a07ddaec9a36bbd08.
Reverted https://github.com/pytorch/pytorch/pull/162659 on behalf of https://github.com/izaitsevfb due to reverted internally, see [D83214133](https://www.internalfb.com/diff/D83214133 ) ([comment](https://github.com/pytorch/pytorch/pull/162659#issuecomment-3369348172 ))
2025-10-05 21:39:57 +00:00
412c6d28ec
[ROCm][CI] additional dynamo benchmarks for inductor-periodic ( #164279 )
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/164279
Approved by: https://github.com/jeffdaily
Co-authored-by: Jeff Daily <jeff.daily@amd.com >
2025-10-04 00:55:17 +00:00
fac6f20ae3
[CI] Add another win shard ( #164605 )
...
Since it's timing out (0b4f2b46d9/1).
The first shard is disproportionately long because of cpp tests; I'm trying to figure that out, but for now we can do this or increase the timeout.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/164605
Approved by: https://github.com/seemethere , https://github.com/malfet
2025-10-03 22:51:09 +00:00
8d53d788fe
lint: add .pyi to changed files on .pyi.in changes ( #164603 )
...
We were observing issues where the lint on trunk vs. PRs would be different
due to missing .pyi files. This change adds the .pyi files to the changed files
list when .pyi.in files are changed.
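The mapping described above can be sketched as follows. This is a minimal illustration under the stated assumption that each generated stub is the template path with the `.in` suffix dropped; it is not the actual lint script:

```python
def expand_changed_files(changed):
    """For every changed .pyi.in template, also include the generated .pyi,
    so lint sees the same file set on PRs as on trunk (illustrative sketch)."""
    out = list(changed)
    for path in changed:
        if path.endswith(".pyi.in"):
            generated = path[: -len(".in")]  # foo.pyi.in -> foo.pyi
            if generated not in out:
                out.append(generated)
    return out
```

This keeps non-template paths untouched and only appends a generated stub when its template actually changed.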
Signed-off-by: Eli Uriegas <eliuriegas@meta.com >
Pull Request resolved: https://github.com/pytorch/pytorch/pull/164603
Approved by: https://github.com/atalman , https://github.com/malfet , https://github.com/Skylion007
2025-10-03 21:30:54 +00:00
7eb1eb4313
ci: Removing ROCm tests from trunk. ( #164585 )
...
Had a conversation with the AMD team today and I think we are all in
agreement that the current state of queueing for AMD is beyond where
we'd like to be for there to be blocking CI for ROCm.
Moving the representative testing jobs for this into the ciflow/rocm
workflow.
I'd love for these to be back in trunk if we can get to a state where
our queueing metrics are below an hour for ROCm infrastructure.
Dashboards:
* ROCm Queueing (>60mins) ([link](https://hud.pytorch.org/queue_time_analysis?dateRange=30&startDate=2025-09-03T16%3A06%3A45.025Z&endDate=2025-10-03T16%3A06%3A45.025Z&granularity=week&chartType=bar&repos=pytorch%2Fpytorch&category=machine_type&machineTypes=linux.rocm.gpu.2&machineTypes=linux.rocm.gpu.4&machineTypes=linux.rocm.gpu.mi250&machineTypes=linux.rocm.gpu.gfx942.1&machineTypes=linux.rocm.gpu.mi250.4&machineTypes=linux.rocm.gpu.gfx942.4&machineTypes=linux.rocm.gpu.mi355.2&machineTypes=linux.rocm.gpu.gfx942.4.test&machineTypes=linux.rocm.gpu.mi250.1&machineTypes=linux.rocm.gpu.gfx942.1.test&machineTypes=linux.rocm.gpu.gfx90a.1&machineTypes=linux.rocm.gpu.gfx90a.4&items=linux.rocm.gpu.2&items=linux.rocm.gpu.4&items=linux.rocm.gpu.mi250&items=linux.rocm.gpu.gfx942.1&items=linux.rocm.gpu.mi250.4&items=linux.rocm.gpu.gfx942.4&items=linux.rocm.gpu.mi355.2&items=linux.rocm.gpu.gfx942.4.test&items=linux.rocm.gpu.mi250.1&items=linux.rocm.gpu.gfx942.1.test&items=linux.rocm.gpu.gfx90a.1&items=linux.rocm.gpu.gfx90a.4 ))
* NVIDIA queueing (<5mins) ([link](https://hud.pytorch.org/queue_time_analysis?dateRange=30&startDate=2025-09-03T16%3A05%3A08.000Z&endDate=2025-10-03T16%3A05%3A08.000Z&granularity=week&chartType=bar&repos=pytorch%2Fpytorch&category=machine_type&machineTypes=lf.linux.g4dn.4xlarge.nvidia.gpu&machineTypes=linux.g4dn.12xlarge.nvidia.gpu&machineTypes=linux.g4dn.metal.nvidia.gpu&machineTypes=linux.g5.4xlarge.nvidia.gpu&machineTypes=lf.linux.g4dn.12xlarge.nvidia.gpu&machineTypes=lf.linux.g5.12xlarge.nvidia.gpu&machineTypes=lf.linux.g5.4xlarge.nvidia.gpu&machineTypes=lf.linux.g6.4xlarge.experimental.nvidia.gpu&machineTypes=linux.g6.4xlarge.experimental.nvidia.gpu&machineTypes=linux.4xlarge.nvidia.gpu&machineTypes=linux.g5.12xlarge.nvidia.gpu&machineTypes=linux.g4dn.4xlarge.nvidia.gpu&machineTypes=lf.linux.4xlarge.nvidia.gpu&machineTypes=linux.g6.12xlarge.nvidia.gpu&items=lf.linux.g4dn.4xlarge.nvidia.gpu&items=linux.g4dn.12xlarge.nvidia.gpu&items=linux.g4dn.metal.nvidia.gpu&items=linux.g5.4xlarge.nvidia.gpu&items=lf.linux.g4dn.12xlarge.nvidia.gpu&items=lf.linux.g5.12xlarge.nvidia.gpu&items=lf.linux.g5.4xlarge.nvidia.gpu&items=lf.linux.g6.4xlarge.experimental.nvidia.gpu&items=linux.g6.4xlarge.experimental.nvidia.gpu&items=linux.4xlarge.nvidia.gpu&items=linux.g5.12xlarge.nvidia.gpu&items=linux.g4dn.4xlarge.nvidia.gpu&items=lf.linux.4xlarge.nvidia.gpu&items=linux.g6.12xlarge.nvidia.gpu ))
Signed-off-by: Eli Uriegas <eliuriegas@meta.com >
Pull Request resolved: https://github.com/pytorch/pytorch/pull/164585
Approved by: https://github.com/malfet , https://github.com/yangw-dev , https://github.com/atalman , https://github.com/jeffdaily
2025-10-03 18:19:24 +00:00
ddf8de28c2
Add Rocm to Operator Microbenchmark CI ( #164173 )
...
This pull request adds support for running operator microbenchmarks on ROCm (AMD GPU) environments in the CI workflow. The main changes involve introducing new build and test jobs for ROCm in the `.github/workflows/operator_microbenchmark.yml` file.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/164173
Approved by: https://github.com/huydhn
2025-10-03 07:35:32 +00:00
95a053284c
Fix vllm build issue ( #164361 )
...
unstable https://github.com/pytorch/pytorch/issues/164362
Pull Request resolved: https://github.com/pytorch/pytorch/pull/164361
Approved by: https://github.com/huydhn
Co-authored-by: Huy Do <huydhn@gmail.com >
2025-10-02 23:34:21 +00:00
1a5d023a5b
Add B200 to Operator Microbenchmark CI ( #164288 )
...
Add B200 to operator microbenchmarks nightly run
Pull Request resolved: https://github.com/pytorch/pytorch/pull/164288
Approved by: https://github.com/huydhn
2025-10-01 23:56:34 +00:00
773c6762b8
[CD][CUDA13][NCCL] Fix nccl version typo for cu13 ( #164383 )
...
https://pypi.org/project/nvidia-nccl-cu13/#history does not have 2.27.5 but 2.27.7+.
Companion PR: https://github.com/pytorch/pytorch/pull/164352
Fixes a potential binary breakage due to non-existence of referenced NCCL cu13 version.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/164383
Approved by: https://github.com/tinglvv , https://github.com/Skylion007 , https://github.com/atalman
2025-10-01 21:32:25 +00:00
f63d16c6a9
Make viable/strict updatable again ( #164374 )
...
To allow viable/strict to move forward, after https://github.com/pytorch/pytorch/pull/164260 was landed
Pull Request resolved: https://github.com/pytorch/pytorch/pull/164374
Approved by: https://github.com/seemethere
2025-10-01 18:09:07 +00:00
2610746375
Revert nccl upgrade back to 2.27.5 ( #164352 )
...
Revert https://github.com/pytorch/pytorch/pull/162351 as it breaks H100
Pull Request resolved: https://github.com/pytorch/pytorch/pull/164352
Approved by: https://github.com/atalman , https://github.com/malfet
2025-10-01 15:27:40 +00:00
9ddfc59b9b
[BE] Delete stale non-ephemeral runners workarounds ( #164285 )
...
As all Win runners are ephemeral, no need to cleanup leftover processes
or uninstall PyTorch at the end of the test
Pull Request resolved: https://github.com/pytorch/pytorch/pull/164285
Approved by: https://github.com/Skylion007
2025-10-01 03:47:36 +00:00