Compare commits


228 Commits

Author SHA1 Message Date
57f9e88fbc test 2025-08-19 17:29:55 -07:00
f782c790df migrate more simple gso checks (#160253)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/160253
Approved by: https://github.com/bobrenjc93
2025-08-16 00:15:24 +00:00
16ce2c15fa Add python 3.14 support to linux aarch64 builds (#160788)
Related to https://github.com/pytorch/pytorch/issues/156856
Pull Request resolved: https://github.com/pytorch/pytorch/pull/160788
Approved by: https://github.com/malfet
2025-08-16 00:03:21 +00:00
0d28d12b11 Fix typo packing libnvshmem into libtorch (#160778)
Fix typo after https://github.com/pytorch/pytorch/pull/160465
Fixes: https://github.com/pytorch/pytorch/issues/160762

Pull Request resolved: https://github.com/pytorch/pytorch/pull/160778
Approved by: https://github.com/Camyll, https://github.com/malfet, https://github.com/ZainRizvi, https://github.com/Skylion007
2025-08-15 23:43:02 +00:00
838f22c57d Do not incorrectly chain each of the strings as iterables (#160709)
Signed-off-by: Edward Yang <ezyang@meta.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/160709
Approved by: https://github.com/Skylion007, https://github.com/fduwjj
2025-08-15 23:22:24 +00:00
eqy
387fe847ab [cuDNN][SDPA] Introduce TORCH_CUDNN_SDPA_AVOID_RECOMPILE=1 (#155958)
Opt-in for now, but basically uses the variable-sequence length/ragged path for the common case of BSHD layout to avoid recompiling for different sequence lengths.

Built on top of #149282

Tested using a primitive fuzzer; it seems at least as stable as the default path (with recompilation) on B200 (50,000+ cases tested without any failures).
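A minimal usage sketch of the opt-in (the shapes and the explicit backend selection below are illustrative assumptions, not taken from this PR):

```python
# Opt in via the env var before running SDPA with the cuDNN backend.
import os
os.environ["TORCH_CUDNN_SDPA_AVOID_RECOMPILE"] = "1"

import torch
import torch.nn.functional as F
from torch.nn.attention import SDPBackend, sdpa_kernel

q = torch.randn(2, 8, 128, 64, device="cuda", dtype=torch.float16)  # (batch, heads, seq, head_dim)
k, v = torch.randn_like(q), torch.randn_like(q)
with sdpa_kernel(SDPBackend.CUDNN_ATTENTION):
    out = F.scaled_dot_product_attention(q, k, v)  # varying seq lengths should reuse the same graph
```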

Pull Request resolved: https://github.com/pytorch/pytorch/pull/155958
Approved by: https://github.com/drisspg
2025-08-15 21:59:18 +00:00
40311e2ec1 [AOTInductor] ABI-Compatibility for RecordFunction. (#159842)
Summary:
Previously, our implementation for RecordFunction injected ATen calls into
codegen, which broke the ABI contract for AOTInductor.

c10::IValue is added to call the full record function. The extension of
more profiling info will come in later PRs.

Test Plan:
Included in commit.

Differential Revision: [D79622071](https://our.internmc.facebook.com/intern/diff/D79622071)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/159842
Approved by: https://github.com/desertfire
2025-08-15 21:45:47 +00:00
8ca8b6053c [inductor][while_loop][be] improve the readability of output handling (#160374)
The logic doesn't change, but this makes it easier to read and modify.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/160374
Approved by: https://github.com/zou3519
ghstack dependencies: #160548
2025-08-15 20:13:12 +00:00
ff86509a06 [map] filter none gradients and add autograd inductor tests (#160548)
Will filter the none outputs in autograd backward for other hops as follow ups

Pull Request resolved: https://github.com/pytorch/pytorch/pull/160548
Approved by: https://github.com/zou3519
2025-08-15 20:13:12 +00:00
fa75ba9303 Change IR node's stack traces to return a set of stack traces only (#160701)
Summary: There can be excessive stack trace output in TORCH_LOGS="+inductor" when a single line of code corresponds to many post_grad nodes, e.g. `self.multihead_attn(x, x, x)`; in that case, we see the same stack trace many times in the IR node, spamming the output log. So we change it to return a set of stack traces.

Test Plan:
CI

Differential Revision: D80310549

Pull Request resolved: https://github.com/pytorch/pytorch/pull/160701
Approved by: https://github.com/angelayi
2025-08-15 19:31:59 +00:00
b78968b4d1 Support next(iterator, default) (#159483)
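A minimal sketch of the newly supported builtin pattern (whether this exact snippet compiles fullgraph without graph breaks is an assumption):

```python
import torch

@torch.compile(fullgraph=True)
def f(x):
    it = iter([1, 2, 3])
    a = next(it, 0)         # returns 1
    b = next(iter([]), 0)   # exhausted iterator falls back to the default, 0
    return x + a + b

print(f(torch.zeros(())))   # tensor(1.)
```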
Pull Request resolved: https://github.com/pytorch/pytorch/pull/159483
Approved by: https://github.com/mlazos
ghstack dependencies: #159365, #159366, #159368
2025-08-15 19:08:21 +00:00
e5621b4d8b Fixes for collections.Counter (#159368)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/159368
Approved by: https://github.com/mlazos
ghstack dependencies: #159365, #159366
2025-08-15 19:08:21 +00:00
2542e71f3f Change mutation type of MutableMappingVariable to AttributeMutationNew (#159366)
Also add MutableMappingVariable to `call_or_` / `call_ior`

Pull Request resolved: https://github.com/pytorch/pytorch/pull/159366
Approved by: https://github.com/zou3519
ghstack dependencies: #159365
2025-08-15 19:08:21 +00:00
0242d40fa5 Enable trace through the collections module (#159365)
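A hedged sketch of tracing through the collections module inside a compiled function (whether this exact snippet avoids graph breaks is an assumption):

```python
import collections

import torch

@torch.compile(fullgraph=True)
def f(x, words):
    counts = collections.Counter(words)   # traced through rather than breaking the graph
    return x * counts["a"]

print(f(torch.ones(2), ["a", "b", "a"]))  # tensor([2., 2.])
```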
Pull Request resolved: https://github.com/pytorch/pytorch/pull/159365
Approved by: https://github.com/zou3519
2025-08-15 19:08:21 +00:00
17de899709 Add py3.14 to macos arm64 (#160593)
Related to https://github.com/pytorch/pytorch/issues/156856

Pull Request resolved: https://github.com/pytorch/pytorch/pull/160593
Approved by: https://github.com/malfet, https://github.com/Skylion007
2025-08-15 18:52:10 +00:00
25d0d8b0a3 [inductor] Fix propagating torch.utils._sympy.functions.Identity in IndexPropagation (#155504)
Fixes https://github.com/pytorch/pytorch/issues/160535

The index may contain `torch.utils._sympy.functions.Identity`. When we call `SymPyOps.index_expr`, if the value is a sympy.Expr containing Identity, `TypedExpr(value, dtype)` will fail. So when we unwrap arguments, we expand the sympy expression to unwrap Identity.

Test Plan:
buck run @mode/dev-nosan //caffe2/test/inductor:test_aot_inductor -- -r test_sym_expr_indexing

Differential Revision: D76308640

Pull Request resolved: https://github.com/pytorch/pytorch/pull/155504
Approved by: https://github.com/eellison
2025-08-15 18:38:23 +00:00
c6d697ff52 port 2 distributed pipeline test files for Intel GPU (#159140)
This is another PR to port distributed pipeline tests to Intel GPU; the other PR is https://github.com/pytorch/pytorch/pull/159033.
In this PR, we port two test files to Intel GPU.
We enable Intel GPU with the following methods while trying our best to keep the original code style:

1. instantiate_device_type_tests()
2. skip the case on XPU due to an accuracy gap introduced by oneDNN non-determinism

Pull Request resolved: https://github.com/pytorch/pytorch/pull/159140
Approved by: https://github.com/guangyey, https://github.com/d4l3k, https://github.com/H-Huang
2025-08-15 18:29:50 +00:00
30d2f98daa Revert "[cutlass backend] re-add pip cutlass path (#160180)"
This reverts commit d556586448f3caab85673c7da0978fe31c7748f7.

Reverted https://github.com/pytorch/pytorch/pull/160180 on behalf of https://github.com/atalman due to broke macos nightly ([comment](https://github.com/pytorch/pytorch/pull/160180#issuecomment-3192311552))
2025-08-15 18:00:41 +00:00
8780d28c65 raise exception in case of errors in memory reordering (#160455)
This PR introduces two checks in the memory reordering pass to catch graph issues before performing the reordering. For situations not covered by these checks, the reordering pass might still fail, and an exception will be thrown in that case.

This addresses issue -- https://github.com/pytorch/pytorch/issues/159568

Pull Request resolved: https://github.com/pytorch/pytorch/pull/160455
Approved by: https://github.com/eellison
2025-08-15 17:31:55 +00:00
da8f48d88f [associative_scan] support gen_schema for associative_scan (#158883)
In-place mutation may create inter-loop dependencies that break the parallelism we rely on for associative_scan, so we ban input mutations.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/158883
Approved by: https://github.com/zou3519
ghstack dependencies: #154193, #158965, #158863, #158864
2025-08-15 17:28:44 +00:00
cb9e2092a8 [scan] support gen_schema for scan (#158864)
We don't want to allow scan's combine_fn to mutate its inputs. The semantics of such a mutation can be confusing. For example:
```python
def combine_fn(init, x):
```
If combine_fn mutates init, only the first iteration mutates init; the remaining iterations mutate the previous carry, which is an intermediate result. This is a weird semantic because the only observable mutation is to init, which could just as well be done outside of combine_fn.

If combine_fn mutates x, where x is a slice of scanned inputs (i.e. xs), this pattern is more meaningful but we've not seen any use case yet.
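A small eager-mode sketch (not using the scan op itself) of why mutating the carry is a confusing semantic:

```python
import torch

def combine_fn(carry, x):
    carry.add_(x)        # in-place mutation of whatever carry tensor is passed in
    return carry + 0     # the next carry is a fresh tensor (an intermediate result)

init = torch.zeros(3)
xs = torch.ones(4, 3)

carry = init
for x in xs:
    carry = combine_fn(carry, x)

# Only the first call mutated `init`; the other three calls mutated intermediate carries.
print(init)    # tensor([1., 1., 1.])
print(carry)   # tensor([4., 4., 4.])
```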
Pull Request resolved: https://github.com/pytorch/pytorch/pull/158864
Approved by: https://github.com/zou3519
ghstack dependencies: #154193, #158965, #158863
2025-08-15 17:28:44 +00:00
f6bf1573fc [while_loop] support gen_schema for while_loop (#158863)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/158863
Approved by: https://github.com/zou3519
ghstack dependencies: #154193, #158965
2025-08-15 17:28:34 +00:00
82a18423be [BE] create an empty shape_env for check_input_alias_and_mutation_return_outputs (#158965)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/158965
Approved by: https://github.com/zou3519
ghstack dependencies: #154193
2025-08-15 17:28:20 +00:00
3fe3c23d4e [cond] support gen_schema for cond (#154193)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/154193
Approved by: https://github.com/zou3519
2025-08-15 17:28:13 +00:00
052c441cf4 Add logging for when inbuilt_inline_nn_modules will help with ID_MATCH guard triggered recompiles (#160592)
We add logging for when an ID_MATCH guard is added at a place where inbuilt_inline_nn_modules would inline it. This is done with the aim of tagging recompiles that could be avoided by setting the inbuilt_inline_nn_modules flag.
It will help us track the flag's adoption and potentially quantify the savings in the number of recompiles.

Differential Revision: D80075975

Pull Request resolved: https://github.com/pytorch/pytorch/pull/160592
Approved by: https://github.com/anijain2305
2025-08-15 17:09:39 +00:00
b26d2a9464 [ez] Make NUMA signpost parameters JSON serializable (#160710)
# Context
Broader context in #160163.

In order for the _utils_internal version of signpost_event to do proper logging, its parameters argument needs to be JSON serializable.

# This PR
Convert `NumaOptions` to serializable form before inputting to `signpost_event`.

# Test Plan
## Automated
Added tests `$ pytest test/test_numa_binding.py`.

## Manual
See [D80317206](https://www.internalfb.com/diff/D80317206).

Pull Request resolved: https://github.com/pytorch/pytorch/pull/160710
Approved by: https://github.com/kiukchung
2025-08-15 16:52:43 +00:00
6382302990 [MPS] Add grid_sampler_3d for MPS (#160541)
This PR adds support for `grid_sampler_3d` for MPS with "bilinear" interpolation.

NOTE: "nearest" interpolation is not yet supported

Fixes #159882
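A minimal usage sketch (assumes a machine with MPS available; shapes are illustrative):

```python
import torch
import torch.nn.functional as F

device = "mps" if torch.backends.mps.is_available() else "cpu"
x = torch.randn(1, 2, 4, 4, 4, device=device)             # N, C, D, H, W
grid = torch.rand(1, 4, 4, 4, 3, device=device) * 2 - 1   # N, D_out, H_out, W_out, 3 in [-1, 1]
out = F.grid_sample(x, grid, mode="bilinear", align_corners=False)  # 5D input dispatches to grid_sampler_3d
print(out.shape)   # torch.Size([1, 2, 4, 4, 4])
```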
Pull Request resolved: https://github.com/pytorch/pytorch/pull/160541
Approved by: https://github.com/malfet
2025-08-15 16:19:25 +00:00
80dd05e31e Disable flaky cpp test RecordDebugHandles.Basic (#160577)
Test is flaky and sometimes hangs in CI

Here's an example of the failure:
https://github.com/pytorch/pytorch/actions/runs/16946153494/job/48027937663
```

2025-08-13T20:54:00.1223688Z ==================================== RERUNS ====================================
2025-08-13T20:54:00.1224156Z ___________________________ RecordDebugHandles.Basic ___________________________
2025-08-13T20:54:00.1224682Z [gw2] linux -- Python 3.13.5 /opt/conda/envs/py_3.13/bin/python3.13
2025-08-13T20:54:00.1225568Z Internal Error: calling /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/bin/test_jit for test RecordDebugHandles.Basic failed (returncode=-6):
2025-08-13T20:54:00.1226430Z CUDA not available. Disabling CUDA and MultiCUDA tests
2025-08-13T20:54:00.1226988Z Note: Google Test filter = RecordDebugHandles.Basic-*_CUDA:*_MultiCUDA
2025-08-13T20:54:00.1227450Z [==========] Running 1 test from 1 test suite.
2025-08-13T20:54:00.1227792Z [----------] Global test environment set-up.
2025-08-13T20:54:00.1228145Z [----------] 1 test from RecordDebugHandles
2025-08-13T20:54:00.1228492Z [ RUN      ] RecordDebugHandles.Basic
2025-08-13T20:54:00.1228822Z [       OK ] RecordDebugHandles.Basic (1 ms)
2025-08-13T20:54:00.1229204Z [----------] 1 test from RecordDebugHandles (1 ms total)
2025-08-13T20:54:00.1229501Z
2025-08-13T20:54:00.1229666Z [----------] Global test environment tear-down
2025-08-13T20:54:00.1230033Z [==========] 1 test from 1 test suite ran. (1 ms total)
2025-08-13T20:54:00.1230355Z [  PASSED  ] 1 test.
2025-08-13T20:54:00.1230727Z terminate called after throwing an instance of 'std::system_error'
2025-08-13T20:54:00.1231154Z   what():  Invalid argument
2025-08-13T20:54:00.1231416Z unknown file:0: C++ failure
2025-08-13T20:54:00.1231788Z ------------------------------ Captured c++ call -------------------------------
2025-08-13T20:54:00.1232262Z CUDA not available. Disabling CUDA and MultiCUDA tests
2025-08-13T20:54:00.1232745Z Note: Google Test filter = RecordDebugHandles.Basic-*_CUDA:*_MultiCUDA
2025-08-13T20:54:00.1233199Z [==========] Running 1 test from 1 test suite.
2025-08-13T20:54:00.1233557Z [----------] Global test environment set-up.
2025-08-13T20:54:00.1233915Z [----------] 1 test from RecordDebugHandles
2025-08-13T20:54:00.1234247Z [ RUN      ] RecordDebugHandles.Basic
2025-08-13T20:54:00.1234590Z [       OK ] RecordDebugHandles.Basic (1 ms)
2025-08-13T20:54:00.1235020Z [----------] 1 test from RecordDebugHandles (1 ms total)
2025-08-13T20:54:00.1235304Z
2025-08-13T20:54:00.1235431Z [----------] Global test environment tear-down
2025-08-13T20:54:00.1235793Z [==========] 1 test from 1 test suite ran. (1 ms total)
2025-08-13T20:54:00.1236126Z [  PASSED  ] 1 test.
2025-08-13T20:54:00.1236481Z terminate called after throwing an instance of 'std::system_error'
2025-08-13T20:54:00.1236906Z   what():  Invalid argument
2025-08-13T20:54:00.1237287Z ___________________________ RecordDebugHandles.Basic ___________________________
2025-08-13T20:54:00.1237800Z [gw2] linux -- Python 3.13.5 /opt/conda/envs/py_3.13/bin/python3.13
2025-08-13T20:54:00.1238686Z Internal Error: calling /opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/bin/test_jit for test RecordDebugHandles.Basic failed (returncode=-6):
2025-08-13T20:54:00.1239551Z CUDA not available. Disabling CUDA and MultiCUDA tests
2025-08-13T20:54:00.1240048Z Note: Google Test filter = RecordDebugHandles.Basic-*_CUDA:*_MultiCUDA
2025-08-13T20:54:00.1240495Z [==========] Running 1 test from 1 test suite.
2025-08-13T20:54:00.1240848Z [----------] Global test environment set-up.
2025-08-13T20:54:00.1241199Z [----------] 1 test from RecordDebugHandles
2025-08-13T20:54:00.1241542Z [ RUN      ] RecordDebugHandles.Basic
2025-08-13T20:54:00.1241871Z [       OK ] RecordDebugHandles.Basic (1 ms)
2025-08-13T20:54:00.1242249Z [----------] 1 test from RecordDebugHandles (1 ms total)
2025-08-13T20:54:00.1242503Z
2025-08-13T20:54:00.1242641Z [----------] Global test environment tear-down
2025-08-13T20:54:00.1242993Z [==========] 1 test from 1 test suite ran. (19 ms total)
2025-08-13T20:54:00.1243329Z [  PASSED  ] 1 test.
2025-08-13T20:54:00.1243697Z terminate called after throwing an instance of 'std::system_error'
2025-08-13T20:54:00.1244113Z   what():  Invalid argument
2025-08-13T20:54:00.1244392Z unknown file:0: C++ failure
2025-08-13T20:54:00.1244759Z ------------------------------ Captured c++ call -------------------------------
2025-08-13T20:54:00.1245235Z CUDA not available. Disabling CUDA and MultiCUDA tests
2025-08-13T20:54:00.1283768Z ============== 1 failed, 568 passed, 2 rerun in 115.57s (0:01:55) ==============
```

Here's an example of the hang:
https://github.com/pytorch/pytorch/actions/runs/16942186826/job/48015238944
Logs aren't super helpful other than stating that it took a long time.  Usually this file takes <2min to run
```
2025-08-13T18:43:24.6586481Z [gw0] [ 97%] PASSED [1.4119s] ../../../../../opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/bin/test_jit::PyTorch/LiteInterpreterDynamicTypeTestFixture::Conformance/8
2025-08-13T18:43:24.6587278Z [gw1] [ 97%] PASSED [1.4866s] ../../../../../opt/conda/envs/py_3.13/lib/python3.13/site-packages/torch/bin/test_jit::PyTorch/LiteInterpreterDynamicTypeTestFixture::Conformance/9 Command took >30min, returning 124
2025-08-13T18:43:24.6587288Z
2025-08-13T18:43:24.6587632Z FINISHED PRINTING LOG FILE of cpp/test_jit 1/1 (test/test-reports/cpp.test_jit_1.1_c259e5a152845991_.log)
2025-08-13T18:43:24.6587639Z
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/160577
Approved by: https://github.com/huydhn
2025-08-15 15:59:21 +00:00
9df07ecfbe Revert "[inductor] dont reuse buffers if it affects peak (#145883) (#159530)"
This reverts commit 3be70dc30e893b552fc0f23ca06cd8f7949b6d08.

Reverted https://github.com/pytorch/pytorch/pull/159530 on behalf of https://github.com/clee2000 due to newly added test fail internally D80316528, probably just a targets change, but also imo the tests should probably go into a testcase class from common or inductor utils.  While I'm pretty sure CI can run the globally defined ones, theres some CI related functionality that on the testcase class that CI benefits from ([comment](https://github.com/pytorch/pytorch/pull/159530#issuecomment-3191947506))
2025-08-15 15:49:04 +00:00
846963fa9b Revert "[Inductor] addmm + activation function fusion (#158137)"
This reverts commit b9d7de3a094598c3dc0dd52e57bce30eb684c9d8.

Reverted https://github.com/pytorch/pytorch/pull/158137 on behalf of https://github.com/malfet due to Broke inductor torchbench, see 663da17b62/1 ([comment](https://github.com/pytorch/pytorch/pull/158137#issuecomment-3191841298))
2025-08-15 15:34:09 +00:00
663da17b62 Update torch-xpu-ops commit pin (#160062)
Update the torch-xpu-ops commit to [77cc792cd265179745d335579d233e6d4f9a2667](77cc792cd2), which includes:

- Ensure that the XPU cache is cleared before creating tensors during the test
- Add unused-variable warning
- Fix test_linalg and test_torch issues with bf32_on_and_off updates
- Fix deterministic indexing with broadcast
- Fix dist.gather with noncontiguous tensor
- Improve accuracy of the index_put deterministic kernel
- Add a generated-file dependency to avoid building before generation
- Optimize embedding bag

Fixes #160661

Pull Request resolved: https://github.com/pytorch/pytorch/pull/160062
Approved by: https://github.com/EikanWang
2025-08-15 15:27:24 +00:00
e299926f72 [ONNX] Fix doc typo for symbolic_multi_out (#160702)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/160702
Approved by: https://github.com/justinchuby
2025-08-15 14:34:42 +00:00
bbd11c4f23 Uninstall torchao on MPS benchmark (#160724)
Fixes https://github.com/pytorch/pytorch/issues/160689

The current torchao 0.12.0 doesn't work with transformers 4.54.0 and ends up with this error:

```
  File "/Users/ec2-user/runner/_work/_temp/venv-3.12-1755212960/lib/python3.12/site-packages/transformers/models/albert/modeling_albert.py", line 37, in <module>
    from ...modeling_utils import PreTrainedModel
  File "/Users/ec2-user/runner/_work/_temp/venv-3.12-1755212960/lib/python3.12/site-packages/transformers/modeling_utils.py", line 51, in <module>
    from torchao.quantization import Int4WeightOnlyConfig
  File "/Users/ec2-user/runner/_work/_temp/venv-3.12-1755212960/lib/python3.12/site-packages/torchao/__init__.py", line 41, in <module>
    from torchao.quantization import (
  File "/Users/ec2-user/runner/_work/_temp/venv-3.12-1755212960/lib/python3.12/site-packages/torchao/quantization/__init__.py", line 6, in <module>
    from .autoquant import (
  File "/Users/ec2-user/runner/_work/_temp/venv-3.12-1755212960/lib/python3.12/site-packages/torchao/quantization/autoquant.py", line 11, in <module>
    from torchao.dtypes import (
  File "/Users/ec2-user/runner/_work/_temp/venv-3.12-1755212960/lib/python3.12/site-packages/torchao/dtypes/__init__.py", line 1, in <module>
    from . import affine_quantized_tensor_ops
  File "/Users/ec2-user/runner/_work/_temp/venv-3.12-1755212960/lib/python3.12/site-packages/torchao/dtypes/affine_quantized_tensor_ops.py", line 38, in <module>
    from torchao.dtypes.uintx.dyn_int8_act_int4_wei_cpu_layout import (
  File "/Users/ec2-user/runner/_work/_temp/venv-3.12-1755212960/lib/python3.12/site-packages/torchao/dtypes/uintx/__init__.py", line 7, in <module>
    from .dyn_int8_act_int4_wei_cpu_layout import (
  File "/Users/ec2-user/runner/_work/_temp/venv-3.12-1755212960/lib/python3.12/site-packages/torchao/dtypes/uintx/dyn_int8_act_int4_wei_cpu_layout.py", line 320, in <module>
    from ...prototype.inductor.fx_passes import register_da8w4_concat_linear_cpu_pass
  File "/Users/ec2-user/runner/_work/_temp/venv-3.12-1755212960/lib/python3.12/site-packages/torchao/prototype/inductor/fx_passes/__init__.py", line 2, in <module>
    from .int8_sdpa_fusion import _int8_sdpa_init
  File "/Users/ec2-user/runner/_work/_temp/venv-3.12-1755212960/lib/python3.12/site-packages/torchao/prototype/inductor/fx_passes/int8_sdpa_fusion.py", line 22, in <module>
    from ..int8_sdpa_lowering import register_int8_sdpa  # noqa: F401
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/ec2-user/runner/_work/_temp/venv-3.12-1755212960/lib/python3.12/site-packages/torchao/prototype/inductor/int8_sdpa_lowering.py", line 6, in <module>
    from torch._inductor.kernel.flex_attention import construct_strides, maybe_realize
ModuleNotFoundError: No module named 'torch._inductor.kernel.flex_attention'
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/160724
Approved by: https://github.com/malfet
2025-08-15 13:55:39 +00:00
eaa5d9d3d3 Introduce OpInfo test for testing export on fake device (#160694)
Summary: Prepare for the upcoming diffs for exporting on a fake CUDA device.

Test Plan:
test

Differential Revision: D80304225

Pull Request resolved: https://github.com/pytorch/pytorch/pull/160694
Approved by: https://github.com/dolpm
2025-08-15 07:26:28 +00:00
a7c75ae976 [dde] use sym_or when checking normalized shape in layer_norm (#160683)
Use `sym_eq` to check equality on tuple of ints/symints

### DDE
```
torch._dynamo.exc.UserError: Could not guard on data-dependent expression Eq(u0, u1) (unhinted: Eq(u0, u1)).  (Size-like symbols: u1, u0)

Caused by: return torch.nn.functional.layer_norm(  # test/inductor/test_unbacked_symints.py:527 in fn (_refs/__init__.py:3292 in native_layer_norm)
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/160683
Approved by: https://github.com/bobrenjc93
2025-08-15 06:56:00 +00:00
f7ad69f59c [dynamic shapes] handle Max(*,1) for inductor layout contiguity (#160578)
Differential Revision: D80214882

Pull Request resolved: https://github.com/pytorch/pytorch/pull/160578
Approved by: https://github.com/ZixinYang, https://github.com/bobrenjc93
2025-08-15 06:10:18 +00:00
4cae9cf2df Update triton xpu commit to support python 3.14 (#160183)
Follow PR #159725
Pull Request resolved: https://github.com/pytorch/pytorch/pull/160183
Approved by: https://github.com/EikanWang, https://github.com/atalman
2025-08-15 05:41:17 +00:00
7710800865 [3/3][ghstack][vllm ci build setup]vllm build workflow (#160116)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/160116
Approved by: https://github.com/huydhn
2025-08-15 05:35:46 +00:00
aa99e0958f Separate provenance tracking to different levels (#160383)
Summary: as title. We've received requests from various parties who are interested in turning on provenance tracking by default. In this PR, we prepare to turn on, by default, the parts of provenance tracking that don't have too much overhead.

- Change the `provenance_tracking` config to `provenance_tracking_level`
- Turn on the following provenance tracking by default when `basic_provenance_tracking`=True
    - `set_kernel_post_grad_provenance_tracing` for kernels; this adds a mapping between Triton kernels and post_grad nodes
    - `dump_inductor_provenance_info` if we're dumping the tlparse log
    - `get_graph_provenance_json` and dump `create_mapping_pre_post_grad_nodes`. This creates a mapping between pre_grad and post_grad nodes. Since we're not turning on provenance tracking in GraphTransformObserver by default, the mapping here may be incomplete/limited.
    - Add stack traces from post_grad nodes to inductor IR nodes
    - Add exception swallowing for all of the functions above

Test Plan:
CI

Differential Revision: D80031559

Pull Request resolved: https://github.com/pytorch/pytorch/pull/160383
Approved by: https://github.com/angelayi
2025-08-15 04:59:35 +00:00
3fc7a95176 [audio hash update] update the pinned audio hash (#160485)
This PR is auto-generated nightly by [this action](https://github.com/pytorch/pytorch/blob/main/.github/workflows/nightly.yml).
Update the pinned audio hash.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/160485
Approved by: https://github.com/pytorchbot
2025-08-15 04:27:49 +00:00
858fb80b9b [PT2]: Add Static Dispatch Kernel for wrapped_fbgemm_linear_fp16_weight (#160451)
Summary: Add static dispatch kernel for wrapped_fbgemm_linear_fp16_weight. This optimization should improve perf for all Ads DSNN models using Sigmoid.

Test Plan:
```
MODEL_TYPE=dpa_product_first_ctr_model
MODEL_ENTITY_ID=892669089
SNAPSHOT_ID=37
OTHER_MODEL_ENTITY_ID=892669089
OTHER_SNAPSHOT_ID=36

MODULES=(mix prepare_float_features object user)
SUFFIXES=(.predictor.local .predictor.precompute.prepare_float_features .predictor.precompute.remote_object_only .predictor.precompute.remote_request_only)

for i in "${!MODULES[@]}"; do
MODULE=${MODULES[i]}
SUFFIX=${SUFFIXES[i]}
buck2 run mode/opt caffe2/torch/fb/model_transform/fx2trt/packaging:load_net_predictor -- --loadMode=BenchmarkAB --inputNetFile=/data/users/$USER/models/${MODEL_ENTITY_ID}/${SNAPSHOT_ID}/${MODEL_ENTITY_ID}_${SNAPSHOT_ID}${SUFFIX} --otherNetFile=/data/users/$USER/models/${OTHER_MODEL_ENTITY_ID}/${OTHER_SNAPSHOT_ID}/${OTHER_MODEL_ENTITY_ID}_${OTHER_SNAPSHOT_ID}${SUFFIX} --moduleName=${MODULE} --submodToDevice "" --benchmarkDontRebatchSamples=true --doNotRandomizeSampleInputs=true
```

Before: P1900475429
I0810 19:29:22.782902 2717337 load_net_predictor_lib.cpp:1807] Average latency A: 0.0843 ms
I0810 19:29:22.782905 2717337 load_net_predictor_lib.cpp:1807] Average latency B: 0.0989 ms

After: P1900825771
I0811 15:42:34.866408 2311279 load_net_predictor_lib.cpp:1807] Average latency A: 0.0854 ms
I0811 15:42:34.866411 2311279 load_net_predictor_lib.cpp:1807] Average latency B: 0.092 ms

Still has some regression but the gap is smaller...

Reviewed By: henryoier, muchulee8

Differential Revision: D80042054

Pull Request resolved: https://github.com/pytorch/pytorch/pull/160451
Approved by: https://github.com/henryoier
2025-08-15 04:06:17 +00:00
55061c9602 [PT2]: Add Static Dispatch Kernel for scale_gradient (#160454)
Summary: Add Static Dispatch Kernel for scale_gradient

Test Plan:
```
MODEL_TYPE=dpa_product_first_ctr_model
MODEL_ENTITY_ID=892669089
SNAPSHOT_ID=37
OTHER_MODEL_ENTITY_ID=892669089
OTHER_SNAPSHOT_ID=36

MODULES=(mix prepare_float_features object user)
SUFFIXES=(.predictor.local .predictor.precompute.prepare_float_features .predictor.precompute.remote_object_only .predictor.precompute.remote_request_only)

for i in "${!MODULES[@]}"; do
MODULE=${MODULES[i]}
SUFFIX=${SUFFIXES[i]}
buck2 run mode/opt caffe2/torch/fb/model_transform/fx2trt/packaging:load_net_predictor -- --loadMode=BenchmarkAB --inputNetFile=/data/users/$USER/models/${MODEL_ENTITY_ID}/${SNAPSHOT_ID}/${MODEL_ENTITY_ID}_${SNAPSHOT_ID}${SUFFIX} --otherNetFile=/data/users/$USER/models/${OTHER_MODEL_ENTITY_ID}/${OTHER_SNAPSHOT_ID}/${OTHER_MODEL_ENTITY_ID}_${OTHER_SNAPSHOT_ID}${SUFFIX} --moduleName=${MODULE} --submodToDevice "" --benchmarkDontRebatchSamples=true --doNotRandomizeSampleInputs=true
```

Reviewed By: henryoier

Differential Revision: D80062244

Pull Request resolved: https://github.com/pytorch/pytorch/pull/160454
Approved by: https://github.com/henryoier
2025-08-15 03:42:39 +00:00
214d04833a [PT2]: Add Static Dispatch Kernel for fmod.Scalar (#160654)
Summary: Add static dispatch for torch.ops.aten.fmod.Scalar. Found this missing in user/object nets for DSNN models.

Test Plan:
```
MODEL_TYPE=dpa_product_first_ctr_model
MODEL_ENTITY_ID=892669089
SNAPSHOT_ID=36
MODULE=user
SUFFIX=.predictor.precompute.remote_request_only

buck2 run mode/opt caffe2/torch/fb/model_transform/fx2trt/packaging:load_net_predictor -- --loadMode=BenchmarkByOp --inputNetFile=/data/users/$USER/models/${MODEL_ENTITY_ID}/${SNAPSHOT_ID}/${MODEL_ENTITY_ID}_${SNAPSHOT_ID}${SUFFIX} --moduleName=${MODULE} --submodToDevice="" --benchmarkEnableProfiling=true --benchmarkDontRebatchSamples=true --doNotRandomizeSampleInputs=true --benchmarkNumIterations=1000
```

Object tower: P1904347784
User tower: P1904348406

Differential Revision: D80238495

Pull Request resolved: https://github.com/pytorch/pytorch/pull/160654
Approved by: https://github.com/henryoier
2025-08-15 03:11:48 +00:00
9c5601ecc3 [NVIDIA] Refactor Family Blackwell Support codegen (#156176)
With the legacy driver (nvgpu) used for CUDA 12.9, Thor was operating with SM 10.1.
This changes to SM 11.0 when the newer driver model (OpenRM), which is intended for CUDA 13.0, is introduced.
Thor 10.1 --> 11.0
Spark 12.1
Pull Request resolved: https://github.com/pytorch/pytorch/pull/156176
Approved by: https://github.com/ezyang
2025-08-15 02:51:26 +00:00
5b9ad951f8 [BE][Docker] Do not install cuda:11.8 (#160695)
As CUDA 11.8 binaries are no longer produced by CD
Pull Request resolved: https://github.com/pytorch/pytorch/pull/160695
Approved by: https://github.com/huydhn
2025-08-15 02:23:04 +00:00
4d5f92aa39 typing tvm.py (#160369)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/160369
Approved by: https://github.com/Skylion007
ghstack dependencies: #160362, #160363, #160364, #160365, #160366, #160367, #160368
2025-08-15 02:09:31 +00:00
39ca0ce0c8 Type backend torchxla (#160368)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/160368
Approved by: https://github.com/Skylion007
ghstack dependencies: #160362, #160363, #160364, #160365, #160366, #160367
2025-08-15 02:09:31 +00:00
d52bb67ac3 typing registry.py (#160367)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/160367
Approved by: https://github.com/Skylion007
ghstack dependencies: #160362, #160363, #160364, #160365, #160366
2025-08-15 02:09:31 +00:00
05b9b63fb6 typing inductor and placeholder backends (#160366)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/160366
Approved by: https://github.com/Skylion007
ghstack dependencies: #160362, #160363, #160364, #160365
2025-08-15 02:09:31 +00:00
453cfa5153 typing distributed.py (#160365)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/160365
Approved by: https://github.com/StrongerXi
ghstack dependencies: #160362, #160363, #160364
2025-08-15 02:09:31 +00:00
9faca5f260 typing debugging.py (#160364)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/160364
Approved by: https://github.com/Skylion007
ghstack dependencies: #160362, #160363
2025-08-15 02:09:31 +00:00
6fe6dd9fdc Type cudagraphs.py (#160363)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/160363
Approved by: https://github.com/StrongerXi
ghstack dependencies: #160362
2025-08-15 02:09:31 +00:00
f82c7eed84 Typing for common.py (#160362)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/160362
Approved by: https://github.com/Skylion007
2025-08-15 02:09:31 +00:00
25ccc4716e [Inductor] [Triton] Apply feedback to Enable padded stride support (#160614)
Summary:
This is an issue I noticed while fixing tests for the TMA store. This triton.language.make_tensor_descriptor call hardcodes the shape information as the stride, which is not necessarily correct.

In particular, it's legal to have a stride bigger than the shape (e.g. padded to a size). A good example of this usage would be to allocate a tensor whose size is always a multiple of 16 and just pad the result so TMA is legal.

This is redo of https://github.com/pytorch/pytorch/pull/160493 because I broke this accidentally trying to land internally first instead of merging through Github directly.

Test Plan:
Tested with `buck2 run mode/opt-split-dwarf mode/inplace -c fbcode.nvcc_arch=h100 caffe2/test/inductor:max_autotune 2>&1 | tee ~/test_logs.log` and confirmed all max autotune tests passed.

Differential Revision: D80224578

Pull Request resolved: https://github.com/pytorch/pytorch/pull/160614
Approved by: https://github.com/eellison
2025-08-15 02:06:14 +00:00
d387a48c38 [generator] Raise StopIteration(value) with value from the return stmt (#157152)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/157152
Approved by: https://github.com/zou3519
ghstack dependencies: #157148
2025-08-15 01:42:40 +00:00
831e85104a [contextlib] Fixes for CPython contextlib tests (#157148)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/157148
Approved by: https://github.com/zou3519
2025-08-15 01:42:40 +00:00
211c98859a [inductor][triton] Update triton_builtin handling after triton # 7239 (#160658)
https://github.com/triton-lang/triton/pull/7239 will search for a _semantic kwarg in the signature of the function before passing in this kwarg. To fix this in Inductor:

1. explicitly take a _semantic kwarg
2. remove the functools.wraps around the wrapper function, which was causing inspect.signature to return the signature of the wrapped function (instead of the signature of the wrapper, which does contain the _semantic arg)
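A self-contained Python sketch of the pattern described above (the decorator name and details are illustrative, not Inductor's actual code):

```python
import inspect

def triton_builtin(fn):
    # Deliberately no functools.wraps(fn): wrapping would make inspect.signature
    # report fn's signature and hide the explicit `_semantic` parameter below.
    def wrapper(*args, _semantic=None, **kwargs):
        return fn(*args, **kwargs)
    return wrapper

@triton_builtin
def my_builtin(x):
    return x

print("_semantic" in inspect.signature(my_builtin).parameters)  # True
print(my_builtin(41) + 1)                                       # 42
```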

Pull Request resolved: https://github.com/pytorch/pytorch/pull/160658
Approved by: https://github.com/PaulZhang12, https://github.com/njriasan
2025-08-15 00:39:24 +00:00
dae7710bf2 [cuda][cupy] Improve cupy device placement when device is provided with explicit index (#158529)
Resubmit of https://github.com/pytorch/pytorch/pull/158320, fixing a potential bug when the device index is not specified explicitly.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/158529
Approved by: https://github.com/ezyang
2025-08-15 00:27:42 +00:00
dc194a3096 Test multiprocessing spawn timing fix (#160672)
Submitting PR to fix #160511.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/160672
Approved by: https://github.com/mikaylagawarecki
2025-08-15 00:11:55 +00:00
4051b42c29 [ROCm] hipify needs specific header mappings (#160675)
Fixes #160579.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/160675
Approved by: https://github.com/ScottTodd, https://github.com/jeffdaily

Co-authored-by: Jeff Daily <jeff.daily@amd.com>
2025-08-15 00:09:04 +00:00
eb0eaa67e1 [BE][ci] Increase frequency of cutlass backend ci (#160656)
* increase frequency from every 24 hours to every 12 hours
* automatically enable it if cutlass backend files are touched.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/160656
Approved by: https://github.com/eellison
2025-08-14 23:44:55 +00:00
98373e5ad2 [doc] AOTI debugging guide (#160430)
Folded from https://discuss.pytorch.org/t/a-beginners-guide-to-debugging-aot-inductor-cuda-illegal-memory-access/222188

Pull Request resolved: https://github.com/pytorch/pytorch/pull/160430
Approved by: https://github.com/angelayi
2025-08-14 23:42:17 +00:00
371eacb2ae [Dynamo][Hierarchical Compile] Refactor for tuple flattening (#158810)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/158810
Approved by: https://github.com/StrongerXi
2025-08-14 22:45:44 +00:00
3650989e6e Revert "[cutlass] fix dictionary iteration error (#160552)"
This reverts commit 29d20d49f0b7f4e362e1cefdcdc4b5659969312c.

Reverted https://github.com/pytorch/pytorch/pull/160552 on behalf of https://github.com/facebook-github-bot due to Diff reverted internally ([comment](https://github.com/pytorch/pytorch/pull/160552#issuecomment-3189940880))
2025-08-14 21:41:28 +00:00
3be70dc30e [inductor] dont reuse buffers if it affects peak (#145883) (#159530)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/159530
Approved by: https://github.com/eellison
2025-08-14 21:14:36 +00:00
47a1db823d [triton_heuristics] Optimize the triton launcher in pt2 (#160000)
Summary:

(Original author: Xu Zhao. Commandeered by David to land this since it is relatively urgent)

We observed ~10us PT2-Triton launch overhead regression after pin update.

Before Triton pin-update:
 {F1980557238}

After Triton pin-update:
 {F1980557240}

The root cause is because https://github.com/pytorch/pytorch/pull/145051 adds `_get_args_with_constexprs` to the cubin launcher caller function, which is on the critical path.

The motivation for `_get_args_with_constexprs` was that between triton 3.2 and triton 3.3, the convention for calling Triton kernels (at the level that non-static-cuda-launcher inductor integrates) changed. Previously, the callable did not take constexpr arguments as parameters; after 3.3, it does. With pointwise/reduction kernels, we don't know the constexpr values until after autotuning occurs; so `_get_args_with_constexprs` would inject constexprs into the arguments list before calling the Triton kernel. The fix (in this PR) is to instead inject the constexpr args into the launcher string - this avoids the cost of sorting/reordering arguments which previously occurred upon execution of each kernel.

Note that static_cuda_launcher.py does not require constants to be passed to the cubin launcher (e96c7c4bb0/torch/_inductor/runtime/static_cuda_launcher.py (L220)), so there is no need to pass constexprs to the generated launcher code.

The new launcher code needs to work on three cases:
- StaticallyLaunchedCudaKernel
- triton.compile.CompiledKernel
- AOTInductor

Analysis: https://docs.google.com/document/d/1PHaSmx2w59K8qpjw5_qzKWShfEgptf_Zpv_DL7YxiWU/edit?tab=t.0

Test Plan:
Before:
```
$ buck2 run mode/opt //pytorch/benchmark:pt2 -- --only BERT_pytorch --performance --backend=inductor --training --amp --disable-cudagraphs

1.893x
```

```

$ buck2 run mode/opt //pytorch/tritonbench:run -- --op launch_latency
  x_val    nop_python_function-walltime    nop_triton_kernel-walltime    nop_triton_compiled_kernel_run-walltime    nop_inductor_kernel-walltime    nop_inductor_kernel_cudagraph-walltime
-------  ------------------------------  ----------------------------  -----------------------------------------  ------------------------------  ----------------------------------------
      0                      0.00760921                       1.80298                                   0.623282                         5.25024                                  0.203722
     19                      0.00799885                       4.78223                                   1.00226                          5.8213                                   0.239084
average                      0.00780403                       3.29261                                   0.812769                         5.53577                                  0.221403
```

After:

```
buck2 run mode/opt //pytorch/tritonbench:run -- --op launch_latency
  x_val    nop_python_function-walltime    nop_triton_kernel-walltime    nop_triton_compiled_kernel_run-walltime    nop_inductor_kernel-walltime    nop_inductor_kernel_cudagraph-walltime
-------  ------------------------------  ----------------------------  -----------------------------------------  ------------------------------  ----------------------------------------
      0                      0.00747067                       1.92589                                   0.726509                         4.35459                                  0.204205
     19                      0.00747823                       7.36852                                   1.26241                          6.28208                                  0.239278
average                      0.00747445                       4.6472                                    0.994459                         5.31834                                  0.221741
```

```
$ buck2 run mode/opt //pytorch/benchmark:pt2 -- --only BERT_pytorch --performance --backend=inductor --training --amp --disable-cudagraphs

1.985x
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/160000
Approved by: https://github.com/jansel, https://github.com/mlazos

Co-authored-by: Xu Zhao <xzhao9@meta.com>
2025-08-14 21:04:08 +00:00
eac2d9d695 Revert "appending the pythonpath (#160219)"
This reverts commit 1d80d697a269234b47ec7ede192faf3bb9b159e3.

Reverted https://github.com/pytorch/pytorch/pull/160219 on behalf of https://github.com/clee2000 due to broke inductor? [GH job link](https://github.com/pytorch/pytorch/actions/runs/16970222746/job/48108262003) [HUD commit link](1d80d697a2) ([comment](https://github.com/pytorch/pytorch/pull/160219#issuecomment-3189850381))
2025-08-14 20:58:14 +00:00
3fe19a7a0a [Test Fix] Delete dynamo skipfile for OpenMP test_one_thread (#160562)
Fixes #120648

During issue scrubbing I could not repro these failing tests, so I am re-enabling them to close out the issue.

### Test
Original repro command:
```
 PYTORCH_TEST_WITH_DYNAMO=1 pytest test/test_openmp.py -v -k test_one_thread
```

Now results in
```
platform linux -- Python 3.12.11, pytest-8.4.1, pluggy-1.6.0 -- /home/lucaskabela/.conda/envs/pytorch-3.12/bin/python3.12
cachedir: .pytest_cache
hypothesis profile 'default'
rootdir: /home/lucaskabela/pytorch
configfile: pytest.ini
plugins: hypothesis-6.138.0
collected 2 items / 1 deselected / 1 selected
Running 1 items in this shard

test/test_openmp.py::TestOpenMP_ParallelFor::test_one_thread PASSED [3.6874s]                                                       [100%]

===================================================== 1 passed, 1 deselected in 6.07s =====================================================
```

And:
```
PYTORCH_TEST_WITH_DYNAMO=1 python test/test_openmp.py TestOpenMP_ParallelFor.test_one_thread
```
```
PYTORCH_TEST_WITH_DYNAMO=1 python test/test_sort_and_select.py TestSortAndSelectCPU.test_sort_overflow_cpu_int16
```

Both result in:
```
.
----------------------------------------------------------------------
Ran 1 test in 0.003s
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/160562
Approved by: https://github.com/zou3519
2025-08-14 20:55:59 +00:00
4a90dc0c1f Update checkpoint warning to target PyTorch 2.9 (#160643)
Fixes #160534

Updates the warning in torch.utils.checkpoint to state that starting in PyTorch 2.9, calling checkpoint without explicitly passing use_reentrant will raise an exception. Follows the guidance from the issue discussion.
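A minimal sketch of the recommended call pattern (the checkpointed function here is illustrative):

```python
import torch
from torch.utils.checkpoint import checkpoint

def block(x):
    return torch.relu(x @ x.t())

x = torch.randn(4, 4, requires_grad=True)
# Pass use_reentrant explicitly; per the updated warning, omitting it will raise in 2.9.
out = checkpoint(block, x, use_reentrant=False)
out.sum().backward()
```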

Pull Request resolved: https://github.com/pytorch/pytorch/pull/160643
Approved by: https://github.com/soulitzer
2025-08-14 20:53:17 +00:00
1fc683cf17 [Inductor] Allow indexing a flexible layout for extract_input_node_reduction_ranges (#160645)
Differential Revision: D79831747

Pull Request resolved: https://github.com/pytorch/pytorch/pull/160645
Approved by: https://github.com/eellison
2025-08-14 20:43:35 +00:00
b9d7de3a09 [Inductor] addmm + activation function fusion (#158137)
This PR implements a post_grad pass to fuse activation(add + mm).

This was previously done similarly in #106912 but was reverted for performance reasons. It was replaced with a pass that unfuses the activation and add from addmm/addmm_activation and lets inductor handle the fusion.

However, since then the cuBLAS team has made a lot of perf improvements here; preliminary benchmarks show good results, and this post will be updated with more benchmarks.

Perf dashboard
<img width="3371" height="1240" alt="Screenshot from 2025-08-07 13-41-35" src="https://github.com/user-attachments/assets/d44d6205-b33a-4a20-9f0f-d9db176b3738" />

ReLU works with both training and inference, but GELU only works in inference mode due to a fundamental limitation: GELU's derivative depends on its input while ReLU's doesn't. I don't think this is fixable with the current addmm_activation API.

Graph module before and after this pass

Relu(addmm)
```
graph():
    %primals_1 : [num_users=1] = placeholder[target=primals_1]
    %primals_2 : [num_users=2] = placeholder[target=primals_2]
    %primals_3 : [num_users=2] = placeholder[target=primals_3]
    %addmm : [num_users=1] = call_function[target=torch.ops.aten.addmm.default](args = (%primals_1, %primals_3, %primals_2), kwargs = {})
    %relu : [num_users=2] = call_function[target=torch.ops.aten.relu.default](args = (%addmm,), kwargs = {})
    %le : [num_users=1] = call_function[target=torch.ops.aten.le.Scalar](args = (%relu, 0), kwargs = {})
    %permute_1 : [num_users=1] = call_function[target=torch.ops.aten.permute.default](args = (%primals_3, [1, 0]), kwargs = {})
    return (relu, primals_2, le, permute_1)
graph():
    %primals_1 : [num_users=1] = placeholder[target=primals_1]
    %primals_2 : [num_users=2] = placeholder[target=primals_2]
    %primals_3 : [num_users=2] = placeholder[target=primals_3]
    %_addmm_activation_default : [num_users=2] = call_function[target=torch.ops.aten._addmm_activation.default](args = (%primals_1, %primals_3, %primals_2), kwargs = {})
    %le : [num_users=1] = call_function[target=torch.ops.aten.le.Scalar](args = (%_addmm_activation_default, 0), kwargs = {})
    %permute_1 : [num_users=1] = call_function[target=torch.ops.aten.permute.default](args = (%primals_3, [1, 0]), kwargs = {})
    return (_addmm_activation_default, primals_2, le, permute_1)
```
Gelu (addmm)
```
graph():
    %arg0_1 : [num_users=1] = placeholder[target=arg0_1]
    %arg1_1 : [num_users=1] = placeholder[target=arg1_1]
    %arg2_1 : [num_users=1] = placeholder[target=arg2_1]
    %addmm : [num_users=4] = call_function[target=torch.ops.aten.addmm.default](args = (%arg0_1, %arg2_1, %arg1_1), kwargs = {})
    %mul : [num_users=1] = call_function[target=torch.ops.aten.mul.Tensor](args = (%addmm, %addmm), kwargs = {})
    %mul_1 : [num_users=1] = call_function[target=torch.ops.aten.mul.Tensor](args = (%mul, %addmm), kwargs = {})
    %mul_2 : [num_users=1] = call_function[target=torch.ops.aten.mul.Tensor](args = (%mul_1, 0.044715), kwargs = {})
    %add : [num_users=1] = call_function[target=torch.ops.aten.add.Tensor](args = (%addmm, %mul_2), kwargs = {})
    %mul_3 : [num_users=1] = call_function[target=torch.ops.aten.mul.Tensor](args = (%add, 0.7978845608028654), kwargs = {})
    %mul_4 : [num_users=1] = call_function[target=torch.ops.aten.mul.Tensor](args = (%addmm, 0.5), kwargs = {})
    %tanh : [num_users=1] = call_function[target=torch.ops.aten.tanh.default](args = (%mul_3,), kwargs = {})
    %add_1 : [num_users=1] = call_function[target=torch.ops.aten.add.Tensor](args = (%tanh, 1), kwargs = {})
    %mul_5 : [num_users=1] = call_function[target=torch.ops.aten.mul.Tensor](args = (%mul_4, %add_1), kwargs = {})
    return (mul_5,)
graph():
    %arg0_1 : [num_users=1] = placeholder[target=arg0_1]
    %arg1_1 : [num_users=1] = placeholder[target=arg1_1]
    %arg2_1 : [num_users=1] = placeholder[target=arg2_1]
    %_addmm_activation_default : [num_users=1] = call_function[target=torch.ops.aten._addmm_activation.default](args = (%arg0_1, %arg2_1, %arg1_1), kwargs = {use_gelu: True})
    return (_addmm_activation_default,)
```

Benchmark setup:
NGC pytorch 25.06 container
cublas version: 12.9.1.4
torch.compile ran with dynamic = False and max_autotune

H100
```
Testing with M=1024, N=1024, K=1024, dtype=bfloat16
============================================================
Average Time per Iteration (cublas):	 0.0107 ms
Average Time per Iteration (torch compile):	 0.0296 ms

============================================================
Testing with M=2048, N=2048, K=2048, dtype=bfloat16
============================================================
Average Time per Iteration (cublas):	 0.0262 ms
Average Time per Iteration (torch compile):	 0.0327 ms

============================================================
Testing with M=4096, N=4096, K=4096, dtype=bfloat16
============================================================
Average Time per Iteration (cublas):	 0.1763 ms
Average Time per Iteration (torch compile):	 0.2457 ms

============================================================
Testing with M=8192, N=8192, K=8192, dtype=bfloat16
============================================================
Average Time per Iteration (cublas):	 1.5280 ms
Average Time per Iteration (torch compile):	 1.9437 ms
```

A100
```
############################################################
Testing with dtype: float16
############################################################

============================================================
Testing with M=1024, N=1024, K=1024, dtype=float16
============================================================
Average Time per Iteration (cublas):	 0.0313 ms
Average Time per Iteration (torch compile):	 0.0643 ms

============================================================
Testing with M=2048, N=2048, K=2048, dtype=float16
============================================================
Average Time per Iteration (cublas):	 0.1149 ms
Average Time per Iteration (torch compile):	 0.1255 ms

============================================================
Testing with M=4096, N=4096, K=4096, dtype=float16
============================================================
Average Time per Iteration (cublas):	 0.6297 ms
Average Time per Iteration (torch compile):	 0.7547 ms

============================================================
Testing with M=8192, N=8192, K=8192, dtype=float16
============================================================
Average Time per Iteration (cublas):	 4.3821 ms
Average Time per Iteration (torch compile):	 5.0740 ms
```

Script
```py
import torch
torch.manual_seed(0)

warmup, numrun= 10, 100

sizes = [1024, 2048, 4096, 8192]
dtypes = [torch.float16, torch.bfloat16, torch.float32]

device = torch.device("cuda")

for dtype in dtypes:
    dtype_name = str(dtype).split('.')[-1]
    print(f"\n{'#'*60}")
    print(f"Testing with dtype: {dtype_name}")
    print(f"{'#'*60}")

    for size in sizes:
        M, N, K = size, size, size
        print(f"\n{'='*60}")
        print(f"Testing with M={M}, N={N}, K={K}, dtype={dtype_name}")
        print(f"{'='*60}")

        A = torch.randn(M, K, device=device, dtype=dtype)
        B = torch.randn(K, N, device=device, dtype=dtype)
        C = torch.randn(M, device=device, dtype=dtype)

        def func1():
            return torch._addmm_activation(C, A, B, use_gelu=True)

        def func2():
            return torch.nn.functional.gelu(torch.add(C, torch.mm(A, B)), approximate="tanh")

        func2_compiled = torch.compile(
            func2,
            dynamic=False,
            options={
                "force_disable_caches": True,
                "max_autotune": True,
                "max_autotune_gemm": True,
                "max_autotune_gemm_backends": "TRITON",
                "autotune_fallback_to_aten": False,
            }
        )

        for _ in range(warmup): func1()
        torch.cuda.synchronize(device=device)

        start_event = torch.cuda.Event(enable_timing=True)
        end_event = torch.cuda.Event(enable_timing=True)

        total_time_ms = 0.0
        start_event.record()
        for _ in range(numrun): func1()
        end_event.record()
        torch.cuda.synchronize(device=device)
        total_time_ms += start_event.elapsed_time(end_event)
        avg_time_ms = total_time_ms / numrun

        print(f"Average Time per Iteration (cublas):\t {avg_time_ms:.4f} ms")

        for _ in range(warmup): func2_compiled()
        torch.cuda.synchronize(device=device)

        start_event = torch.cuda.Event(enable_timing=True)
        end_event = torch.cuda.Event(enable_timing=True)

        total_time_ms = 0.0
        start_event.record()
        for _ in range(numrun): func2_compiled()
        end_event.record()
        torch.cuda.synchronize(device=device)
        total_time_ms += start_event.elapsed_time(end_event)
        avg_time_ms = total_time_ms / numrun

        print(f"Average Time per Iteration (torch compile):\t {avg_time_ms:.4f} ms")
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/158137
Approved by: https://github.com/eellison
2025-08-14 20:41:38 +00:00
1028c5e2d5 [Dynamo] Add CPython default dict tests (#155263)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/155263
Approved by: https://github.com/zou3519
2025-08-14 20:22:22 +00:00
19b4283884 Typo correction in variable name uninitalized_val in resize() function (#160636)
Fixes #160633

Pull Request resolved: https://github.com/pytorch/pytorch/pull/160636
Approved by: https://github.com/mikaylagawarecki, https://github.com/Skylion007
2025-08-14 20:11:43 +00:00
8d6d324631 [Dynamo][Hierarchical-Compile] Don't allow node duplicates to be added (#160605)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/160605
Approved by: https://github.com/StrongerXi
2025-08-14 20:02:10 +00:00
fdfd69bb05 Set PYTHONHOME for inductor subprocesses using torch (#160008)
This is needed for subprocesses that are trying to call back into torch functionality, i.e. anything that's also setting `PYTHONPATH`.  If they're part of an application that bundles the Python runtime, then they should use the bundled runtime to keep their view of the world consistent.

There are more `sys.executable` subprocesses in torch/ but it seems like they're fine.

Previous PR at https://github.com/pytorch/pytorch/pull/159382, but was reverted because it caused macOS jobs on GitHub to timeout.  What was happening was inductor subprocesses were scheduling C++ compilation tasks that were failing to find the Python.h header.  This was because they were running in venvs and now trying to find the CPython headers inside the venv, where the headers do not exist.  This PR gates the new behavior to internal builds only.
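A hypothetical sketch of the idea, not the actual torch code: a child process that re-imports torch inherits both PYTHONPATH and PYTHONHOME so its view of the (possibly bundled) runtime stays consistent.

```python
import os
import subprocess
import sys

env = dict(os.environ)
env["PYTHONPATH"] = os.pathsep.join(sys.path)
env["PYTHONHOME"] = sys.prefix  # keep the child's runtime consistent with the parent's
subprocess.run(
    [sys.executable, "-c", "import torch; print(torch.__version__)"],
    env=env,
    check=True,
)
```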

Pull Request resolved: https://github.com/pytorch/pytorch/pull/160008
Approved by: https://github.com/aorenste
2025-08-14 19:57:14 +00:00
0d3461bac0 DOC: update CrossEntropyLoss with note and example of incorrect target specification (#155649)
Fixes #134771
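A short sketch of the two valid target forms the note distinguishes (the actual doc example in the PR may differ):

```python
import torch
import torch.nn as nn

loss = nn.CrossEntropyLoss()
logits = torch.randn(3, 5)                               # N=3 samples, C=5 classes

target_idx = torch.tensor([1, 0, 4])                     # class indices, shape (N,), dtype long
print(loss(logits, target_idx))

target_prob = torch.softmax(torch.randn(3, 5), dim=1)    # class probabilities, shape (N, C)
print(loss(logits, target_prob))
```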

Pull Request resolved: https://github.com/pytorch/pytorch/pull/155649
Approved by: https://github.com/mikaylagawarecki

Co-authored-by: Svetlana Karslioglu <svekars@meta.com>
Co-authored-by: mikaylagawarecki <mikaylagawarecki@gmail.com>
2025-08-14 18:34:57 +00:00
65053c03a3 [FR] Don't check incomplete ranks for printing (#160195)
When just printing the ranks (`-j` option) we should skip the check for "incomplete ranks" since that doesn't affect the print

Pull Request resolved: https://github.com/pytorch/pytorch/pull/160195
Approved by: https://github.com/fduwjj
ghstack dependencies: #160097
2025-08-14 18:19:45 +00:00
96f9fbe21a Fix flight recorder for P2P ops (#160097)
Fixes errors in debugging a trace as mentioned in https://docs.google.com/document/d/1EKVJYmW2hj_VsvDvnSggXhZzJyvMu9dA0iDJWOZAtjY/edit?tab=t.0

Pull Request resolved: https://github.com/pytorch/pytorch/pull/160097
Approved by: https://github.com/fduwjj
2025-08-14 18:19:45 +00:00
1c25871191 Allow torch.hub.load with unauthorized GITHUB_TOKEN (#159896)

`torch.hub.load` fails if a `GITHUB_TOKEN` with few permissions is set, as can be seen in the following example. Make sure that the model has not been cached before, for example with `rm ~/.cache/torch`. If the model has been downloaded already, it will not be downloaded again and the authorization error will not occur.

```python
export GITHUB_TOKEN=""
python
>>> import torch
>>> torch.hub.load('facebookresearch/dinov2', 'dinov2_vits14')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "~/miniconda3/lib/python3.12/site-packages/torch/hub.py", line 567, in load
    repo_or_dir = _get_cache_or_reload(repo_or_dir, force_reload, trust_repo, "load",
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "~/miniconda3/lib/python3.12/site-packages/torch/hub.py", line 231, in _get_cache_or_reload
    _validate_not_a_forked_repo(repo_owner, repo_name, ref)
  File "~/miniconda3/lib/python3.12/site-packages/torch/hub.py", line 191, in _validate_not_a_forked_repo
    response = json.loads(_read_url(Request(url, headers=headers)))
                          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "~/miniconda3/lib/python3.12/site-packages/torch/hub.py", line 174, in _read_url
    with urlopen(url) as r:
         ^^^^^^^^^^^^
  File "~/miniconda3/lib/python3.12/urllib/request.py", line 215, in urlopen
    return opener.open(url, data, timeout)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "~/miniconda3/lib/python3.12/urllib/request.py", line 521, in open
    response = meth(req, response)
               ^^^^^^^^^^^^^^^^^^^
  File "~/miniconda3/lib/python3.12/urllib/request.py", line 630, in http_response
    response = self.parent.error(
               ^^^^^^^^^^^^^^^^^^
  File "~/miniconda3/lib/python3.12/urllib/request.py", line 559, in error
    return self._call_chain(*args)
           ^^^^^^^^^^^^^^^^^^^^^^^
  File "~/miniconda3/lib/python3.12/urllib/request.py", line 492, in _call_chain
    result = func(*args)
             ^^^^^^^^^^^
  File "~/miniconda3/lib/python3.12/urllib/request.py", line 639, in http_error_default
    raise HTTPError(req.full_url, code, msg, hdrs, fp)
urllib.error.HTTPError: HTTP Error 401: Unauthorized
```

The cause of the error is that the function `_validate_not_a_forked_repo` in `hub.py` always uses `GITHUB_TOKEN` for authorization, even when the download does not require authorization.

0ba09a6d34/torch/hub.py (L194)

This fix simply retries the download without the token in case of a failure.
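
A minimal sketch of the retry-without-token idea described above (illustrative only, not the exact `hub.py` code; the helper name is hypothetical):

```python
import os
from urllib.error import HTTPError
from urllib.request import Request, urlopen


def _read_url_with_optional_token(url: str) -> bytes:
    # Try the authenticated request first (if a token is set), then fall back
    # to an anonymous request when the token is rejected for a public endpoint.
    token = os.environ.get("GITHUB_TOKEN")
    headers = {"Authorization": f"token {token}"} if token else {}
    try:
        with urlopen(Request(url, headers=headers)) as r:
            return r.read()
    except HTTPError as err:
        if token and err.code in (401, 403):
            with urlopen(Request(url)) as r:
                return r.read()
        raise
```
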
Pull Request resolved: https://github.com/pytorch/pytorch/pull/159896
Approved by: https://github.com/albanD
2025-08-14 18:15:49 +00:00
6c05ea6475 [DTensor] add op support: aten.squeeze_.dim (#159532)
**Summary**
This PR enables the in-place op `aten.squeeze_.dim` on DTensor with a change to the
DTensor dispatch logic: when processing an in-place operator, we should assign
`output_sharding.output_spec` back to the first argument, because the in-place
op_call on `arg._local_tensor` can also change the tensor metadata.
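
A hedged usage sketch of the newly supported op (run under `torchrun --nproc-per-node=2`; the mesh and placement below are illustrative assumptions, not taken from the PR):

```python
import torch
import torch.distributed as dist
from torch.distributed.device_mesh import init_device_mesh
from torch.distributed.tensor import Replicate, distribute_tensor

dist.init_process_group("gloo")
mesh = init_device_mesh("cpu", (2,))

dt = distribute_tensor(torch.randn(4, 1, 8), mesh, [Replicate()])
dt.squeeze_(1)   # in-place: the resulting spec is written back to dt itself
print(dt.shape)  # torch.Size([4, 8])
```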

**Test**
`pytest test/distributed/tensor/test_view_ops.py -s -k  test_squeeze_`

Pull Request resolved: https://github.com/pytorch/pytorch/pull/159532
Approved by: https://github.com/zpcore
2025-08-14 18:01:19 +00:00
5665dc9ab7 [PP] Allow larger world_size schedule tests (#160559)
Update schedule tests to use `world_size=4`; changes needed:
- Move some tests that require world_size=2 to a new class
- Move helper methods from class level to function level
- Update some initialization so assertions pass, since gradients were very small.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/160559
Approved by: https://github.com/wconstab
ghstack dependencies: #159591, #160558
2025-08-14 17:41:58 +00:00
2ff7c1c774 [PP] Rename _load_actions and validate (#160558)
Rename method and add validation
Pull Request resolved: https://github.com/pytorch/pytorch/pull/160558
Approved by: https://github.com/wconstab
ghstack dependencies: #159591
2025-08-14 17:41:58 +00:00
3028fa6ce9 Wrap class definitions in set_fullgraph(False) in test_list/tuple (#160277)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/160277
Approved by: https://github.com/zou3519
ghstack dependencies: #160216, #160217, #160276, #160278, #160330, #160331
2025-08-14 17:29:45 +00:00
077cb38974 Add dtype checks in meta dispatch for various ordering ops (#159556)
This adds data type checks for the unsupported bool and complex types for argmax/argmin, topk, sort, minimum, and maximum, as listed here:

0a99b026d6/torch/testing/_internal/common_methods_invocations.py (L21076)

Currently the ops fail during the CPU or CUDA computation rather than at the meta dispatch stage, as is already done for, e.g., max: 0a99b026d6/aten/src/ATen/native/TensorCompare.cpp (L285). This change catches the error earlier.
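
A small illustrative check of the intended behavior (assuming bool stays unsupported for argmax and that the check raises a RuntimeError; written against meta tensors so no real kernel runs):

```python
import torch

x = torch.zeros(4, dtype=torch.bool, device="meta")
try:
    torch.argmax(x)  # expected to be rejected before any real kernel runs
except RuntimeError as err:
    print("rejected at meta dispatch:", err)
```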

Pull Request resolved: https://github.com/pytorch/pytorch/pull/159556
Approved by: https://github.com/janeyx99
2025-08-14 17:06:27 +00:00
cd8d8c18f5 [pytorch][dynamo_compile] Log graph_node_shape to dynamo_compile (#160556)
This PR adds dynamo graph node shape logging to dynamo_compile. It also adds unit tests to check that the correct graph node shape is logged.

Test Plan:
$ python -m test_utils
Ran 12 tests in 36.447s
OK

Note: Will merge after D80185628 lands.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/160556
Approved by: https://github.com/masnesral, https://github.com/jingsh
2025-08-14 16:42:35 +00:00
63654ba4c5 [BE][Dynamo] Type improvements in _dynamo/utils to generics (#159824)
Follow up to #159580

Pull Request resolved: https://github.com/pytorch/pytorch/pull/159824
Approved by: https://github.com/williamwen42
2025-08-14 16:06:50 +00:00
7e27347fd3 [SymmMem] Check return of nvshmem_malloc (#160603)
`nvshmem_malloc` returns a null pointer when allocation fails, so we should check for that here.
Otherwise the null pointer can propagate into the device kernel and cause a CUDA illegal memory access.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/160603
Approved by: https://github.com/fduwjj, https://github.com/ngimel
2025-08-14 15:57:55 +00:00
1d80d697a2 appending the pythonpath (#160219)
Fixes #160193

`PYTHONPATH=/torchbench` to `PYTHONPATH=/torchbench:$PYTHONPATH` in [pytorch/.ci/pytorch/test.sh](b5fd7223b1/.ci/pytorch/test.sh (L1715))

Pull Request resolved: https://github.com/pytorch/pytorch/pull/160219
Approved by: https://github.com/malfet
2025-08-14 15:55:31 +00:00
b6b74aed60 [ROCm] Support large inputs for coalesceValuesKernel (#158281)
# Description

`.coalesce` cannot handle large inputs on ROCm due to the maximal grid size limit.

This PR splits axis `X` into axes `X` and `Y`, and repurposes `Z` for the original `Y` on ROCm to avoid this limitation.

Confirmed the new approach can handle large inputs. Correctness needs validation.

# Testing Command

`python torch_spmv.py 22500000 272500000`

## Script `torch_spmv.py`

``` python
import torch
import argparse

def parse_args():
    parser = argparse.ArgumentParser(
        description="Sparse COO Matrix by Dense Vector Multiplication using PyTorch"
    )
    parser.add_argument("n", type=int, help="Size of the NxN matrix")
    parser.add_argument("nnz", type=int, help="Number of non-zero entries")
    return parser.parse_args()

def main():
    args = parse_args()
    n = args.n
    nnz = args.nnz
    dtype = torch.float32
    device = torch.device('cuda')

    # Generate random indices for the sparse matrix in COO format.
    torch.manual_seed(42)
    rows = torch.randint(0, n, (nnz,), dtype=torch.int64, device=device)
    cols = torch.randint(0, n, (nnz,), dtype=torch.int64, device=device)
    indices = torch.stack([rows, cols], dim=0)

    # Generate random values.
    values = torch.randn(nnz, dtype=torch.float32, device=device)

    # Create the sparse COO matrix and move it to the target device.
    sparse_matrix = torch.sparse_coo_tensor(indices, values, size=(n, n), dtype=torch.float32, device=device)
    sparse_matrix = sparse_matrix.coalesce()

    # Generate a random dense vector.
    dense_vector = torch.randn(n, dtype=torch.float32, device=device)

    # Perform sparse matrix - dense vector multiplication.
    # Using torch.sparse.mm which expects a 2D tensor for the vector.
    result = torch.sparse.mm(sparse_matrix, dense_vector.unsqueeze(1)).squeeze()
    # result = torch.mv(sparse_matrix, dense_vector)

    # Print the result.
    print("Result of the multiplication:")
    print(torch.sum(result))

if __name__ == "__main__":
    main()
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/158281
Approved by: https://github.com/jeffdaily
2025-08-14 15:09:16 +00:00
4a773e1e86 Warn when there is side effect in strict mode (#160060)
Differential Revision: [D79784354](https://our.internmc.facebook.com/intern/diff/D79784354)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/160060
Approved by: https://github.com/zhxchen17, https://github.com/StrongerXi
2025-08-14 14:59:44 +00:00
198b5fd2d4 [PP] Add DualPipeV schedule (#159591)
Added the DualPipeV schedule according to http://github.com/deepseek-ai/DualPipe/blob/main/dualpipe/dualpipev.py#L11

<img width="3633" height="486" alt="image" src="https://github.com/user-attachments/assets/4e843bb9-87cd-4d11-936c-7dfe8ee12f16" />

This schedule doesn't perform the actual "overlap" during execution, but provides the scaffolding and schedule definition we need to run it E2E in torchtitan. Supporting the overlapped operation will be worked on in following PRs.

Tests:
```sh
python test/distributed/pipelining/test_schedule_multiproc.py -k test_v_shape_schedules
python test/distributed/pipelining/test_schedule.py -k test_pipeline_order_for_v_schedules
```

Also tested in TorchTitan and is running.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/159591
Approved by: https://github.com/wconstab
2025-08-14 14:58:35 +00:00
20bdabbb3c [Dynamo] Fix MTIA dynamo backend by avoiding has_trition() at import time (#160604)
# Summary
MTIA's torch.compile tests were broken by D80037015. (For details, see internal task T234563969.) The root cause was that `has_triton` can change state after we call `torch.mtia.init()`, but it was used in a way that fixes Inductor's behavior at import time. (Note that `has_triton` is cached, and there is no opportunity to call `torch.mtia.init()` before `import torch`.)

To fix this, we use `try: import triton` instead of `has_triton()` at the module level.
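
A minimal sketch of the module-level probe described above (the flag name is hypothetical; this is not the exact PyTorch source):

```python
# Probe for Triton with a plain import so the result does not depend on a
# cached has_triton() value captured before torch.mtia.init() can run.
try:
    import triton  # noqa: F401

    _HAS_TRITON_MODULE = True
except ImportError:
    _HAS_TRITON_MODULE = False
```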

# Test Plan

See the internal diff. As a follow-up, we will add appropriate unit tests and/or CI hints so this type of issue can be caught at PR/diff time.

Differential Revision: D80228000

Pull Request resolved: https://github.com/pytorch/pytorch/pull/160604
Approved by: https://github.com/PaulZhang12, https://github.com/eellison
2025-08-14 14:54:49 +00:00
d556586448 [cutlass backend] re-add pip cutlass path (#160180)
Revert #156651 to allow using the cutlass PIP package which is easier for users than the Git checkout or similar method.

Also fix a bug where the PIP cutlass path wouldn't be available to subprocesses spawned during benchmarking for algorithm selection. It looks like the "spawn" method does not inherit the (potentially) already set-up `config.cuda.cutlass_dir`, so in the subprocess the include paths are still set to `"../third_party/cutlass/"`, leading to compilation failures due to missing headers.

Ensure `try_import_cutlass` is called at that point; due to caching this is a no-op in most cases, so it doesn't hurt.
Change the logic to return `None` when cutlass isn't available, and to return more useful values for the include paths, namely an empty list. This is in line with other inductor code, which disables the CUTLASS backend when `try_import_cutlass` returns False.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/160180
Approved by: https://github.com/henrylhtsang, https://github.com/mlazos
2025-08-14 14:48:31 +00:00
781e9a7724 Fix meta for constant_pad_nd (#159878)
Fixes https://github.com/pytorch/pytorch/issues/144187

Pull Request resolved: https://github.com/pytorch/pytorch/pull/159878
Approved by: https://github.com/Skylion007, https://github.com/ezyang
2025-08-14 14:47:47 +00:00
e4de93f6a3 Add sm50 and sm60 back to windows builds (#160586)
Addresses the issue reported in  https://github.com/pytorch/pytorch/issues/160575
Pull Request resolved: https://github.com/pytorch/pytorch/pull/160586
Approved by: https://github.com/malfet
2025-08-14 12:46:35 +00:00
a5652407e4 [CI] Fix triton xpu build on Windows (#160442)
Pin the ninja version to 1.11

Pull Request resolved: https://github.com/pytorch/pytorch/pull/160442
Approved by: https://github.com/atalman
2025-08-14 12:43:49 +00:00
6f0f4e0c3e reduce threshold to suggest changes to expected results (#160463)
Since we increased the threshold to 10%, I would like suggestions to show up for updating those at ±2% instead of the current 3.3%.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/160463
Approved by: https://github.com/jamesjwu
2025-08-14 09:11:27 +00:00
db763b1717 [Intel GPU] Support SDPA backend selection and priority setting on XPU (#159464)
Currently, SDPA on XPU uses its own `priority_order` instead of the one from the global context. Hence it does not support `with sdpa_kernel(order, set_priority=True)`.

This PR enables this feature. To make the default `priority_order` from the global context work for XPU, I also moved the MATH backend to the lowest priority; otherwise `cudnn attention` and `overrideable attention` would never be selected.
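
A hedged usage sketch of the priority-setting API mentioned above (the backend order and the XPU device/dtype below are illustrative assumptions, not taken from the PR):

```python
import torch
import torch.nn.functional as F
from torch.nn.attention import SDPBackend, sdpa_kernel

q = k = v = torch.randn(1, 8, 128, 64, device="xpu", dtype=torch.float16)

order = [SDPBackend.OVERRIDEABLE, SDPBackend.FLASH_ATTENTION, SDPBackend.MATH]
with sdpa_kernel(order, set_priority=True):  # backends tried in priority order
    out = F.scaled_dot_product_attention(q, k, v)
```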

Pull Request resolved: https://github.com/pytorch/pytorch/pull/159464
Approved by: https://github.com/guangyey, https://github.com/drisspg

Co-authored-by: Yu, Guangye <106960996+guangyey@users.noreply.github.com>
Co-authored-by: mayuyuace <qiming1.zhang@intel.com>
2025-08-14 08:55:31 +00:00
089c4a1ba0 Fix wrong log file name in the docs of torch.distributed.elastic.multiprocessing.start_processes() (#160396)
Fixes #160395

In https://docs.pytorch.org/docs/stable/elastic/multiprocessing.html#starting-multiple-workers and also in the code comment of the function[1], it was specified that:

```
    For each process, the ``log_dir`` will contain:

    #. ``{local_rank}/error.json``: if the process failed, a file with the error info
    #. ``{local_rank}/stdout.json``: if ``redirect & STDOUT == STDOUT``
    #. ``{local_rank}/stderr.json``: if ``redirect & STDERR == STDERR``
```

While in code[2], the files are `stdout.log` and `stderr.log`, instead of the `.json` ones listed in the doc.

[1]: https://github.com/pytorch/pytorch/blob/main/torch/distributed/elastic/multiprocessing/__init__.py#L144-L145
[2]: https://github.com/pytorch/pytorch/blob/main/torch/distributed/elastic/multiprocessing/api.py#L354-L357

Pull Request resolved: https://github.com/pytorch/pytorch/pull/160396
Approved by: https://github.com/fduwjj
2025-08-14 08:24:07 +00:00
97c8c98f8d measure dispatch overhead (#160504)
Reopen https://github.com/pytorch/pytorch/pull/159699 to merge to main.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/160504
Approved by: https://github.com/wconstab
2025-08-14 06:13:53 +00:00
39aa3d1471 Remove the dead code in setup.py (#160515)
The following line has no effect.

34ec5ed275/setup.py (L1205)

This code was originally introduced in this PR: dd7cec680c,
and clang11 and later now support `-fstack-clash-protection`. Can we remove this line?

@malfet
Pull Request resolved: https://github.com/pytorch/pytorch/pull/160515
Approved by: https://github.com/isuruf, https://github.com/albanD
2025-08-14 06:02:11 +00:00
639778b3ee [2/3 step][ vllm ci build setup] Add vlllm buld logic and dockerfile (#160089)
# set up vllm build logic
- dockerfile: note that the dockerfile introduced here is only temporary; once we migrate this file to vllm, we will fetch it directly from there
- VllmBuildRunner:
   - implement logic to prepare and run the vllm build with the dockerfile

Pull Request resolved: https://github.com/pytorch/pytorch/pull/160089
Approved by: https://github.com/huydhn
ghstack dependencies: #160043
2025-08-14 05:51:45 +00:00
00d7d6f123 [1/3][ghstack] [vllm ci build setup ]setup lumen_cli (#160043)
# Description
set up torch_cli using argparse

## Details:
- add vllm placeholder in the cli
- add unittest for cli command

See Readme.md for how to run the CLI.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/160043
Approved by: https://github.com/huydhn
2025-08-14 05:51:45 +00:00
c6d78d4dbd [ROCm] enable miopen channels last 3d for conv and batchnorm (#160529)
MIOpen batchnorm for channels-last is guarded by the env var PYTORCH_MIOPEN_SUGGEST_NHWC_BATCHNORM, similar to the existing PYTORCH_MIOPEN_SUGGEST_NHWC for conv.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/160529
Approved by: https://github.com/jeffdaily

Co-authored-by: Jeff Daily <jeff.daily@amd.com>
2025-08-14 05:30:19 +00:00
2898d3f965 [Lowering] Add assertion msg to sym_size and sym_stride (#160591)
Summary: Add assertion msg to sym_size and sym_stride lowering function.

Test Plan:
Will test in mast job.

Rollback Plan:

Differential Revision: D80187693

Pull Request resolved: https://github.com/pytorch/pytorch/pull/160591
Approved by: https://github.com/angelayi
2025-08-14 04:55:32 +00:00
34358f335d [vllm hash update] update the pinned vllm hash (#160594)
This PR is auto-generated nightly by [this action](https://github.com/pytorch/pytorch/blob/main/.github/workflows/nightly.yml).
Update the pinned vllm hash.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/160594
Approved by: https://github.com/pytorchbot
2025-08-14 04:21:28 +00:00
fe3f5fe4ea Optimize min, max gradient behavior description (#160312)
Fixes #160273

## Test Result
<img width="897" height="593" alt="image" src="https://github.com/user-attachments/assets/6ebcdb2c-8a2c-4f0d-8195-656089e88325" />
<img width="985" height="653" alt="image" src="https://github.com/user-attachments/assets/606a7264-e223-4d2b-8c3f-f153ce43b208" />
<img width="903" height="607" alt="image" src="https://github.com/user-attachments/assets/0ae2f56f-820f-4194-b15c-a02a078c0487" />
<img width="903" height="607" alt="image" src="https://github.com/user-attachments/assets/79c38a17-45ac-4808-829f-d538178de36b" />

Pull Request resolved: https://github.com/pytorch/pytorch/pull/160312
Approved by: https://github.com/ngimel
2025-08-14 04:18:49 +00:00
45ba7ecda8 Flex Attention heuristics: a Blackwell config (#160192)
Fixes #160074 and more.

This is the working config for B200 and RTX 5080.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/160192
Approved by: https://github.com/drisspg
2025-08-14 03:47:02 +00:00
194fcfcfbd Add support for param mutation under inference mode (#159661)
Summary:
In the HF model rwkv, we have parameter mutation under inference mode, which should be safe. This PR does multiple things to make sure it works:
1. We execute the global autograd mutation while tracing so that we can actually trace through parameter in-place mutation
2. Add support for parameter mutation under inference mode in AOTAutograd
3. Add support for parameter mutation under inference mode in export.

Test Plan:
test

Rollback Plan:

Differential Revision: D79460136

Pull Request resolved: https://github.com/pytorch/pytorch/pull/159661
Approved by: https://github.com/ydwu4
2025-08-14 03:34:04 +00:00
29d20d49f0 [cutlass] fix dictionary iteration error (#160552)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/160552
Approved by: https://github.com/henrylhtsang, https://github.com/jingsh
2025-08-14 03:23:46 +00:00
3faee0a631 Update nullcontext to return input args (#158776)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/158776
Approved by: https://github.com/zou3519
2025-08-14 03:02:44 +00:00
8cfaf51d4e Generalize support of background thread in pinned allocator (#160505)
# Motivation
https://github.com/pytorch/pytorch/pull/135524 only introduces support for a background thread for CUDA; this PR extends it to other backends such as XPU.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/160505
Approved by: https://github.com/albanD
2025-08-14 02:22:39 +00:00
af3cabc55d Wrap class definitions in set_fullgraph(False) in test_sort (#160331)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/160331
Approved by: https://github.com/zou3519
ghstack dependencies: #160216, #160217, #160276, #160278, #160330
2025-08-14 02:12:20 +00:00
74bbe7b4a3 Wrap class definitions in set_fullgraph(False) in test_math/cmath (#160330)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/160330
Approved by: https://github.com/zou3519
ghstack dependencies: #160216, #160217, #160276, #160278
2025-08-14 02:12:20 +00:00
7bfc424a61 Wrap class definitions in set_fullgraph(False) in test_iter (#160278)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/160278
Approved by: https://github.com/williamwen42, https://github.com/zou3519
ghstack dependencies: #160216, #160217, #160276
2025-08-14 02:12:20 +00:00
5ace061254 finfo eps doc fix (#160502)
Existing documentation for torch.finfo().eps is as below:
| eps             | float  | The smallest representable number such that ``1.0 + eps != 1.0``.          |

Proposed documentation for torch.finfo().eps is as below:
| eps             | float  | The difference between 1.0 and the next smallest representable float larger than 1.0.	|
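
A quick numerical check of the proposed wording (a small illustrative snippet):

```python
import torch

eps = torch.finfo(torch.float32).eps
print(eps)                                   # 1.1920929e-07, i.e. 2**-23
one = torch.tensor(1.0, dtype=torch.float32)
two = torch.tensor(2.0, dtype=torch.float32)
print((one + eps) != one)                    # tensor(True)
print(torch.nextafter(one, two) - one)       # the gap above 1.0, equal to eps
```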

Fixes #160397

Pull Request resolved: https://github.com/pytorch/pytorch/pull/160502
Approved by: https://github.com/ngimel
2025-08-14 01:49:35 +00:00
15e49f6164 Factor out the strings to templates for better editor integration (#160357)
# Summary

More code motion; tl;dr: install 'Better Jinja' in VS Code and you now get syntax highlighting.

Before
<img width="776" height="926" alt="Screenshot 2025-08-11 at 2 41 08 PM" src="https://github.com/user-attachments/assets/10868b31-f8ac-4cf5-99fe-19b8789ce06b" />

After:
<img width="1184" height="1299" alt="Screenshot 2025-08-11 at 2 40 27 PM" src="https://github.com/user-attachments/assets/45203765-589e-4d76-8196-d895a2f2fbf6" />

Pull Request resolved: https://github.com/pytorch/pytorch/pull/160357
Approved by: https://github.com/eellison
2025-08-14 01:07:53 +00:00
dd21c8a578 refresh expected results (#160537)
Regression introduced by https://github.com/pytorch/pytorch/pull/160314.
Not much worried about it since it did not affect other inductor benchmarks; could not repro locally.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/160537
Approved by: https://github.com/eellison
2025-08-14 00:56:14 +00:00
a06ec54d40 [MPS] Add API to query GPU core count (#160414)
Uses good old IOKit to get the `gpu-core-count` property from the device implementing the `AGXAccelerator` service.
Exposes it as `torch.backends.mps.get_core_count()` and makes it accessible via `MpsInterface` to the inductor.

Test Plan: Run `python3 -c "import torch;print(torch.backends.mps.get_name(), torch.backends.mps.get_core_count())"` and compare it to `system_profiler SPDisplaysDataType|head -n10`
```
% python3 -c "import torch;print(torch.backends.mps.get_name(), torch.backends.mps.get_core_count())"
Apple M1 Pro 16
% system_profiler SPDisplaysDataType|head -n10
Graphics/Displays:

    Apple M1 Pro:

      Chipset Model: Apple M1 Pro
      Type: GPU
      Bus: Built-In
      Total Number of Cores: 16
      Vendor: Apple (0x106b)
      Metal Support: Metal 3
```

This would significantly improve occupancy for torch.compile generated kernels

Pull Request resolved: https://github.com/pytorch/pytorch/pull/160414
Approved by: https://github.com/dcci
2025-08-14 00:05:17 +00:00
50a8c11875 Add getCurrentDeviceIndex to torch::stable::accelerator (#160453)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/160453
Approved by: https://github.com/janeyx99
ghstack dependencies: #159679
2025-08-13 23:42:24 +00:00
e4e4dbd2f8 Add beginnings of torch::stable::accelerator (#159679)
Adds
- `torch::stable::accelerator::DeviceGuard`: `std::unique_ptr` to `DeviceGuardOpaque`, mostly copied from the below (but made generic)

   50eac811a6/torch/csrc/inductor/aoti_runtime/utils_cuda.h (L30-L46)
    - constructor `DeviceGuard(DeviceIndex)` (**this matches aoti but differs from the actual c10 DeviceGuard constructor, which takes a device**)
    - `set_index(DeviceIndex)`
- `torch::stable::accelerator::Stream`: `std::shared_ptr` to `StreamOpaque`
     - constructor `Stream(StreamHandle stream)` (similar to torch::stable::Tensor)
     - `id() -> StreamId`

- `getCurrentStream(DeviceIndex device_index) -> stable::accelerator::Stream`

Pull Request resolved: https://github.com/pytorch/pytorch/pull/159679
Approved by: https://github.com/guangyey, https://github.com/janeyx99
2025-08-13 23:42:24 +00:00
d670304001 [ATen][CUDA] Use new CCCL API in v2.8 (#160554)
Silences deprecation warnings like:
```
In file included from tmpxft_003a195d_00000000-6_Nonzero.cudafe1.stub.c:1:
/tmp/tmpxft_003a195d_00000000-6_Nonzero.cudafe1.stub.c: At global scope:
/tmp/tmpxft_003a195d_00000000-6_Nonzero.cudafe1.stub.c:243:219: warning: 'template<class ValueType, class OffsetT> class at_cuda_detail::cub::CountingInputIterator' is deprecated: Use thrust::counting_iterator instead [-Wdeprecated-declarations]
  243 | static void __device_stub__ZN2at6native43_GLOBAL__N__3cee4041_10_Nonzero_cu_cba1aaa011flag_kernelILi512ELi16EhEEvPKT1_PlPKllli( const _ZN3c104impl20ScalarTypeToCPPTypeTILNS_10ScalarTypeE0EEE *__par0,  int64_t *__par1,  const int64_t *__par2,  int64_t __par3,  int64_t __par4,  int __par5) {  __cudaLaunchPrologue(6); __cudaSetupArgSimple(__par0, 0UL); __cudaSetupArgSimple(__par1, 8UL); __cudaSetupArgSimple(__par2, 16UL); __cudaSetupArgSimple(__par3, 24UL); __cudaSetupArgSimple(__par4, 32UL); __cudaSetupArgSimple(__par5, 40UL); __cudaLaunch(((char *)((void ( *)(const _ZN3c104impl20ScalarTypeToCPPTypeTILNS_10ScalarTypeE0EEE *, int64_t *, const int64_t *, int64_t, int64_t, int))at::native::_NV_ANON_NAMESPACE::flag_kernel<(int)512, (int)16, unsigned char> ))); }namespace at{
      |                                                                                                                                                                                                                           ^~~~~~~~~~~~~~~~~~~~~
/usr/local/cuda-12.9/include/cub/iterator/counting_input_iterator.cuh:93:63: note: declared here
   93 | class CCCL_DEPRECATED_BECAUSE("Use thrust::counting_iterator instead") CountingInputIterator
      |                                                               ^~~~~~~~~~~~~~~~~~~~~
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/160554
Approved by: https://github.com/ngimel, https://github.com/malfet, https://github.com/atalman
2025-08-13 23:15:53 +00:00
c5efc5c8a6 Fix unit test test_equivalent_template_code (#160432)
Summary:
Fix unit test test_equivalent_template_code

https://github.com/pytorch/pytorch/pull/159920 treats ReinterpretView as a not-realized node when searching FX origin nodes for a fused triton kernel. In test_equivalent_template_code, there is a transpose node (which is a ReinterpretView) before the matmul; it was not in the FX graph segment before PR 159920. FX origin nodes are used to define the name of the triton kernel, which is why test_equivalent_template_code failed with PR 159920: it uses a hard-coded triton kernel name to check the result. The fix is to update the triton kernel name in the unit test.

Test Plan:
buck2 run mode/opt caffe2/test/inductor:benchmark_fusion -- caffe2.test.inductor.test_benchmark_fusion.BenchmarkMultiTemplateFusionCudaTest

Rollback Plan:

Differential Revision: D80101711

Pull Request resolved: https://github.com/pytorch/pytorch/pull/160432
Approved by: https://github.com/clee2000
2025-08-13 23:14:51 +00:00
6da11d9aaf [C10D] Add check_rng_sync util (#160283)
Debugs RNG desync by checking the current state on each rank in the group and summarizing the differences if any are detected.

Notes:
- used allgather instead of gather since it's simpler to do this SPMD rather than add conditional behavior, though I could be convinced we only want to log on rank 0.

Usage:
`check_rng_sync(generator, group)`

Prints something like this:

(cuda):
```
[rank0]:E0808 ] Generator desync detected:
[rank0]:E0808 ] Ranks    (Seed, Offset) values
[rank0]:E0808 ] -------  -----------------------
[rank0]:E0808 ] 0        (456, 0)
[rank0]:E0808 ] 1        (123, 4)
[rank0]:E0808 ] 2-3      (123, 0)
```

(cpu):
```
[rank2]:E0810 ] Generator desync detected:
[rank2]:E0810 ] Ranks      Generator State Hash values
[rank2]:E0810 ] -------  -----------------------------
[rank2]:E0810 ] 0                  7633364531954955665
[rank2]:E0810 ] 1                  8807615394212033278
[rank2]:E0810 ] 2-3               -6150027303226666531
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/160283
Approved by: https://github.com/ezyang
2025-08-13 23:05:29 +00:00
182efe31db [inductor] add lowering for repeat_interleave.Tensor with output size specified (#147160) (#158462)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/158462
Approved by: https://github.com/eellison
2025-08-13 22:54:18 +00:00
1ea688f9a2 [dynamo] fix EXTENDED_ARG starts_line dropping bug (#160478)
Fixes https://github.com/pytorch/pytorch/issues/160471

Pull Request resolved: https://github.com/pytorch/pytorch/pull/160478
Approved by: https://github.com/Lucaskabela, https://github.com/billmguo
2025-08-13 22:27:40 +00:00
53e3949495 [MTIA-T][CFF] Pass backend parameter into GPU vertical pass file and pattern matcher (#160404)
Summary:
As titled
Please see https://fb.workplace.com/groups/1075192433118967/posts/1735215827116621/?comment_id=1735220747116129&reply_comment_id=1735242997113904

Basically, for MTIA, we want mtia_afg to show up in the counters and backend, instead of Inductor (MTIA is not using inductor yet). We use the env var TORCHINDUCTOR_PATTERN_MATCH_BACKEND to pass in the actual backend.

The env var default value is "inductor", so nothing should break for GPU.

Test Plan:
Default is always "inductor", so existing test should not break.

CI tests

Rollback Plan:

Differential Revision: D80069072

Pull Request resolved: https://github.com/pytorch/pytorch/pull/160404
Approved by: https://github.com/BoyuanFeng
2025-08-13 22:24:27 +00:00
33d9401866 Revert "[BE][Dynamo] Type improvements in _dynamo/utils to generics (#159824)"
This reverts commit 3ef2e1ef769582a82c6ddf150e9d11bf4bf1c44f.

Reverted https://github.com/pytorch/pytorch/pull/159824 on behalf of https://github.com/clee2000 due to I think this broke dynamo/test_trace_rules.py::TraceRuleTests::test_almost_impossible_missing_name [GH job link](https://github.com/pytorch/pytorch/actions/runs/16948305999/job/48035192324) [HUD commit link](3ef2e1ef76) ([comment](https://github.com/pytorch/pytorch/pull/159824#issuecomment-3186003531))
2025-08-13 22:17:29 +00:00
d1950d4bb5 Change IR node's stack trace to be computed lazily (#160487)
Summary: When an IR node is an instance of a derived class, `__post_init__` is called once for each `super().__init__()` call. To avoid duplicated work, we make the stack trace computation happen lazily.
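
A minimal sketch of the lazy-computation pattern described in the summary (generic Python, not the actual Inductor IR code):

```python
import functools
import traceback
from dataclasses import dataclass


@dataclass
class IRNodeSketch:
    name: str

    @functools.cached_property
    def stack_trace(self) -> str:
        # Captured only on first access, so repeated __post_init__ calls from a
        # subclass __init__ chain no longer trigger repeated captures.
        return "".join(traceback.format_stack())
```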

Test Plan:
CI

Rollback Plan:

Differential Revision: D80137870

Pull Request resolved: https://github.com/pytorch/pytorch/pull/160487
Approved by: https://github.com/angelayi
2025-08-13 21:41:25 +00:00
1196bb1c2e Add utility to get computed kernel in torch.library (#158393)
Adds `OperatorEntry::getComputedKernelForDispatchKey` which returns the KernelFunction corresponding to `OperatorEntry.dispatchTable_[dispatch_ix]` for a given dispatch key
- Specifically it returns a `SafeKernelFunction` that holds a `KernelToken`. This `KernelToken` is registered to the `KernelFunction` in `OperatorEntry.kernels_` and will be invalidated when the `KernelFunction` is destructed (i.e. when the `AnnotatedKernel` that holds this `KernelFunction` is removed from `kernels_`, which happens when the corresponding impl is deregistered).
- `SafeKernelFunction` can be called via `callBoxed`, the validity of the token will be checked before this happens
- `SafeKernelFunction` is pybind-ed, and `getComputedKernelForDispatchKey` is exposed to the frontend via `torch.library.get_kernel`
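
A hedged usage sketch of the new frontend hook (the argument format below, operator name plus dispatch key string, is an assumption for illustration and is not confirmed by the PR):

```python
import torch

kernel = torch.library.get_kernel("aten::add.Tensor", "CPU")
print(kernel)  # a SafeKernelFunction; invalidated if the impl is deregistered
```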

Related to https://github.com/pytorch/pytorch/issues/155330

Pull Request resolved: https://github.com/pytorch/pytorch/pull/158393
Approved by: https://github.com/albanD
2025-08-13 21:00:59 +00:00
e9eb2096a5 [cutlass backend] Allow bmm use cases when batch stride is 0 (#160356)
Differential Revision: [D80035771](https://our.internmc.facebook.com/intern/diff/D80035771/)

The motivation for the original change was to reduce the number of parameters we pass into the kernel, for aesthetic reasons only.

But given the need to use a different batch stride, we should just pass in the batch stride. That is the better long-term fix.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/160356
Approved by: https://github.com/mlazos
2025-08-13 20:52:24 +00:00
3ef2e1ef76 [BE][Dynamo] Type improvements in _dynamo/utils to generics (#159824)
Follow up to #159580

Pull Request resolved: https://github.com/pytorch/pytorch/pull/159824
Approved by: https://github.com/williamwen42
2025-08-13 20:17:01 +00:00
4cde0acc0e Make triton build ROCm library version-agnostic (#158408)
Eases maintenance of the triton packaging script when library versions change from one ROCm version to the next.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/158408
Approved by: https://github.com/jeffdaily

Co-authored-by: Ethan Wee <Ethan.Wee@amd.com>
2025-08-13 19:49:23 +00:00
70ccdec44b [ROCm] Improve reduction sum performance (#160466)
* Use input vectorization for reduction_on_fastest_striding_dimension when dim0 >= 128

**Reproducer:**
```
import time
import torch

shapes = [
    (5079670, 128)
]

dims = [
    (1)
]

for i, shape in enumerate(shapes):
    x = torch.randn(shape, device='cuda', dtype=torch.float)
    for _ in range(10):
        w = torch.sum(x, dims[i])
    torch.cuda.synchronize()
    print(w.size())

    start_time = time.time()
    for _ in range(50):
        _ = torch.sum(x, dims[i])
    torch.cuda.synchronize()
    end_time = time.time()
    mean_time = (end_time - start_time)/50
    print(f"Avg time for shape {shape}: {mean_time * 1e6:.2f} us")
```

**Before (MI300X):**
Avg time for shape (5079670, 128): 1629.99 us

**After (MI300X)**
Avg time for shape (5079670, 128): 1008.59 us

Pull Request resolved: https://github.com/pytorch/pytorch/pull/160466
Approved by: https://github.com/petrex, https://github.com/jeffdaily
2025-08-13 18:46:58 +00:00
db0b7f1cc9 [BE][CI] Adjust error_inputs for cat and complex (#160378)
MPS backend does not support double, so errors should be different
Pull Request resolved: https://github.com/pytorch/pytorch/pull/160378
Approved by: https://github.com/dcci
2025-08-13 18:35:06 +00:00
1c26c53851 Fix the Doc of pivot in torch.lu (#159617)
Fixes #159616

Pull Request resolved: https://github.com/pytorch/pytorch/pull/159617
Approved by: https://github.com/lezcano, https://github.com/jansel
2025-08-13 18:30:54 +00:00
adcca7d9a1 Do not rpath CUDA stubs folder in JIT generated code (#160179)
`_transform_cuda_paths` intentionally includes the CUDA stubs folder.

However this path must not be added to the rpath as otherwise any CUDA command will fail at runtime with
> CUDA_ERROR_STUB_LIBRARY: "CUDA driver is a stub library"

This results in e.g. non-descriptive errors like
```
cutlass_library/source/tools/util/include/cutlass/util/device_memory.h:67  cutlass::device_memory::allocate: cudaMalloc failed: bytes=4096
terminate called after throwing an instance of 'cutlass::cuda_exception'
  what():  std::exception
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/160179
Approved by: https://github.com/jansel
2025-08-13 18:29:24 +00:00
01584d2a7d [ROCm] remove extra transposes in NHWC convolutions on MIOpen (#160435)
remove aten::contiguous for NHWC convolutions on ROCm

Tests:
- nn/test_convolution.py::TestConvolutionNNDeviceTypeCUDA::test_conv_cudnn_nhwc_cuda_float32
- nn/test_convolution.py::TestConvolutionNNDeviceTypeCUDA::test_conv_cudnn_nhwc_cuda_float16

Before:
<img width="1255" height="228" alt="image"
src="https://github.com/user-attachments/assets/b125ccab-00c2-4d3a-a341-4583e51d8d57" />

After:
<img width="874" height="153" alt="image"
src="https://github.com/user-attachments/assets/ec200754-3622-488e-8762-bff1c2d22818" />

Pull Request resolved: https://github.com/pytorch/pytorch/pull/160435
Approved by: https://github.com/jeffdaily
2025-08-13 17:58:22 +00:00
87e6c4079d Fix the Doc issue on the description of edge_order in torch.gradient() (#159130)
Fixes #159129

Pull Request resolved: https://github.com/pytorch/pytorch/pull/159130
Approved by: https://github.com/soulitzer
2025-08-13 16:48:47 +00:00
7d87e358ac Fix MPS conv3d autocast bias dtype mismatch (#160423)
## Summary
- register conv3d with MPS autocast to ensure bias dtypes match under AMP
- add regression test chaining two Conv3d layers on MPS autocast

Written by Codex, see https://chatgpt.com/codex/tasks/task_e_689b64192df883278648935963d2776d

Pull Request resolved: https://github.com/pytorch/pytorch/pull/160423
Approved by: https://github.com/dcci
2025-08-13 16:23:21 +00:00
6ee175195a [DCP][OSS] Rank local checkpointing in DCP without collectives (#147758)
Summary:
DCP metadata collectives become prohibitively expensive as the job scale grows. This PR introduces rank-local checkpointing, which saves and loads the checkpoint without any collectives. The trade-off for now is dedupe and re-sharding; support for these will be introduced soon.

Differential Revision: D70112642

Pull Request resolved: https://github.com/pytorch/pytorch/pull/147758
Approved by: https://github.com/meetv18
2025-08-13 16:20:28 +00:00
db32b60662 [ci] Add riscv opt-int build (#143979)
Hi, @malfet
Based on the previous discussion:

[RISCV CI support · Issue #141550 · pytorch/pytorch](https://github.com/pytorch/pytorch/issues/141550)

I have cross-compiled PyTorch for the RISC-V architecture on x86_64 Ubuntu 24.04 and created a new PR for it. Could you please help review it?

Pull Request resolved: https://github.com/pytorch/pytorch/pull/143979
Approved by: https://github.com/malfet

Co-authored-by: Nikita Shulga <2453524+malfet@users.noreply.github.com>
2025-08-13 16:12:02 +00:00
56c828bef9 Followup of #160002, gracefully fail if Triton functions don't contain attributes (#160436)
Summary: Fixes internal test failures of D80037015

Test Plan:
CI

Rollback Plan:

Differential Revision: D80094187

Pull Request resolved: https://github.com/pytorch/pytorch/pull/160436
Approved by: https://github.com/clee2000
2025-08-13 16:04:56 +00:00
a2fd106d67 guard cuMulticastUnbind call (#160499)
Fixes builds for old compilers

Pull Request resolved: https://github.com/pytorch/pytorch/pull/160499
Approved by: https://github.com/Skylion007
2025-08-13 15:45:51 +00:00
c656334120 Revert "Factor out the strings to templates for better editor integration (#160357)"
This reverts commit cbffde774557752cf20447d42d99ec6102673c31.

Reverted https://github.com/pytorch/pytorch/pull/160357 on behalf of https://github.com/clee2000 due to broke a bunch of internal builds due to not being able to find the file  No such file or directory: torch/_inductor/kernel/flex/templates/flex_decode.py.jinja D80145761, might need a buck targets change? ([comment](https://github.com/pytorch/pytorch/pull/160357#issuecomment-3184435581))
2025-08-13 15:40:50 +00:00
31c9ac4319 [c10d] Fix test test_nccl_user_buffer_registration (#160497)
Fixed `test_nccl_user_buffer_registration`, which was broken by https://github.com/pytorch/pytorch/pull/160145; somehow CI didn't capture it.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/160497
Approved by: https://github.com/ngimel
2025-08-13 15:29:41 +00:00
deea71a90e [ez][CI] Set timeout for linux-jammy-py3_13-clang12-test from 600min -> default val of 240 (#160500)
10 hours is very long
Pull Request resolved: https://github.com/pytorch/pytorch/pull/160500
Approved by: https://github.com/huydhn
2025-08-13 15:14:24 +00:00
114a6c4043 Add placeholder for the User Guide (#159379)
- Add pytorch_overview.md
- Add pytorch_main_components.md
- Reorganize top nav to have Get Started, User Guide, Reference API, Community, Tutorials
- Move notes under user guide

Pull Request resolved: https://github.com/pytorch/pytorch/pull/159379
Approved by: https://github.com/albanD

Co-authored-by: sekyondaMeta <127536312+sekyondaMeta@users.noreply.github.com>
Co-authored-by: Nikita Shulga <2453524+malfet@users.noreply.github.com>
2025-08-13 14:56:04 +00:00
ee1b0412b9 [1/N]Port 3 distributed/_tools test cases to Intel GPU (#159543)
For [#114850](https://github.com/pytorch/pytorch/issues/114850), we will port distributed tests to Intel GPU.

We could enable Intel GPU with the following methods, trying our best to keep the original code style:

1. use "torch.accelerator.current_accelerator()" to determine the accelerator backend (see the sketch after this list)
2. enable XPU for some test paths
3. skip some test cases which Intel GPU does not support
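
A minimal sketch of the backend-detection pattern from item 1 (illustrative only, not taken from the test files):

```python
import torch

acc = torch.accelerator.current_accelerator()
device_type = acc.type if acc is not None else "cpu"  # e.g. "cuda" or "xpu"
x = torch.randn(8, device=device_type)
print(device_type, x.device)
```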

Pull Request resolved: https://github.com/pytorch/pytorch/pull/159543
Approved by: https://github.com/guangyey, https://github.com/d4l3k

Co-authored-by: Yu, Guangye <106960996+guangyey@users.noreply.github.com>
2025-08-13 12:49:01 +00:00
42e51cd4b3 Support ddp zero hook XCCL path (#159240)
The XCCL backend does not have issue https://github.com/pytorch/pytorch/issues/62300, so add an XCCL path here.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/159240
Approved by: https://github.com/guangyey, https://github.com/Skylion007, https://github.com/EikanWang
2025-08-13 12:37:33 +00:00
96bd33b2de Fix get_free_symbol_uses for several nodes (#160314)
get_free_symbol_uses is used to know which unbacked symbols are used by a given node.
Not having get_free_symbol_uses defined properly leads to:

- elimination of some nodes because no users are detected (see the added unit test)
- an incorrect topological sort

Fix get_free_symbol_uses for NopKernel, ConcatKernel, InputsKernel, and external kernels.
ComputedBuffer with NonOwningLayout is an interesting case: when the layout is NonOwningLayout,
we need to access the actual view op's base layout and detect the symbols in it,
because when we codegen the ComputedBuffer we use those symbols.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/160314
Approved by: https://github.com/eellison
2025-08-13 12:28:29 +00:00
ecde76c764 [Hierarchical Compile] Sort all regions identically (#158814)
Before, we would topologically sort each region individually. This works well except when some nodes have no arguments, in which case their order may change. To rectify this, we use the first region as the reference region and apply its sort order to the remaining regions.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/158814
Approved by: https://github.com/williamwen42
2025-08-13 11:55:23 +00:00
34ec5ed275 [Dynamo][Hierarchical Compile] Allow parameters to be propagated to submodules (#157979)
Fixes issue with HF Gen AI models where we mark a param as static and a get_attr node gets put in the region.

The effect of this is lifting get_attr nodes to be inputs.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/157979
Approved by: https://github.com/williamwen42
2025-08-13 09:12:10 +00:00
641ee74781 Revert "Add label_smoothing param in nn.BCELoss and nn.BCEWithLogitsLoss (#150282)"
This reverts commit f990490a23815ea6ee27e487c70ba2cf513ba43d.

Reverted https://github.com/pytorch/pytorch/pull/150282 on behalf of https://github.com/facebook-github-bot due to Diff reverted internally ([comment](https://github.com/pytorch/pytorch/pull/150282#issuecomment-3182844949))
2025-08-13 09:01:52 +00:00
6e8865fbc1 port 3 distributed test to Intel GPU and unified some common functions (#158533)
For https://github.com/pytorch/pytorch/issues/114850, we will port distributed tests to Intel GPU.
We could enable Intel GPU with the following methods, trying our best to keep the original code style:

- instantiate_device_type_tests()
- use "torch.accelerator.current_accelerator()" to determine the accelerator backend
- enabled XPU for some test path
- Unify some common code under torch/testing/_internal for multiple backends, for example:
  - requires_nccl_version
  - _dynamo_dist_per_rank_init
  - DynamoDistributedSingleProcTestCase
  - DistTestCases
  - FSDPTestMultiThread

Pull Request resolved: https://github.com/pytorch/pytorch/pull/158533
Approved by: https://github.com/guangyey, https://github.com/d4l3k

Co-authored-by: Yu, Guangye <106960996+guangyey@users.noreply.github.com>
2025-08-13 08:13:23 +00:00
9a06e6d031 [claude-code] Add top-level module doc for torch/distributed/tensor/_op_schema.py (#157804)
Not sure how good the description is, seeking insight from maintainers.

Signed-off-by: Edward Yang <ezyang@meta.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/157804
Approved by: https://github.com/wanchaol
2025-08-13 07:27:11 +00:00
6ea8376f84 Enable XPU for test_autograd_function.py (#160309)
# Description
Fixes #114850; we will port dynamo tests to Intel GPU.
We could enable Intel GPU with the following methods, trying our best to keep the original code style:

# Changes
1. Get device type from get_devtype() method.
2. Replace the requires_cuda_and_triton with requires_gpu.
3. Add HAS_XPU_AND_TRITON into the scope.

# Notify

Pull Request resolved: https://github.com/pytorch/pytorch/pull/160309
Approved by: https://github.com/guangyey, https://github.com/ezyang
2025-08-13 06:38:34 +00:00
8eee08d227 Replace TORCH_INTERNAL_ASSERT with TORCH_CHECK (#160411)
As the title stated.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/160411
Approved by: https://github.com/ezyang
2025-08-13 06:31:10 +00:00
e497620260 Add compile_id: Optional[CompileID] to torch._logging._internal.trace_structured_artifact (#160440)
Context:
When writing a custom `torch.compile` backend, I quite frequently (ab)use `trace_structured_artifact` because I'm too lazy to customize tlparse (ref: 6d8b13c867).

I recently noticed that some of the artifacts I want to store are generated where a CompileID cannot be correlated, and the `tlparse` html says
> Sometimes, logs are made without a compile id. This makes it difficult to correlate related logs. This stack trie shows all places where log entries occurred without compile context; to fix, look an appropriate place in the stack where compile id should have been specified.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/160440
Approved by: https://github.com/ezyang
2025-08-13 06:28:23 +00:00
199e9abb6a [fx] fix split_module with symint (#160093)
Fixes https://github.com/pytorch/pytorch/issues/155220

Pull Request resolved: https://github.com/pytorch/pytorch/pull/160093
Approved by: https://github.com/ezyang
2025-08-13 05:50:15 +00:00
685f15dbea [vllm hash update] update the pinned vllm hash (#160484)
This PR is auto-generated nightly by [this action](https://github.com/pytorch/pytorch/blob/main/.github/workflows/nightly.yml).
Update the pinned vllm hash.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/160484
Approved by: https://github.com/pytorchbot
2025-08-13 04:54:03 +00:00
85db508af5 Wrap class definitions in set_fullgraph(False) in test_int/bool/float/complex (#160276)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/160276
Approved by: https://github.com/zou3519
ghstack dependencies: #160216, #160217
2025-08-13 04:53:03 +00:00
27156ec804 Wrap class definitions in set_fullgraph(False) in test_operator (#160217)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/160217
Approved by: https://github.com/zou3519
ghstack dependencies: #160216
2025-08-13 04:53:03 +00:00
6746bc59df Wrap class definitions in set_fullgraph(False) in test_set (#160216)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/160216
Approved by: https://github.com/zou3519
2025-08-13 04:53:03 +00:00
3008d985a8 [CD] Do not build pytorch with nvshem on ARM (#160465)
The nvshmem binary from 3.3.9 is not compatible with manylinux2_28, and 3.3.20 is not available for download yet.
Also, package the nvshmem binary into the full wheel.

Fixes https://github.com/pytorch/pytorch/issues/160425
Pull Request resolved: https://github.com/pytorch/pytorch/pull/160465
Approved by: https://github.com/atalman, https://github.com/huydhn
2025-08-13 04:10:43 +00:00
652a6f5954 Revert "[Fix XPU CI][Inductor UT] Fix test cases broken by community. (#160403)"
This reverts commit 5a9c4cfce42b9eb87da0de40c5633f083115c307.

Reverted https://github.com/pytorch/pytorch/pull/160403 on behalf of https://github.com/malfet due to It indeed consistently broken inductor, see 118bc97b14/1 ([comment](https://github.com/pytorch/pytorch/pull/160403#issuecomment-3182101130))
2025-08-13 04:05:46 +00:00
118bc97b14 Write full tensors out at once in HF consolidation script (#159394)
Not all storage systems support writing at random offsets. This PR changes the consolidation script to write each tensor to a buffer and then write out the buffer, going sequentially through every tensor in the output file. This will also help in the case where the files weren't sharded only along the row-wise dimension: small writes are expensive, and we previously issued a write for every chunk that formed the largest run of contiguous bytes in the final tensor, which could be just a few bytes for col-wise sharding. Now the full tensor is assembled before the write, reducing the number of small writes.
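
A generic sketch of the buffer-then-write pattern described above (illustrative only; not the actual consolidation code, and the function and parameter names are made up):

```python
import io
from typing import BinaryIO, Iterable


def write_tensor_sequentially(out_file: BinaryIO, chunks: Iterable[bytes]) -> None:
    # Assemble every chunk of one output tensor in memory first, then issue a
    # single large sequential write instead of many small random-offset writes.
    buf = io.BytesIO()
    for chunk in chunks:
        buf.write(chunk)
    out_file.write(buf.getvalue())
```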

Differential Revision: [D78684452](https://our.internmc.facebook.com/intern/diff/D78684452/)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/159394
Approved by: https://github.com/saumishr
ghstack dependencies: #159392, #159393
2025-08-13 03:51:16 +00:00
305fa22393 [GHF] Remove app { name databaseId} query (#160494)
From the `PRCheckSuites` fragment, as it causes a security exception when used with the new GITHUB_TOKEN, which looks as follows
```
RuntimeError: GraphQL query
fragment PRReviews on PullRequestReviewConnection {
  nodes {
    author {
      login
    }
    bodyText
    createdAt
    authorAssociation
    editor {
      login
    }
    databaseId
    url
    state
  }
  pageInfo {
    startCursor
    hasPreviousPage
  }
}

fragment PRCheckSuites on CheckSuiteConnection {
  edges {
    node {
      app {
        name
        databaseId
      }
      workflowRun {
        workflow {
          name
          databaseId
        }
        databaseId
        url
      }
      checkRuns(first: 50) {
        nodes {
          name
          conclusion
          detailsUrl
          databaseId
          title
          summary
        }
        pageInfo {
          endCursor
          hasNextPage
        }
      }
      conclusion
    }
    cursor
  }
  pageInfo {
    hasNextPage
  }
}

fragment CommitAuthors on PullRequestCommitConnection {
  nodes {
    commit {
      authors(first: 2) {
        nodes {
          user {
            login
          }
          email
          name
        }
      }
      oid
    }
  }
  pageInfo {
    endCursor
    hasNextPage
  }
}

query ($owner: String!, $name: String!, $number: Int!) {
  repository(owner: $owner, name: $name) {
    pullRequest(number: $number) {
      closed
      isCrossRepository
      author {
        login
      }
      title
      body
      headRefName
      headRepository {
        nameWithOwner
      }
      baseRefName
      baseRefOid
      baseRepository {
        nameWithOwner
        isPrivate
        defaultBranchRef {
          name
        }
      }
      mergeCommit {
        oid
      }
      commits_with_authors: commits(first: 100) {
        ...CommitAuthors
        totalCount
      }
      commits(last: 1) {
        nodes {
          commit {
            checkSuites(first: 10) {
              ...PRCheckSuites
            }
            status {
              contexts {
                context
                state
                targetUrl
              }
            }
            oid
          }
        }
      }
      changedFiles
      files(first: 100) {
        nodes {
          path
        }
        pageInfo {
          endCursor
          hasNextPage
        }
      }
      reviews(last: 100) {
        ...PRReviews
      }
      comments(last: 5) {
        nodes {
          bodyText
          createdAt
          author {
            login
          }
          authorAssociation
          editor {
            login
          }
          databaseId
          url
        }
        pageInfo {
          startCursor
          hasPreviousPage
        }
      }
      labels(first: 100) {
        edges {
          node {
            name
          }
        }
      }
    }
  }
}
, args {'name': 'pytorch', 'owner': 'pytorch', 'number': 159820} failed: [{'type': 'FORBIDDEN', 'path': ['repository', 'pullRequest', 'commits', 'nodes', 0, 'commit', 'checkSuites', 'edges', 4, 'node', 'app'], 'extensions': {'saml_failure': False}, 'locations': [{'line': 26, 'column': 7}], 'message': 'Resource not accessible by integration'}]
```
But the same query works fine if executed using one's Personal Access Token

Updated mocks file by running
```
sed -i -e s/a32a7ca3a2f6e2c9de07aef821b0111539758b4ac254f8a3432af32314f94876/8e262b0495bd934d39dda198d4c09144311c5ddd6cca6a227194bd48dbfe7201/ gql_mocks.json
sed -i -e s/157add81c519f614388f3a67e287bdf4fbb1791e6d0bffe312e169d02ac2813f/28349cb4c891bbf85255fab2c33c770baf77c3e02b29ca9a0e4c6c97bed041db/ gql_mocks.json
sed '/"app": {/,+3d' gql_mocks-orig.json >gql_mocks.json
sed '/"app": null/d' gql_mocks-orig.json >gql_mocks.json
```

Re-enable the offending jobs

Fixes https://github.com/pytorch/pytorch/issues/159894
Pull Request resolved: https://github.com/pytorch/pytorch/pull/160494
Approved by: https://github.com/huydhn
ghstack dependencies: #160490, #160492
2025-08-13 03:46:39 +00:00
1151b40cbf [BE] Filter unused mocks (#160492)
Somebody checked in twice the number of mocks into the archive.

Filter them out by running the following script
```python
import json
with open("gql_mocks-orig.json") as f:
    mocks = json.load(f)

keys = list(mocks.keys())
good_shas = {'a32a7ca3a2f6e2c9de07aef821b0111539758b4ac254f8a3432af32314f94876',
             '157add81c519f614388f3a67e287bdf4fbb1791e6d0bffe312e169d02ac2813f',
             '4715ed05b382e572135c049664939f22f9b1249bc0c499ae278d655ad8cb598b',
             'a91ab398f97fb43cbe6e0899980dad8ff7447457ea5a71bbc59f7702a9280eb5',
             'e5130469b5373479776bfbccade8039ce4741b97873bb3bec4e279fed08602be',
             '5dc32efeb8306f03744f6804ef4b500882f2759f7ac17fdc9f123669bfe4805a',
             '0a34acb829d8aca9dd28a8ba388dfa52f6ecdde7e903ace1caabdcfaba87de98',
             '8b50878b010492fe64005cc4b4ed34ac5f6695ce093f06b0d8d5403b7787c2c0',
             '2877b3b1e8630ca4ae797b9d85d5673d25ca8488c01141e11ff55f4a1359fca7'}
for k in keys:
    if any(sha in k for sha in good_shas):
        continue
    del mocks[k]

with open("gql_mocks.json","w") as f:
    json.dump(mocks, f, indent=2)
    f.write("\n")
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/160492
Approved by: https://github.com/huydhn
ghstack dependencies: #160490
2025-08-13 03:46:39 +00:00
d0f9785af3 [CI] Prevent accidental gql_mocks updates by test_trymerge (#160490)
As they could not longer be fetched from GitHub, see https://github.com/pytorch/pytorch/issues/160489
Pull Request resolved: https://github.com/pytorch/pytorch/pull/160490
Approved by: https://github.com/huydhn
2025-08-13 03:46:32 +00:00
ba47821f52 [ROCm] Set thread_work_size to 16 for vectorized elementwise kernels for MI300X (#160444)
* thread_work_size of 16 is giving better perf with many workloads for MI300X

cherry-pick of fb81400d34

Pull Request resolved: https://github.com/pytorch/pytorch/pull/160444
Approved by: https://github.com/jeffdaily
2025-08-13 03:41:25 +00:00
2c5e10a5fc Add new function consolidate_safetensors_files_on_every_rank for HF consolidation (#159393)
Currently we are only using rank-0 for HF consolidation. But we should be able to use every rank to consolidate the sharded files, which will speed up the consolidation by Nx (where N is the number of ranks). Adding a new method consolidate_safetensors_files_on_every_rank to do this.

Differential Revision: [D79000720](https://our.internmc.facebook.com/intern/diff/D79000720/)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/159393
Approved by: https://github.com/saumishr
ghstack dependencies: #159392
2025-08-13 03:31:36 +00:00
355462e127 Add stable Tensor get_device_index, use more stable DeviceIndex (#160143)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/160143
Approved by: https://github.com/mikaylagawarecki
2025-08-13 03:27:10 +00:00
41673110cd [inductor] Windows inductor use intel-openmp. (#160258)
After some debugging, I found that PyTorch's torch_cpu.dll uses intel-openmp, not MSVC OpenMP.
So, switch the Windows inductor build to intel-openmp.

It fixed: c8205cb354/test/inductor/test_aot_inductor.py (L2405-L2408)
<img width="896" height="230" alt="image" src="https://github.com/user-attachments/assets/273b00f8-7dc1-43c9-9b7f-752e16355a80" />

Pull Request resolved: https://github.com/pytorch/pytorch/pull/160258
Approved by: https://github.com/ezyang
2025-08-13 02:36:19 +00:00
6be6d06295 Avoid potential deadlocks in host allocator (#159352)
# Motivation
This PR fixes a potential deadlock in the host allocator.
When calling `event->record(stream)`, the `record_stream` implementation may acquire the Python GIL.
In places such as 842cc77ab9/aten/src/ATen/cuda/CachingHostAllocator.cpp (L145-L151), and 842cc77ab9/aten/src/ATen/xpu/CachingHostAllocator.cpp (L22-L28) `record_stream` is invoked while holding the allocator lock.

To prevent deadlocks, we must ensure the locking order is:
**GIL → Allocator Lock**.
Reversing the order (**Allocator Lock → GIL**) can cause a deadlock.
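
A generic illustration of the lock-ordering rule (plain Python threading, not the allocator code): if one path takes lock A then lock B while another takes B then A, the two can deadlock, so every path must follow the same order.

```python
import threading

gil_like = threading.Lock()        # stands in for the GIL
allocator_lock = threading.Lock()  # stands in for the host-allocator lock

def record_event_safely() -> None:
    # Every path must take the locks in the same order: "GIL" first, then the
    # allocator lock; mixing orders across threads is what can deadlock.
    with gil_like:
        with allocator_lock:
            pass  # touch allocator state / record the event here
```
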
Pull Request resolved: https://github.com/pytorch/pytorch/pull/159352
Approved by: https://github.com/cyyever, https://github.com/ezyang
2025-08-13 02:30:17 +00:00
f15ada5c6f Enable output padding when only outermost dim is dynamic (#159404)
Summary: When the shape of the output tensor has a dynamic outermost dim, the stride can still be padded to conform to the configured alignment if required.

Test Plan:
CI

Rollback Plan:

Differential Revision: D79146886

Pull Request resolved: https://github.com/pytorch/pytorch/pull/159404
Approved by: https://github.com/blaine-rister, https://github.com/eellison
2025-08-13 01:28:22 +00:00
69a0a9aa7f [Inductor][Triton] Pass GPUTarget param to updated make_ir function (#160422)
Summary: A recent Triton commit changed `ASTSource.make_ir` to a 5-arg signature that includes a `GPUTarget`. We need to pass in this new argument.

Test Plan:
`buck2 test 'fbcode//mode/opt' -m ovr_config//triton:trunk  fbcode//caffe2/test/inductor:test_inductor_cuda -- triton_kernel`

Rollback Plan:

Reviewed By: davidberard98

Differential Revision: D80069909

Pull Request resolved: https://github.com/pytorch/pytorch/pull/160422
Approved by: https://github.com/davidberard98, https://github.com/mlazos
2025-08-13 01:27:57 +00:00
32099961d5 [EZ] Delete CircleCI case (#160479)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/160479
Approved by: https://github.com/izaitsevfb
ghstack dependencies: #160477
2025-08-13 01:19:09 +00:00
8d1cf52922 [EZ][BE] Remove unused conda-env-macOS-ARM64 (#160477)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/160477
Approved by: https://github.com/atalman
2025-08-12 23:41:25 +00:00
b1f43548ca [c10d] Error out the case when registering symmetric memory without eager init (#160145)
Instead of implicitly creating the NCCL comm inside mem pool registration for symmetric memory, we now error out, so that only the eager-init case (where the NCCL comm is already initialized) is supported.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/160145
Approved by: https://github.com/kwen2501
2025-08-12 23:25:04 +00:00
0d71ca2c46 [EZ] Replace pytorch-labs with meta-pytorch (#160459)
This PR replaces all instances of 'pytorch-labs' with 'meta-pytorch' in this repository now that the 'pytorch-labs' org has been renamed to 'meta-pytorch'

## Changes Made
- Replaced all occurrences of 'pytorch-labs' with 'meta-pytorch'
- Only modified files with extensions: .py, .md, .sh, .rst, .cpp, .h, .txt, .yml
- Skipped binary files and files larger than 1MB due to GitHub api payload limits in the script to cover all repos in this org. Will do a more manual second pass later to cover any larger files

## Files Modified
This PR updates files that contained the target text.

Generated by automated script on 2025-08-12T20:41:29.888681+00:00Z
Pull Request resolved: https://github.com/pytorch/pytorch/pull/160459
Approved by: https://github.com/huydhn, https://github.com/clee2000, https://github.com/atalman, https://github.com/malfet
2025-08-12 22:44:25 +00:00
5737372862 [CI] Switch ROCm MI300 GitHub Actions workflows from 2-GPU to 1-GPU runners (#158882)
Updated .github/actionlint.yaml to replace linux.rocm.gpu.mi300.2 with linux.rocm.gpu.mi300.1 in the supported runner list

Modified all affected workflows (inductor-perf-test-nightly-rocm.yml, inductor-periodic.yml, inductor-rocm-mi300.yml, and rocm-mi300.yml) to run jobs on 1-GPU MI300 runners instead of 2-GPU runners

This should help increase available runners even with same number of CI nodes.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/158882
Approved by: https://github.com/jeffdaily

Co-authored-by: Jeff Daily <jeff.daily@amd.com>
2025-08-12 22:42:40 +00:00
2e4e5ab4be [MPS] Add mps keys to indices and values ops (#160223)
enable indices and values on sparse mps

Pull Request resolved: https://github.com/pytorch/pytorch/pull/160223
Approved by: https://github.com/malfet
2025-08-12 22:08:44 +00:00
16d15445f8 Fullgraph graph capture with dynamo. (#159749)
Summary:
Following up on Avik's doc https://docs.google.com/document/d/11RW0Bbkp1QwFbEu8rCNW5d7wUFaEkxbL0uLyqcc2jTk/edit?tab=t.0

We are experimenting with a new API which utilizes torch.compile(fullgraph=True) and intend to use it to replace the old dynamo.export() API.

This PR adds a prototype for the API described in the doc.

Test Plan:
test_misc -- -k test_aot_capture

Rollback Plan:

Differential Revision: D79534608

Pull Request resolved: https://github.com/pytorch/pytorch/pull/159749
Approved by: https://github.com/tugsbayasgalan
2025-08-12 22:06:18 +00:00
101276f81b [BE] Save attributes for CppCompileError for pickleing (#160294)
Differential Revision: [D79977408](https://our.internmc.facebook.com/intern/diff/D79977408/)

Context:
When testing the cutlass backend and using autotune with subproc, sometimes I would see a C++ compilation error (expected) followed by
```
Traceback (most recent call last):
  File "/torch/_inductor/autotune_process.py", line 175, in get
    result = TuningProcess.recv(self.read_pipe)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/torch/_inductor/autotune_process.py", line 99, in recv
    return pickle.load(read_pipe)
           ^^^^^^^^^^^^^^^^^^^^^^
TypeError: CppCompileError.__init__() missing 1 required positional argument: 'output'
```
which is unexpected. After asking claude, it seems

> Now I can see the issue. The `CppCompileError` class requires two arguments: `cmd` (a list of strings) and `output` (a string). However, when exceptions are being pickled and unpickled across process boundaries, the pickling process might not be preserving the constructor arguments correctly.
>
> The problem is likely that when a `CppCompileError` is raised in the subprocess and then pickled/unpickled through the `recv` function, the unpickling process is trying to reconstruct the exception but doesn't have the required constructor arguments.
>
> The issue is clear now. The `CppCompileError` class doesn't have custom pickle methods (`__reduce__`, `__getstate__`, `__setstate__`), so when it's pickled and unpickled across process boundaries, Python's default pickling mechanism tries to reconstruct it but fails because it doesn't preserve the constructor arguments properly.
>
> The solution is to add a `__reduce__` method to the `CppCompileError` class to ensure it can be properly pickled and unpickled. Let me implement this fix:

Adding these seems to help.
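
A minimal sketch of that kind of fix on a generic exception (not the actual inductor class):
```python
import pickle

class DemoCompileError(Exception):
    def __init__(self, cmd: list, output: str) -> None:
        super().__init__(f"compile failed: {output}")
        self.cmd = cmd
        self.output = output

    def __reduce__(self):
        # Tell pickle how to rebuild the exception with both required arguments,
        # so it survives the subprocess -> parent round trip.
        return (type(self), (self.cmd, self.output))

err = pickle.loads(pickle.dumps(DemoCompileError(["clang", "a.cpp"], "no input files")))
print(err.cmd, err.output)
```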

fbcode repro: [D79977541](https://www.internalfb.com/diff/D79977541)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/160294
Approved by: https://github.com/masnesral
2025-08-12 22:03:36 +00:00
cbffde7745 Factor out the strings to templates for better editor integration (#160357)
# Summary

More code motion; tl;dr is that if you install 'Better Jinja' in VS Code, you now get highlighting.

Before
<img width="776" height="926" alt="Screenshot 2025-08-11 at 2 41 08 PM" src="https://github.com/user-attachments/assets/10868b31-f8ac-4cf5-99fe-19b8789ce06b" />

After:
<img width="1184" height="1299" alt="Screenshot 2025-08-11 at 2 40 27 PM" src="https://github.com/user-attachments/assets/45203765-589e-4d76-8196-d895a2f2fbf6" />

Pull Request resolved: https://github.com/pytorch/pytorch/pull/160357
Approved by: https://github.com/eellison
2025-08-12 21:59:54 +00:00
78a2fe1d42 [TorchScript] thread-safe ErrorReport::CallStack (#160386)
Context: During jit.script, the TorchScript frontend maintains a callstack of Python frames, which is used to present the corresponding user code in case TorchScript errors. The callstack is maintained via ErrorReport::CallStack RAII guards. Before recursing into a function, an ErrorReport::CallStack guard is created and the CallStack guard pushes the frame information onto a thread_local callstack (a list of calls); and after exiting, the frame information is popped off the callstack. Note that the CallStack guards are also sometimes used in python via pybindings.

The problem is that sometimes another thread can obtain a reference to the CallStack guard (if it's a Python CallStack guard). **This means that the destructor for a CallStack guard can be called from a different thread than the constructor was called**. When this happens, it causes a segfault.

This PR makes the callstack vector thread-safe to access, and each CallStack guard will store a reference to the callstack vector onto which it pushed. When the CallStack guard is destructed, it pops off the appropriate callstack vector. Although this could potentially lead to mangled callstacks, it should prevent segfaults.

Added a test `test_thread_safe_error_stacks` which segfaults prior to these changes, and no longer segfaults.

Differential Revision: [D80054972](https://our.internmc.facebook.com/intern/diff/D80054972)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/160386
Approved by: https://github.com/eellison
2025-08-12 21:59:04 +00:00
f8f0414a59 fix cpp builder to avoid missing-source compile error (#160354)
Summary:
the condition
```
if config.is_fbcode() and (not self._aot_mode or self._use_relative_path):
    sources = [os.path.basename(i) for i in sources]
```
unintentionally (?) stripped paths even when use_relative_path was False (as long as aot_mode was False), breaking local tests that rely on absolute temp-file paths.
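
A hedged sketch of the intended guard (parameter names mirror the snippet above; the actual fix may differ in detail):
```python
import os

def maybe_strip_paths(sources, is_fbcode, use_relative_path):
    # Strip to basenames only when relative paths were explicitly requested;
    # otherwise keep absolute paths so local tests that compile temp files
    # outside the working directory still resolve.
    if is_fbcode and use_relative_path:
        return [os.path.basename(s) for s in sources]
    return sources

print(maybe_strip_paths(["/re_tmp/tmpsp58ya2h/zy/test_symbol.cpp"], True, False))
```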

Fixes internal issue:
```

FAILED (errors=1)

CppCompileError: C++ compile error

Command:
/mnt/gvfs/third-party2/llvm-fb/0f1f083aa5508772f3db24bf4f697bc118ba0958/17/platform010/72a2ff8/bin/clang-17 czyi3nhzin5b3mc3376vmfnlbjobvjcghbvv4tatuazs3syqubay.cpp -shared -fPIC -O3 -DNDEBUG -fno-trapping-math -funsafe-math-optimizations -ffinite-math-only -fno-signed-zeros -fno-math-errno -fno-finite-math-only -fno-unsafe-math-optimizations -ffp-contract=off -Wall -std=c++17 -Wno-unused-variable -Wno-unknown-pragmas -Werror=ignored-optimization-argument -g -o /re_tmp/tmpsp58ya2h/zy/test_symbol.so

Output:
clang-17: error: no such file or directory: 'czyi3nhzin5b3mc3376vmfnlbjobvjcghbvv4tatuazs3syqubay.cpp'
clang-17: error: no input files
```

Reviewed By: clee2000

Differential Revision: D80025417

Pull Request resolved: https://github.com/pytorch/pytorch/pull/160354
Approved by: https://github.com/benjaminglass1, https://github.com/clee2000
2025-08-12 21:36:22 +00:00
4d419a7461 Add pad and narrow to torch/csrc/stable/ops.h (#159328)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/159328
Approved by: https://github.com/janeyx99
ghstack dependencies: #159507
2025-08-12 21:29:49 +00:00
655137b678 Update torch::stable::Tensor() default constructor (#159507)
Allows things like

```cpp
Tensor cu_seqlens_q;
if (...) {
   cu_seqlens_q = ...
}
...
```

Also adds `torch::stable::Tensor.defined()`

Pull Request resolved: https://github.com/pytorch/pytorch/pull/159507
Approved by: https://github.com/janeyx99
2025-08-12 21:29:49 +00:00
f27232a213 [ROCm] Limit number of values per thread for reductions on three dimensions (#159652)
In the current implementation of reductions in three dimensions for AMD GPUs, the number of values per thread is unbounded and can end up in the hundreds of thousands for certain tensors, which is of course bad for performance. This patch fixes the issue by increasing the parallelism and thus lowering the number of values per thread to reasonable limits, i.e. fewer than 2048 values per thread. The performance gains can be between 10x-17x for certain examples where the number of values per thread was originally very high.
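
A rough sketch of the capping arithmetic (numbers and names are illustrative, not the kernel's actual launch logic):
```python
import math

MAX_VALUES_PER_THREAD = 2048

def reduction_blocks(num_values, threads_per_block=256):
    # Launch enough blocks that each thread reduces at most MAX_VALUES_PER_THREAD values.
    values_per_block = threads_per_block * MAX_VALUES_PER_THREAD
    return max(1, math.ceil(num_values / values_per_block))

# e.g. 300M values: a single block would leave ~1.2M values per thread,
# so instead launch enough blocks to stay under the cap.
print(reduction_blocks(300_000_000))  # -> 573
```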

Pull Request resolved: https://github.com/pytorch/pytorch/pull/159652
Approved by: https://github.com/jeffdaily
2025-08-12 21:15:56 +00:00
c24ca7f4bf [FSDP][Collectives] skipping allgather when world size is 1 (#160135)
**Summary:** In its current state, FSDP collectives use cuda synchronizations and communication ops regardless of the world size. However, now that replicate will use FSDP, there will be instances where group size = 1 and these synchronizations and ops would run needlessly. I have updated the fsdp_params group to skip the foreach_all_gather and foreach_all_gather_copy_out APIs when world_size = 1. I have created a test that uses CommDebugMode to verify that the all-gather comm has been removed. I also edited an affected test which used 1-way FSDP by verifying and changing its assert statements for CommDebugMode. Below, I have included the link to the profile trace verifying these two APIs were skipped and two test commands.
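
A minimal sketch of the short-circuit (hypothetical names, not the actual FSDP internals):
```python
def maybe_all_gather(local_shards, world_size, all_gather_fn):
    # With a single rank the shards already equal the full parameters, so the
    # collective (and its CUDA synchronizations) can be skipped entirely.
    if world_size == 1:
        return local_shards
    return all_gather_fn(local_shards)

print(maybe_all_gather(["w0", "b0"], world_size=1, all_gather_fn=None))
```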

https://interncache-all.fbcdn.net/manifold/perfetto-artifacts/tree/ui/index.html#!/?url=https://interncache-all.fbcdn.net/manifold/perfetto_internal_traces/tree/shared_trace/anshulsi_f846ac3b-9467-4060-8e36-8cc3bc4449c3_devgpu263.prn2.facebook.com_652183.1753822140871934814.pt.trace.json

Pull Request resolved: https://github.com/pytorch/pytorch/pull/160135
Approved by: https://github.com/weifengpy
2025-08-12 21:13:29 +00:00
b4596895b9 [DTensor] Registers sharding rule for rms_norm (#159692)
Reduces collective calls in the forward pass from 2 to 1

In #158716 I added the sharding rule for the backward pass but didn't add the forward pass, as it didn't get dispatched. After #159324 this should get properly dispatched, hence I am adding it now.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/159692
Approved by: https://github.com/tianyu-l
2025-08-12 21:05:24 +00:00
5a9c4cfce4 [Fix XPU CI][Inductor UT] Fix test cases broken by community. (#160403)
Fixes #160243, Fixes #160244, Fixes #160245

Pull Request resolved: https://github.com/pytorch/pytorch/pull/160403
Approved by: https://github.com/janeyx99
2025-08-12 21:02:44 +00:00
a354fa91e2 added class or module info for functions blocked by weight-only load (#159935)
Fixes #152985
In #152985, users were confused about why the weights-only load failed even though functions were registered in safe_globals.
Because the error message doesn't make the critical failure reason clear, they couldn't figure out that only some functions were missing from the safe_globals registration.
This fix makes that point clearer.

Here's the new error message; the blocked function information follows the warning message, separated by a line break to make it stand out.
```
_pickle.UnpicklingError: Weights only load failed. In PyTorch 2.6, we changed the default value of the `weights_only` argument in `torch.load` from `False` to `True`. Re-running `torch.load` with `weights_only` set to `False` will likely succeed, but it can result in arbitrary code execution. Do it only if you got the file from a trusted source.
Please file an issue with the following so that we can make `weights_only=True` compatible with your use case: WeightsUnpickler error:

Trying to call reduce for unrecognized function <built-in method _unpickle of type object at 0x641e8a57d1f0> which belongs to <class 'zoneinfo.ZoneInfo'>

Check the documentation of torch.load to learn more about types accepted by default with weights_only https://pytorch.org/docs/stable/generated/torch.load.html.

To execute this test, run the following from the base repo dir:
    python test/test_serialization.py TestSerialization.test_weights_only_with_safe_zoneinfo_unpickle_registration_success

This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0

```
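
For context, the resolution the message points at looks roughly like this (a sketch; the file path is a placeholder and the exact objects to register depend on what was blocked):
```python
import torch
from zoneinfo import ZoneInfo

# The message names the exact blocked callable (ZoneInfo._unpickle), so allow-list
# it alongside the class before retrying the weights-only load.
torch.serialization.add_safe_globals([ZoneInfo, ZoneInfo._unpickle])
# state = torch.load("checkpoint.pt", weights_only=True)  # placeholder path
```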

Pull Request resolved: https://github.com/pytorch/pytorch/pull/159935
Approved by: https://github.com/mikaylagawarecki
2025-08-12 20:52:25 +00:00
f95b58c284 Remove usage of fsspec in HF consolidation script (#159392)
Moving towards supporting only local storage, to take advantage of HF APIs such as safe_open. This was already done in the Storage component in https://github.com/pytorch/pytorch/pull/159405. This PR removes fsspec usages in the consolidation script and relies on local storage only.

Differential Revision: [D78997975](https://our.internmc.facebook.com/intern/diff/D78997975/)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/159392
Approved by: https://github.com/sibuachu
2025-08-12 20:41:06 +00:00
8e6a313858 Add ownership token when needed on GradientEdge (#160098)
We can avoid the token by introducing PyObject preservation for THPFunction. But I think it will be too much complexity given that this kind of issue is very rare.
Happy to be talked into doing it though if someone really wants to.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/160098
Approved by: https://github.com/ezyang, https://github.com/soulitzer
2025-08-12 20:14:18 +00:00
7e91394955 Support NUMA Binding for Callable Entrypoints (#160163)
# Context
This is an extension of #149334.

# This PR
Add support for NUMA bindings with Callable entrypoints, such as `do_train` instead of `/usr/local/bin/python`.

Most notably, we utilize a hack in order to force `Process.start()` to use custom NUMA bindings for each subprocess. Please search for `HACK:` in the code to see a description of the implementation we chose, and #160006 for discussion of alternatives and why this is necessary.

Other changes:
* Remove unnecessary `--preferred` option from all binding strategies. By default, Linux already allocates memory to the NUMA node local to the CPU which triggered the allocation. (See [MPOL_LOCAL](https://man7.org/linux/man-pages/man2/set_mempolicy.2.html).)
* Refactor so that the main API is `maybe_wrap_command_with_numa_bindings`, which computes bindings for a single rank at a time, rather than `maybe_wrap_with_numa_bindings` which computed bindings for all ranks at once. This allowed for more code sharing between `Callable` and `str` entrypoints.

# Test Plan
## Automated
`$ pytest test/test_numa_binding.py`

## Manual
Using [this benchmark,](https://gist.github.com/pdesupinski/bbe01ade455d86e989794f2c612e2d91), ran

```
$ PYTHONUNBUFFERED=1 LOGLEVEL=INFO perf stat -e ls_dmnd_fills_from_sys.dram_io_far,ls_dmnd_fills_from_sys.dram_io_near -- python -m torch.distributed.run --standalone --nproc-per-node=8 --numa-binding=node --run-path mlp_train.py 2>&1 | tee node_callable.txt && PYTHONUNBUFFERED=1 LOGLEVEL=INFO perf stat -e ls_dmnd_fills_from_sys.dram_io_far,ls_dmnd_fills_from_sys.dram_io_near -- python -u -m torch.distributed.run --standalone --nproc-per-node=8 --run-path mlp_train.py 2>&1 | tee none_callable.txt
```

and observed
* 6.6% remote memory accesses with 'node' bindings
* 11.6% remote without bindings

I also ran similar with `str` entrypoints as before just to be sure it's still working.

NOTE: [--run-path triggers the code to be run inside a `Callable`.](017259f9c6/torch/distributed/run.py (L870))

Pull Request resolved: https://github.com/pytorch/pytorch/pull/160163
Approved by: https://github.com/d4l3k
2025-08-12 20:08:49 +00:00
89654db1ab [inductor] fix triton bucketize mask propagation (#159961)
See 6b414f56a4

Pull Request resolved: https://github.com/pytorch/pytorch/pull/159961
Approved by: https://github.com/eellison
2025-08-12 19:59:32 +00:00
2d0cdee394 move thread-local capture mode guard to include work.isStarted (#160398)
Per title, should fix capture errors that happen because nccl watchdog races with capture start.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/160398
Approved by: https://github.com/aorenste
2025-08-12 19:25:04 +00:00
eqy
9903ca4f70 [cuDNN][64-bit indexing] update conv depthwise 64bit indexing dispatch condition to match native kernel (#156140)
The native kernel doesn't support batch splitting, so the previous check wasn't aggressive enough in dispatching to cuDNN.

https://github.com/pytorch/pytorch/issues/155225

Pull Request resolved: https://github.com/pytorch/pytorch/pull/156140
Approved by: https://github.com/ngimel, https://github.com/atalman
2025-08-12 18:07:41 +00:00
f341077ce4 Revert "[ROCm] Support large inputs for coalesceValuesKernel (#158281)"
This reverts commit a7abf57aabec0ce686092e2d66e53ba185dbc56b.

Reverted https://github.com/pytorch/pytorch/pull/158281 on behalf of https://github.com/clee2000 due to broke windows cuda build? [GH job link](https://github.com/pytorch/pytorch/actions/runs/16915172288/job/47927141460) [HUD commit link](a7abf57aab).  Not caught b/c PR didn't have ciflow/trunk ([comment](https://github.com/pytorch/pytorch/pull/158281#issuecomment-3180408766))
2025-08-12 17:57:57 +00:00
3cec82a7e9 Ensure outer aliasing on DTensor matches inner aliasing (#158954)
Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/158954
Approved by: https://github.com/albanD, https://github.com/wconstab
2025-08-12 17:47:48 +00:00
ee9f8ba11d [ROCm] Use opportunistic fastatomics based on hueristics (#159430)
* Opportunistic fast atomics work better with small sizes, since there is a higher chance of lanes doing atomics on the same address

Co-author: @amd-hhashemi

Reproducer:
```
import time
import torch

x = torch.randn((1_632_960, 128), device='cuda', dtype=torch.float)
ind = torch.randint(0, x.size(0), size=(5_079_670,), device='cuda')
src = torch.randn((5_079_670, 128), device='cuda', dtype=torch.float)

for _ in range(20):
    x.index_add_(0, ind, src)

start_time = time.time()
for i in range(100):
    x.index_add_(0, ind, src)
torch.cuda.synchronize()
end_time = time.time()
mean_time = (end_time - start_time)/100
print(f"Avg time for index_add_: {mean_time * 1e6:.2f} us")
```

Perf numbers:
```
Before:
Avg time for index_add_: 25652.16 us

After:
Avg time for index_add_: 2675.15 us
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/159430
Approved by: https://github.com/pruthvistony, https://github.com/jeffdaily
2025-08-12 17:13:54 +00:00
1f4057c11a [inductor] remove no_x_dim (#159810)
no_x_dim is used to indicate that a reduction operates on a single row, and data loaded for the reduction is 1-dimensional.

no_x_dim was introduced in https://github.com/pytorch/pytorch/pull/102444 - in which there was bad perf in some reductions, and using 1D tensors fixed the perf issue.

However, it appears that this perf issue no longer exists in current Triton versions. https://github.com/pytorch/pytorch/pull/118822 checked this, and we can also check this on H100 benchmarks (linked below). And another motivation for removing this behavior is that it enables larger loads, which we observe is necessary for good performance on certain shapes on Blackwell.

H100 inference benchmarks:
https://hud.pytorch.org/benchmark/compilers?dashboard=torchinductor&startTime=Mon%2C%2004%20Aug%202025%2004%3A13%3A24%20GMT&stopTime=Mon%2C%2011%20Aug%202025%2004%3A13%3A24%20GMT&granularity=hour&mode=inference&dtype=bfloat16&deviceName=cuda%20(h100)&lBranch=gh/davidberard98/396/orig&lCommit=a6bcd4692fb39fa2fad260f290bff545d4425829&rBranch=main&rCommit=e96c7c4bb0f6aeae2ab3b6f040f7d67edbec199a

H100 training benchmarks:
https://hud.pytorch.org/benchmark/compilers?dashboard=torchinductor&startTime=Mon%2C%2004%20Aug%202025%2004%3A13%3A24%20GMT&stopTime=Mon%2C%2011%20Aug%202025%2004%3A13%3A24%20GMT&granularity=hour&mode=training&dtype=amp&deviceName=cuda%20(h100)&lBranch=gh/davidberard98/396/orig&lCommit=a6bcd4692fb39fa2fad260f290bff545d4425829&rBranch=main&rCommit=e96c7c4bb0f6aeae2ab3b6f040f7d67edbec199a

Overall, the benchmarks show minimal change in performance.

Differential Revision: [D79599286](https://our.internmc.facebook.com/intern/diff/D79599286)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/159810
Approved by: https://github.com/ngimel, https://github.com/eellison
2025-08-12 17:10:31 +00:00
94b91a8763 [redone][pytorch] Moving torch.compile worker process logs to a dedicated rank based log directory (#160352)
Summary:
Writing torch.compile worker process logs to dedicated_log_rank{RANK} if we're running on mast.
ref: D79456310 (got reverted because of linter)

Testing:
Refer to Differential Revision: D79917440

Pull Request resolved: https://github.com/pytorch/pytorch/pull/160352
Approved by: https://github.com/masnesral
2025-08-12 16:49:08 +00:00
a7abf57aab [ROCm] Support large inputs for coalesceValuesKernel (#158281)
# Description

`.coalesce` cannot handle large inputs on ROCm due to the maximum grid size limit.

This PR splits axis `X` into axes `X` and `Y`, and repurposes `Z` for the original `Y` on ROCm to avoid this limitation.

Confirmed the new approach can handle large inputs. Correctness needs validation.

# Testing Command

`python torch_spmv.py 22500000 272500000`

## Script `torch_spmv.py`

``` python
import torch
import argparse

def parse_args():
    parser = argparse.ArgumentParser(
        description="Sparse COO Matrix by Dense Vector Multiplication using PyTorch"
    )
    parser.add_argument("n", type=int, help="Size of the NxN matrix")
    parser.add_argument("nnz", type=int, help="Number of non-zero entries")
    return parser.parse_args()

def main():
    args = parse_args()
    n = args.n
    nnz = args.nnz
    dtype = torch.float32
    device = torch.device('cuda')

    # Generate random indices for the sparse matrix in COO format.
    torch.manual_seed(42)
    rows = torch.randint(0, n, (nnz,), dtype=torch.int64, device=device)
    cols = torch.randint(0, n, (nnz,), dtype=torch.int64, device=device)
    indices = torch.stack([rows, cols], dim=0)

    # Generate random values.
    values = torch.randn(nnz, dtype=torch.float32, device=device)

    # Create the sparse COO matrix and move it to the target device.
    sparse_matrix = torch.sparse_coo_tensor(indices, values, size=(n, n), dtype=torch.float32, device=device)
    sparse_matrix = sparse_matrix.coalesce()

    # Generate a random dense vector.
    dense_vector = torch.randn(n, dtype=torch.float32, device=device)

    # Perform sparse matrix - dense vector multiplication.
    # Using torch.sparse.mm which expects a 2D tensor for the vector.
    result = torch.sparse.mm(sparse_matrix, dense_vector.unsqueeze(1)).squeeze()
    # result = torch.mv(sparse_matrix, dense_vector)

    # Print the result.
    print("Result of the multiplication:")
    print(torch.sum(result))

if __name__ == "__main__":
    main()
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/158281
Approved by: https://github.com/jithunnair-amd, https://github.com/jeffdaily
2025-08-12 16:42:55 +00:00
f7b2f3314c Revert "[triton_heuristics] Optimize the triton launcher in pt2 (#160000)"
This reverts commit d0e2240f680ea2a553f7ee8188f52482e130bfd0.

Reverted https://github.com/pytorch/pytorch/pull/160000 on behalf of https://github.com/davidberard98 due to D80054972 failing with test_triton_kernel_2d_autotune_grad_False_dynamic_True_backend_inductor_grid_type_1_tdlp_1 ([comment](https://github.com/pytorch/pytorch/pull/160000#issuecomment-3180144676))
2025-08-12 16:33:02 +00:00
9d37c960a4 [ROCm][CI] use new benchmark image for dynamo (#160421)
Follow-up to #160047 that separated the rocm image into default CI and benchmarks.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/160421
Approved by: https://github.com/jeffdaily

Co-authored-by: Jeff Daily <jeff.daily@amd.com>
2025-08-12 16:07:19 +00:00
b219ca2a00 Revert "Update triton xpu commit to support python 3.14 (#160183)"
This reverts commit 7fbc22855c17741ae016992803b2e147a13aa22d.

Reverted https://github.com/pytorch/pytorch/pull/160183 on behalf of https://github.com/clee2000 due to I'm not sure how, but it seems to have broken inductor/test_extension_backend.py::ExtensionBackendTests::test_open_device_registration [GH job link](https://github.com/pytorch/pytorch/actions/runs/16911267995/job/47917091939) [HUD commit link](7fbc22855c).  Maybe because the docker build changed?  Note to self: not bad TD ([comment](https://github.com/pytorch/pytorch/pull/160183#issuecomment-3179840160))
2025-08-12 15:29:19 +00:00
b7db86600a Fix Tensor illustration, use permalinks for image embedding in Readme.md (#160416)
Fixes Tensor illustration being broken on pypi.org. Also uses permalinks instead of links to images for embedding as per this suggestion of Alban: https://github.com/pytorch/pytorch/pull/160187#discussion_r2262978006

Pull Request resolved: https://github.com/pytorch/pytorch/pull/160416
Approved by: https://github.com/malfet
2025-08-12 15:15:12 +00:00
9708fcf92d Account for triton kernel source code hidden in custom ops properly in AOTAutogradCache (#160120)
This PR fixes a bug where user-defined triton kernels hidden behind `triton_op` do not register source code changes. If a user *only* changes a triton kernel's source code, dynamo hasn't traced into it yet, because the kernel is hidden under the custom op.

This means that at AOTAutograd time, we don't know the list of triton kernels that are defined by custom ops. This is an initial fix for the issue: parse the AST of the custom op, looking for triton kernels. This won't catch more degenerate cases where the custom op calls other custom ops/functions that then call triton kernels, so the top-level compiled graph doesn't know about them. To handle that, we'd have to trace through the custom op at dynamo time.

This should handle 99% of cases, though. I added an expectedFailure test to show the limitation.
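
A rough sketch of what "parsing the AST of the custom op" can look like (simplified; the real implementation tracks the kernel objects themselves, not just names):
```python
import ast
import inspect

def called_names(fn):
    # Collect the names of callables invoked inside `fn`'s source (best effort).
    tree = ast.parse(inspect.getsource(fn))
    found = set()
    for node in ast.walk(tree):
        if not isinstance(node, ast.Call):
            continue
        f = node.func
        if isinstance(f, ast.Name):
            found.add(f.id)
        elif isinstance(f, ast.Attribute):
            found.add(f.attr)
        elif isinstance(f, ast.Subscript) and isinstance(f.value, ast.Name):
            found.add(f.value.id)  # kernel[grid](...) - the typical triton launch form
    return found

def my_custom_op(x):
    return my_triton_kernel[(1,)](x)  # hypothetical kernel; never actually called here

print(called_names(my_custom_op))  # -> {'my_triton_kernel'}
```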

Pull Request resolved: https://github.com/pytorch/pytorch/pull/160120
Approved by: https://github.com/zou3519
2025-08-12 14:11:06 +00:00
a288b15ea9 [CI] Reduce XPU Windows build time (#159763)
Reduce the time cost from 2.5 hours to about 1.5 hours.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/159763
Approved by: https://github.com/EikanWang, https://github.com/atalman
2025-08-12 14:04:29 +00:00
7fbc22855c Update triton xpu commit to support python 3.14 (#160183)
Follow PR #159725
Pull Request resolved: https://github.com/pytorch/pytorch/pull/160183
Approved by: https://github.com/EikanWang, https://github.com/atalman
2025-08-12 14:02:36 +00:00
f33ce40bc0 [bucketing] Bucket only adjacent collectives to prevent reordering (#159983)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/159983
Approved by: https://github.com/wconstab, https://github.com/eellison
2025-08-12 11:57:00 +00:00
4d5b3f2d5a [dynamo][guards] Install dict watchers for recrusive dict tag optimization (#159796)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/159796
Approved by: https://github.com/jansel
2025-08-12 09:49:11 +00:00
f990490a23 Add label_smoothing param in nn.BCELoss and nn.BCEWithLogitsLoss (#150282)
Fixes #91545

## Changes

- Add `label_smoothing` param and docs
- Add test case for `label_smoothing`
- Remove duplicate description in `nn.BCELoss` and `nn.BCEWithLogitsLoss`
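
One common formulation of label smoothing for binary targets, shown only to illustrate what the parameter computes (a sketch, not the exact implementation):
```python
import torch
import torch.nn.functional as F

def bce_with_label_smoothing(logits, target, label_smoothing=0.1):
    # Smooth hard 0/1 targets toward 0.5: t' = t * (1 - eps) + 0.5 * eps
    smoothed = target * (1.0 - label_smoothing) + 0.5 * label_smoothing
    return F.binary_cross_entropy_with_logits(logits, smoothed)

logits = torch.tensor([2.0, -1.0])
target = torch.tensor([1.0, 0.0])
print(bce_with_label_smoothing(logits, target))
```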

##  Test Result

```bash
pytest -s test/test_nn.py -k test_bce
```

![image](https://github.com/user-attachments/assets/30c0b7fe-fe49-4aa0-9b05-4d70403a7b05)

![image](https://github.com/user-attachments/assets/4fe3fd1c-54b8-4012-afd9-133ce9fb4964)

![image](https://github.com/user-attachments/assets/5cad019a-3a4c-475a-9fde-9c1acad5792d)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/150282
Approved by: https://github.com/cyyever, https://github.com/mikaylagawarecki
2025-08-12 09:37:03 +00:00
b9003ed3d8 Dynamo Deep Dive Documentation Fix (#158860)
changed SourceBuilder to VariableBuilder

Fixes #158447

Pull Request resolved: https://github.com/pytorch/pytorch/pull/158860
Approved by: https://github.com/mlazos
2025-08-12 08:53:33 +00:00
fea7e9dd37 extract shape in _view_has_unbacked_input (#160255)
Summary: We were still getting a DDE on reshape!! I looked deeper and found an issue in _view_has_unbacked_input: when the input is [[,,]] it needs to be normalized to [..].
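
A small sketch of the normalization in question (illustrative; the real helper also inspects each dim for unbacked symbols):
```python
def normalize_view_shape(shape):
    # reshape/view accept both t.reshape(2, -1) and t.reshape([2, -1]); a single
    # nested list/tuple must be unwrapped before looking at individual dims.
    if len(shape) == 1 and isinstance(shape[0], (list, tuple)):
        shape = shape[0]
    return tuple(shape)

print(normalize_view_shape(([2, -1],)))  # -> (2, -1)
print(normalize_view_shape((2, -1)))     # -> (2, -1)
```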

Test Plan:
existing tests.

Rollback Plan:

Differential Revision: D79951119

Pull Request resolved: https://github.com/pytorch/pytorch/pull/160255
Approved by: https://github.com/bobrenjc93
2025-08-12 08:38:19 +00:00
9a0f7a3bb0 [retry-land][pytorch][dynamo_compile] Log stack_trace to dynamo_compile (#160348)
refer: https://github.com/pytorch/pytorch/pull/159655

The earlier PR failed on dynamo/test_utils.py::TestDynamoTimed::test_dynamo_timed.
Updated test_dynamo_timed + re-ran locally to test.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/160348
Approved by: https://github.com/masnesral
2025-08-12 06:24:54 +00:00
01bcf9a40d Bump transformers pin (#159291)
Trying to update hf pin.

Benchmarking run to figure out issues

<img width="1356" height="123" alt="image" src="https://github.com/user-attachments/assets/fbc435f3-a7cb-4280-9636-2ea6d15d7b6d" />

Retrying - https://github.com/pytorch/pytorch/pull/156118

Pull Request resolved: https://github.com/pytorch/pytorch/pull/159291
Approved by: https://github.com/BoyuanFeng, https://github.com/huydhn

Co-authored-by: Huy Do <huydhn@gmail.com>
2025-08-12 05:14:17 +00:00
8d3d1c8443 [dynamo] fixes to propagate tag safeness (#159807)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/159807
Approved by: https://github.com/jansel
2025-08-12 04:50:13 +00:00
0f3b10b8ee [audio hash update] update the pinned audio hash (#160384)
This PR is auto-generated nightly by [this action](https://github.com/pytorch/pytorch/blob/main/.github/workflows/nightly.yml).
Update the pinned audio hash.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/160384
Approved by: https://github.com/pytorchbot
2025-08-12 04:38:04 +00:00
5f1010fbb3 [Graph Partition] Pass all OSS unit tests (#154667)
Graph partition leads to 6.2% speedup on vision_maskrcnn, 5.8% speedup on yolov3. [P1819700563](https://www.internalfb.com/phabricator/paste/view/P1819700563), 39.5% speedup on speech_transformer inference [P1830602200](https://www.internalfb.com/phabricator/paste/view/P1830602200), 85% speedup on speech_transformer training [P1831115315](https://www.internalfb.com/phabricator/paste/view/P1831115315).

Run the same diff on two days and both show speedup on average.

[first TorchInductor Benchmark ci run](https://hud.pytorch.org/benchmark/compilers?dashboard=torchinductor&startTime=Mon%2C%2021%20Jul%202025%2016%3A37%3A55%20GMT&stopTime=Mon%2C%2028%20Jul%202025%2016%3A37%3A55%20GMT&granularity=hour&mode=inference&dtype=bfloat16&deviceName=cuda%20(h100)&lBranch=bf/partition-turn-on&lCommit=75ef90fe89b82c967362a2d40fdf1af047202bc2&rBranch=main&rCommit=abcb24f4de11f8fedf2c2c9ff53b6092ef42306d)
<img width="1885" height="752" alt="image" src="https://github.com/user-attachments/assets/13bba9fc-5dbf-42ad-8558-d54f7e367b41" />

[second TorchInductorBenchmark ci run](https://hud.pytorch.org/benchmark/compilers?dashboard=torchinductor&startTime=Wed%2C%2023%20Jul%202025%2016%3A38%3A27%20GMT&stopTime=Wed%2C%2030%20Jul%202025%2016%3A38%3A27%20GMT&granularity=hour&mode=inference&dtype=bfloat16&deviceName=cuda%20(h100)&lBranch=bf/partition-turn-on&lCommit=66de27e29338c26b1be94733049868cb0309ea52&rBranch=main&rCommit=70d2e9ba455c3c910f6f95b24171c8eee7bc00bf)
<img width="2513" height="1030" alt="image" src="https://github.com/user-attachments/assets/3a413dcb-2314-4292-919a-7ca181f9eeac" />

Pull Request resolved: https://github.com/pytorch/pytorch/pull/154667
Approved by: https://github.com/eellison
2025-08-12 04:37:58 +00:00
edaa151d0d [CI] Move CUDA tests to trunk workflow (#160379)
Which is run before a PR is merged anyway, but about 3X less frequently than the pull workflow, according to [Flambeau](https://pytorchci.grafana.net/public-dashboards/1c571e79090443eaaa9811db71f8d23b)
<img width="796" height="573" alt="image" src="https://github.com/user-attachments/assets/0235e610-4e1c-4be5-88bf-ea8278d1c656" />

I.e. this will probably result in somewhat longer time to signal, but considering that the frequency of changes to eager PyTorch-on-CUDA has slowed down and Inductor changes are decorated with ciflow/inductor, this looks like an acceptable tradeoff to reduce costs.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/160379
Approved by: https://github.com/izaitsevfb
2025-08-12 04:23:50 +00:00
10bc36fe84 Get tensor subclasses and torch.library.triton_op to dispatch correctly (#160341)
Short-term fix for https://github.com/pytorch/pytorch/issues/160333

The problem is:
1) `triton_op` adds a decomposition for FunctionalTensorMode for this operation
2) Tensor Subclasses rely on FunctionalTensorMode's `__torch_dispatch__` returning NotImplemented.
3) `triton_op`'s FunctionalTensorMode decomposition takes precedence over FunctionalTensorMode's decomposition.

The easy fix is to copy-paste the FunctionalTensorMode's NotImplemented
return logic into the decomposition.
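
A minimal sketch of that return-NotImplemented logic (placeholder names; the real check lives in the FunctionalTensorMode decomposition):
```python
import torch

class MySubclass(torch.Tensor):
    pass  # stand-in for a tensor subclass that wants to handle the op itself

def functional_decomp(op, args, kwargs=None):
    # If any argument is a tensor subclass we do not recognize, step aside so the
    # subclass's __torch_dispatch__ gets a chance to run, mirroring what
    # FunctionalTensorMode itself does.
    if any(isinstance(a, MySubclass) for a in args):
        return NotImplemented
    return op(*args, **(kwargs or {}))

print(functional_decomp(torch.add, (torch.ones(2), torch.ones(2))))
```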

Pull Request resolved: https://github.com/pytorch/pytorch/pull/160341
Approved by: https://github.com/drisspg
2025-08-12 04:09:37 +00:00
32e5e2f596 [vllm hash update] update the pinned vllm hash (#160259)
This PR is auto-generated nightly by [this action](https://github.com/pytorch/pytorch/blob/main/.github/workflows/nightly.yml).
Update the pinned vllm hash.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/160259
Approved by: https://github.com/pytorchbot
2025-08-12 04:04:53 +00:00
bfc873d02e [ROCm][Windows] Revert copying hipblaslt and rocblas dirs. (#159083)
This reverts the changes from b367e5f6a6. This will also close https://github.com/pytorch/pytorch/pull/158922.

Since 30387ab2e4, ROCm is bootstrapped using the 'rocm' Python module which contains these files (see https://github.com/ROCm/TheRock/blob/main/docs/packaging/python_packaging.md), so they do not need to be bundled into torch/lib.

There was also a bug in here - if `ROCM_DIR` is unset, the code crashes:
```
  File "D:\projects\TheRock\external-builds\pytorch\.venv\Lib\site-packages\setuptools\_distutils\dist.py", line 1002, in run_command
    cmd_obj.run()
  File "D:\b\pytorch_main\setup.py", line 853, in run
    rocm_dir_path = Path(os.environ["ROCM_DIR"])
                         ~~~~~~~~~~^^^^^^^^^^^^
  File "<frozen os>", line 714, in __getitem__
KeyError: 'ROCM_DIR'
```
The code could have checked for `ROCM_PATH` too.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/159083
Approved by: https://github.com/jeffdaily
2025-08-12 02:45:49 +00:00
807 changed files with 19895 additions and 7260 deletions

View File

@ -208,7 +208,9 @@ if __name__ == "__main__":
build_vars = "CMAKE_SHARED_LINKER_FLAGS=-Wl,-z,max-page-size=0x10000 "
# MAX_JOB=5 is not required for CPU backend (see commit 465d98b)
if enable_cuda:
build_vars = "MAX_JOBS=5 " + build_vars
build_vars += "MAX_JOBS=5 "
# nvshmem is broken for aarch64 see https://github.com/pytorch/pytorch/issues/160425
build_vars += "USE_NVSHMEM=OFF "
override_package_version = os.getenv("OVERRIDE_PACKAGE_VERSION")
desired_cuda = os.getenv("DESIRED_CUDA")

View File

@ -76,7 +76,6 @@ ADD ./common/install_mnist.sh install_mnist.sh
RUN bash ./install_mnist.sh
FROM base as all_cuda
COPY --from=cuda11.8 /usr/local/cuda-11.8 /usr/local/cuda-11.8
COPY --from=cuda12.6 /usr/local/cuda-12.6 /usr/local/cuda-12.6
COPY --from=cuda12.8 /usr/local/cuda-12.8 /usr/local/cuda-12.8
COPY --from=cuda12.9 /usr/local/cuda-12.9 /usr/local/cuda-12.9

View File

@ -76,6 +76,9 @@ elif [[ "$image" == *cuda*linter* ]]; then
elif [[ "$image" == *linter* ]]; then
# Use a separate Dockerfile for linter to keep a small image size
DOCKERFILE="linter/Dockerfile"
elif [[ "$image" == *riscv* ]]; then
# Use RISC-V specific Dockerfile
DOCKERFILE="ubuntu-cross-riscv/Dockerfile"
fi
_UCX_COMMIT=7bb2722ff2187a0cad557ae4a6afa090569f83fb
@ -303,6 +306,9 @@ case "$tag" in
SKIP_LLVM_SRC_BUILD_INSTALL=yes
INDUCTOR_BENCHMARKS=yes
;;
pytorch-linux-noble-riscv64-py3.12-gcc14)
GCC_VERSION=14
;;
*)
# Catch-all for builds that are not hardcoded.
VISION=yes
@ -423,7 +429,14 @@ if [ -n "$ANACONDA_PYTHON_VERSION" ]; then
fi
if [ -n "$GCC_VERSION" ]; then
if !(drun gcc --version 2>&1 | grep -q " $GCC_VERSION\\W"); then
if [[ "$image" == *riscv* ]]; then
# Check RISC-V cross-compilation toolchain version
if !(drun riscv64-linux-gnu-gcc-${GCC_VERSION} --version 2>&1 | grep -q " $GCC_VERSION\\W"); then
echo "RISC-V GCC_VERSION=$GCC_VERSION, but:"
drun riscv64-linux-gnu-gcc-${GCC_VERSION} --version
exit 1
fi
elif !(drun gcc --version 2>&1 | grep -q " $GCC_VERSION\\W"); then
echo "GCC_VERSION=$GCC_VERSION, but:"
drun gcc --version
exit 1

View File

@ -1 +1 @@
243e186efbf7fb93328dd6b34927a4e8c8f24395
v4.54.0

View File

@ -1 +1 @@
ae324eeac8e102a2b40370e341460f3791353398
0958dc9b2bb815e428f721f9da599dab0dc1c5d7

View File

@ -26,15 +26,15 @@ function install_torchbench() {
python install.py --continue_on_fail
# TODO (huydhn): transformers-4.44.2 added by https://github.com/pytorch/benchmark/pull/2488
# is regressing speedup metric. This needs to be investigated further
pip install transformers==4.38.1
# soxr comes from https://github.com/huggingface/transformers/pull/39429
pip install transformers==4.54.0 soxr==0.5.0
echo "Print all dependencies after TorchBench is installed"
python -mpip freeze
popd
chown -R jenkins torchbench
chown -R jenkins /opt/conda
}
# Pango is needed for weasyprint which is needed for doctr

View File

@ -0,0 +1,155 @@
# Cross-compilation Docker container for RISC-V architecture
ARG UBUNTU_VERSION
FROM --platform=linux/amd64 ubuntu:${UBUNTU_VERSION} as base
ARG UBUNTU_VERSION
ENV GCC_VERSION=14
ENV PYTHON_VERSION=3.12.3
ENV DEBIAN_FRONTEND=noninteractive
ENV CC=riscv64-linux-gnu-gcc-${GCC_VERSION}
ENV CXX=riscv64-linux-gnu-g++-${GCC_VERSION}
ENV QEMU_LD_PREFIX=/usr/riscv64-linux-gnu/
ENV SYSROOT=/opt/sysroot
# Install basic dependencies
RUN apt-get update && apt-get install -y \
ninja-build \
autoconf \
automake \
libtool \
patchelf \
ccache \
git \
wget \
python3-pip \
python3-venv \
python-is-python3 \
cmake \
sudo \
lsb-release \
gcc-${GCC_VERSION}-riscv64-linux-gnu \
g++-${GCC_VERSION}-riscv64-linux-gnu \
pkg-config \
&& rm -rf /var/lib/apt/lists/*
# Install user
COPY ./common/install_user.sh install_user.sh
RUN bash ./install_user.sh && rm install_user.sh
FROM base as python
ARG ZLIB_VERSION=1.3.1
ARG FFI_VERSION=3.4.6
ARG BZ2_VERSION=1.0.8
ARG XZ_VERSION=5.4.6
ARG OPENSSL_VERSION=3.2.1
# Set up sysroot directory for dependencies
ENV PKG_CONFIG_PATH=${SYSROOT}/lib/pkgconfig
ENV PKG_CONFIG_SYSROOT_DIR=${SYSROOT}
WORKDIR /opt
# Build zlib (for compression)
RUN echo "--- Building zlib ---" \
&& wget -c https://www.zlib.net/zlib-${ZLIB_VERSION}.tar.gz \
&& tar -xf zlib-${ZLIB_VERSION}.tar.gz --no-same-permissions --no-same-owner \
&& cd zlib-${ZLIB_VERSION}/ \
&& mkdir build && cd build \
&& ../configure --prefix=${SYSROOT} \
&& make -j$(nproc) && make install \
&& cd ../..
# Build libffi (for ctypes module)
RUN echo "--- Building libffi ---" \
&& wget -c https://github.com/libffi/libffi/releases/download/v${FFI_VERSION}/libffi-${FFI_VERSION}.tar.gz \
&& tar -xf libffi-${FFI_VERSION}.tar.gz --no-same-permissions --no-same-owner \
&& cd libffi-${FFI_VERSION}/ \
&& mkdir build && cd build \
&& ../configure --prefix=${SYSROOT} --host=riscv64-linux-gnu --build=x86_64-linux-gnu \
&& make -j$(nproc) && make install \
&& cd ../..
# Build bzip2 (for bz2 module)
RUN echo "--- Building bzip2 ---" \
&& wget -c https://sourceware.org/pub/bzip2/bzip2-${BZ2_VERSION}.tar.gz \
&& tar -xf bzip2-${BZ2_VERSION}.tar.gz --no-same-permissions --no-same-owner \
&& cd bzip2-${BZ2_VERSION}/ \
&& make CC=riscv64-linux-gnu-gcc-${GCC_VERSION} bzip2 bzip2recover libbz2.a \
&& make CC=riscv64-linux-gnu-gcc-${GCC_VERSION} -f Makefile-libbz2_so \
&& make install PREFIX=${SYSROOT} \
&& cp libbz2.so.${BZ2_VERSION} ${SYSROOT}/lib/ \
&& cd ${SYSROOT}/lib/ \
&& ln -sf libbz2.so.${BZ2_VERSION} libbz2.so.1.0 \
&& ln -sf libbz2.so.1.0 libbz2.so \
&& cd /opt/
# Build xz (for lzma module)
RUN echo "--- Building xz ---" \
&& wget -c https://github.com/tukaani-project/xz/releases/download/v${XZ_VERSION}/xz-${XZ_VERSION}.tar.gz \
&& tar -xf xz-${XZ_VERSION}.tar.gz --no-same-permissions --no-same-owner \
&& cd xz-${XZ_VERSION} \
&& mkdir build && cd build \
&& ../configure --prefix=${SYSROOT} --host=riscv64-linux-gnu --build=x86_64-linux-gnu \
&& make -j$(nproc) && make install \
&& cd ../..
# Build OpenSSL (for ssl module)
RUN echo "--- Building OpenSSL ---" \
&& wget -c https://www.openssl.org/source/openssl-${OPENSSL_VERSION}.tar.gz \
&& tar -xf openssl-${OPENSSL_VERSION}.tar.gz --no-same-permissions --no-same-owner \
&& cd openssl-${OPENSSL_VERSION}/ \
&& mkdir build && cd build \
&& ../Configure linux64-riscv64 --prefix=${SYSROOT} \
&& make -j$(nproc) && make install_sw \
&& cd ../..
# Build SQLite3 (for sqlite3 module)
RUN echo "--- Building SQLite3 ---" \
&& wget -c https://www.sqlite.org/2024/sqlite-autoconf-3450200.tar.gz \
&& tar -xf sqlite-autoconf-3450200.tar.gz --no-same-permissions --no-same-owner \
&& cd sqlite-autoconf-3450200 \
&& mkdir build && cd build \
&& ../configure --prefix=${SYSROOT} --host=riscv64-linux-gnu --build=x86_64-linux-gnu \
&& make -j$(nproc) && make install \
&& cd ../..
# Build and install RISC-V Python with all modules
RUN wget -c https://www.python.org/ftp/python/${PYTHON_VERSION}/Python-${PYTHON_VERSION}.tgz \
&& tar -xf Python-${PYTHON_VERSION}.tgz --no-same-permissions --no-same-owner \
&& cd Python-${PYTHON_VERSION} \
&& mkdir build && cd build \
&& ../configure \
--host=riscv64-linux-gnu \
--build=x86_64-linux-gnu \
--prefix=${SYSROOT} \
--enable-shared \
--disable-ipv6 \
--with-build-python=/usr/bin/python3 \
--with-ensurepip=no \
ac_cv_file__dev_ptmx=yes \
ac_cv_file__dev_ptc=no \
&& make -j$(nproc) \
&& make install
FROM base as final
COPY --from=python /opt/sysroot /opt/sysroot
# Install crossenv and cmake
RUN pip install crossenv cmake==4.0.0 --break-system-packages \
&& /usr/bin/python3 -m crossenv ${SYSROOT}/bin/python3 /opt/riscv-cross-env
# Add pip-installed cmake binaries to PATH
ENV PATH="/usr/local/bin:${PATH}"
# Set up cross Python environment
SHELL ["/bin/bash", "-c"]
RUN source /opt/riscv-cross-env/bin/activate \
&& pip install setuptools pyyaml typing_extensions wheel
# Set default environment variables for PyTorch build
ENV Python_ROOT_DIR=${SYSROOT}
ENV OPENSSL_ROOT_DIR=${SYSROOT}
USER jenkins
CMD ["bash"]

31
.ci/lumen_cli/README.md Normal file
View File

@ -0,0 +1,31 @@
# 🔧 Lumen_cli
A Python CLI tool for building and testing PyTorch-based components, using a YAML configuration file for structured, repeatable workflows.
## Features
- **Build**
- external projects (e.g. vLLM)
## 📦 Installation
at the root of the pytorch repo
```bash
pip install -e .ci/lumen_cli
```
## Run the cli tool
The CLI tool must be used at the root of the pytorch repo. As an example, to run the external vllm build:
```bash
python -m cli.run build external vllm
```
This will run the build steps with the default behaviour for the vllm project.
To see help messages, run
```bash
python3 -m cli.run --help
```
## Add customized external build logics
To add new external build logic, for instance a new external build target:
1. create the build function in cli/lib folder
2. register your target and the main build function at EXTERNAL_BUILD_TARGET_DISPATCH in `cli/build_cli/register_build.py`
3. [optional] create your ci config file in .github/ci_configs/${EXTERNAL_PACKAGE_NAME}.yaml

View File

@ -0,0 +1,37 @@
import argparse
import logging
from cli.lib.common.cli_helper import register_targets, RichHelp, TargetSpec
from cli.lib.core.vllm import VllmBuildRunner
logger = logging.getLogger(__name__)
# Maps targets to their argparse configuration and runner
# it adds new target to path python -m cli.run build external {target} with buildrunner
_TARGETS: dict[str, TargetSpec] = {
"vllm": {
"runner": VllmBuildRunner,
"help": "Build vLLM using docker buildx.",
}
# add yours ...
}
def register_build_commands(subparsers: argparse._SubParsersAction) -> None:
build_parser = subparsers.add_parser(
"build",
help="Build related commands",
formatter_class=RichHelp,
)
build_subparsers = build_parser.add_subparsers(dest="build_command", required=True)
overview = "\n".join(
f" {name:12} {spec.get('help', '')}" for name, spec in _TARGETS.items()
)
external_parser = build_subparsers.add_parser(
"external",
help="Build external targets",
description="Build third-party targets.\n\nAvailable targets:\n" + overview,
formatter_class=RichHelp,
)
register_targets(external_parser, _TARGETS)

View File

@ -0,0 +1,71 @@
"""
Cli Argparser Utility helpers for CLI tasks.
"""
import argparse
from abc import ABC, abstractmethod
try:
from typing import Any, Callable, Required, TypedDict # Python 3.11+
except ImportError:
from typing import Any, Callable, TypedDict
from typing_extensions import Required # Fallback for Python <3.11
class BaseRunner(ABC):
def __init__(self, args: Any) -> None:
self.args = args
@abstractmethod
def run(self) -> None:
"""runs main logics, required"""
# Pretty help: keep newlines + show defaults
class RichHelp(
argparse.ArgumentDefaultsHelpFormatter, argparse.RawDescriptionHelpFormatter
):
pass
class TargetSpec(TypedDict, total=False):
"""CLI subcommand specification with bA."""
runner: Required[type[BaseRunner]]
help: str
description: str
add_arguments: Callable[[argparse.ArgumentParser], None]
def register_targets(
parser: argparse.ArgumentParser,
target_specs: dict[str, TargetSpec],
common_args: Callable[[argparse.ArgumentParser], None] = lambda _: None,
) -> None:
"""Register target subcommands."""
targets = parser.add_subparsers(
dest="target",
required=True,
metavar="{" + ",".join(target_specs.keys()) + "}",
)
for name, spec in target_specs.items():
desc = spec.get("description") or spec["runner"].__doc__ or ""
p = targets.add_parser(
name,
help=spec.get("help", ""),
description=desc.strip(),
formatter_class=RichHelp,
)
p.set_defaults(
func=lambda args, cls=spec["runner"]: cls(args).run(),
_runner_class=spec["runner"],
)
if "add_arguments" in spec and callable(spec["add_arguments"]):
spec["add_arguments"](p)
if common_args:
common_args(p)

View File

@ -0,0 +1,42 @@
"""
Docker Utility helpers for CLI tasks.
"""
import logging
from typing import Optional
import docker
from docker.errors import APIError, NotFound
logger = logging.getLogger(__name__)
# lazy singleton so we don't reconnect every call
_docker_client: Optional[docker.DockerClient] = None
def _get_client() -> docker.DockerClient:
global _docker_client
if _docker_client is None:
_docker_client = docker.from_env()
return _docker_client
def local_image_exists(
image_name: str, client: Optional[docker.DockerClient] = None
) -> bool:
"""Return True if a local Docker image exists."""
if not image_name:
return False
client = client or _get_client()
try:
client.images.get(image_name)
return True
except (NotFound, APIError) as e:
logger.error(
"Error when checking Docker image '%s': %s",
image_name,
e.explanation if hasattr(e, "explanation") else str(e),
)
return False

View File

@ -0,0 +1,110 @@
"""
Environment Variables and Dataclasses Utility helpers for CLI tasks.
"""
import os
from dataclasses import field, fields, is_dataclass, MISSING
from pathlib import Path
from textwrap import indent
from typing import Optional, Union
from cli.lib.common.utils import str2bool
def get_env(name: str, default: str = "") -> str:
"""Get environment variable with default fallback."""
return os.environ.get(name) or default
def env_path_optional(
name: str,
default: Optional[Union[str, Path]] = None,
resolve: bool = True,
) -> Optional[Path]:
"""Get environment variable as optional Path."""
val = get_env(name) or default
if not val:
return None
path = Path(val)
return path.resolve() if resolve else path
def env_path(
name: str,
default: Optional[Union[str, Path]] = None,
resolve: bool = True,
) -> Path:
"""Get environment variable as Path, raise if missing."""
path = env_path_optional(name, default, resolve)
if not path:
raise ValueError(f"Missing path value for {name}")
return path
def env_bool(
name: str,
default: bool = False,
) -> bool:
val = get_env(name)
if not val:
return default
return str2bool(val)
def env_bool_field(
name: str,
default: bool = False,
):
return field(default_factory=lambda: env_bool(name, default))
def env_path_field(
name: str,
default: Union[str, Path] = "",
*,
resolve: bool = True,
) -> Path:
return field(default_factory=lambda: env_path(name, default, resolve=resolve))
def env_str_field(
name: str,
default: str = "",
) -> str:
return field(default_factory=lambda: get_env(name, default))
def generate_dataclass_help(cls) -> str:
"""Auto-generate help text for dataclass fields."""
if not is_dataclass(cls):
raise TypeError(f"{cls} is not a dataclass")
def get_value(f):
if f.default is not MISSING:
return f.default
if f.default_factory is not MISSING:
try:
return f.default_factory()
except Exception as e:
return f"<error: {e}>"
return "<required>"
lines = [f"{f.name:<22} = {repr(get_value(f))}" for f in fields(cls)]
return indent("\n".join(lines), " ")
def with_params_help(params_cls: type, title: str = "Parameter defaults"):
"""
Class decorator that appends a help table generated from another dataclass
(e.g., VllmParameters) to the decorated class's docstring.
"""
if not is_dataclass(params_cls):
raise TypeError(f"{params_cls} must be a dataclass")
def _decorator(cls: type) -> type:
block = generate_dataclass_help(params_cls)
cls.__doc__ = (cls.__doc__ or "") + f"\n\n{title}:\n{block}"
return cls
return _decorator

View File

@ -0,0 +1,69 @@
"""
Git Utility helpers for CLI tasks.
"""
import logging
from pathlib import Path
from cli.lib.common.path_helper import remove_dir
from git import GitCommandError, RemoteProgress, Repo
logger = logging.getLogger(__name__)
class PrintProgress(RemoteProgress):
"""Simple progress logger for git operations."""
def __init__(self, interval: int = 5):
super().__init__()
self._last_percent = -1
self._interval = interval
def update(self, op_code, cur, max=None, message=""):
msg = self._cur_line or message
if max and cur:
percent = int(cur / max * 100)
if percent != self._last_percent and percent % self._interval == 0:
self._last_percent = percent
logger.info("Progress: %d%% - %s", percent, msg)
elif msg:
logger.info(msg)
def clone_external_repo(target: str, repo: str, dst: str = "", update_submodules=False):
"""Clone repository with pinned commit and optional submodules."""
dst = dst or target
try:
logger.info("Cloning %s to %s", target, dst)
# Clone and fetch
remove_dir(dst)
r = Repo.clone_from(repo, dst, progress=PrintProgress())
r.git.fetch("--all", "--tags")
# Checkout pinned commit
commit = get_post_build_pinned_commit(target)
logger.info("Checking out pinned commit %s", commit)
r.git.checkout(commit)
# Update submodules if requested
if update_submodules and r.submodules:
logger.info("Updating %d submodule(s)", len(r.submodules))
for sm in r.submodules:
sm.update(init=True, recursive=True, progress=PrintProgress())
logger.info("Successfully cloned %s", target)
return r
except GitCommandError as e:
logger.error("Git operation failed: %s", e)
raise
def get_post_build_pinned_commit(name: str, prefix=".github/ci_commit_pins") -> str:
path = Path(prefix) / f"{name}.txt"
if not path.exists():
raise FileNotFoundError(f"Pin file not found: {path}")
return path.read_text(encoding="utf-8").strip()

View File

@ -0,0 +1,14 @@
"""
Logger Utility helpers for CLI tasks.
"""
import logging
import sys
def setup_logging(level: int = logging.INFO):
logging.basicConfig(
level=level,
format="%(asctime)s [%(levelname)s] %(name)s: %(message)s",
stream=sys.stdout,
)

View File

@ -0,0 +1,62 @@
"""Path utility helpers for CLI tasks."""
import logging
import shutil
from pathlib import Path
from typing import Union
logger = logging.getLogger(__name__)
def get_path(path: Union[str, Path], resolve: bool = False) -> Path:
"""Convert to Path object, optionally resolving to absolute path."""
if not path:
raise ValueError("Path cannot be None or empty")
result = Path(path)
return result.resolve() if resolve else result
def ensure_dir_exists(path: Union[str, Path]) -> Path:
"""Create directory if it doesn't exist."""
path_obj = get_path(path)
path_obj.mkdir(parents=True, exist_ok=True)
return path_obj
def remove_dir(path: Union[str, Path, None]) -> None:
"""Remove directory if it exists."""
if not path:
return
path_obj = get_path(path)
if path_obj.exists():
shutil.rmtree(path_obj)
def force_create_dir(path: Union[str, Path]) -> Path:
"""Remove directory if exists, then create fresh empty directory."""
remove_dir(path)
return ensure_dir_exists(path)
def copy(src: Union[str, Path], dst: Union[str, Path]) -> None:
"""Copy file or directory from src to dst."""
src_path = get_path(src, resolve=True)
dst_path = get_path(dst, resolve=True)
if not src_path.exists():
raise FileNotFoundError(f"Source does not exist: {src_path}")
dst_path.parent.mkdir(parents=True, exist_ok=True)
if src_path.is_file():
shutil.copy2(src_path, dst_path)
elif src_path.is_dir():
shutil.copytree(src_path, dst_path, dirs_exist_ok=True)
else:
raise ValueError(f"Unsupported path type: {src_path}")
def is_path_exist(path: Union[str, Path, None]) -> bool:
"""Check if path exists."""
return bool(path and get_path(path).exists())

View File

@ -0,0 +1,79 @@
"""
General Utility helpers for CLI tasks.
"""
import logging
import os
import shlex
import subprocess
import sys
from typing import Optional
logger = logging.getLogger(__name__)
def run_command(
cmd: str,
use_shell: bool = False,
log_cmd: bool = True,
cwd: Optional[str] = None,
env: Optional[dict] = None,
check: bool = True,
) -> int:
"""Run a command with optional shell execution."""
if use_shell:
args = cmd
log_prefix = "[shell]"
executable = "/bin/bash"
else:
args = shlex.split(cmd)
log_prefix = "[cmd]"
executable = None
if log_cmd:
display_cmd = cmd if use_shell else " ".join(args)
logger.info("%s %s", log_prefix, display_cmd)
run_env = {**os.environ, **(env or {})}
proc = subprocess.run(
args,
shell=use_shell,
executable=executable,
stdout=sys.stdout,
stderr=sys.stderr,
cwd=cwd,
env=run_env,
check=False,
)
if check and proc.returncode != 0:
logger.error(
"%s Command failed (exit %s): %s", log_prefix, proc.returncode, cmd
)
raise subprocess.CalledProcessError(
proc.returncode, args if not use_shell else cmd
)
return proc.returncode
def str2bool(value: Optional[str]) -> bool:
"""Convert environment variables to boolean values."""
if not value:
return False
if not isinstance(value, str):
raise ValueError(
f"Expected a string value for boolean conversion, got {type(value)}"
)
value = value.strip().lower()
true_value_set = {"1", "true", "t", "yes", "y", "on", "enable", "enabled", "found"}
false_value_set = {"0", "false", "f", "no", "n", "off", "disable"}
if value in true_value_set:
return True
if value in false_value_set:
return False
raise ValueError(f"Invalid string value for boolean conversion: {value}")

View File

@ -0,0 +1,263 @@
import logging
import os
import textwrap
from dataclasses import dataclass
from pathlib import Path
from typing import Optional
from cli.lib.common.cli_helper import BaseRunner
from cli.lib.common.docker_helper import local_image_exists
from cli.lib.common.envs_helper import (
env_bool_field,
env_path_field,
env_str_field,
with_params_help,
)
from cli.lib.common.git_helper import clone_external_repo
from cli.lib.common.path_helper import (
copy,
ensure_dir_exists,
force_create_dir,
get_path,
is_path_exist,
)
from cli.lib.common.utils import run_command
logger = logging.getLogger(__name__)
# Default path for docker build artifacts
_DEFAULT_RESULT_PATH = "./shared"
# Temp folder inside the vllm workspace where torch wheels are copied for the docker build
_VLLM_TEMP_FOLDER = "tmp"
@dataclass
class VllmBuildParameters:
"""
Parameters defining the vllm external input configuration.
Combined with VllmDockerBuildArgs, they define the vllm build environment.
"""
# USE_TORCH_WHEEL: when true, use local Torch wheels; requires TORCH_WHEELS_PATH.
# Otherwise, the docker build pulls torch nightly during the build
# TORCH_WHEELS_PATH: directory containing local torch wheels when use_torch_whl is True
use_torch_whl: bool = env_bool_field("USE_TORCH_WHEEL", True)
torch_whls_path: Path = env_path_field("TORCH_WHEELS_PATH", "./dist")
# USE_LOCAL_BASE_IMAGE: when true, use an existing local Docker base image; requires BASE_IMAGE
# Otherwise, pull the Dockerfile's default image remotely
# BASE_IMAGE: name:tag (only needed when use_local_base_image is True)
use_local_base_image: bool = env_bool_field("USE_LOCAL_BASE_IMAGE", True)
base_image: str = env_str_field("BASE_IMAGE")
# USE_LOCAL_DOCKERFILE: when true ("1"), use a local Dockerfile; requires DOCKERFILE_PATH.
# Otherwise, use vllm's default Dockerfile.nightly_torch for the build
# DOCKERFILE_PATH: path to the Dockerfile used when use_local_dockerfile is True
use_local_dockerfile: bool = env_bool_field("USE_LOCAL_DOCKERFILE", True)
dockerfile_path: Path = env_path_field(
"DOCKERFILE_PATH", ".github/ci_configs/vllm/Dockerfile.tmp_vllm"
)
# OUTPUT_DIR: where docker buildx (local exporter) will write artifacts
output_dir: Path = env_path_field("OUTPUT_DIR", "external/vllm")
# --- Build args ----------------------------------------------------------
target_stage: str = env_str_field("TARGET_STAGE", "export-wheels")
tag_name: str = env_str_field("TAG", "vllm-wheels")
cuda_version: str = env_str_field("CUDA_VERSION", "12.8.1")
python_version: str = env_str_field("PYTHON_VERSION", "3.12")
max_jobs: str = env_str_field("MAX_JOBS", "64")
sccache_bucket: str = env_str_field("SCCACHE_BUCKET")
sccache_region: str = env_str_field("SCCACHE_REGION")
torch_cuda_arch_list: str = env_str_field("TORCH_CUDA_ARCH_LIST", "8.9")
def __post_init__(self):
checks = [
(
self.use_torch_whl, # flag
True, # trigger_value
"torch_whls_path", # resource
is_path_exist, # check_func
"TORCH_WHEELS_PATH is not provided, but USE_TORCH_WHEEL is set to 1",
),
(
self.use_local_base_image,
True,
"base_image",
local_image_exists,
f"BASE_IMAGE {self.base_image} does not found, but USE_LOCAL_BASE_IMAGE is set to 1",
),
(
self.use_local_dockerfile,
True,
"dockerfile_path",
is_path_exist,
" DOCKERFILE_PATH path does not found, but USE_LOCAL_DOCKERFILE is set to 1",
),
]
for flag, trigger_value, attr_name, check_func, error_msg in checks:
value = getattr(self, attr_name)
if flag == trigger_value:
if not value or not check_func(value):
raise ValueError(error_msg)
else:
logger.info("flag %s is not set", flag)
if not self.output_dir:
raise ValueError("missing required output_dir")
@with_params_help(VllmBuildParameters)
class VllmBuildRunner(BaseRunner):
"""
Build vLLM using docker buildx.
Environment variable options:
"USE_TORCH_WHEEL": "1: use local wheels; 0: pull nightly from pypi",
"TORCH_WHEELS_PATH": "Path to local wheels (when USE_TORCH_WHEEL=1)",
"USE_LOCAL_BASE_IMAGE": "1: use local base image; 0: default image",
"BASE_IMAGE": "name:tag to indicate base image the dockerfile depends on (when USE_LOCAL_BASE_IMAGE=1)",
"USE_LOCAL_DOCKERFILE": "1: use local Dockerfile; 0: vllm repo default dockerfile.torch_nightly",
"DOCKERFILE_PATH": "Path to Dockerfile (when USE_LOCAL_DOCKERFILE=1)",
"OUTPUT_DIR": "e.g. './shared'",
"TORCH_CUDA_ARCH_LIST": "e.g. '8.0' or '8.0;9.0'",
"CUDA_VERSION": "e.g. '12.8.1'",
"PYTHON_VERSION": "e.g. '3.12'",
"MAX_JOBS": "e.g. '64'",
"SCCACHE_BUCKET": "e.g. 'my-bucket'",
"SCCACHE_REGION": "e.g. 'us-west-2'",
"""
def __init__(self, args=None):
self.work_directory = "vllm"
def run(self):
"""
Main entry point for the vllm build:
1. prepare the vllm build environment
2. prepare the docker build command args
3. run the docker build
"""
inputs = VllmBuildParameters()
clone_vllm()
self.cp_dockerfile_if_exist(inputs)
# copy torch wheels from the repo root into the vllm workspace if they exist
self.cp_torch_whls_if_exist(inputs)
ensure_dir_exists(inputs.output_dir)
cmd = self._generate_docker_build_cmd(inputs)
logger.info("Running docker build: \n %s", cmd)
run_command(cmd, cwd="vllm", env=os.environ.copy())
def cp_torch_whls_if_exist(self, inputs: VllmBuildParameters) -> str:
if not inputs.use_torch_whl:
return ""
tmp_dir = f"./{self.work_directory}/{_VLLM_TEMP_FOLDER}"
tmp_path = Path(tmp_dir)
force_create_dir(tmp_path)
copy(inputs.torch_whls_path, tmp_dir)
return tmp_dir
def cp_dockerfile_if_exist(self, inputs: VllmBuildParameters):
if not inputs.use_local_dockerfile:
logger.info("using vllm default dockerfile.torch_nightly for build")
return
dockerfile_path = get_path(inputs.dockerfile_path, resolve=True)
vllm_torch_dockerfile = Path(
f"./{self.work_directory}/docker/Dockerfile.nightly_torch"
)
copy(dockerfile_path, vllm_torch_dockerfile)
def get_result_path(self, path):
"""
Return the absolute path of the result directory, falling back to the default if none is given.
"""
if not path:
path = _DEFAULT_RESULT_PATH
abs_path = get_path(path, resolve=True)
return abs_path
def _get_torch_wheel_path_arg(self, torch_whl_dir: Optional[Path]) -> str:
if not torch_whl_dir:
return ""
return f"--build-arg TORCH_WHEELS_PATH={_VLLM_TEMP_FOLDER}"
def _get_base_image_args(self, inputs: VllmBuildParameters) -> tuple[str, str, str]:
"""
Returns:
- base_image_arg: docker buildx arg string for base image
- final_base_image_arg: docker buildx arg string for vllm-base stage
- pull_flag: --pull=true or --pull=false depending on whether the image exists locally
"""
if not inputs.use_local_base_image:
return "", "", ""
base_image = inputs.base_image
# set both base image and final base image to the same local image
base_image_arg = f"--build-arg BUILD_BASE_IMAGE={base_image}"
final_base_image_arg = f"--build-arg FINAL_BASE_IMAGE={base_image}"
if local_image_exists(base_image):
pull_flag = "--pull=false"
return base_image_arg, final_base_image_arg, pull_flag
logger.info(
"Local image %s not found, will try to pull from remote", base_image
)
return base_image_arg, final_base_image_arg, ""
def _generate_docker_build_cmd(
self,
inputs: VllmBuildParameters,
) -> str:
base_image_arg, final_base_image_arg, pull_flag = self._get_base_image_args(
inputs
)
torch_arg = self._get_torch_wheel_path_arg(inputs.torch_whls_path)
return textwrap.dedent(
f"""
docker buildx build \
--output type=local,dest={inputs.output_dir} \
-f docker/Dockerfile.nightly_torch \
{pull_flag} \
{torch_arg} \
{base_image_arg} \
{final_base_image_arg} \
--build-arg max_jobs={inputs.max_jobs} \
--build-arg CUDA_VERSION={inputs.cuda_version} \
--build-arg PYTHON_VERSION={inputs.python_version} \
--build-arg USE_SCCACHE={int(bool(inputs.sccache_bucket and inputs.sccache_region))} \
--build-arg SCCACHE_BUCKET_NAME={inputs.sccache_bucket} \
--build-arg SCCACHE_REGION_NAME={inputs.sccache_region} \
--build-arg torch_cuda_arch_list='{inputs.torch_cuda_arch_list}' \
--target {inputs.target_stage} \
-t {inputs.tag_name} \
--progress=plain .
"""
).strip()
def clone_vllm():
clone_external_repo(
target="vllm",
repo="https://github.com/vllm-project/vllm.git",
dst="vllm",
update_submodules=True,
)
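How the pieces above fit together, as a sketch: the runner is configured entirely through environment variables, so a local invocation might look like the following (paths and flag values are illustrative):
# Hypothetical usage sketch, not part of the PR:
import os
from cli.lib.core.vllm import VllmBuildRunner
os.environ.update({
    "USE_TORCH_WHEEL": "1",         # use wheels from TORCH_WHEELS_PATH
    "TORCH_WHEELS_PATH": "dist",    # local torch wheels
    "USE_LOCAL_BASE_IMAGE": "0",    # pull the Dockerfile's default base image
    "USE_LOCAL_DOCKERFILE": "1",    # use the checked-in Dockerfile.tmp_vllm
    "OUTPUT_DIR": "external/vllm",  # where docker buildx writes the wheels
})
VllmBuildRunner().run()             # clones vllm, stages inputs, runs docker buildx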

38
.ci/lumen_cli/cli/run.py Normal file
View File

@ -0,0 +1,38 @@
# cli/run.py
import argparse
import logging
from cli.build_cli.register_build import register_build_commands
from cli.lib.common.logger import setup_logging
logger = logging.getLogger(__name__)
def main():
# Define top-level parser
parser = argparse.ArgumentParser(description="Lumos CLI")
subparsers = parser.add_subparsers(dest="command", required=True)
parser.add_argument(
"--log-level", default="INFO", help="Log level (DEBUG, INFO, WARNING, ERROR)"
)
# registers second-level subcommands
register_build_commands(subparsers)
# parse args after all options are registered
args = parser.parse_args()
# setup global logging
setup_logging(getattr(logging, args.log_level.upper(), logging.INFO))
logger.debug("Parsed args: %s", args)
if hasattr(args, "func"):
args.func(args)
else:
parser.print_help()
if __name__ == "__main__":
main()
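The entry point can also be exercised programmatically, which is roughly what the unit tests below do (a sketch; the argv values are illustrative):
# Hypothetical usage sketch, not part of the PR:
import sys
from unittest.mock import patch
from cli.run import main
with patch.object(sys, "argv", ["cli.run", "--log-level", "DEBUG", "build", "external", "vllm"]):
    main()  # dispatches to the registered "build external vllm" runner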

View File

@ -0,0 +1,22 @@
[project]
name = "lumen-ci"
version = "0.1.0"
dependencies = [
"pyyaml==6.0.2",
"GitPython==3.1.45",
"docker==7.1.0",
"pytest==7.3.2",
]
[tool.setuptools]
packages = ["cli"]
[tool.setuptools.package-dir]
cli = "cli"
[tool.ruff.lint]
# Enable preview mode for linting
preview = true
# Now you can select your preview rules, like RUF048
extend-select = ["RUF048"]

View File

@ -0,0 +1,47 @@
# tests/test_cli.py
import io
import sys
import unittest
from contextlib import redirect_stderr, redirect_stdout
from unittest.mock import patch
from cli.run import main
class TestArgparseCLI(unittest.TestCase):
@patch("cli.build_cli.register_build.VllmBuildRunner.run", return_value=None)
@patch("cli.build_cli.register_build.VllmBuildRunner.__init__", return_value=None)
def test_cli_run_build_external(self, mock_init, mock_run):
from cli.run import main # import after patches if needed
test_args = ["cli.run", "build", "external", "vllm"]
with patch.object(sys, "argv", test_args):
# argparse may call sys.exit on error; capture to avoid test aborts
try:
main()
except SystemExit:
pass
mock_init.assert_called_once() # got constructed
mock_run.assert_called_once_with() # run() called
def test_build_help(self):
test_args = ["cli.run", "build", "--help"]
with patch.object(sys, "argv", test_args):
stdout = io.StringIO()
stderr = io.StringIO()
# --help always raises SystemExit(0)
with self.assertRaises(SystemExit) as cm:
with redirect_stdout(stdout), redirect_stderr(stderr):
main()
self.assertEqual(cm.exception.code, 0)
output = stdout.getvalue()
self.assertIn("usage", output)
self.assertIn("external", output)
if __name__ == "__main__":
unittest.main()

View File

@ -0,0 +1,115 @@
import argparse
import io
import unittest
from contextlib import redirect_stderr
from unittest.mock import patch
from cli.lib.common.cli_helper import BaseRunner, register_targets, RichHelp, TargetSpec
# ---- Dummy runners for unittests----
class FooRunner(BaseRunner):
"""Foo description from docstring."""
def run(self) -> None: # replaced by mock
pass
class BarRunner(BaseRunner):
def run(self) -> None: # replaced by mock
pass
def add_foo_args(p: argparse.ArgumentParser) -> None:
p.add_argument("--x", type=int, required=True, help="x value")
def common_args(p: argparse.ArgumentParser) -> None:
p.add_argument("--verbose", action="store_true", help="verbose flag")
def build_parser(specs: dict[str, TargetSpec]) -> argparse.ArgumentParser:
parser = argparse.ArgumentParser(prog="app", formatter_class=RichHelp)
register_targets(
parser=parser,
target_specs=specs,
common_args=common_args,
)
return parser
def get_subparser(
parser: argparse.ArgumentParser, name: str
) -> argparse.ArgumentParser:
subparsers_action = next(
a
for a in parser._subparsers._group_actions # type: ignore[attr-defined]
if isinstance(a, argparse._SubParsersAction)
)
return subparsers_action.choices[name]
class TestRegisterTargets(unittest.TestCase):
def test_metavar_lists_targets(self):
specs: dict[str, TargetSpec] = {
"foo": {"runner": FooRunner, "add_arguments": add_foo_args},
"bar": {"runner": BarRunner},
}
parser = build_parser(specs)
subparsers_action = next(
a
for a in parser._subparsers._group_actions # type: ignore[attr-defined]
if isinstance(a, argparse._SubParsersAction)
)
self.assertEqual(subparsers_action.metavar, "{foo,bar}")
def test_add_arguments_and_common_args_present(self):
specs: dict[str, TargetSpec] = {
"foo": {"runner": FooRunner, "add_arguments": add_foo_args},
}
parser = build_parser(specs)
foo = get_subparser(parser, "foo")
help_text = foo.format_help()
self.assertIn("--x", help_text)
self.assertIn("--verbose", help_text)
def test_runner_constructed_with_ns_and_run_called(self):
specs: dict[str, TargetSpec] = {
"foo": {"runner": FooRunner, "add_arguments": add_foo_args},
}
parser = build_parser(specs)
with (
patch.object(FooRunner, "__init__", return_value=None) as mock_init,
patch.object(FooRunner, "run", return_value=None) as mock_run,
):
ns = parser.parse_args(["foo", "--x", "3", "--verbose"])
ns.func(ns) # set by register_targets
# __init__ received the Namespace
self.assertEqual(mock_init.call_count, 1)
(called_ns,), _ = mock_init.call_args
self.assertIsInstance(called_ns, argparse.Namespace)
# run() called with no args
mock_run.assert_called_once_with()
def test_runner_docstring_used_as_description_when_missing(self):
specs: dict[str, TargetSpec] = {
"foo": {"runner": FooRunner, "add_arguments": add_foo_args},
}
parser = build_parser(specs)
foo = get_subparser(parser, "foo")
help_text = foo.format_help()
self.assertIn("Foo description from docstring.", help_text)
def test_missing_target_raises_systemexit_with_usage(self):
specs: dict[str, TargetSpec] = {"foo": {"runner": FooRunner}}
parser = build_parser(specs)
buf = io.StringIO()
with self.assertRaises(SystemExit), redirect_stderr(buf):
parser.parse_args([])
err = buf.getvalue()
self.assertIn("usage:", err)
if __name__ == "__main__":
unittest.main()

View File

@ -0,0 +1,75 @@
import unittest
from unittest import mock
from unittest.mock import MagicMock
import docker.errors as derr
from cli.lib.common.docker_helper import _get_client, local_image_exists
class TestDockerImageHelpers(unittest.TestCase):
def setUp(self):
# Reset the singleton in the target module
patcher = mock.patch("cli.lib.common.docker_helper._docker_client", None)
self.addCleanup(patcher.stop)
patcher.start()
def test_local_image_exists_true(self):
# Mock a docker client whose images.get returns an object (no exception)
mock_client = MagicMock()
mock_client.images.get.return_value = object()
ok = local_image_exists("repo:tag", client=mock_client)
self.assertTrue(ok)
def test_local_image_exists_not_found_false(self):
mock_client = MagicMock()
# Raise docker.errors.NotFound
mock_client.images.get.side_effect = derr.NotFound("nope")
ok = local_image_exists("missing:latest", client=mock_client)
self.assertFalse(ok)
def test_local_image_exists_api_error_false(self):
mock_client = MagicMock()
mock_client.images.get.side_effect = derr.APIError("boom", None)
ok = local_image_exists("broken:tag", client=mock_client)
self.assertFalse(ok)
def test_local_image_exists_uses_lazy_singleton(self):
# Patch docker.from_env used by _get_client()
with mock.patch(
"cli.lib.common.docker_helper.docker.from_env"
) as mock_from_env:
mock_docker_client = MagicMock()
mock_from_env.return_value = mock_docker_client
# First call should create and cache the client
c1 = _get_client()
self.assertIs(c1, mock_docker_client)
mock_from_env.assert_called_once()
# Second call should reuse cached client (no extra from_env calls)
c2 = _get_client()
self.assertIs(c2, mock_docker_client)
mock_from_env.assert_called_once() # still once
def test_local_image_exists_without_client_param_calls_get_client_once(self):
# Ensure _get_client is called and cached; local_image_exists should reuse it
with mock.patch("cli.lib.common.docker_helper._get_client") as mock_get_client:
mock_client = MagicMock()
mock_get_client.return_value = mock_client
# 1st call
local_image_exists("repo:tag")
# 2nd call
local_image_exists("repo:tag2")
# local_image_exists should call _get_client each time,
# but _get_client itself caches docker.from_env.
self.assertEqual(mock_get_client.call_count, 2)
self.assertEqual(mock_client.images.get.call_count, 2)
mock_client.images.get.assert_any_call("repo:tag")
mock_client.images.get.assert_any_call("repo:tag2")
if __name__ == "__main__":
unittest.main()

View File

@ -0,0 +1,149 @@
import os
import unittest
from dataclasses import dataclass
from pathlib import Path
from unittest.mock import patch
import cli.lib.common.envs_helper as m
class TestEnvHelpers(unittest.TestCase):
def setUp(self):
# Keep a copy of the original environment to restore later
self._env_backup = dict(os.environ)
def tearDown(self):
# Restore environment to original state
os.environ.clear()
os.environ.update(self._env_backup)
# -------- get_env --------
def test_get_env_unset_returns_default(self):
with patch.dict(os.environ, {}, clear=True):
self.assertEqual(m.get_env("FOO", "default"), "default")
def test_get_env_empty_returns_default(self):
with patch.dict(os.environ, {"FOO": ""}, clear=True):
self.assertEqual(m.get_env("FOO", "default"), "default")
def test_get_env_set_returns_value(self):
with patch.dict(os.environ, {"FOO": "bar"}, clear=True):
self.assertEqual(m.get_env("FOO", "default"), "bar")
def test_get_env_not_exist_returns_default(self):
with patch.dict(os.environ, {"FOO": "bar"}, clear=True):
self.assertEqual(m.get_env("TEST_NOT_EXIST", "default"), "default")
def test_get_env_not_exist_without_default(self):
with patch.dict(os.environ, {"FOO": "bar"}, clear=True):
self.assertEqual(m.get_env("TEST_NOT_EXIST"), "")
# -------- env_bool --------
def test_env_bool_uses_default_when_unset(self):
with patch.dict(os.environ, {}, clear=True):
self.assertTrue(m.env_bool("FLAG", default=True))
self.assertFalse(m.env_bool("FLAG", default=False))
def test_env_bool_uses_str2bool_when_set(self):
# Patch str2bool used by env_bool so we don't depend on its exact behavior
def fake_str2bool(s: str) -> bool:
return s.lower() in {"1", "true", "yes", "on", "y"}
with (
patch.dict(os.environ, {"FLAG": "yEs"}, clear=True),
patch.object(m, "str2bool", fake_str2bool),
):
self.assertTrue(m.env_bool("FLAG", default=False))
# -------- env_path_optional / env_path --------
def test_env_path_optional_unset_returns_none_by_default(self):
with patch.dict(os.environ, {}, clear=True):
self.assertIsNone(m.env_path_optional("P"))
def test_env_path_optional_unset_returns_none_when_env_var_is_empty(self):
with patch.dict(os.environ, {"P": ""}, clear=True):
self.assertIsNone(m.env_path_optional("P"))
def test_env_path_optional_unset_returns_default_str(self):
# default as string; resolve=True by default -> absolute path
default_str = "x/y"
with patch.dict(os.environ, {}, clear=True):
p = m.env_path_optional("P", default=default_str)
self.assertIsInstance(p, Path)
self.assertIsNotNone(p)
if p:
self.assertTrue(p.is_absolute())
self.assertEqual(p.parts[-2:], ("x", "y"))
def test_env_path_optional_unset_returns_default_path_no_resolve(self):
d = Path("z")
with patch.dict(os.environ, {}, clear=True):
p = m.env_path_optional("P", default=d, resolve=False)
self.assertEqual(p, d)
def test_env_path_optional_respects_resolve_true(self):
with patch.dict(os.environ, {"P": "a/b"}, clear=True):
p = m.env_path_optional("P", resolve=True)
self.assertIsInstance(p, Path)
if p:
self.assertTrue(p.is_absolute())
def test_env_path_optional_respects_resolve_false(self):
with patch.dict(os.environ, {"P": "rel/dir"}, clear=True):
p = m.env_path_optional("P", resolve=False)
self.assertEqual(p, Path("rel/dir"))
if p:
self.assertFalse(p.is_absolute())
def test_env_path_raises_when_missing_and_default_none(self):
with patch.dict(os.environ, {}, clear=True):
with self.assertRaises(ValueError):
m.env_path("P", None, resolve=True)
def test_env_path_returns_path_when_present(self):
tmp = Path("./b").resolve()
with patch.dict(os.environ, {"P": str(tmp)}, clear=True):
p = m.env_path("P", None, resolve=True)
self.assertEqual(p, tmp)
# -------- dataclass field helpers --------
def test_dataclass_fields_read_env_at_instantiation(self):
@dataclass
class Cfg:
flag: bool = m.env_bool_field("FLAG", default=False)
out: Path = m.env_path_field("OUT", default="ab", resolve=True)
name: str = m.env_str_field("NAME", default="anon")
# First instantiation
with patch.dict(
os.environ, {"FLAG": "true", "OUT": "outdir", "NAME": "alice"}, clear=True
):
cfg1 = Cfg()
self.assertTrue(cfg1.flag)
self.assertIsInstance(cfg1.out, Path)
self.assertTrue(cfg1.out.is_absolute())
self.assertEqual(cfg1.name, "alice")
cfg1.name = "bob" # change instance value
self.assertEqual(cfg1.name, "bob") # change is reflected
# Change env; new instance should reflect new values
with patch.dict(os.environ, {"FLAG": "false", "NAME": ""}, clear=True):
cfg2 = Cfg()
self.assertFalse(cfg2.flag) # str2bool("false") -> False
self.assertTrue("ab" in str(cfg2.out))
self.assertIsInstance(cfg2.out, Path)
self.assertTrue(cfg2.out.is_absolute())
self.assertEqual(cfg2.name, "anon") # empty -> fallback to default
def test_dataclass_path_field_with_default_value(self):
@dataclass
class C2:
out: Path = m.env_path_field("OUT", default="some/dir", resolve=False)
with patch.dict(os.environ, {}, clear=True):
c = C2()
self.assertEqual(c.out, Path("some/dir"))
if __name__ == "__main__":
unittest.main()

View File

@ -0,0 +1,122 @@
# test_path_utils.py
# Run: pytest -q
import os
import unittest
from pathlib import Path
from tempfile import TemporaryDirectory
from cli.lib.common.path_helper import (
copy,
ensure_dir_exists,
force_create_dir,
get_path,
is_path_exist,
remove_dir,
)
class TestPathHelper(unittest.TestCase):
def setUp(self):
self.tmpdir = TemporaryDirectory()
self.tmp_path = Path(self.tmpdir.name)
def tearDown(self):
self.tmpdir.cleanup()
# -------- get_path --------
def test_get_path_returns_path_for_str(self):
# Use relative path to avoid absolute-ness
rel_str = "sub/f.txt"
os.chdir(self.tmp_path)
p = get_path(rel_str, resolve=False)
self.assertIsInstance(p, Path)
self.assertFalse(p.is_absolute())
self.assertEqual(str(p), rel_str)
def test_get_path_resolves(self):
rel_str = "sub/f.txt"
p = get_path(str(self.tmp_path / rel_str), resolve=True)
self.assertTrue(p.is_absolute())
self.assertTrue(str(p).endswith(rel_str))
def test_get_path_with_path_input(self):
p_in = self.tmp_path / "sub/f.txt"
p_out = get_path(p_in, resolve=False)
self.assertTrue(str(p_out) == str(p_in))
def test_get_path_with_none_raises(self):
with self.assertRaises(ValueError):
get_path(None) # type: ignore[arg-type]
def test_get_path_invalid_type_raises(self):
with self.assertRaises(TypeError):
get_path(123) # type: ignore[arg-type]
# -------- ensure_dir_exists / force_create_dir / remove_dir --------
def test_ensure_dir_exists_creates_and_is_idempotent(self):
d = self.tmp_path / "made"
ensure_dir_exists(d)
self.assertTrue(d.exists() and d.is_dir())
ensure_dir_exists(d)
def test_force_create_dir_clears_existing(self):
d = self.tmp_path / "fresh"
(d / "inner").mkdir(parents=True)
(d / "inner" / "f.txt").write_text("x")
force_create_dir(d)
self.assertTrue(d.exists())
self.assertEqual(list(d.iterdir()), [])
def test_remove_dir_none_is_noop(self):
remove_dir(None) # type: ignore[arg-type]
def test_remove_dir_nonexistent_is_noop(self):
ghost = self.tmp_path / "ghost"
remove_dir(ghost)
def test_remove_dir_accepts_str(self):
d = self.tmp_path / "to_rm"
d.mkdir()
remove_dir(str(d))
self.assertFalse(d.exists())
# -------- copy --------
def test_copy_file_to_file(self):
src = self.tmp_path / "src.txt"
dst = self.tmp_path / "out" / "dst.txt"
src.write_text("hello")
copy(src, dst)
self.assertEqual(dst.read_text(), "hello")
def test_copy_dir_to_new_dir(self):
src = self.tmp_path / "srcdir"
(src / "a").mkdir(parents=True)
(src / "a" / "f.txt").write_text("content")
dst = self.tmp_path / "destdir"
copy(src, dst)
self.assertEqual((dst / "a" / "f.txt").read_text(), "content")
def test_copy_dir_into_existing_dir_overwrite_true_merges(self):
src = self.tmp_path / "srcdir"
dst = self.tmp_path / "destdir"
(src / "x").mkdir(parents=True)
(src / "x" / "new.txt").write_text("new")
dst.mkdir()
(dst / "existing.txt").write_text("old")
copy(src, dst)
self.assertEqual((dst / "existing.txt").read_text(), "old")
self.assertEqual((dst / "x" / "new.txt").read_text(), "new")
def test_is_str_path_exist(self):
p = self.tmp_path / "x.txt"
p.write_text("1")
self.assertTrue(is_path_exist(str(p)))
self.assertTrue(is_path_exist(p))
self.assertFalse(is_path_exist(str(self.tmp_path / "missing")))
self.assertFalse(is_path_exist(self.tmp_path / "missing"))
self.assertFalse(is_path_exist(""))
if __name__ == "__main__":
unittest.main()

View File

@ -0,0 +1,181 @@
import os
import tempfile
import unittest
from pathlib import Path
from unittest.mock import MagicMock, patch
import cli.lib.core.vllm as vllm
class TestVllmBuildParameters(unittest.TestCase):
@patch("cli.lib.core.vllm.local_image_exists", return_value=True)
@patch("cli.lib.core.vllm.is_path_exist", return_value=True)
@patch(
"cli.lib.common.envs_helper.env_path_optional",
side_effect=lambda name, default=None, resolve=True: {
"DOCKERFILE_PATH": Path("/abs/vllm/Dockerfile"),
"TORCH_WHEELS_PATH": Path("/abs/dist"),
"OUTPUT_DIR": Path("/abs/shared"),
}.get(name, Path(default) if default is not None else None),
)
@patch.dict(
os.environ,
{
"USE_TORCH_WHEEL": "1",
"USE_LOCAL_BASE_IMAGE": "1",
"USE_LOCAL_DOCKERFILE": "1",
"BASE_IMAGE": "my/image:tag",
"DOCKERFILE_PATH": "vllm/Dockerfile",
"TORCH_WHEELS_PATH": "dist",
"OUTPUT_DIR": "shared",
},
clear=True,
)
def test_params_success_normalizes_and_validates(
self, mock_env_path, mock_is_path, mock_local_img
):
params = vllm.VllmBuildParameters()
self.assertEqual(params.torch_whls_path, Path("/abs/dist"))
self.assertEqual(params.dockerfile_path, Path("/abs/vllm/Dockerfile"))
self.assertEqual(params.output_dir, Path("/abs/shared"))
self.assertEqual(params.base_image, "my/image:tag")
@patch("cli.lib.core.vllm.is_path_exist", return_value=False)
@patch.dict(
os.environ, {"USE_TORCH_WHEEL": "1", "TORCH_WHEELS_PATH": "dist"}, clear=True
)
def test_params_missing_torch_whls_raises(self, _is_path):
with tempfile.TemporaryDirectory() as td:
os.chdir(td)
with self.assertRaises(ValueError) as cm:
vllm.VllmBuildParameters(
use_local_base_image=False,
use_local_dockerfile=False,
)
err = cm.exception
self.assertIn("TORCH_WHEELS_PATH", str(err))
@patch("cli.lib.core.vllm.local_image_exists", return_value=False)
@patch.dict(
os.environ, {"USE_LOCAL_BASE_IMAGE": "1", "BASE_IMAGE": "img:tag"}, clear=True
)
def test_params_missing_local_base_image_raises(self, _local_img):
with tempfile.TemporaryDirectory() as td:
os.chdir(td)
with self.assertRaises(ValueError) as cm:
vllm.VllmBuildParameters(
use_torch_whl=False,
use_local_dockerfile=False,
)
err = cm.exception
self.assertIn("BASE_IMAGE", str(err))
@patch("cli.lib.core.vllm.is_path_exist", return_value=False)
@patch.dict(
os.environ,
{"USE_LOCAL_DOCKERFILE": "1", "DOCKERFILE_PATH": "Dockerfile"},
clear=True,
)
def test_params_missing_dockerfile_raises(self, _is_path):
with tempfile.TemporaryDirectory() as td:
os.chdir(td)
with self.assertRaises(ValueError) as cm:
vllm.VllmBuildParameters(
use_torch_whl=False,
use_local_base_image=False,
)
err = cm.exception
self.assertIn("DOCKERFILE_PATH", str(err))
@patch("cli.lib.core.vllm.is_path_exist", return_value=False)
@patch.dict(
os.environ,
{"OUTPUT_DIR": ""},
clear=True,
)
def test_params_missing_output_dir(self, _is_path):
with self.assertRaises(FileNotFoundError):
vllm.VllmBuildParameters()
class TestBuildCmdAndRun(unittest.TestCase):
@patch("cli.lib.core.vllm.local_image_exists", return_value=True)
def test_generate_docker_build_cmd_includes_bits(self, _exists):
runner = vllm.VllmBuildRunner()
# Craft inputs that simulate a prepared build
inputs = MagicMock()
inputs.output_dir = Path("/abs/out")
inputs.use_local_base_image = True
inputs.base_image = "img:tag"
inputs.torch_whls_path = Path("./vllm/tmp")
inputs.max_jobs = 64
inputs.cuda_version = "12.8.1"
inputs.python_version = "3.12"
inputs.sccache_bucket = "my-bucket"
inputs.sccache_region = "us-west-2"
inputs.torch_cuda_arch_list = "8.0;9.0"
inputs.target_stage = "export-wheels"
inputs.tag_name = "vllm-wheels"
cmd = runner._generate_docker_build_cmd(inputs)
squashed = " ".join(cmd.split()) # normalize whitespace for matching
self.assertIn("--output type=local,dest=/abs/out", squashed)
self.assertIn("-f docker/Dockerfile.nightly_torch", squashed)
self.assertIn("--pull=false", squashed)
self.assertIn("--build-arg TORCH_WHEELS_PATH=tmp", squashed)
self.assertIn("--build-arg BUILD_BASE_IMAGE=img:tag", squashed)
self.assertIn("--build-arg FINAL_BASE_IMAGE=img:tag", squashed)
self.assertIn("--build-arg max_jobs=64", squashed)
self.assertIn("--build-arg CUDA_VERSION=12.8.1", squashed)
self.assertIn("--build-arg PYTHON_VERSION=3.12", squashed)
self.assertIn("--build-arg USE_SCCACHE=1", squashed)
self.assertIn("--build-arg SCCACHE_BUCKET_NAME=my-bucket", squashed)
self.assertIn("--build-arg SCCACHE_REGION_NAME=us-west-2", squashed)
self.assertIn("--build-arg torch_cuda_arch_list='8.0;9.0'", squashed)
self.assertIn("--target export-wheels", squashed)
self.assertIn("-t vllm-wheels", squashed)
@patch("cli.lib.core.vllm.run_command")
@patch("cli.lib.core.vllm.ensure_dir_exists")
@patch("cli.lib.core.vllm.clone_vllm")
@patch.object(
vllm.VllmBuildRunner,
"_generate_docker_build_cmd",
return_value="docker buildx ...",
)
@patch.dict(
os.environ,
{
# Make __post_init__ validations pass cheaply
"USE_TORCH_WHEEL": "0",
"USE_LOCAL_BASE_IMAGE": "0",
"USE_LOCAL_DOCKERFILE": "0",
"OUTPUT_DIR": "shared",
},
clear=True,
)
def test_run_calls_clone_prepare_and_build(
self, mock_gen, mock_clone, mock_ensure, mock_run
):
# Stub parameters instance so we avoid FS/Docker accesses in run()
params = MagicMock()
params.output_dir = Path("shared")
params.use_local_dockerfile = False
params.use_torch_whl = False
with patch("cli.lib.core.vllm.VllmBuildParameters", return_value=params):
runner = vllm.VllmBuildRunner()
runner.run()
mock_clone.assert_called_once()
mock_ensure.assert_called_once_with(Path("shared"))
mock_gen.assert_called_once_with(params)
mock_run.assert_called_once()
# ensure we run in vllm workdir
_, kwargs = mock_run.call_args
assert kwargs.get("cwd") == "vllm"
if __name__ == "__main__":
unittest.main()

View File

@ -5,10 +5,6 @@ set -ex
SCRIPTPATH="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"
case "${GPU_ARCH_TYPE:-BLANK}" in
BLANK)
# Legacy behavior for CircleCI
bash "${SCRIPTPATH}/build_cuda.sh"
;;
cuda)
bash "${SCRIPTPATH}/build_cuda.sh"
;;

View File

@ -134,6 +134,7 @@ if [[ $CUDA_VERSION == 12* ]]; then
"/usr/local/cuda/lib64/libnvrtc-builtins.so"
"/usr/local/cuda/lib64/libcufile.so.0"
"/usr/local/cuda/lib64/libcufile_rdma.so.1"
"/usr/local/cuda/lib64/libnvshmem_host.so.3"
"/usr/local/cuda/extras/CUPTI/lib64/libcupti.so.12"
"/usr/local/cuda/extras/CUPTI/lib64/libnvperf_host.so"
)
@ -152,6 +153,7 @@ if [[ $CUDA_VERSION == 12* ]]; then
"libcudart.so.12"
"libnvrtc.so.12"
"libnvrtc-builtins.so"
"libnvshmem_host.so.3"
"libcufile.so.0"
"libcufile_rdma.so.1"
"libcupti.so.12"

View File

@ -92,6 +92,27 @@ if [[ "$BUILD_ENVIRONMENT" == *aarch64* ]]; then
export ACL_ROOT_DIR=/ComputeLibrary
fi
if [[ "$BUILD_ENVIRONMENT" == *riscv64* ]]; then
if [[ -f /opt/riscv-cross-env/bin/activate ]]; then
# shellcheck disable=SC1091
source /opt/riscv-cross-env/bin/activate
else
echo "Activation file not found"
exit 1
fi
export CMAKE_CROSSCOMPILING=TRUE
export CMAKE_SYSTEM_NAME=Linux
export CMAKE_SYSTEM_PROCESSOR=riscv64
export USE_CUDA=0
export USE_MKLDNN=0
export SLEEF_TARGET_EXEC_USE_QEMU=ON
sudo chown -R jenkins /var/lib/jenkins/workspace /opt
fi
if [[ "$BUILD_ENVIRONMENT" == *libtorch* ]]; then
POSSIBLE_JAVA_HOMES=()
POSSIBLE_JAVA_HOMES+=(/usr/local)
@ -209,7 +230,7 @@ fi
# Do not change workspace permissions for ROCm and s390x CI jobs
# as it can leave workspace with bad permissions for cancelled jobs
if [[ "$BUILD_ENVIRONMENT" != *rocm* && "$BUILD_ENVIRONMENT" != *s390x* && -d /var/lib/jenkins/workspace ]]; then
if [[ "$BUILD_ENVIRONMENT" != *rocm* && "$BUILD_ENVIRONMENT" != *s390x* && "$BUILD_ENVIRONMENT" != *riscv64* && -d /var/lib/jenkins/workspace ]]; then
# Workaround for dind-rootless userid mapping (https://github.com/pytorch/ci-infra/issues/96)
WORKSPACE_ORIGINAL_OWNER_ID=$(stat -c '%u' "/var/lib/jenkins/workspace")
cleanup_workspace() {
@ -254,8 +275,7 @@ else
# XLA test build fails when WERROR=1
# set only when building other architectures
# or building non-XLA tests.
if [[ "$BUILD_ENVIRONMENT" != *rocm* &&
"$BUILD_ENVIRONMENT" != *xla* ]]; then
if [[ "$BUILD_ENVIRONMENT" != *rocm* && "$BUILD_ENVIRONMENT" != *xla* && "$BUILD_ENVIRONMENT" != *riscv64* ]]; then
# Install numpy-2.0.2 for builds which are backward compatible with 1.X
python -mpip install numpy==2.0.2
@ -392,7 +412,7 @@ if [[ "$BUILD_ENVIRONMENT" != *libtorch* && "$BUILD_ENVIRONMENT" != *bazel* ]];
# don't do this for libtorch as libtorch is C++ only and thus won't have python tests run on its build
python tools/stats/export_test_times.py
fi
# don't do this for bazel or s390x as they don't use sccache
if [[ "$BUILD_ENVIRONMENT" != *s390x* && "$BUILD_ENVIRONMENT" != *-bazel-* ]]; then
# don't do this for bazel or s390x or riscv64 as they don't use sccache
if [[ "$BUILD_ENVIRONMENT" != *s390x* && "$BUILD_ENVIRONMENT" != *riscv64* && "$BUILD_ENVIRONMENT" != *-bazel-* ]]; then
print_sccache_stats
fi

View File

@ -175,6 +175,13 @@ checkout_install_torchbench() {
python install.py --continue_on_fail
fi
# soxr comes from https://github.com/huggingface/transformers/pull/39429
pip install transformers==4.54.0 soxr==0.5.0
# https://github.com/pytorch/pytorch/issues/160689 to remove torchao because
# its current version 0.12.0 doesn't work with transformers 4.54.0
pip uninstall -y torchao
echo "Print all dependencies after TorchBench is installed"
python -mpip freeze
popd

View File

@ -1682,7 +1682,6 @@ elif [[ "${TEST_CONFIG}" == verify_cachebench ]]; then
elif [[ "${TEST_CONFIG}" == *torchbench* ]]; then
install_torchaudio
install_torchvision
install_torchao
id=$((SHARD_NUMBER-1))
# https://github.com/opencv/opencv-python/issues/885
pip_install opencv-python==4.8.0.74

View File

@ -61,9 +61,10 @@ if "%USE_XPU%"=="1" (
call "C:\Program Files (x86)\Intel\oneAPI\compiler\latest\env\vars.bat"
call "C:\Program Files (x86)\Intel\oneAPI\ocloc\latest\env\vars.bat"
if errorlevel 1 exit /b 1
:: Reduce build time. Only have MTL self-hosted runner now
SET TORCH_XPU_ARCH_LIST=xe-lpg
SET USE_KINETO=0
:: Reduce build time
SET TORCH_XPU_ARCH_LIST=bmg
:: Re-setup python env for build
call pip install -r requirements.txt
)
@echo on

View File

@ -37,7 +37,7 @@ IF "%CUDA_PATH_V126%"=="" (
)
IF "%BUILD_VISION%" == "" (
set TORCH_CUDA_ARCH_LIST=6.1;7.0;7.5;8.0;8.6;9.0
set TORCH_CUDA_ARCH_LIST=5.0;6.0;6.1;7.0;7.5;8.0;8.6;9.0
set TORCH_NVCC_FLAGS=-Xfatbin -compress-all
) ELSE (
set NVCC_FLAGS=-D__CUDA_NO_HALF_OPERATORS__ --expt-relaxed-constexpr -gencode=arch=compute_50,code=sm_50 -gencode=arch=compute_60,code=sm_60 -gencode=arch=compute_70,code=sm_70 -gencode=arch=compute_75,code=sm_75 -gencode=arch=compute_80,code=compute_80 -gencode=arch=compute_86,code=compute_86 -gencode=arch=compute_90,code=compute_90

View File

@ -133,6 +133,25 @@ EXTRA_CONDA_INSTALL_FLAGS=""
CONDA_ENV_CREATE_FLAGS=""
RENAME_WHEEL=true
case $desired_python in
3.14t)
echo "Using 3.14 deps"
SETUPTOOLS_PINNED_VERSION=">=70.1.0"
PYYAML_PINNED_VERSION=">=6.0.1"
NUMPY_PINNED_VERSION="=2.1.0"
CONDA_ENV_CREATE_FLAGS="python-freethreading"
EXTRA_CONDA_INSTALL_FLAGS="-c conda-forge/label/python_rc -c conda-forge"
desired_python="3.14.0rc1"
RENAME_WHEEL=false
;;
3.14)
echo "Using 3.14t deps"
SETUPTOOLS_PINNED_VERSION=">=70.1.0"
PYYAML_PINNED_VERSION=">=6.0.1"
NUMPY_PINNED_VERSION="=2.1.0"
EXTRA_CONDA_INSTALL_FLAGS="-c conda-forge/label/python_rc -c conda-forge"
desired_python="3.14.0rc1"
RENAME_WHEEL=false
;;
3.13t)
echo "Using 3.13 deps"
SETUPTOOLS_PINNED_VERSION=">=70.1.0"

View File

@ -54,6 +54,7 @@ self-hosted-runner:
- linux.rocm.gpu.2
- linux.rocm.gpu.4
# gfx942 runners
- linux.rocm.gpu.gfx942.1
- linux.rocm.gpu.gfx942.2
- linux.rocm.gpu.gfx942.4
- rocm-docker

View File

@ -0,0 +1,80 @@
# .github/workflows/build-external.yml
name: Build External packages
description: build external packages for PyTorch
inputs:
cuda-arch-list:
description: TORCH_CUDA_ARCH_LIST (e.g., "8.0;8.9;9.0")
type: string
required: true
default: ""
docker-image:
description: Base image to use
type: string
required: true
build-targets:
description: Build targets
type: string
required: true
torch-wheel-dir:
description: Directory containing the built torch wheel
type: string
required: false
default: dist
output-dir:
description: Directory to store build artifact
default: external
type: string
required: false
outputs:
build_time:
description: "Total build time in seconds"
value: ${{ steps.build-external.outputs.build_time }}
output_dir:
description: "Directory where build artifact is stored"
value: ${{ steps.build-external.outputs.output_dir }}
runs:
using: composite
steps:
- name: Build external packages in sequence
id: build-external
env:
SCCACHE_BUCKET: ossci-compiler-cache-circleci-v2
SCCACHE_REGION: us-east-1
TORCH_CUDA_ARCH_LIST: ${{ inputs.cuda-arch-list }}
BASE_IMAGE: ${{ inputs.docker-image }}
BUILD_TARGETS: ${{ inputs.build-targets }}
PARENT_OUTPUT_DIR: ${{ inputs.output-dir}}
shell: bash
run: |
set -euo pipefail
python3 --version
docker images
START_TIME=$(date +%s)
(
cd .ci/lumen_cli
python3 -m pip install -e .
)
MAX_JOBS="$(nproc --ignore=6)"
export MAX_JOBS
# Split the comma-separated list and build each target
IFS=',' read -ra TARGETS <<< "$BUILD_TARGETS"
for target in "${TARGETS[@]}"; do
OUTPUT_DIR="$PARENT_OUTPUT_DIR/$target"
export OUTPUT_DIR
echo "Building external package: $target in directory $OUTPUT_DIR"
python3 -m cli.run build external "$target"
done
END_TIME=$(date +%s)
{
echo "build_time=$((END_TIME - START_TIME))"
if [ -d "$PARENT_OUTPUT_DIR" ]; then
echo "output_dir=$PARENT_OUTPUT_DIR"
fi
} >> "$GITHUB_OUTPUT"

View File

@ -59,11 +59,6 @@ runs:
echo "$msg"
exit 1
fi
if [[ $ngpu -eq 1 ]]; then
echo "Error: only 1 GPU detected, at least 2 GPUs are needed for distributed jobs"
echo "$msg"
exit 1
fi
- name: Runner diskspace health check
uses: pytorch/pytorch/.github/actions/diskspace-cleanup@main

View File

@ -1 +1 @@
e500f0cf88bc57ffd8b0029033da305eef24ae25
f92ceca80df7a36194468665d62b0f791b1826c5

View File

@ -1 +1 @@
35afe1b30b154114dc2ee8329e12f8cf3fe9f576
0ca2393b47e72c4424a49aa3b32c7c5d0e378a72

View File

@ -0,0 +1,414 @@
# TODO(elainwy): remove this file after the torch nightly dockerfile is in sync in vllm repo
# This vLLM Dockerfile is used to construct a vLLM image against torch nightly and torch main that can be used directly for testing
ARG CUDA_VERSION=12.8.1
ARG PYTHON_VERSION=3.12
# BUILD_BASE_IMAGE: used to set up python and build the xformers and vllm wheels. It can be replaced with a different base image from the local machine;
# by default, it uses the torch-nightly-base stage from this docker image
ARG BUILD_BASE_IMAGE=torch-nightly-base
# FINAL_BASE_IMAGE: used to set up the vllm-installed environment and build flashinfer;
# by default, it uses the official devel-ubuntu22.04 image.
ARG FINAL_BASE_IMAGE=nvidia/cuda:${CUDA_VERSION}-devel-ubuntu22.04
#################### TORCH NIGHTLY BASE IMAGE ####################
# A base image for building vLLM on devel ubuntu 22.04; this is mainly used to build vllm in the vLLM Buildkite CI
FROM nvidia/cuda:${CUDA_VERSION}-devel-ubuntu22.04 AS torch-nightly-base
ARG CUDA_VERSION=12.8.1
ARG PYTHON_VERSION=3.12
ARG TARGETPLATFORM
ENV DEBIAN_FRONTEND=noninteractive
RUN PYTHON_VERSION_STR=$(echo ${PYTHON_VERSION} | sed 's/\.//g') && \
echo "export PYTHON_VERSION_STR=${PYTHON_VERSION_STR}" >> /etc/environment
# Install Python and other dependencies if they are not already present
RUN if ! command -v python3 >/dev/null || ! python3 --version | grep -q "${PYTHON_VERSION}"; then \
echo "Installing Python ${PYTHON_VERSION}..." && \
echo 'tzdata tzdata/Areas select America' | debconf-set-selections && \
echo 'tzdata tzdata/Zones/America select Los_Angeles' | debconf-set-selections && \
apt-get update -y && \
apt-get install -y ccache software-properties-common git curl sudo && \
for i in 1 2 3; do \
add-apt-repository -y ppa:deadsnakes/ppa && break || \
{ echo "Attempt $i failed, retrying in 5s..."; sleep 5; }; \
done && \
apt-get update -y && \
apt-get install -y python${PYTHON_VERSION} python${PYTHON_VERSION}-dev python${PYTHON_VERSION}-venv && \
update-alternatives --install /usr/bin/python3 python3 /usr/bin/python${PYTHON_VERSION} 1 && \
update-alternatives --set python3 /usr/bin/python${PYTHON_VERSION} && \
ln -sf /usr/bin/python${PYTHON_VERSION}-config /usr/bin/python3-config && \
curl -sS https://bootstrap.pypa.io/get-pip.py | python${PYTHON_VERSION}; \
else \
echo "Python ${PYTHON_VERSION} already present, skipping setup."; \
fi \
&& python3 --version && python3 -m pip --version
# Upgrade to GCC 10 to avoid https://gcc.gnu.org/bugzilla/show_bug.cgi?id=92519
# as it was causing spam when compiling the CUTLASS kernels
# Ensure gcc >= 10 to avoid CUTLASS issues (bug 92519)
RUN current_gcc_version=$(gcc -dumpversion | cut -f1 -d.) && \
if [ "$current_gcc_version" -lt 10 ]; then \
echo "GCC version is $current_gcc_version, installing gcc-10..."; \
apt-get update && \
apt-get install -y gcc-10 g++-10 && \
update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-10 100 && \
update-alternatives --install /usr/bin/g++ g++ /usr/bin/g++-10 100; \
else \
echo "GCC version is $current_gcc_version, no need to install gcc-10."; \
fi && \
gcc --version && g++ --version
# install uv for faster pip installs
RUN --mount=type=cache,target=/root/.cache/uv \
python3 -m pip install uv==0.8.4
ENV UV_HTTP_TIMEOUT=500
ENV UV_INDEX_STRATEGY="unsafe-best-match"
#################### TORCH NIGHTLY BASE IMAGE ####################
#################### BASE BUILD IMAGE ####################
# A base image for building vLLM with torch nightly or torch wheels
# prepare basic build environment
FROM ${BUILD_BASE_IMAGE} AS base
USER root
# Workaround for https://github.com/openai/triton/issues/2507 and
# https://github.com/pytorch/pytorch/issues/107960 -- hopefully
# this won't be needed for future versions of this docker image
# or future versions of triton.
RUN ldconfig /usr/local/cuda-$(echo $CUDA_VERSION | cut -d. -f1,2)/compat/
# Install uv for faster pip installs if it is not already installed
RUN --mount=type=cache,target=/root/.cache/uv \
if ! python3 -m uv --version >/dev/null 2>&1; then \
python3 -m pip install uv==0.8.4; \
fi
ENV UV_HTTP_TIMEOUT=500
ENV UV_INDEX_STRATEGY="unsafe-best-match"
WORKDIR /workspace
# install build and runtime dependencies
COPY requirements/common.txt requirements/common.txt
COPY use_existing_torch.py use_existing_torch.py
COPY pyproject.toml pyproject.toml
# install build and runtime dependencies without stable torch version
RUN python3 use_existing_torch.py
# Default mount file as a placeholder; this just avoids the mount error.
# Change to a different vllm folder if this one no longer exists.
ARG TORCH_WHEELS_PATH="./requirements"
ARG PINNED_TORCH_VERSION
# Install torch, torchaudio and torchvision based on the input
# if TORCH_WHEELS_PATH is the default "./requirements", it will pull the nightly versions using pip
# otherwise, it will use the whls from TORCH_WHEELS_PATH from the host machine
RUN --mount=type=bind,source=${TORCH_WHEELS_PATH},target=/dist \
--mount=type=cache,target=/root/.cache/uv \
if [ -n "$TORCH_WHEELS_PATH" ] && [ "$TORCH_WHEELS_PATH" != "./requirements" ] && [ -d "/dist" ] && ls /dist/torch*.whl >/dev/null 2>&1; then \
torch_whl=$(find /dist -maxdepth 1 -name 'torch-*.whl' -print -quit); \
vision_whl=$(find /dist/vision -name 'torchvision*.whl' | head -n1 | xargs); \
audio_whl=$(find /dist/audio -name 'torchaudio*.whl' | head -n1 | xargs); \
uv pip install --system "${torch_whl}[opt-einsum]"; \
uv pip install --system "${vision_whl}"; \
uv pip install --system "${audio_whl}"; \
elif [ -n "$PINNED_TORCH_VERSION" ]; then \
echo "[INFO] Installing pinned torch nightly version: $PINNED_TORCH_VERSION"; \
uv pip install --system "$PINNED_TORCH_VERSION" --index-url https://download.pytorch.org/whl/nightly/cu128; \
else \
echo "[INFO] Installing torch nightly with latest one"; \
uv pip install --system torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cu128; \
fi
# Install numba 0.61.2 for cuda environment
RUN --mount=type=cache,target=/root/.cache/uv \
uv pip install --system numba==0.61.2
# Install common dependencies from vllm common.txt
RUN --mount=type=cache,target=/root/.cache/uv \
uv pip install --system -r requirements/common.txt
# Must be set before installing xformers, so the correct version of xformers is installed.
ARG torch_cuda_arch_list='8.0;8.6;8.9;9.0'
ENV TORCH_CUDA_ARCH_LIST=${torch_cuda_arch_list}
ARG max_jobs=16
ENV MAX_JOBS=${max_jobs}
# Build xformers with cuda and torch nightly/wheel
# following official xformers guidance: https://github.com/facebookresearch/xformers#build
ARG XFORMERS_COMMIT=f2de641ef670510cadab099ce6954031f52f191c
ENV CCACHE_DIR=/root/.cache/ccache
RUN --mount=type=cache,target=/root/.cache/ccache \
--mount=type=cache,target=/root/.cache/uv \
echo 'git clone xformers...' \
&& git clone https://github.com/facebookresearch/xformers.git --recursive \
&& cd xformers \
&& git checkout ${XFORMERS_COMMIT} \
&& git submodule update --init --recursive \
&& echo 'finish git clone xformers...' \
&& rm -rf build \
&& python3 setup.py bdist_wheel --dist-dir=../xformers-dist --verbose \
&& cd .. \
&& rm -rf xformers
RUN --mount=type=cache,target=/root/.cache/uv \
uv pip install --system xformers-dist/*.whl --verbose
# The build can take a long time, and the torch nightly version fetched from the URL can differ in the next docker stage.
# Track the nightly torch version used in the build so the runtime environment can be pinned to the same version.
RUN uv pip freeze | grep -i '^torch\|^torchvision\|^torchaudio' > torch_build_versions.txt
RUN cat torch_build_versions.txt
RUN pip freeze | grep -E 'torch|xformers|torchvision|torchaudio'
#################### BASE BUILD IMAGE ####################
#################### WHEEL BUILD IMAGE ####################
# Image used to build vllm wheel
FROM base AS build
ARG TARGETPLATFORM
ENV UV_HTTP_TIMEOUT=500
ENV UV_INDEX_STRATEGY="unsafe-best-match"
COPY . .
RUN python3 use_existing_torch.py
RUN --mount=type=cache,target=/root/.cache/uv \
uv pip install --system -r requirements/build.txt
ARG GIT_REPO_CHECK=0
RUN --mount=type=bind,source=.git,target=.git \
if [ "$GIT_REPO_CHECK" != "0" ]; then bash tools/check_repo.sh ; fi
# Max jobs used by Ninja to build extensions
ARG max_jobs=16
ENV MAX_JOBS=${max_jobs}
ARG nvcc_threads=2
ENV NVCC_THREADS=$nvcc_threads
ARG torch_cuda_arch_list='8.0;8.6;8.9;9.0'
ENV TORCH_CUDA_ARCH_LIST=${torch_cuda_arch_list}
ARG USE_SCCACHE
ARG SCCACHE_BUCKET_NAME=vllm-build-sccache
ARG SCCACHE_REGION_NAME=us-west-2
ARG SCCACHE_S3_NO_CREDENTIALS=0
# if USE_SCCACHE is set, use sccache to speed up compilation
RUN --mount=type=cache,target=/root/.cache/uv \
--mount=type=bind,source=.git,target=.git \
if [ "$USE_SCCACHE" = "1" ]; then \
echo "Installing sccache..." \
&& curl -L -o sccache.tar.gz https://github.com/mozilla/sccache/releases/download/v0.8.1/sccache-v0.8.1-x86_64-unknown-linux-musl.tar.gz \
&& tar -xzf sccache.tar.gz \
&& sudo mv sccache-v0.8.1-x86_64-unknown-linux-musl/sccache /usr/bin/sccache \
&& rm -rf sccache.tar.gz sccache-v0.8.1-x86_64-unknown-linux-musl \
&& export SCCACHE_BUCKET=${SCCACHE_BUCKET_NAME} \
&& export SCCACHE_REGION=${SCCACHE_REGION_NAME} \
&& export SCCACHE_S3_NO_CREDENTIALS=${SCCACHE_S3_NO_CREDENTIALS} \
&& export SCCACHE_IDLE_TIMEOUT=0 \
&& export CMAKE_BUILD_TYPE=Release \
&& sccache --show-stats \
&& python3 setup.py bdist_wheel --dist-dir=vllm-dist --py-limited-api=cp38 \
&& sccache --show-stats; \
fi
ENV CCACHE_DIR=/root/.cache/ccache
RUN --mount=type=cache,target=/root/.cache/ccache \
--mount=type=cache,target=/root/.cache/uv \
--mount=type=bind,source=.git,target=.git \
if [ "$USE_SCCACHE" != "1" ]; then \
# Clean any existing CMake artifacts
rm -rf .deps && \
mkdir -p .deps && \
python3 setup.py bdist_wheel --dist-dir=vllm-dist --py-limited-api=cp38; \
fi
RUN echo "[DEBUG] Listing current directory:" && \
ls -al && \
echo "[DEBUG] Showing torch_build_versions.txt content:" && \
cat torch_build_versions.txt
#################### WHEEL BUILD IMAGE ####################
################### VLLM INSTALLED IMAGE ####################
# Setup clean environment for vLLM for test and api server using ubuntu22.04 with AOT flashinfer
FROM ${FINAL_BASE_IMAGE} AS vllm-base
USER root
# prepare for environment starts
WORKDIR /workspace
RUN PYTHON_VERSION_STR=$(echo ${PYTHON_VERSION} | sed 's/\.//g') && \
echo "export PYTHON_VERSION_STR=${PYTHON_VERSION_STR}" >> /etc/environment
# Install Python and other dependencies if they are not already present
RUN if ! command -v python3 >/dev/null || ! python3 --version | grep -q "${PYTHON_VERSION}"; then \
echo "Installing Python ${PYTHON_VERSION}..." && \
echo 'tzdata tzdata/Areas select America' | debconf-set-selections && \
echo 'tzdata tzdata/Zones/America select Los_Angeles' | debconf-set-selections && \
apt-get update -y && \
apt-get install -y ccache software-properties-common git curl sudo && \
for i in 1 2 3; do \
add-apt-repository -y ppa:deadsnakes/ppa && break || \
{ echo "Attempt $i failed, retrying in 5s..."; sleep 5; }; \
done && \
apt-get update -y && \
apt-get install -y python${PYTHON_VERSION} python${PYTHON_VERSION}-dev python${PYTHON_VERSION}-venv && \
update-alternatives --install /usr/bin/python3 python3 /usr/bin/python${PYTHON_VERSION} 1 && \
update-alternatives --set python3 /usr/bin/python${PYTHON_VERSION} && \
ln -sf /usr/bin/python${PYTHON_VERSION}-config /usr/bin/python3-config && \
curl -sS https://bootstrap.pypa.io/get-pip.py | python${PYTHON_VERSION}; \
else \
echo "Python ${PYTHON_VERSION} already present, skipping setup."; \
fi \
&& python3 --version && python3 -m pip --version
# Get the torch versions and wheels used in previous stages for consistency
COPY --from=base /workspace/torch_build_versions.txt ./torch_build_versions.txt
COPY --from=base /workspace/xformers-dist /wheels/xformers
COPY --from=build /workspace/vllm-dist /wheels/vllm
RUN echo "[DEBUG] Listing current directory before torch install step:" && \
ls -al && \
echo "[DEBUG] Showing torch_build_versions.txt content:" && \
cat torch_build_versions.txt
# Workaround for https://github.com/openai/triton/issues/2507 and
# https://github.com/pytorch/pytorch/issues/107960 -- hopefully
# this won't be needed for future versions of this docker image
# or future versions of triton.
RUN ldconfig /usr/local/cuda-$(echo $CUDA_VERSION | cut -d. -f1,2)/compat/
# Install uv for faster pip installs if it is not already installed
RUN --mount=type=cache,target=/root/.cache/uv \
if ! python3 -m uv --version > /dev/null 2>&1; then \
python3 -m pip install uv==0.8.4; \
fi
ENV UV_HTTP_TIMEOUT=500
ENV UV_INDEX_STRATEGY="unsafe-best-match"
# Default mount file as a placeholder; this just avoids the mount error
ARG TORCH_WHEELS_PATH="./requirements"
# Install torch, torchaudio and torchvision
# if TORCH_WHEELS_PATH is the default "./requirements", it will pull the nightly versions with pip using torch_build_versions.txt
# otherwise, it will use the whls from TORCH_WHEELS_PATH from the host machine
RUN --mount=type=bind,source=${TORCH_WHEELS_PATH},target=/dist \
--mount=type=cache,target=/root/.cache/uv \
if [ -n "$TORCH_WHEELS_PATH" ] && [ "$TORCH_WHEELS_PATH" != "./requirements" ] && [ -d "/dist" ] && ls /dist/torch*.whl >/dev/null 2>&1; then \
torch_whl=$(find /dist -maxdepth 1 -name 'torch-*.whl' -print -quit); \
vision_whl=$(find /dist/vision -name 'torchvision*.whl' | head -n1 | xargs); \
audio_whl=$(find /dist/audio -name 'torchaudio*.whl' | head -n1 | xargs); \
echo "Found: '${torch_whl}' '${audio_whl}' '${vision_whl}'"; \
uv pip install --system "${torch_whl}[opt-einsum]"; \
uv pip install --system "${vision_whl}"; \
uv pip install --system "${audio_whl}"; \
else \
echo "[INFO] Installing torch versions from torch_build_versions.txt"; \
uv pip install --system $(cat torch_build_versions.txt | xargs) --index-url https://download.pytorch.org/whl/nightly/cu128; \
fi
# Install the vllm wheel from previous stage
RUN --mount=type=cache,target=/root/.cache/uv \
uv pip install --system /wheels/vllm/*.whl --verbose
# Install xformers wheel from previous stage
RUN --mount=type=cache,target=/root/.cache/uv \
uv pip install --system /wheels/xformers/*.whl --verbose
# Build flashinfer from source.
ARG torch_cuda_arch_list='8.0;8.9;9.0a'
# Install packages needed to build flashinfer
# see issue: https://github.com/flashinfer-ai/flashinfer/issues/738
RUN pip install build==1.3.0
RUN pip freeze | grep -E 'setuptools|packaging|build'
ENV TORCH_CUDA_ARCH_LIST=${torch_cuda_arch_list}
# Build flashinfer for torch nightly from source (takes around 10 minutes)
ARG FLASHINFER_GIT_REPO="https://github.com/flashinfer-ai/flashinfer.git"
# Keep this in sync with https://github.com/vllm-project/vllm/blob/main/requirements/cuda.txt
ARG FLASHINFER_GIT_REF="v0.2.9rc2"
RUN --mount=type=cache,target=/root/.cache/uv \
git clone --depth 1 --recursive --shallow-submodules \
--branch ${FLASHINFER_GIT_REF} \
${FLASHINFER_GIT_REPO} flashinfer \
&& echo "Building FlashInfer with AOT for arches: ${torch_cuda_arch_list}" \
&& cd flashinfer \
&& python3 -m flashinfer.aot \
&& python3 -m build --no-isolation --wheel --outdir ../wheels/flashinfer \
&& cd .. \
&& rm -rf flashinfer
# install flashinfer python
RUN --mount=type=cache,target=/root/.cache/uv \
uv pip install --system wheels/flashinfer/*.whl --verbose
# Logging to confirm the torch versions
RUN pip freeze | grep -E 'torch|xformers|vllm|flashinfer'
################### VLLM INSTALLED IMAGE ####################
#################### UNITTEST IMAGE #############################
FROM vllm-base as test
ENV UV_HTTP_TIMEOUT=500
ENV UV_INDEX_STRATEGY="unsafe-best-match"
COPY tests/ tests/
COPY examples examples
COPY benchmarks benchmarks
COPY ./vllm/collect_env.py .
COPY requirements/common.txt requirements/common.txt
COPY use_existing_torch.py use_existing_torch.py
COPY pyproject.toml pyproject.toml
# Install build and runtime dependencies without stable torch version
COPY requirements/nightly_torch_test.txt requirements/nightly_torch_test.txt
RUN python3 use_existing_torch.py
# install packages
RUN --mount=type=cache,target=/root/.cache/uv \
uv pip install --system -r requirements/common.txt
# enable fast downloads from hf (for testing)
RUN --mount=type=cache,target=/root/.cache/uv \
uv pip install --system hf_transfer
ENV HF_HUB_ENABLE_HF_TRANSFER 1
# install development dependencies (for testing)
RUN --mount=type=cache,target=/root/.cache/uv \
uv pip install --system -e tests/vllm_test_utils
RUN --mount=type=cache,target=/root/.cache/uv \
uv pip install --system -r requirements/nightly_torch_test.txt
# Workaround for #17068
# pinned commit for v2.2.4
RUN --mount=type=cache,target=/root/.cache/uv \
uv pip install --system --no-build-isolation "git+https://github.com/state-spaces/mamba@95d8aba8a8c75aedcaa6143713b11e745e7cd0d9#egg=mamba-ssm"
# Logging to confirm the torch versions
RUN pip freeze | grep -E 'torch|xformers|vllm|flashinfer'
# Logging to confirm all the packages are installed
RUN pip freeze
#################### UNITTEST IMAGE #############################
#################### EXPORT STAGE ####################
FROM scratch as export-wheels
# Just copy the wheels we prepared in previous stages
COPY --from=base /workspace/xformers-dist /wheels/xformers
COPY --from=build /workspace/vllm-dist /wheels/vllm
COPY --from=vllm-base /workspace/wheels/flashinfer /wheels/flashinfer-python

View File

@ -22,6 +22,7 @@ ciflow_push_tags:
- ciflow/rocm
- ciflow/rocm-mi300
- ciflow/s390
- ciflow/riscv64
- ciflow/slow
- ciflow/trunk
- ciflow/unstable

View File

@ -1,5 +0,0 @@
# Not pinning certifi so that we can always get the latest certificates
certifi
pip=23.2.1
pkg-config=0.29.2
wheel=0.37.1

View File

@ -1,3 +1,4 @@
#!/bin/bash
set -ex
# If ROCM_HOME isn't set, use ROCM_PATH if set, otherwise /opt/rocm
@ -50,29 +51,15 @@ do
cp $lib $TRITON_ROCM_DIR/lib/
done
# Required ROCm libraries
if [[ "${MAJOR_VERSION}" == "6" ]]; then
libamdhip="libamdhip64.so.6"
else
libamdhip="libamdhip64.so.5"
fi
# Required ROCm libraries - ROCm 6.0
ROCM_SO=(
"${libamdhip}"
"libhsa-runtime64.so.1"
"libdrm.so.2"
"libdrm_amdgpu.so.1"
"libamdhip64.so"
"libhsa-runtime64.so"
"libdrm.so"
"libdrm_amdgpu.so"
"libamd_comgr.so"
"librocprofiler-register.so"
)
if [[ $ROCM_INT -ge 60400 ]]; then
ROCM_SO+=("libamd_comgr.so.3")
else
ROCM_SO+=("libamd_comgr.so.2")
fi
if [[ $ROCM_INT -ge 60100 ]]; then
ROCM_SO+=("librocprofiler-register.so.0")
fi
for lib in "${ROCM_SO[@]}"
do
@ -94,10 +81,6 @@ do
fi
cp $file_path $TRITON_ROCM_DIR/lib
# When running locally and not building a wheel, we need to satisfy shared object requests that don't look for versioned names
LINKNAME=$(echo $lib | sed -e 's/\.so.*/.so/g')
ln -sf $lib $TRITON_ROCM_DIR/lib/$LINKNAME
done
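# Illustrative example (not part of the original script): the sed expression above
# strips everything after ".so", so a versioned library name maps to its
# unversioned link name, e.g.:
#   lib=libamdhip64.so.6      -> LINKNAME=libamdhip64.so
#   lib=libhsa-runtime64.so.1 -> LINKNAME=libhsa-runtime64.so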
# Copy Include Files

View File

@ -19,15 +19,13 @@ replace_needed_sofiles() {
find $1 -name '*.so*' -o -name 'ld.lld' | while read sofile; do
origname=$2
patchedname=$3
if [[ "$origname" != "$patchedname" ]]; then
set +e
origname=$($PATCHELF_BIN --print-needed $sofile | grep "$origname.*")
ERRCODE=$?
set -e
if [ "$ERRCODE" -eq "0" ]; then
echo "patching $sofile entry $origname to $patchedname"
$PATCHELF_BIN --replace-needed $origname $patchedname $sofile
fi
set +e
origname=$($PATCHELF_BIN --print-needed $sofile | grep "$origname.*")
ERRCODE=$?
set -e
if [ "$ERRCODE" -eq "0" ]; then
echo "patching $sofile entry $origname to $patchedname"
$PATCHELF_BIN --replace-needed $origname $patchedname $sofile
fi
done
}

View File

@ -315,7 +315,7 @@ def generate_wheels_matrix(
if gpu_arch_type == "cpu-s390x" and python_version == "3.13t":
continue
# TODO: Enable python 3.14 on non linux OSes
if os != "linux" and (
if os not in ["linux", "linux-aarch64", "macos-arm64"] and (
python_version == "3.14" or python_version == "3.14t"
):
continue
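# Illustrative effect of the relaxed condition (comment only, not part of the diff):
# ("windows", "3.14") is still skipped here, while ("linux-aarch64", "3.14") and
# ("macos-arm64", "3.14t") now fall through and get wheel matrix entries.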

Binary file not shown.

View File

@ -70,6 +70,9 @@ def mock_query(
if key in mocked_queries:
return mocked_queries[key]
# TODO: Remove me once https://github.com/pytorch/pytorch/issues/160489 is resolved
raise ValueError(f"Key {key} could not be found in gql_mocks")
try:
rc = fallback_function(*args)
except HTTPError as err:

View File

@ -108,10 +108,6 @@ GH_CHECKSUITES_FRAGMENT = """
fragment PRCheckSuites on CheckSuiteConnection {
edges {
node {
app {
name
databaseId
}
workflowRun {
workflow {
name

View File

@ -10,7 +10,7 @@ if "%PY_VERS%" == "3.13t" (
call conda create -n %PYTHON_PREFIX% -y -c=conda-forge python=%PY_VERS%
)
:: Fix cmake version for issue https://github.com/pytorch/pytorch/issues/150480
call conda run -n %PYTHON_PREFIX% pip install wheel pybind11 certifi cython cmake==3.31.6 setuptools==72.1.0 ninja
call conda run -n %PYTHON_PREFIX% pip install wheel pybind11 certifi cython cmake==3.31.6 setuptools==72.1.0 ninja==1.11.1.4
dir "%VC_INSTALL_PATH%"

View File

@ -110,12 +110,33 @@ jobs:
# Create new "clean" conda environment for testing
SMOKE_TEST_PARAMS=""
if [[ $DESIRED_PYTHON == "3.13t" ]]; then
conda create -yn "test_conda_env" python="3.13" python-freethreading -c conda-forge
SMOKE_TEST_PARAMS="--torch-compile-check disabled"
else
conda create -yn "test_conda_env" python="$DESIRED_PYTHON"
fi
EXTRA_CONDA_INSTALL_FLAGS=""
CONDA_ENV_CREATE_FLAGS=""
# shellcheck disable=SC2153
case $DESIRED_PYTHON in
3.14t)
CONDA_ENV_CREATE_FLAGS="python-freethreading"
EXTRA_CONDA_INSTALL_FLAGS="-c conda-forge/label/python_rc -c conda-forge"
desired_python="3.14.0rc1"
;;
3.14)
EXTRA_CONDA_INSTALL_FLAGS="-c conda-forge/label/python_rc -c conda-forge"
desired_python="3.14.0rc1"
;;
3.13t)
CONDA_ENV_CREATE_FLAGS="python-freethreading"
EXTRA_CONDA_INSTALL_FLAGS="-c conda-forge"
desired_python="3.13"
;;
*)
# shellcheck disable=SC2153
desired_python=${DESIRED_PYTHON}
;;
esac
# shellcheck disable=SC2086
conda create -yn "test_conda_env" python="$desired_python" ${CONDA_ENV_CREATE_FLAGS} ${EXTRA_CONDA_INSTALL_FLAGS}
conda activate test_conda_env
pip install "$PYTORCH_FINAL_PACKAGE_DIR"/*.whl numpy -v

View File

@ -287,10 +287,36 @@ jobs:
# comes from https://github.com/pytorch/test-infra/pull/6058
TOTAL_MEMORY_WITH_SWAP=$(("${TOTAL_AVAILABLE_MEMORY_IN_GB%.*}" + 3))
if [[ ${BUILD_ENVIRONMENT} == *"riscv64"* ]]; then
# EC2 specific setup for RISC-V emulation
# Ensure binfmt_misc is available
echo "Mounting binfmt_misc filesystem"
sudo mount binfmt_misc -t binfmt_misc /proc/sys/fs/binfmt_misc 2>/dev/null || true
echo "QEMU registration: multiarch/qemu-user-static"
docker run --rm --privileged multiarch/qemu-user-static --reset -p yes || true
# Final verification
echo "Checking binfmt_misc status:"
ls -la /proc/sys/fs/binfmt_misc/ 2>/dev/null || echo "Cannot access binfmt_misc directory"
if [ -f /proc/sys/fs/binfmt_misc/qemu-riscv64 ]; then
echo "qemu-riscv64 registration successful"
else
echo "qemu-riscv64 registration failed - proceeding without emulation"
echo "This may cause RISC-V builds to fail"
fi
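# Sanity check (illustrative only, not in the original workflow): once qemu-riscv64
# is registered with binfmt_misc, a riscv64 binary should run transparently, e.g.
#   docker run --rm --platform linux/riscv64 riscv64/ubuntu uname -m   # expected: riscv64
# (the image name is an assumption; any riscv64 image would do)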
RISCV_DOCKER_ARGS="--privileged"
else
RISCV_DOCKER_ARGS=
fi
# detached container should get cleaned up by teardown_ec2_linux
# Used for JENKINS_USER and DOCKER_SHELL_CMD, which can be empty
# shellcheck disable=SC2086
container_name=$(docker run \
${RISCV_DOCKER_ARGS} \
-e BUILD_ENVIRONMENT \
-e MAX_JOBS="$(nproc --ignore=2)" \
-e PR_NUMBER \

View File

@ -88,6 +88,16 @@ jobs:
- name: Setup ROCm
uses: ./.github/actions/setup-rocm
- name: Runner check GPU count (distributed jobs)
if: ${{ contains(matrix.config, 'distributed') }}
shell: bash
run: |
ngpu=$(rocminfo | grep -c -E 'Name:.*\sgfx')
if [[ $ngpu -lt 4 ]]; then
echo "Error: only $ngpu GPU(s) detected, at least 4 GPUs are needed for distributed jobs"
exit 1
fi
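# Illustrative note (not part of the workflow): rocminfo prints one "Name: gfxNNN"
# line per GPU agent, so on a 4-GPU gfx942 node
#   rocminfo | grep -c -E 'Name:.*\sgfx'
# returns 4 and the distributed job proceeds.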
- name: configure aws credentials
id: aws_creds
uses: aws-actions/configure-aws-credentials@ececac1a45f3b08a01d2dd070d28d111c5fe6722 # v4.1.0

View File

@ -34,8 +34,7 @@ jobs:
contents: read
pull-requests: write
name: Check labels
# Disabling the job until https://github.com/pytorch/pytorch/issues/159825 is resolved
if: github.repository_owner == 'pytorch' && false
if: github.repository_owner == 'pytorch'
runs-on: linux.24_04.4x
steps:
- name: Checkout PyTorch

View File

@ -7,8 +7,7 @@ on:
jobs:
ghstack-mergeability-check:
# Disabling the job until https://github.com/pytorch/pytorch/issues/159825 is resolved
if: github.repository_owner == 'pytorch' && false
if: github.repository_owner == 'pytorch'
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2

View File

@ -74,7 +74,8 @@ jobs:
pytorch-linux-jammy-cuda12.8-cudnn9-py3.9-linter,
# Executorch pin needs update
# pytorch-linux-jammy-py3-clang12-executorch,
pytorch-linux-jammy-py3.12-triton-cpu
pytorch-linux-jammy-py3.12-triton-cpu,
pytorch-linux-noble-riscv64-py3.12-gcc14
]
include:
- docker-image-name: pytorch-linux-jammy-aarch64-py3.10-gcc11

View File

@ -712,3 +712,225 @@ jobs:
secrets:
github-token: ${{ secrets.GITHUB_TOKEN }}
uses: ./.github/workflows/_binary-upload.yml
manywheel-py3_14-cpu-aarch64-build:
if: ${{ github.repository_owner == 'pytorch' }}
uses: ./.github/workflows/_binary-build-linux.yml
needs: get-label-type
with:
PYTORCH_ROOT: /pytorch
PACKAGE_TYPE: manywheel
# TODO: This is a legacy variable that we eventually want to get rid of in
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu-aarch64
DOCKER_IMAGE: manylinux2_28_aarch64-builder
DOCKER_IMAGE_TAG_PREFIX: cpu-aarch64
DESIRED_PYTHON: "3.14"
runner_prefix: "${{ needs.get-label-type.outputs.label-type }}"
runs_on: linux.arm64.m7g.4xlarge.ephemeral
ALPINE_IMAGE: "arm64v8/alpine"
build_name: manywheel-py3_14-cpu-aarch64
build_environment: linux-aarch64-binary-manywheel
secrets:
github-token: ${{ secrets.GITHUB_TOKEN }}
manywheel-py3_14-cpu-aarch64-test: # Testing
if: ${{ github.repository_owner == 'pytorch' }}
needs:
- manywheel-py3_14-cpu-aarch64-build
- get-label-type
uses: ./.github/workflows/_binary-test-linux.yml
with:
PYTORCH_ROOT: /pytorch
PACKAGE_TYPE: manywheel
# TODO: This is a legacy variable that we eventually want to get rid of in
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu-aarch64
DOCKER_IMAGE: manylinux2_28_aarch64-builder
DOCKER_IMAGE_TAG_PREFIX: cpu-aarch64
DESIRED_PYTHON: "3.14"
build_name: manywheel-py3_14-cpu-aarch64
build_environment: linux-aarch64-binary-manywheel
runner_prefix: "${{ needs.get-label-type.outputs.label-type }}"
runs_on: linux.arm64.2xlarge
ALPINE_IMAGE: "arm64v8/alpine"
secrets:
github-token: ${{ secrets.GITHUB_TOKEN }}
manywheel-py3_14-cpu-aarch64-upload: # Uploading
if: ${{ github.repository_owner == 'pytorch' }}
permissions:
id-token: write
contents: read
needs: manywheel-py3_14-cpu-aarch64-test
with:
PYTORCH_ROOT: /pytorch
PACKAGE_TYPE: manywheel
# TODO: This is a legacy variable that we eventually want to get rid of in
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu-aarch64
DOCKER_IMAGE: manylinux2_28_aarch64-builder
DOCKER_IMAGE_TAG_PREFIX: cpu-aarch64
DESIRED_PYTHON: "3.14"
build_name: manywheel-py3_14-cpu-aarch64
secrets:
github-token: ${{ secrets.GITHUB_TOKEN }}
uses: ./.github/workflows/_binary-upload.yml
manywheel-py3_14-cuda-aarch64-12_9-build:
if: ${{ github.repository_owner == 'pytorch' }}
uses: ./.github/workflows/_binary-build-linux.yml
needs: get-label-type
with:
PYTORCH_ROOT: /pytorch
PACKAGE_TYPE: manywheel
# TODO: This is a legacy variable that we eventually want to get rid of in
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cu129
GPU_ARCH_VERSION: 12.9-aarch64
GPU_ARCH_TYPE: cuda-aarch64
DOCKER_IMAGE: manylinuxaarch64-builder
DOCKER_IMAGE_TAG_PREFIX: cuda12.9
DESIRED_PYTHON: "3.14"
runner_prefix: "${{ needs.get-label-type.outputs.label-type }}"
runs_on: linux.arm64.m7g.4xlarge.ephemeral
ALPINE_IMAGE: "arm64v8/alpine"
build_name: manywheel-py3_14-cuda-aarch64-12_9
build_environment: linux-aarch64-binary-manywheel
PYTORCH_EXTRA_INSTALL_REQUIREMENTS: nvidia-cuda-nvrtc-cu12==12.9.86; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cuda-runtime-cu12==12.9.79; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cuda-cupti-cu12==12.9.79; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cudnn-cu12==9.10.2.21; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cublas-cu12==12.9.1.4; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cufft-cu12==11.4.1.4; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-curand-cu12==10.3.10.19; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusolver-cu12==11.7.5.82; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusparse-cu12==12.5.10.65; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusparselt-cu12==0.7.1; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nccl-cu12==2.27.5; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nvshmem-cu12==3.3.9; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nvtx-cu12==12.9.79; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nvjitlink-cu12==12.9.86; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cufile-cu12==1.14.1.1; platform_system == 'Linux' and platform_machine == 'x86_64'
timeout-minutes: 420
secrets:
github-token: ${{ secrets.GITHUB_TOKEN }}
manywheel-py3_14-cuda-aarch64-12_9-upload: # Uploading
if: ${{ github.repository_owner == 'pytorch' }}
permissions:
id-token: write
contents: read
needs: manywheel-py3_14-cuda-aarch64-12_9-build
with:
PYTORCH_ROOT: /pytorch
PACKAGE_TYPE: manywheel
# TODO: This is a legacy variable that we eventually want to get rid of in
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cu129
GPU_ARCH_VERSION: 12.9-aarch64
GPU_ARCH_TYPE: cuda-aarch64
DOCKER_IMAGE: manylinuxaarch64-builder
DOCKER_IMAGE_TAG_PREFIX: cuda12.9
DESIRED_PYTHON: "3.14"
build_name: manywheel-py3_14-cuda-aarch64-12_9
secrets:
github-token: ${{ secrets.GITHUB_TOKEN }}
uses: ./.github/workflows/_binary-upload.yml
manywheel-py3_14t-cpu-aarch64-build:
if: ${{ github.repository_owner == 'pytorch' }}
uses: ./.github/workflows/_binary-build-linux.yml
needs: get-label-type
with:
PYTORCH_ROOT: /pytorch
PACKAGE_TYPE: manywheel
# TODO: This is a legacy variable that we eventually want to get rid of in
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu-aarch64
DOCKER_IMAGE: manylinux2_28_aarch64-builder
DOCKER_IMAGE_TAG_PREFIX: cpu-aarch64
DESIRED_PYTHON: "3.14t"
runner_prefix: "${{ needs.get-label-type.outputs.label-type }}"
runs_on: linux.arm64.m7g.4xlarge.ephemeral
ALPINE_IMAGE: "arm64v8/alpine"
build_name: manywheel-py3_14t-cpu-aarch64
build_environment: linux-aarch64-binary-manywheel
secrets:
github-token: ${{ secrets.GITHUB_TOKEN }}
manywheel-py3_14t-cpu-aarch64-test: # Testing
if: ${{ github.repository_owner == 'pytorch' }}
needs:
- manywheel-py3_14t-cpu-aarch64-build
- get-label-type
uses: ./.github/workflows/_binary-test-linux.yml
with:
PYTORCH_ROOT: /pytorch
PACKAGE_TYPE: manywheel
# TODO: This is a legacy variable that we eventually want to get rid of in
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu-aarch64
DOCKER_IMAGE: manylinux2_28_aarch64-builder
DOCKER_IMAGE_TAG_PREFIX: cpu-aarch64
DESIRED_PYTHON: "3.14t"
build_name: manywheel-py3_14t-cpu-aarch64
build_environment: linux-aarch64-binary-manywheel
runner_prefix: "${{ needs.get-label-type.outputs.label-type }}"
runs_on: linux.arm64.2xlarge
ALPINE_IMAGE: "arm64v8/alpine"
secrets:
github-token: ${{ secrets.GITHUB_TOKEN }}
manywheel-py3_14t-cpu-aarch64-upload: # Uploading
if: ${{ github.repository_owner == 'pytorch' }}
permissions:
id-token: write
contents: read
needs: manywheel-py3_14t-cpu-aarch64-test
with:
PYTORCH_ROOT: /pytorch
PACKAGE_TYPE: manywheel
# TODO: This is a legacy variable that we eventually want to get rid of in
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu-aarch64
DOCKER_IMAGE: manylinux2_28_aarch64-builder
DOCKER_IMAGE_TAG_PREFIX: cpu-aarch64
DESIRED_PYTHON: "3.14t"
build_name: manywheel-py3_14t-cpu-aarch64
secrets:
github-token: ${{ secrets.GITHUB_TOKEN }}
uses: ./.github/workflows/_binary-upload.yml
manywheel-py3_14t-cuda-aarch64-12_9-build:
if: ${{ github.repository_owner == 'pytorch' }}
uses: ./.github/workflows/_binary-build-linux.yml
needs: get-label-type
with:
PYTORCH_ROOT: /pytorch
PACKAGE_TYPE: manywheel
# TODO: This is a legacy variable that we eventually want to get rid of in
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cu129
GPU_ARCH_VERSION: 12.9-aarch64
GPU_ARCH_TYPE: cuda-aarch64
DOCKER_IMAGE: manylinuxaarch64-builder
DOCKER_IMAGE_TAG_PREFIX: cuda12.9
DESIRED_PYTHON: "3.14t"
runner_prefix: "${{ needs.get-label-type.outputs.label-type }}"
runs_on: linux.arm64.m7g.4xlarge.ephemeral
ALPINE_IMAGE: "arm64v8/alpine"
build_name: manywheel-py3_14t-cuda-aarch64-12_9
build_environment: linux-aarch64-binary-manywheel
PYTORCH_EXTRA_INSTALL_REQUIREMENTS: nvidia-cuda-nvrtc-cu12==12.9.86; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cuda-runtime-cu12==12.9.79; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cuda-cupti-cu12==12.9.79; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cudnn-cu12==9.10.2.21; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cublas-cu12==12.9.1.4; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cufft-cu12==11.4.1.4; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-curand-cu12==10.3.10.19; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusolver-cu12==11.7.5.82; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusparse-cu12==12.5.10.65; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cusparselt-cu12==0.7.1; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nccl-cu12==2.27.5; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nvshmem-cu12==3.3.9; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nvtx-cu12==12.9.79; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-nvjitlink-cu12==12.9.86; platform_system == 'Linux' and platform_machine == 'x86_64' | nvidia-cufile-cu12==1.14.1.1; platform_system == 'Linux' and platform_machine == 'x86_64'
timeout-minutes: 420
secrets:
github-token: ${{ secrets.GITHUB_TOKEN }}
manywheel-py3_14t-cuda-aarch64-12_9-upload: # Uploading
if: ${{ github.repository_owner == 'pytorch' }}
permissions:
id-token: write
contents: read
needs: manywheel-py3_14t-cuda-aarch64-12_9-build
with:
PYTORCH_ROOT: /pytorch
PACKAGE_TYPE: manywheel
# TODO: This is a legacy variable that we eventually want to get rid of in
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cu129
GPU_ARCH_VERSION: 12.9-aarch64
GPU_ARCH_TYPE: cuda-aarch64
DOCKER_IMAGE: manylinuxaarch64-builder
DOCKER_IMAGE_TAG_PREFIX: cuda12.9
DESIRED_PYTHON: "3.14t"
build_name: manywheel-py3_14t-cuda-aarch64-12_9
secrets:
github-token: ${{ secrets.GITHUB_TOKEN }}
uses: ./.github/workflows/_binary-upload.yml

View File

@ -115,12 +115,33 @@ jobs:
# Create new "clean" conda environment for testing
SMOKE_TEST_PARAMS=""
if [[ $DESIRED_PYTHON == "3.13t" ]]; then
conda create -yn "test_conda_env" python="3.13" python-freethreading -c conda-forge
SMOKE_TEST_PARAMS="--torch-compile-check disabled"
else
conda create -yn "test_conda_env" python="$DESIRED_PYTHON"
fi
EXTRA_CONDA_INSTALL_FLAGS=""
CONDA_ENV_CREATE_FLAGS=""
# shellcheck disable=SC2153
case $DESIRED_PYTHON in
3.14t)
CONDA_ENV_CREATE_FLAGS="python-freethreading"
EXTRA_CONDA_INSTALL_FLAGS="-c conda-forge/label/python_rc -c conda-forge"
desired_python="3.14.0rc1"
;;
3.14)
EXTRA_CONDA_INSTALL_FLAGS="-c conda-forge/label/python_rc -c conda-forge"
desired_python="3.14.0rc1"
;;
3.13t)
CONDA_ENV_CREATE_FLAGS="python-freethreading"
EXTRA_CONDA_INSTALL_FLAGS="-c conda-forge"
desired_python="3.13"
;;
*)
# shellcheck disable=SC2153
desired_python=${DESIRED_PYTHON}
;;
esac
# shellcheck disable=SC2086
conda create -yn "test_conda_env" python="$desired_python" ${CONDA_ENV_CREATE_FLAGS} ${EXTRA_CONDA_INSTALL_FLAGS}
conda activate test_conda_env
pip install "$PYTORCH_FINAL_PACKAGE_DIR"/*.whl numpy -v
@ -239,12 +260,33 @@ jobs:
# Create new "clean" conda environment for testing
SMOKE_TEST_PARAMS=""
if [[ $DESIRED_PYTHON == "3.13t" ]]; then
conda create -yn "test_conda_env" python="3.13" python-freethreading -c conda-forge
SMOKE_TEST_PARAMS="--torch-compile-check disabled"
else
conda create -yn "test_conda_env" python="$DESIRED_PYTHON"
fi
EXTRA_CONDA_INSTALL_FLAGS=""
CONDA_ENV_CREATE_FLAGS=""
# shellcheck disable=SC2153
case $DESIRED_PYTHON in
3.14t)
CONDA_ENV_CREATE_FLAGS="python-freethreading"
EXTRA_CONDA_INSTALL_FLAGS="-c conda-forge/label/python_rc -c conda-forge"
desired_python="3.14.0rc1"
;;
3.14)
EXTRA_CONDA_INSTALL_FLAGS="-c conda-forge/label/python_rc -c conda-forge"
desired_python="3.14.0rc1"
;;
3.13t)
CONDA_ENV_CREATE_FLAGS="python-freethreading"
EXTRA_CONDA_INSTALL_FLAGS="-c conda-forge"
desired_python="3.13"
;;
*)
# shellcheck disable=SC2153
desired_python=${DESIRED_PYTHON}
;;
esac
# shellcheck disable=SC2086
conda create -yn "test_conda_env" python="$desired_python" ${CONDA_ENV_CREATE_FLAGS} ${EXTRA_CONDA_INSTALL_FLAGS}
conda activate test_conda_env
pip install "$PYTORCH_FINAL_PACKAGE_DIR"/*.whl numpy -v
@ -363,12 +405,33 @@ jobs:
# Create new "clean" conda environment for testing
SMOKE_TEST_PARAMS=""
if [[ $DESIRED_PYTHON == "3.13t" ]]; then
conda create -yn "test_conda_env" python="3.13" python-freethreading -c conda-forge
SMOKE_TEST_PARAMS="--torch-compile-check disabled"
else
conda create -yn "test_conda_env" python="$DESIRED_PYTHON"
fi
EXTRA_CONDA_INSTALL_FLAGS=""
CONDA_ENV_CREATE_FLAGS=""
# shellcheck disable=SC2153
case $DESIRED_PYTHON in
3.14t)
CONDA_ENV_CREATE_FLAGS="python-freethreading"
EXTRA_CONDA_INSTALL_FLAGS="-c conda-forge/label/python_rc -c conda-forge"
desired_python="3.14.0rc1"
;;
3.14)
EXTRA_CONDA_INSTALL_FLAGS="-c conda-forge/label/python_rc -c conda-forge"
desired_python="3.14.0rc1"
;;
3.13t)
CONDA_ENV_CREATE_FLAGS="python-freethreading"
EXTRA_CONDA_INSTALL_FLAGS="-c conda-forge"
desired_python="3.13"
;;
*)
# shellcheck disable=SC2153
desired_python=${DESIRED_PYTHON}
;;
esac
# shellcheck disable=SC2086
conda create -yn "test_conda_env" python="$desired_python" ${CONDA_ENV_CREATE_FLAGS} ${EXTRA_CONDA_INSTALL_FLAGS}
conda activate test_conda_env
pip install "$PYTORCH_FINAL_PACKAGE_DIR"/*.whl numpy -v
@ -487,12 +550,33 @@ jobs:
# Create new "clean" conda environment for testing
SMOKE_TEST_PARAMS=""
if [[ $DESIRED_PYTHON == "3.13t" ]]; then
conda create -yn "test_conda_env" python="3.13" python-freethreading -c conda-forge
SMOKE_TEST_PARAMS="--torch-compile-check disabled"
else
conda create -yn "test_conda_env" python="$DESIRED_PYTHON"
fi
EXTRA_CONDA_INSTALL_FLAGS=""
CONDA_ENV_CREATE_FLAGS=""
# shellcheck disable=SC2153
case $DESIRED_PYTHON in
3.14t)
CONDA_ENV_CREATE_FLAGS="python-freethreading"
EXTRA_CONDA_INSTALL_FLAGS="-c conda-forge/label/python_rc -c conda-forge"
desired_python="3.14.0rc1"
;;
3.14)
EXTRA_CONDA_INSTALL_FLAGS="-c conda-forge/label/python_rc -c conda-forge"
desired_python="3.14.0rc1"
;;
3.13t)
CONDA_ENV_CREATE_FLAGS="python-freethreading"
EXTRA_CONDA_INSTALL_FLAGS="-c conda-forge"
desired_python="3.13"
;;
*)
# shellcheck disable=SC2153
desired_python=${DESIRED_PYTHON}
;;
esac
# shellcheck disable=SC2086
conda create -yn "test_conda_env" python="$desired_python" ${CONDA_ENV_CREATE_FLAGS} ${EXTRA_CONDA_INSTALL_FLAGS}
conda activate test_conda_env
pip install "$PYTORCH_FINAL_PACKAGE_DIR"/*.whl numpy -v
@ -611,12 +695,33 @@ jobs:
# Create new "clean" conda environment for testing
SMOKE_TEST_PARAMS=""
if [[ $DESIRED_PYTHON == "3.13t" ]]; then
conda create -yn "test_conda_env" python="3.13" python-freethreading -c conda-forge
SMOKE_TEST_PARAMS="--torch-compile-check disabled"
else
conda create -yn "test_conda_env" python="$DESIRED_PYTHON"
fi
EXTRA_CONDA_INSTALL_FLAGS=""
CONDA_ENV_CREATE_FLAGS=""
# shellcheck disable=SC2153
case $DESIRED_PYTHON in
3.14t)
CONDA_ENV_CREATE_FLAGS="python-freethreading"
EXTRA_CONDA_INSTALL_FLAGS="-c conda-forge/label/python_rc -c conda-forge"
desired_python="3.14.0rc1"
;;
3.14)
EXTRA_CONDA_INSTALL_FLAGS="-c conda-forge/label/python_rc -c conda-forge"
desired_python="3.14.0rc1"
;;
3.13t)
CONDA_ENV_CREATE_FLAGS="python-freethreading"
EXTRA_CONDA_INSTALL_FLAGS="-c conda-forge"
desired_python="3.13"
;;
*)
# shellcheck disable=SC2153
desired_python=${DESIRED_PYTHON}
;;
esac
# shellcheck disable=SC2086
conda create -yn "test_conda_env" python="$desired_python" ${CONDA_ENV_CREATE_FLAGS} ${EXTRA_CONDA_INSTALL_FLAGS}
conda activate test_conda_env
pip install "$PYTORCH_FINAL_PACKAGE_DIR"/*.whl numpy -v
@ -735,12 +840,33 @@ jobs:
# Create new "clean" conda environment for testing
SMOKE_TEST_PARAMS=""
if [[ $DESIRED_PYTHON == "3.13t" ]]; then
conda create -yn "test_conda_env" python="3.13" python-freethreading -c conda-forge
SMOKE_TEST_PARAMS="--torch-compile-check disabled"
else
conda create -yn "test_conda_env" python="$DESIRED_PYTHON"
fi
EXTRA_CONDA_INSTALL_FLAGS=""
CONDA_ENV_CREATE_FLAGS=""
# shellcheck disable=SC2153
case $DESIRED_PYTHON in
3.14t)
CONDA_ENV_CREATE_FLAGS="python-freethreading"
EXTRA_CONDA_INSTALL_FLAGS="-c conda-forge/label/python_rc -c conda-forge"
desired_python="3.14.0rc1"
;;
3.14)
EXTRA_CONDA_INSTALL_FLAGS="-c conda-forge/label/python_rc -c conda-forge"
desired_python="3.14.0rc1"
;;
3.13t)
CONDA_ENV_CREATE_FLAGS="python-freethreading"
EXTRA_CONDA_INSTALL_FLAGS="-c conda-forge"
desired_python="3.13"
;;
*)
# shellcheck disable=SC2153
desired_python=${DESIRED_PYTHON}
;;
esac
# shellcheck disable=SC2086
conda create -yn "test_conda_env" python="$desired_python" ${CONDA_ENV_CREATE_FLAGS} ${EXTRA_CONDA_INSTALL_FLAGS}
conda activate test_conda_env
pip install "$PYTORCH_FINAL_PACKAGE_DIR"/*.whl numpy -v
@ -774,3 +900,293 @@ jobs:
secrets:
github-token: ${{ secrets.GITHUB_TOKEN }}
uses: ./.github/workflows/_binary-upload.yml
wheel-py3_14-cpu-build:
if: ${{ github.repository_owner == 'pytorch' }}
runs-on: macos-14-xlarge
timeout-minutes: 240
env:
PYTORCH_ROOT: ${{ github.workspace }}/pytorch
PACKAGE_TYPE: wheel
# TODO: This is a legacy variable that we eventually want to get rid of in
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu
SKIP_ALL_TESTS: 1
DESIRED_PYTHON: "3.14"
steps:
# NOTE: These environment variables are put here so that they can be applied on every job equally
# They are also here because setting them at a workflow level doesn't give us access to the
# runner.temp variable, which we need.
- name: Populate binary env
shell: bash
run: |
# shellcheck disable=SC2129
echo "BINARY_ENV_FILE=${RUNNER_TEMP}/env" >> "${GITHUB_ENV}"
# shellcheck disable=SC2129
echo "PYTORCH_FINAL_PACKAGE_DIR=${RUNNER_TEMP}/artifacts" >> "${GITHUB_ENV}"
# shellcheck disable=SC2129
echo "MAC_PACKAGE_WORK_DIR=${RUNNER_TEMP}" >> "${GITHUB_ENV}"
- name: Install conda and dependencies
run: |
# Install conda manually; setup-miniconda changes the PATH in a way that breaks the ruby steps we run later
curl --retry 3 --retry-all-errors -o "${RUNNER_TEMP}/conda.sh" "https://repo.anaconda.com/miniconda/Miniconda3-py310_23.5.2-0-MacOSX-$(uname -m).sh"
chmod +x "${RUNNER_TEMP}/conda.sh"
/bin/bash "${RUNNER_TEMP}/conda.sh" -b -p "${RUNNER_TEMP}/anaconda"
echo "${RUNNER_TEMP}/anaconda/bin" >> "${GITHUB_PATH}"
if [ -d "/Applications/Xcode_14.3.1.app" ]; then
echo "DEVELOPER_DIR=/Applications/Xcode_14.3.1.app/Contents/Developer" >> "${GITHUB_ENV}"
elif [ -d "/Applications/Xcode_13.3.1.app" ]; then
echo "DEVELOPER_DIR=/Applications/Xcode_13.3.1.app/Contents/Developer" >> "${GITHUB_ENV}"
fi
- name: Checkout PyTorch
uses: actions/checkout@v4
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
show-progress: false
- name: Clean PyTorch checkout
run: |
# Remove any artifacts from the previous checkouts
git clean -fxd
working-directory: pytorch
- name: Populate binary env
run: |
# shellcheck disable=SC1091
source "${RUNNER_TEMP}/anaconda/bin/activate"
"${PYTORCH_ROOT}/.circleci/scripts/binary_populate_env.sh"
- name: Build PyTorch binary
run: |
# shellcheck disable=SC1091
source "${RUNNER_TEMP}/anaconda/bin/activate"
set -eux -o pipefail
# shellcheck disable=SC1090
source "${BINARY_ENV_FILE:-/Users/distiller/project/env}"
mkdir -p "$PYTORCH_FINAL_PACKAGE_DIR"
# Build
USE_PYTORCH_METAL_EXPORT=1
USE_COREML_DELEGATE=1
TORCH_PACKAGE_NAME="${TORCH_PACKAGE_NAME//-/_}"
export USE_PYTORCH_METAL_EXPORT
export USE_COREML_DELEGATE
export TORCH_PACKAGE_NAME
"${PYTORCH_ROOT}/.ci/wheel/build_wheel.sh"
- name: Test PyTorch wheel
run: |
# shellcheck disable=SC1091
source "${RUNNER_TEMP}/anaconda/bin/activate"
set -eux -o pipefail
# shellcheck disable=SC1090
source "${BINARY_ENV_FILE:-/Users/distiller/project/env}"
pip uninstall -y "$TORCH_PACKAGE_NAME" || true
pip uninstall -y "$TORCH_PACKAGE_NAME" || true
# Create new "clean" conda environment for testing
SMOKE_TEST_PARAMS=""
EXTRA_CONDA_INSTALL_FLAGS=""
CONDA_ENV_CREATE_FLAGS=""
# shellcheck disable=SC2153
case $DESIRED_PYTHON in
3.14t)
CONDA_ENV_CREATE_FLAGS="python-freethreading"
EXTRA_CONDA_INSTALL_FLAGS="-c conda-forge/label/python_rc -c conda-forge"
desired_python="3.14.0rc1"
;;
3.14)
EXTRA_CONDA_INSTALL_FLAGS="-c conda-forge/label/python_rc -c conda-forge"
desired_python="3.14.0rc1"
;;
3.13t)
CONDA_ENV_CREATE_FLAGS="python-freethreading"
EXTRA_CONDA_INSTALL_FLAGS="-c conda-forge"
desired_python="3.13"
;;
*)
# shellcheck disable=SC2153
desired_python=${DESIRED_PYTHON}
;;
esac
# shellcheck disable=SC2086
conda create -yn "test_conda_env" python="$desired_python" ${CONDA_ENV_CREATE_FLAGS} ${EXTRA_CONDA_INSTALL_FLAGS}
conda activate test_conda_env
pip install "$PYTORCH_FINAL_PACKAGE_DIR"/*.whl numpy -v
# shellcheck disable=SC2086
python "${PYTORCH_ROOT}/.ci/pytorch/smoke_test/smoke_test.py" --package torchonly ${SMOKE_TEST_PARAMS}
- uses: actions/upload-artifact@v4.4.0
if: always()
with:
name: wheel-py3_14-cpu
retention-days: 14
if-no-files-found: error
path: "${{ env.PYTORCH_FINAL_PACKAGE_DIR }}"
wheel-py3_14-cpu-upload: # Uploading
if: ${{ github.repository_owner == 'pytorch' }}
permissions:
id-token: write
contents: read
needs: wheel-py3_14-cpu-build
with:
PYTORCH_ROOT: /pytorch
PACKAGE_TYPE: wheel
# TODO: This is a legacy variable that we eventually want to get rid of in
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu
DOCKER_IMAGE: manylinux2_28-builder
DOCKER_IMAGE_TAG_PREFIX: cpu
DESIRED_PYTHON: "3.14"
build_name: wheel-py3_14-cpu
use_s3: False
secrets:
github-token: ${{ secrets.GITHUB_TOKEN }}
uses: ./.github/workflows/_binary-upload.yml
wheel-py3_14t-cpu-build:
if: ${{ github.repository_owner == 'pytorch' }}
runs-on: macos-14-xlarge
timeout-minutes: 240
env:
PYTORCH_ROOT: ${{ github.workspace }}/pytorch
PACKAGE_TYPE: wheel
# TODO: This is a legacy variable that we eventually want to get rid of in
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu
SKIP_ALL_TESTS: 1
DESIRED_PYTHON: "3.14t"
steps:
# NOTE: These environment variables are put here so that they can be applied on every job equally
# They are also here because setting them at a workflow level doesn't give us access to the
# runner.temp variable, which we need.
- name: Populate binary env
shell: bash
run: |
# shellcheck disable=SC2129
echo "BINARY_ENV_FILE=${RUNNER_TEMP}/env" >> "${GITHUB_ENV}"
# shellcheck disable=SC2129
echo "PYTORCH_FINAL_PACKAGE_DIR=${RUNNER_TEMP}/artifacts" >> "${GITHUB_ENV}"
# shellcheck disable=SC2129
echo "MAC_PACKAGE_WORK_DIR=${RUNNER_TEMP}" >> "${GITHUB_ENV}"
- name: Install conda and dependencies
run: |
# Install conda manually; setup-miniconda changes the PATH in a way that breaks the ruby steps we run later
curl --retry 3 --retry-all-errors -o "${RUNNER_TEMP}/conda.sh" "https://repo.anaconda.com/miniconda/Miniconda3-py310_23.5.2-0-MacOSX-$(uname -m).sh"
chmod +x "${RUNNER_TEMP}/conda.sh"
/bin/bash "${RUNNER_TEMP}/conda.sh" -b -p "${RUNNER_TEMP}/anaconda"
echo "${RUNNER_TEMP}/anaconda/bin" >> "${GITHUB_PATH}"
if [ -d "/Applications/Xcode_14.3.1.app" ]; then
echo "DEVELOPER_DIR=/Applications/Xcode_14.3.1.app/Contents/Developer" >> "${GITHUB_ENV}"
elif [ -d "/Applications/Xcode_13.3.1.app" ]; then
echo "DEVELOPER_DIR=/Applications/Xcode_13.3.1.app/Contents/Developer" >> "${GITHUB_ENV}"
fi
- name: Checkout PyTorch
uses: actions/checkout@v4
with:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
submodules: recursive
path: pytorch
show-progress: false
- name: Clean PyTorch checkout
run: |
# Remove any artifacts from the previous checkouts
git clean -fxd
working-directory: pytorch
- name: Populate binary env
run: |
# shellcheck disable=SC1091
source "${RUNNER_TEMP}/anaconda/bin/activate"
"${PYTORCH_ROOT}/.circleci/scripts/binary_populate_env.sh"
- name: Build PyTorch binary
run: |
# shellcheck disable=SC1091
source "${RUNNER_TEMP}/anaconda/bin/activate"
set -eux -o pipefail
# shellcheck disable=SC1090
source "${BINARY_ENV_FILE:-/Users/distiller/project/env}"
mkdir -p "$PYTORCH_FINAL_PACKAGE_DIR"
# Build
USE_PYTORCH_METAL_EXPORT=1
USE_COREML_DELEGATE=1
TORCH_PACKAGE_NAME="${TORCH_PACKAGE_NAME//-/_}"
export USE_PYTORCH_METAL_EXPORT
export USE_COREML_DELEGATE
export TORCH_PACKAGE_NAME
"${PYTORCH_ROOT}/.ci/wheel/build_wheel.sh"
- name: Test PyTorch wheel
run: |
# shellcheck disable=SC1091
source "${RUNNER_TEMP}/anaconda/bin/activate"
set -eux -o pipefail
# shellcheck disable=SC1090
source "${BINARY_ENV_FILE:-/Users/distiller/project/env}"
pip uninstall -y "$TORCH_PACKAGE_NAME" || true
pip uninstall -y "$TORCH_PACKAGE_NAME" || true
# Create new "clean" conda environment for testing
SMOKE_TEST_PARAMS=""
EXTRA_CONDA_INSTALL_FLAGS=""
CONDA_ENV_CREATE_FLAGS=""
# shellcheck disable=SC2153
case $DESIRED_PYTHON in
3.14t)
CONDA_ENV_CREATE_FLAGS="python-freethreading"
EXTRA_CONDA_INSTALL_FLAGS="-c conda-forge/label/python_rc -c conda-forge"
desired_python="3.14.0rc1"
;;
3.14)
EXTRA_CONDA_INSTALL_FLAGS="-c conda-forge/label/python_rc -c conda-forge"
desired_python="3.14.0rc1"
;;
3.13t)
CONDA_ENV_CREATE_FLAGS="python-freethreading"
EXTRA_CONDA_INSTALL_FLAGS="-c conda-forge"
desired_python="3.13"
;;
*)
# shellcheck disable=SC2153
desired_python=${DESIRED_PYTHON}
;;
esac
# shellcheck disable=SC2086
conda create -yn "test_conda_env" python="$desired_python" ${CONDA_ENV_CREATE_FLAGS} ${EXTRA_CONDA_INSTALL_FLAGS}
conda activate test_conda_env
pip install "$PYTORCH_FINAL_PACKAGE_DIR"/*.whl numpy -v
# shellcheck disable=SC2086
python "${PYTORCH_ROOT}/.ci/pytorch/smoke_test/smoke_test.py" --package torchonly ${SMOKE_TEST_PARAMS}
- uses: actions/upload-artifact@v4.4.0
if: always()
with:
name: wheel-py3_14t-cpu
retention-days: 14
if-no-files-found: error
path: "${{ env.PYTORCH_FINAL_PACKAGE_DIR }}"
wheel-py3_14t-cpu-upload: # Uploading
if: ${{ github.repository_owner == 'pytorch' }}
permissions:
id-token: write
contents: read
needs: wheel-py3_14t-cpu-build
with:
PYTORCH_ROOT: /pytorch
PACKAGE_TYPE: wheel
# TODO: This is a legacy variable that we eventually want to get rid of in
# favor of GPU_ARCH_VERSION
DESIRED_CUDA: cpu
GPU_ARCH_TYPE: cpu
DOCKER_IMAGE: manylinux2_28-builder
DOCKER_IMAGE_TAG_PREFIX: cpu
DESIRED_PYTHON: "3.14t"
build_name: wheel-py3_14t-cpu
use_s3: False
secrets:
github-token: ${{ secrets.GITHUB_TOKEN }}
uses: ./.github/workflows/_binary-upload.yml

View File

@ -4,9 +4,12 @@ on:
pull_request:
paths:
- .github/workflows/h100-cutlass-backend.yml
- torch/_inductor/codegen/cuda/**
- test/inductor/test_cutlass_backend.py
- test/inductor/test_cutlass_evt.py
workflow_dispatch:
schedule:
- cron: 22 9 * * * # every 24 hours about 2:22am PDT
- cron: 22 9,21 * * * # every 12 hours (about 2:22am/2:22pm PDT)
push:
tags:
- ciflow/h100-cutlass-backend/*

View File

@ -88,23 +88,23 @@ jobs:
docker-image-name: ci-image:pytorch-linux-jammy-rocm-n-py3-benchmarks
test-matrix: |
{ include: [
{ config: "inductor_huggingface_perf_rocm", shard: 1, num_shards: 4, runner: "linux.rocm.gpu.gfx942.2" },
{ config: "inductor_huggingface_perf_rocm", shard: 2, num_shards: 4, runner: "linux.rocm.gpu.gfx942.2" },
{ config: "inductor_huggingface_perf_rocm", shard: 3, num_shards: 4, runner: "linux.rocm.gpu.gfx942.2" },
{ config: "inductor_huggingface_perf_rocm", shard: 4, num_shards: 4, runner: "linux.rocm.gpu.gfx942.2" },
{ config: "inductor_timm_perf_rocm", shard: 1, num_shards: 5, runner: "linux.rocm.gpu.gfx942.2" },
{ config: "inductor_timm_perf_rocm", shard: 2, num_shards: 5, runner: "linux.rocm.gpu.gfx942.2" },
{ config: "inductor_timm_perf_rocm", shard: 3, num_shards: 5, runner: "linux.rocm.gpu.gfx942.2" },
{ config: "inductor_timm_perf_rocm", shard: 4, num_shards: 5, runner: "linux.rocm.gpu.gfx942.2" },
{ config: "inductor_timm_perf_rocm", shard: 5, num_shards: 5, runner: "linux.rocm.gpu.gfx942.2" },
{ config: "inductor_torchbench_perf_rocm", shard: 1, num_shards: 8, runner: "linux.rocm.gpu.gfx942.2" },
{ config: "inductor_torchbench_perf_rocm", shard: 2, num_shards: 8, runner: "linux.rocm.gpu.gfx942.2" },
{ config: "inductor_torchbench_perf_rocm", shard: 3, num_shards: 8, runner: "linux.rocm.gpu.gfx942.2" },
{ config: "inductor_torchbench_perf_rocm", shard: 4, num_shards: 8, runner: "linux.rocm.gpu.gfx942.2" },
{ config: "inductor_torchbench_perf_rocm", shard: 5, num_shards: 8, runner: "linux.rocm.gpu.gfx942.2" },
{ config: "inductor_torchbench_perf_rocm", shard: 6, num_shards: 8, runner: "linux.rocm.gpu.gfx942.2" },
{ config: "inductor_torchbench_perf_rocm", shard: 7, num_shards: 8, runner: "linux.rocm.gpu.gfx942.2" },
{ config: "inductor_torchbench_perf_rocm", shard: 8, num_shards: 8, runner: "linux.rocm.gpu.gfx942.2" },
{ config: "inductor_huggingface_perf_rocm", shard: 1, num_shards: 4, runner: "linux.rocm.gpu.gfx942.1" },
{ config: "inductor_huggingface_perf_rocm", shard: 2, num_shards: 4, runner: "linux.rocm.gpu.gfx942.1" },
{ config: "inductor_huggingface_perf_rocm", shard: 3, num_shards: 4, runner: "linux.rocm.gpu.gfx942.1" },
{ config: "inductor_huggingface_perf_rocm", shard: 4, num_shards: 4, runner: "linux.rocm.gpu.gfx942.1" },
{ config: "inductor_timm_perf_rocm", shard: 1, num_shards: 5, runner: "linux.rocm.gpu.gfx942.1" },
{ config: "inductor_timm_perf_rocm", shard: 2, num_shards: 5, runner: "linux.rocm.gpu.gfx942.1" },
{ config: "inductor_timm_perf_rocm", shard: 3, num_shards: 5, runner: "linux.rocm.gpu.gfx942.1" },
{ config: "inductor_timm_perf_rocm", shard: 4, num_shards: 5, runner: "linux.rocm.gpu.gfx942.1" },
{ config: "inductor_timm_perf_rocm", shard: 5, num_shards: 5, runner: "linux.rocm.gpu.gfx942.1" },
{ config: "inductor_torchbench_perf_rocm", shard: 1, num_shards: 8, runner: "linux.rocm.gpu.gfx942.1" },
{ config: "inductor_torchbench_perf_rocm", shard: 2, num_shards: 8, runner: "linux.rocm.gpu.gfx942.1" },
{ config: "inductor_torchbench_perf_rocm", shard: 3, num_shards: 8, runner: "linux.rocm.gpu.gfx942.1" },
{ config: "inductor_torchbench_perf_rocm", shard: 4, num_shards: 8, runner: "linux.rocm.gpu.gfx942.1" },
{ config: "inductor_torchbench_perf_rocm", shard: 5, num_shards: 8, runner: "linux.rocm.gpu.gfx942.1" },
{ config: "inductor_torchbench_perf_rocm", shard: 6, num_shards: 8, runner: "linux.rocm.gpu.gfx942.1" },
{ config: "inductor_torchbench_perf_rocm", shard: 7, num_shards: 8, runner: "linux.rocm.gpu.gfx942.1" },
{ config: "inductor_torchbench_perf_rocm", shard: 8, num_shards: 8, runner: "linux.rocm.gpu.gfx942.1" },
]}
secrets: inherit

View File

@ -77,25 +77,25 @@ jobs:
uses: ./.github/workflows/_linux-build.yml
with:
build-environment: linux-jammy-rocm-py3_10
docker-image-name: ci-image:pytorch-linux-jammy-rocm-n-py3
docker-image-name: ci-image:pytorch-linux-jammy-rocm-n-py3-benchmarks
sync-tag: rocm-build
test-matrix: |
{ include: [
{ config: "dynamo_eager_torchbench", shard: 1, num_shards: 2, runner: "linux.rocm.gpu.gfx942.2" },
{ config: "dynamo_eager_torchbench", shard: 2, num_shards: 2, runner: "linux.rocm.gpu.gfx942.2" },
{ config: "dynamo_eager_huggingface", shard: 1, num_shards: 1, runner: "linux.rocm.gpu.gfx942.2" },
{ config: "dynamo_eager_timm", shard: 1, num_shards: 2, runner: "linux.rocm.gpu.gfx942.2" },
{ config: "dynamo_eager_timm", shard: 2, num_shards: 2, runner: "linux.rocm.gpu.gfx942.2" },
{ config: "aot_eager_torchbench", shard: 1, num_shards: 2, runner: "linux.rocm.gpu.gfx942.2" },
{ config: "aot_eager_torchbench", shard: 2, num_shards: 2, runner: "linux.rocm.gpu.gfx942.2" },
{ config: "aot_eager_huggingface", shard: 1, num_shards: 1, runner: "linux.rocm.gpu.gfx942.2" },
{ config: "aot_eager_timm", shard: 1, num_shards: 2, runner: "linux.rocm.gpu.gfx942.2" },
{ config: "aot_eager_timm", shard: 2, num_shards: 2, runner: "linux.rocm.gpu.gfx942.2" },
{ config: "dynamic_aot_eager_torchbench", shard: 1, num_shards: 2, runner: "linux.rocm.gpu.gfx942.2" },
{ config: "dynamic_aot_eager_torchbench", shard: 2, num_shards: 2, runner: "linux.rocm.gpu.gfx942.2" },
{ config: "dynamic_aot_eager_huggingface", shard: 1, num_shards: 1, runner: "linux.rocm.gpu.gfx942.2" },
{ config: "dynamic_aot_eager_timm", shard: 1, num_shards: 2, runner: "linux.rocm.gpu.gfx942.2" },
{ config: "dynamic_aot_eager_timm", shard: 2, num_shards: 2, runner: "linux.rocm.gpu.gfx942.2" },
{ config: "dynamo_eager_torchbench", shard: 1, num_shards: 2, runner: "linux.rocm.gpu.gfx942.1" },
{ config: "dynamo_eager_torchbench", shard: 2, num_shards: 2, runner: "linux.rocm.gpu.gfx942.1" },
{ config: "dynamo_eager_huggingface", shard: 1, num_shards: 1, runner: "linux.rocm.gpu.gfx942.1" },
{ config: "dynamo_eager_timm", shard: 1, num_shards: 2, runner: "linux.rocm.gpu.gfx942.1" },
{ config: "dynamo_eager_timm", shard: 2, num_shards: 2, runner: "linux.rocm.gpu.gfx942.1" },
{ config: "aot_eager_torchbench", shard: 1, num_shards: 2, runner: "linux.rocm.gpu.gfx942.1" },
{ config: "aot_eager_torchbench", shard: 2, num_shards: 2, runner: "linux.rocm.gpu.gfx942.1" },
{ config: "aot_eager_huggingface", shard: 1, num_shards: 1, runner: "linux.rocm.gpu.gfx942.1" },
{ config: "aot_eager_timm", shard: 1, num_shards: 2, runner: "linux.rocm.gpu.gfx942.1" },
{ config: "aot_eager_timm", shard: 2, num_shards: 2, runner: "linux.rocm.gpu.gfx942.1" },
{ config: "dynamic_aot_eager_torchbench", shard: 1, num_shards: 2, runner: "linux.rocm.gpu.gfx942.1" },
{ config: "dynamic_aot_eager_torchbench", shard: 2, num_shards: 2, runner: "linux.rocm.gpu.gfx942.1" },
{ config: "dynamic_aot_eager_huggingface", shard: 1, num_shards: 1, runner: "linux.rocm.gpu.gfx942.1" },
{ config: "dynamic_aot_eager_timm", shard: 1, num_shards: 2, runner: "linux.rocm.gpu.gfx942.1" },
{ config: "dynamic_aot_eager_timm", shard: 2, num_shards: 2, runner: "linux.rocm.gpu.gfx942.1" },
]}
secrets: inherit

View File

@ -47,8 +47,8 @@ jobs:
docker-image-name: ci-image:pytorch-linux-jammy-rocm-n-py3
test-matrix: |
{ include: [
{ config: "inductor", shard: 1, num_shards: 2, runner: "linux.rocm.gpu.gfx942.2" },
{ config: "inductor", shard: 2, num_shards: 2, runner: "linux.rocm.gpu.gfx942.2" },
{ config: "inductor", shard: 1, num_shards: 2, runner: "linux.rocm.gpu.gfx942.1" },
{ config: "inductor", shard: 2, num_shards: 2, runner: "linux.rocm.gpu.gfx942.1" },
]}
secrets: inherit

View File

@ -251,43 +251,6 @@ jobs:
build-environment: linux-jammy-py3.13-clang12
docker-image: ${{ needs.linux-jammy-py3_13-clang12-build.outputs.docker-image }}
test-matrix: ${{ needs.linux-jammy-py3_13-clang12-build.outputs.test-matrix }}
timeout-minutes: 600
secrets: inherit
linux-jammy-cuda12_8-py3_10-gcc11-build:
name: linux-jammy-cuda12.8-py3.10-gcc11
uses: ./.github/workflows/_linux-build.yml
needs: get-label-type
with:
runner_prefix: "${{ needs.get-label-type.outputs.label-type }}"
build-environment: linux-jammy-cuda12.8-py3.10-gcc11
docker-image-name: ci-image:pytorch-linux-jammy-cuda12.8-cudnn9-py3-gcc11
cuda-arch-list: '7.5 8.9'
test-matrix: |
{ include: [
{ config: "default", shard: 1, num_shards: 5, runner: "${{ needs.get-label-type.outputs.label-type }}linux.g6.4xlarge.experimental.nvidia.gpu" },
{ config: "default", shard: 2, num_shards: 5, runner: "${{ needs.get-label-type.outputs.label-type }}linux.g6.4xlarge.experimental.nvidia.gpu" },
{ config: "default", shard: 3, num_shards: 5, runner: "${{ needs.get-label-type.outputs.label-type }}linux.g6.4xlarge.experimental.nvidia.gpu" },
{ config: "default", shard: 4, num_shards: 5, runner: "${{ needs.get-label-type.outputs.label-type }}linux.g6.4xlarge.experimental.nvidia.gpu" },
{ config: "default", shard: 5, num_shards: 5, runner: "${{ needs.get-label-type.outputs.label-type }}linux.g6.4xlarge.experimental.nvidia.gpu" },
{ config: "distributed", shard: 1, num_shards: 3, runner: "${{ needs.get-label-type.outputs.label-type }}linux.g4dn.12xlarge.nvidia.gpu" },
{ config: "distributed", shard: 2, num_shards: 3, runner: "${{ needs.get-label-type.outputs.label-type }}linux.g4dn.12xlarge.nvidia.gpu" },
{ config: "distributed", shard: 3, num_shards: 3, runner: "${{ needs.get-label-type.outputs.label-type }}linux.g4dn.12xlarge.nvidia.gpu" },
{ config: "pr_time_benchmarks", shard: 1, num_shards: 1, runner: "linux.g4dn.metal.nvidia.gpu" },
]}
secrets: inherit
linux-jammy-cuda12_8-py3_10-gcc11-test:
name: linux-jammy-cuda12.8-py3.10-gcc11
uses: ./.github/workflows/_linux-test.yml
needs:
- linux-jammy-cuda12_8-py3_10-gcc11-build
- target-determination
with:
timeout-minutes: 360
build-environment: linux-jammy-cuda12.8-py3.10-gcc11
docker-image: ${{ needs.linux-jammy-cuda12_8-py3_10-gcc11-build.outputs.docker-image }}
test-matrix: ${{ needs.linux-jammy-cuda12_8-py3_10-gcc11-build.outputs.test-matrix }}
secrets: inherit
linux-jammy-cuda12_8-cudnn9-py3_9-clang12-build:

.github/workflows/riscv64.yml (new file, 24 lines)
View File

@ -0,0 +1,24 @@
name: riscv64
on:
push:
tags:
- ciflow/riscv64/*
workflow_dispatch:
concurrency:
group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.sha }}-${{ github.event_name == 'workflow_dispatch' }}
cancel-in-progress: true
permissions: read-all
jobs:
pytorch-linux-noble-riscv64-py3_12-gcc14-cross-build:
if: github.repository_owner == 'pytorch'
name: pytorch-linux-noble-riscv64-py3_12-gcc14-cross-build
uses: ./.github/workflows/_linux-build.yml
with:
build-environment: linux-noble-riscv64-py3.12-gcc14
docker-image-name: pytorch-linux-noble-riscv64-py3.12-gcc14
runner: linux.2xlarge
secrets: inherit

View File

@ -48,12 +48,12 @@ jobs:
sync-tag: rocm-build
test-matrix: |
{ include: [
{ config: "default", shard: 1, num_shards: 6, runner: "linux.rocm.gpu.gfx942.2" },
{ config: "default", shard: 2, num_shards: 6, runner: "linux.rocm.gpu.gfx942.2" },
{ config: "default", shard: 3, num_shards: 6, runner: "linux.rocm.gpu.gfx942.2" },
{ config: "default", shard: 4, num_shards: 6, runner: "linux.rocm.gpu.gfx942.2" },
{ config: "default", shard: 5, num_shards: 6, runner: "linux.rocm.gpu.gfx942.2" },
{ config: "default", shard: 6, num_shards: 6, runner: "linux.rocm.gpu.gfx942.2" },
{ config: "default", shard: 1, num_shards: 6, runner: "linux.rocm.gpu.gfx942.1" },
{ config: "default", shard: 2, num_shards: 6, runner: "linux.rocm.gpu.gfx942.1" },
{ config: "default", shard: 3, num_shards: 6, runner: "linux.rocm.gpu.gfx942.1" },
{ config: "default", shard: 4, num_shards: 6, runner: "linux.rocm.gpu.gfx942.1" },
{ config: "default", shard: 5, num_shards: 6, runner: "linux.rocm.gpu.gfx942.1" },
{ config: "default", shard: 6, num_shards: 6, runner: "linux.rocm.gpu.gfx942.1" },
]}
secrets: inherit

.github/workflows/tools-unit-tests.yml (new file, 70 lines)
View File

@ -0,0 +1,70 @@
name: test-scripts-and-ci-tools
on:
push:
branches:
- main
paths:
- scripts/lumen_cli/**
- .github/workflows/tools-unit-tests.yml
pull_request:
paths:
- scripts/lumen_cli/**
- .github/workflows/tools-unit-tests.yml
concurrency:
group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.sha }}-${{ github.event_name == 'workflow_dispatch' }}
cancel-in-progress: true
jobs:
lumen-cli-unit-tests-python312:
permissions:
contents: read
pull-requests: write
if: ${{ github.repository_owner == 'pytorch' }}
runs-on: ubuntu-latest
steps:
- name: Checkout pytorch
uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
with:
submodules: true
fetch-depth: 0
- name: Setup Python
uses: actions/setup-python@a26af69be951a213d495a4c3e4e4022e16d87065 # v5.6.0
with:
python-version: '3.12'
cache: pip
- name: Run tests
continue-on-error: true
run: |
set -ex
python3 -m venv /tmp/venv
source /tmp/venv/bin/activate
pip install -e .ci/lumen_cli/
pytest -v -s .ci/lumen_cli/tests/*
lumen-cli-compatible-python39:
permissions:
contents: read
pull-requests: write
if: ${{ github.repository_owner == 'pytorch' }}
runs-on: ubuntu-latest
steps:
- name: Checkout pytorch
uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
with:
submodules: true
fetch-depth: 0
- name: Setup Python
uses: actions/setup-python@a26af69be951a213d495a4c3e4e4022e16d87065 # v5.6.0
with:
python-version: '3.9'
cache: 'pip'
- name: Run tests
continue-on-error: true
run: |
set -ex
python3 -m venv /tmp/venv
source /tmp/venv/bin/activate
pip install -e .ci/lumen_cli/

View File

@ -63,6 +63,43 @@ jobs:
]}
secrets: inherit
linux-jammy-cuda12_8-py3_10-gcc11-build:
name: linux-jammy-cuda12.8-py3.10-gcc11
uses: ./.github/workflows/_linux-build.yml
needs: get-label-type
with:
runner_prefix: "${{ needs.get-label-type.outputs.label-type }}"
build-environment: linux-jammy-cuda12.8-py3.10-gcc11
docker-image-name: ci-image:pytorch-linux-jammy-cuda12.8-cudnn9-py3-gcc11
cuda-arch-list: '7.5 8.9'
test-matrix: |
{ include: [
{ config: "default", shard: 1, num_shards: 5, runner: "${{ needs.get-label-type.outputs.label-type }}linux.g6.4xlarge.experimental.nvidia.gpu" },
{ config: "default", shard: 2, num_shards: 5, runner: "${{ needs.get-label-type.outputs.label-type }}linux.g6.4xlarge.experimental.nvidia.gpu" },
{ config: "default", shard: 3, num_shards: 5, runner: "${{ needs.get-label-type.outputs.label-type }}linux.g6.4xlarge.experimental.nvidia.gpu" },
{ config: "default", shard: 4, num_shards: 5, runner: "${{ needs.get-label-type.outputs.label-type }}linux.g6.4xlarge.experimental.nvidia.gpu" },
{ config: "default", shard: 5, num_shards: 5, runner: "${{ needs.get-label-type.outputs.label-type }}linux.g6.4xlarge.experimental.nvidia.gpu" },
{ config: "distributed", shard: 1, num_shards: 3, runner: "${{ needs.get-label-type.outputs.label-type }}linux.g4dn.12xlarge.nvidia.gpu" },
{ config: "distributed", shard: 2, num_shards: 3, runner: "${{ needs.get-label-type.outputs.label-type }}linux.g4dn.12xlarge.nvidia.gpu" },
{ config: "distributed", shard: 3, num_shards: 3, runner: "${{ needs.get-label-type.outputs.label-type }}linux.g4dn.12xlarge.nvidia.gpu" },
{ config: "pr_time_benchmarks", shard: 1, num_shards: 1, runner: "linux.g4dn.metal.nvidia.gpu" },
]}
secrets: inherit
linux-jammy-cuda12_8-py3_10-gcc11-test:
name: linux-jammy-cuda12.8-py3.10-gcc11
uses: ./.github/workflows/_linux-test.yml
needs:
- linux-jammy-cuda12_8-py3_10-gcc11-build
- target-determination
with:
timeout-minutes: 360
build-environment: linux-jammy-cuda12.8-py3.10-gcc11
docker-image: ${{ needs.linux-jammy-cuda12_8-py3_10-gcc11-build.outputs.docker-image }}
test-matrix: ${{ needs.linux-jammy-cuda12_8-py3_10-gcc11-build.outputs.test-matrix }}
secrets: inherit
# The no-ops build tests USE_PER_OPERATOR_HEADERS=0, where ATen/ops is not generated
linux-jammy-cuda12_8-py3_10-gcc11-no-ops-build:
name: linux-jammy-cuda12.8-py3.10-gcc11-no-ops

View File

@ -1196,7 +1196,7 @@ if(APPLE)
string(
APPEND
CMAKE_SHARED_LINKER_FLAGS
" -weak_framework Foundation -weak_framework MetalPerformanceShaders -weak_framework MetalPerformanceShadersGraph -weak_framework Metal"
" -weak_framework Foundation -weak_framework MetalPerformanceShaders -weak_framework MetalPerformanceShadersGraph -weak_framework Metal -weak_framework IOKit"
)
# To suppress MPSGraph availability warnings
append_cxx_flag_if_supported("-Wno-unguarded-availability-new"

View File

@ -1,4 +1,4 @@
![PyTorch Logo](https://github.com/pytorch/pytorch/raw/main/docs/source/_static/img/pytorch-logo-dark.png)
![PyTorch Logo](https://github.com/pytorch/pytorch/blob/9708fcf92db88b80b9010c68662d634434da3106/docs/source/_static/img/pytorch-logo-dark.png)
--------------------------------------------------------------------------------
@ -72,7 +72,7 @@ Elaborating Further:
If you use NumPy, then you have used Tensors (a.k.a. ndarray).
![Tensor illustration](./docs/source/_static/img/tensor_illustration.png)
![Tensor illustration](https://github.com/pytorch/pytorch/blob/9708fcf92db88b80b9010c68662d634434da3106/docs/source/_static/img/tensor_illustration.png)
PyTorch provides Tensors that can live either on the CPU or the GPU and accelerates the
computation by a huge amount.
@ -99,7 +99,7 @@ from several research papers on this topic, as well as current and past work suc
While this technique is not unique to PyTorch, it's one of the fastest implementations of it to date.
You get the best of speed and flexibility for your crazy research.
![Dynamic graph](https://github.com/pytorch/pytorch/raw/main/docs/source/_static/img/dynamic_graph.gif)
![Dynamic graph](https://github.com/pytorch/pytorch/blob/9708fcf92db88b80b9010c68662d634434da3106/docs/source/_static/img/dynamic_graph.gif)
### Python First

View File

@ -2,7 +2,7 @@
## Demo applications and tutorials
Please refer to [pytorch-labs/executorch-examples](https://github.com/pytorch-labs/executorch-examples/tree/main/dl3/android/DeepLabV3Demo) for the Android demo app based on [ExecuTorch](https://github.com/pytorch/executorch).
Please refer to [meta-pytorch/executorch-examples](https://github.com/meta-pytorch/executorch-examples/tree/main/dl3/android/DeepLabV3Demo) for the Android demo app based on [ExecuTorch](https://github.com/pytorch/executorch).
Please join our [Discord](https://discord.com/channels/1334270993966825602/1349854760299270284) for any questions.

View File

@ -216,6 +216,7 @@ TORCH_LIBRARY_IMPL(aten, AutocastMPS, m) {
KERNEL_MPS(_convolution, lower_precision_fp)
KERNEL_MPS(conv1d, lower_precision_fp)
KERNEL_MPS(conv2d, lower_precision_fp)
KERNEL_MPS(conv3d, lower_precision_fp)
KERNEL_MPS(conv_tbc, lower_precision_fp)
KERNEL_MPS(conv_transpose1d, lower_precision_fp)
KERNEL_MPS(conv_transpose2d, input, lower_precision_fp)

View File

@ -1,6 +1,7 @@
#pragma once
#include <c10/core/Allocator.h>
#include <c10/core/AllocatorConfig.h>
#include <c10/core/Stream.h>
#include <c10/core/thread_pool.h>
#include <c10/util/flat_hash_map.h>
@ -251,6 +252,7 @@ struct CachingHostAllocatorImpl {
auto* block = reinterpret_cast<B*>(ctx);
std::optional<std::vector<E>> events;
ska::flat_hash_set<S> streams;
{
std::lock_guard<std::mutex> g(block->mutex_);
block->allocated_ = false;
@ -259,14 +261,19 @@ struct CachingHostAllocatorImpl {
} else {
events = std::vector<E>();
events->reserve(block->streams_.size());
for (auto stream : block->streams_) {
record_stream(events, stream);
}
block->event_count_ += events->size();
block->event_count_ += block->streams_.size();
// Move out streams to avoid holding the mutex during event recording
streams = std::move(block->streams_);
block->streams_.clear();
}
}
// Event recording must be done outside the mutex to avoid potential
// deadlocks (e.g., when Python GIL is involved)
for (auto stream : streams) {
record_stream(events, stream);
}
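// Illustrative note (not part of the original change): if record_stream() were
// called while still holding block->mutex_, a thread that holds the Python GIL
// and waits on block->mutex_ could deadlock against this thread holding
// block->mutex_ and waiting on the GIL inside the recording callback; moving
// the streams out and recording after the lock is released breaks that cycle.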
if (!events) {
auto index = size_index(block->size_);
std::lock_guard<std::mutex> g(free_list_[index].mutex_);
@ -345,7 +352,8 @@ struct CachingHostAllocatorImpl {
}
virtual bool pinned_use_background_threads() {
return false;
return c10::CachingAllocator::AcceleratorAllocatorConfig::
pinned_use_background_threads();
}
virtual void copy_data(void* dest [[maybe_unused]], const void* src [[maybe_unused]], std::size_t count [[maybe_unused]]) const {

View File

@ -6,6 +6,8 @@
#include <c10/core/DispatchKeySet.h>
#include <c10/util/TypeList.h>
#include <c10/util/intrusive_ptr.h>
#include <atomic>
#include <memory>
#include <type_traits>
namespace c10 {
@ -17,6 +19,9 @@ class OperatorHandle;
struct OperatorKernel;
class KernelFunction;
class KernelToken;
class SafeKernelFunction;
template <typename T>
using has_symint = std::disjunction<
std::is_same<c10::SymInt, T>,
@ -90,6 +95,12 @@ class TORCH_API KernelFunction final {
BoxedKernel::BoxedKernelFunction_withDispatchKeys;
KernelFunction();
~KernelFunction();
KernelFunction(const KernelFunction&) = default;
KernelFunction& operator=(const KernelFunction&) = default;
KernelFunction(KernelFunction&&) noexcept = default;
// Fast path for dispatch to allow not touching the boxed kernel in
// the common case where unboxed is available.
@ -262,6 +273,13 @@ class TORCH_API KernelFunction final {
// For testing internal invariants only
bool _equalsBoxedAndUnboxed(const KernelFunction&) const;
// Register a token to be invalidated when this KernelFunction is destroyed
void registerToken(std::weak_ptr<KernelToken> token) const;
// List of tokens that need to be invalidated when this KernelFunction is
// destroyed
mutable std::vector<std::weak_ptr<KernelToken>> tokens_;
private:
explicit KernelFunction(
std::unique_ptr<OperatorKernel> functor,
@ -278,6 +296,47 @@ class TORCH_API KernelFunction final {
void* sym_unboxed_kernel_func_;
};
// Token held by SafeKernelFunction that gets invalidated when KernelFunction is
// destroyed
class KernelToken {
public:
bool isValid() const;
void invalidate();
private:
std::atomic<bool> invalid_{false};
};
class SafeKernelFunction {
public:
SafeKernelFunction(
const KernelFunction* kernel,
std::string debug,
std::shared_ptr<OperatorHandle> opHandle);
// Safe callBoxed - checks token validity first
void callBoxed(
const OperatorHandle& opHandle,
DispatchKeySet dispatchKeySet,
Stack* stack) const;
// Get debug information
const std::string& debug() const {
return debug_;
}
// Get the OpHandle that lives on this SafeKernelFunction
const OperatorHandle& opHandle() const {
return *opHandle_;
}
private:
KernelFunction kernel_;
std::shared_ptr<KernelToken> token_;
std::string debug_;
std::shared_ptr<OperatorHandle> opHandle_;
};
} // namespace c10
#include <ATen/core/boxing/KernelFunction_impl.h>
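The token plumbing above is a small weak_ptr handshake: the KernelFunction holds weak references, each SafeKernelFunction owns a shared token, and destroying the kernel flips the flag. A self-contained sketch of the same idea (the type names here are illustrative, not the c10 classes):

#include <atomic>
#include <iostream>
#include <memory>
#include <vector>

struct Token {
  std::atomic<bool> invalid{false};
};

struct Kernel {
  std::vector<std::weak_ptr<Token>> tokens;
  ~Kernel() {
    // Mirror of ~KernelFunction(): invalidate every still-live token.
    for (auto& w : tokens) {
      if (auto t = w.lock()) {
        t->invalid.store(true, std::memory_order_release);
      }
    }
  }
};

int main() {
  auto token = std::make_shared<Token>();
  {
    Kernel k;
    k.tokens.push_back(token);
    std::cout << token->invalid.load() << '\n';  // 0: kernel still alive
  }
  std::cout << token->invalid.load() << '\n';    // 1: invalidated by ~Kernel
}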

View File

@ -24,6 +24,14 @@ inline KernelFunction::KernelFunction()
unboxed_kernel_func_(nullptr),
sym_unboxed_kernel_func_(nullptr) {}
inline KernelFunction::~KernelFunction() {
for (auto& weak_token : tokens_) {
if (auto token = weak_token.lock()) {
token->invalidate();
}
}
}
inline KernelFunction::KernelFunction(
std::unique_ptr<OperatorKernel> functor,
InternalBoxedKernelFunction* boxed_kernel_func,
@ -157,6 +165,11 @@ C10_ALWAYS_INLINE Return KernelFunction::call(
std::forward<Args>(args)...);
}
inline void KernelFunction::registerToken(
std::weak_ptr<KernelToken> token) const {
tokens_.push_back(std::move(token));
}
inline KernelFunction KernelFunction::makeFromBoxedKernel(
BoxedKernel boxed_fn) {
return KernelFunction(
@ -317,4 +330,38 @@ KernelFunction::makeFromUnboxedLambda(Lambda&& lambda) {
std::forward<Lambda>(lambda)));
}
inline bool KernelToken::isValid() const {
return !invalid_.load(std::memory_order_acquire);
}
inline void KernelToken::invalidate() {
invalid_.store(true, std::memory_order_release);
}
inline SafeKernelFunction::SafeKernelFunction(
const KernelFunction* kernel,
std::string debug,
std::shared_ptr<OperatorHandle> opHandle)
: kernel_(kernel ? *kernel : KernelFunction()),
token_(std::make_shared<KernelToken>()),
debug_(std::move(debug)),
opHandle_(std::move(opHandle)) {
// Register the token with the original kernel so it gets invalidated when the
// kernel is destroyed
if (kernel) {
kernel->registerToken(token_);
}
}
inline void SafeKernelFunction::callBoxed(
const OperatorHandle& opHandle,
DispatchKeySet dispatchKeySet,
Stack* stack) const {
TORCH_CHECK(
token_ && token_->isValid(),
"SafeKernelFunction has been invalidated ",
debug_);
kernel_.callBoxed(opHandle, dispatchKeySet, stack);
}
} // namespace c10

View File

@ -487,6 +487,10 @@ class TORCH_API OperatorHandle {
return operatorDef_->op.hasComputedKernelForDispatchKey(k);
}
SafeKernelFunction getComputedKernelForDispatchKey(DispatchKey k) const {
return operatorDef_->op.getComputedKernelForDispatchKey(k);
}
std::string dumpComputedTable() const {
return operatorDef_->op.dumpComputedTable();
}

View File

@ -315,6 +315,42 @@ const AnnotatedKernel* OperatorEntry::getKernelForDispatchKey(DispatchKey dispat
return nullptr;
}
SafeKernelFunction OperatorEntry::getComputedKernelForDispatchKey(
DispatchKey k) const {
TORCH_CHECK(
!isAliasDispatchKey(k),
"Alias keys do not have runtime kernel registrations.");
const auto dispatch_ix = getDispatchTableIndexForDispatchKey(k);
TORCH_CHECK(
dispatchTable_[dispatch_ix].isValid(),
"no kernel for ",
k,
" for ",
name_);
// Get the KernelFunction object from kernels_ to pass to SafeKernelFunction
// The KernelFunction object in dispatchTable_ is a copy of the KernelFunction
// in the AnnotatedKernel in kernels_. A KernelFunction is only truly
// deregistered when the kernel is removed from kernels_. However, the
// KernelFunction in dispatchTable_ might be removed before it is deregistered
// (when a newer kernel is registered). Therefore, here we want to return a
// SafeKernelFunction that is backed by the original KernelFunction in
// kernels_, so that we only invalidate it when the kernel is deregistered.
auto [annotatedKernel, _] =
computeDispatchTableEntryWithDebug(c10::Dispatcher::singleton(), k);
// Use findSchemaOrThrow to get OpHandle for the OperatorEntry
auto& dispatcher = c10::Dispatcher::singleton();
auto opHandle = dispatcher.findSchemaOrThrow(
name_.name.c_str(), name_.overload_name.c_str());
return SafeKernelFunction(
&annotatedKernel.kernel,
annotatedKernel.debug,
std::make_shared<OperatorHandle>(opHandle));
}
const std::vector<at::Tag>& OperatorEntry::getTags() const {
#if defined C10_MOBILE
TORCH_CHECK(false, "tags are not saved for Mobile");

View File

@ -217,6 +217,8 @@ class TORCH_API OperatorEntry final {
const KernelFunction& kernelForDispatchKey(DispatchKey k) const;
// Returns true if the "computed table" has an entry for a particular key.
bool hasComputedKernelForDispatchKey(DispatchKey k) const;
// Returns a KernelFunction corresponding to the kernel in dispatchTable
SafeKernelFunction getComputedKernelForDispatchKey(DispatchKey k) const;
// Returns all the operator tags added at the time of registration
const std::vector<at::Tag>& getTags() const;
void setReportErrorCallback_(std::unique_ptr<c10::SafePyObject> callback);

View File

@ -161,11 +161,6 @@ struct CUDACachingHostAllocatorImpl
return true;
}
bool pinned_use_background_threads() override {
return c10::CachingAllocator::AcceleratorAllocatorConfig::
pinned_use_background_threads();
}
EventPool::Event create_event_internal(DeviceIndex idx) {
// Leak the event pool to avoid shutdown issue.
static auto* event_pool = new EventPool();

View File

@ -54,7 +54,7 @@
// There were many bc-breaking changes in major version release of CCCL v3.0.0
// Please see https://nvidia.github.io/cccl/cccl/3.0_migration_guide.html
#if CUB_VERSION >= 300000
#if CUB_VERSION >= 200800
#define CUB_V3_PLUS() true
#else
#define CUB_V3_PLUS() false

View File

@ -55,6 +55,17 @@ class TORCH_API MPSDevice {
*/
bool isMacOS13Plus(MacOSVersion version) const;
/**
* Returns device name
*/
std::string getName() const;
/**
* Returns number of GPU cores.
* 1 Core = 16 ExecutionUnit x 8 ALU x 24 threads
*/
unsigned getCoreCount() const;
~MPSDevice();
private:

View File

@ -85,10 +85,36 @@ bool MPSDevice::isMacOS13Plus(MacOSVersion version) const {
}
}
std::string MPSDevice::getName() const {
@autoreleasepool {
return [[_mtl_device name] UTF8String];
}
}
unsigned MPSDevice::getCoreCount() const {
io_iterator_t iterator = 0;
io_registry_entry_t entry = 0;
int core_count = 0;
auto matchingDict = IOServiceMatching("AGXAccelerator");
TORCH_INTERNAL_ASSERT(matchingDict, "Failed to create matching dict");
const auto status = IOServiceGetMatchingServices(kIOMainPortDefault, matchingDict, &iterator);
TORCH_INTERNAL_ASSERT(status == KERN_SUCCESS);
while ((entry = IOIteratorNext(iterator)) != 0) {
auto property = IORegistryEntryCreateCFProperty(entry, CFSTR("gpu-core-count"), kCFAllocatorDefault, 0);
auto found = CFNumberGetValue(static_cast<CFNumberRef>(property), kCFNumberIntType, &core_count);
CFRelease(property);
IOObjectRelease(entry);
if (found) {
break;
}
}
IOObjectRelease(iterator);
return core_count;
}
at::Allocator* GetMPSAllocator(bool useSharedAllocator) {
return getIMPSAllocator(useSharedAllocator);
}
bool is_available() {
return MPSDevice::getInstance()->device() != nil;
}
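Taking the core-to-thread ratio quoted in the MPSDevice.h comment at face value (an assumption carried over from that comment, not verified here), the implied concurrency is a simple product:

// Illustrative arithmetic only, using the 16 EU x 8 ALU x 24 threads ratio.
unsigned cores = MPSDevice::getInstance()->getCoreCount();  // e.g. 10
unsigned threads = cores * 16 * 8 * 24;                     // 10 cores -> 30720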

View File

@ -362,20 +362,24 @@ inline bool miopen_conv_use_channels_last(const at::Tensor& input, const at::Ten
return false;
}
bool can_use_miopen_channels_last_2d = false;
// TODO: Remove PYTORCH_MIOPEN_SUGGEST_NHWC once ROCm officially supports NHWC in MIOpen
// See #64427
static std::optional<bool> PYTORCH_MIOPEN_SUGGEST_NHWC = c10::utils::check_env("PYTORCH_MIOPEN_SUGGEST_NHWC");
static bool suggest_nhwc = PYTORCH_MIOPEN_SUGGEST_NHWC && *PYTORCH_MIOPEN_SUGGEST_NHWC;
auto input_memory_format = input.suggest_memory_format();
auto weight_memory_format = weight.suggest_memory_format();
auto weight_ndim = weight.ndimension();
can_use_miopen_channels_last_2d = PYTORCH_MIOPEN_SUGGEST_NHWC && *PYTORCH_MIOPEN_SUGGEST_NHWC && (
( (input_memory_format == at::MemoryFormat::ChannelsLast) ||
(weight_memory_format == at::MemoryFormat::ChannelsLast) )
);
bool can_use_miopen_channels_last_2d = suggest_nhwc && (weight_ndim == 4) && (
(input_memory_format == at::MemoryFormat::ChannelsLast) ||
(weight_memory_format == at::MemoryFormat::ChannelsLast)
);
bool can_use_miopen_channels_last_3d = false;
bool can_use_miopen_channels_last_3d = suggest_nhwc && (weight_ndim == 5) && (
(input_memory_format == at::MemoryFormat::ChannelsLast3d) ||
(weight_memory_format == at::MemoryFormat::ChannelsLast3d)
);
return can_use_miopen_channels_last_2d || can_use_miopen_channels_last_3d;
}

View File

@ -3,6 +3,7 @@
#include <ATen/Config.h>
#include <ATen/Parallel.h>
#include <ATen/TensorOperators.h>
#include <ATen/native/CanUse32BitIndexMath.h>
#include <ATen/native/ConvolutionMM3d.h>
#include <ATen/native/ConvUtils.h>
#include <ATen/native/Pool.h>
@ -463,7 +464,7 @@ struct ConvParams {
return true;
}
// native kernel doesn't support 64-bit non-splittable case
if (cudnn_enabled && needs_64bit_indexing_no_split(input, weight)) {
if (cudnn_enabled && !(canUse32BitIndexMath(input) && canUse32BitIndexMath(weight))) {
static long cudnn_version = detail::getCUDAHooks().compiledWithCuDNN() ? detail::getCUDAHooks().versionCuDNN() : -1;
if (!(cudnn_version >= 90300 && at::native::cudnnv8_enabled_check_debug())) {
TORCH_WARN_ONCE("cuDNN cannot be used for large non-batch-splittable convolutions"
@ -1421,7 +1422,7 @@ static inline at::MemoryFormat determine_backend_memory_format(
if (detail::getCUDAHooks().compiledWithMIOpen() && miopen_conv_use_channels_last(input, weight)) {
TORCH_INTERNAL_ASSERT((k == 4 || k == 5),
"Expected 4D or 5D input for miopen memory format selection in determine_backend_memory_format()");
backend_memory_format = (k == 5) ? at::MemoryFormat::Contiguous /*at::MemoryFormat::ChannelsLast3d*/ : at::MemoryFormat::ChannelsLast;
backend_memory_format = (k == 5) ? at::MemoryFormat::ChannelsLast3d : at::MemoryFormat::ChannelsLast;
}
break;
case ConvBackend::Mkldnn:

View File

@ -520,6 +520,15 @@ BatchNormBackend _select_batch_norm_backend(
return BatchNormBackend::Cudnn;
}
// TODO: Remove PYTORCH_MIOPEN_SUGGEST_NHWC_BATCHNORM once ROCm officially supports NHWC in MIOpen
// See https://github.com/pytorch/pytorch/issues/64427.
// a non-static variable is used so the environment variable can be changed at runtime for testing
// enabled by default for ROCm >= 7.0.0 with miopen 3.5
int miopen_version = detail::getCUDAHooks().compiledWithMIOpen() ? detail::getCUDAHooks().versionMIOpen() : 0;
bool is_miopen_3_4 = miopen_version >= 30400; // ROCm 6.4
bool is_miopen_3_5 = miopen_version >= 30500; // ROCm 7.0
bool PYTORCH_MIOPEN_SUGGEST_NHWC_BATCHNORM = c10::utils::check_env("PYTORCH_MIOPEN_SUGGEST_NHWC_BATCHNORM").value_or(is_miopen_3_5);
if (
detail::getCUDAHooks().compiledWithMIOpen()
&& cudnn_enabled
@ -527,13 +536,15 @@ BatchNormBackend _select_batch_norm_backend(
&& input.dim() <= MIOPEN_DIM_MAX
&& input.dim() >= 3
&& input.scalar_type() != at::kDouble
&& (detail::getCUDAHooks().versionMIOpen() >= 30400 || input.scalar_type() != at::kBFloat16)
&& (is_miopen_3_4 || input.scalar_type() != at::kBFloat16)
&& weight.scalar_type() == at::kFloat // only FP32 weight for FP32 or FP16/BF16(mixed) input
&& weight.defined() && bias.defined()
&& ((running_mean.defined() && running_var.defined())
|| (!running_mean.defined() && !running_var.defined() && training))
&& input.suggest_memory_format() != MemoryFormat::ChannelsLast
&& input.suggest_memory_format() != MemoryFormat::ChannelsLast3d
&& (input.suggest_memory_format() == MemoryFormat::Contiguous
|| (is_miopen_3_5 && PYTORCH_MIOPEN_SUGGEST_NHWC_BATCHNORM &&
(input.suggest_memory_format() == MemoryFormat::ChannelsLast
|| input.suggest_memory_format() == MemoryFormat::ChannelsLast3d)))
) {
return BatchNormBackend::Miopen;
}

View File

@ -411,7 +411,8 @@ Tensor fbgemm_pack_gemm_matrix_fp16(const Tensor& weight) {
Tensor fbgemm_linear_fp16_weight_fp32_activation(
const Tensor& input,
const Tensor& packed_weight,
const std::optional<Tensor>& bias) {
const std::optional<Tensor>& bias,
at::Tensor& output) {
TORCH_WARN_ONCE("fbgemm_linear_fp16_weight_fp32_activation is deprecated "
"and will be removed in a future PyTorch release.")
@ -436,9 +437,11 @@ Tensor fbgemm_linear_fp16_weight_fp32_activation(
// NOLINTNEXTLINE(bugprone-narrowing-conversions,cppcoreguidelines-narrowing-conversions)
const int64_t M = size_to_dim_(input.dim() - 1, input.sizes());
const int64_t N = packed_weight_fp16.numCols();
std::vector<int64_t> output_size = input.sizes().vec();
output_size.back() = N;
Tensor output = at::empty(output_size, input.options().dtype(at::kFloat));
// Resize output Tensor
output.resize_(output_size);
// Call the fp16 gemm interface
fbgemm::cblas_gemm_compute(
@ -460,6 +463,14 @@ Tensor fbgemm_linear_fp16_weight_fp32_activation(
return output;
}
Tensor fbgemm_linear_fp16_weight_fp32_activation(
const Tensor& input,
const Tensor& packed_weight,
const std::optional<Tensor>& bias) {
at::Tensor output = at::empty({0}, input.options().dtype(at::kFloat));
return at::native::fbgemm_linear_fp16_weight_fp32_activation(input, packed_weight, bias, output);
}
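// Hedged usage sketch (not part of the change): the new out-variant lets a
// caller reuse one buffer across calls instead of allocating per call.
//   at::Tensor out = at::empty({0}, input.options().dtype(at::kFloat));
//   for (int step = 0; step < num_steps; ++step) {
//     at::native::fbgemm_linear_fp16_weight_fp32_activation(
//         input, packed_weight, /*bias=*/std::nullopt, out);
//   }  // out is resize_()'d on the first call, then reused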
Tensor fbgemm_linear_fp16_weight(
const Tensor& input,
const Tensor& packed_weight,
@ -468,6 +479,15 @@ Tensor fbgemm_linear_fp16_weight(
input, packed_weight, bias);
}
Tensor fbgemm_linear_fp16_weight(
const Tensor& input,
const Tensor& packed_weight,
const Tensor& bias,
at::Tensor& output) {
return at::native::fbgemm_linear_fp16_weight_fp32_activation(
input, packed_weight, bias, output);
}
#else // USE_FBGEMM
Tensor fbgemm_linear_int8_weight_fp32_activation(
@ -554,6 +574,21 @@ Tensor fbgemm_pack_gemm_matrix_fp16(const Tensor& weight) {
false, "This PyTorch installation was not built with FBGEMM operators");
}
Tensor fbgemm_linear_fp16_weight_fp32_activation(
const Tensor& input,
const Tensor& packed_weight,
const std::optional<Tensor>& bias,
at::Tensor& output) {
TORCH_WARN_ONCE("fbgemm_linear_fp16_weight_fp32_activation is deprecated "
"and will be removed in a future PyTorch release.")
// We make a strong guarantee that models using these operators will have the
// same numerics across different machines. Therefore, we do not provide a
// fallback path and rather fail loudly if we cannot run FBGEMM.
TORCH_CHECK(
false, "This PyTorch installation was not built with FBGEMM operators");
}
Tensor fbgemm_linear_fp16_weight_fp32_activation(
const Tensor& input,
const Tensor& packed_weight,
@ -568,6 +603,21 @@ Tensor fbgemm_linear_fp16_weight_fp32_activation(
false, "This PyTorch installation was not built with FBGEMM operators");
}
Tensor fbgemm_linear_fp16_weight(
const Tensor& input,
const Tensor& packed_weight,
const Tensor& bias,
at::Tensor& output) {
TORCH_WARN_ONCE("fbgemm_linear_fp16_weight is deprecated "
"and will be removed in a future PyTorch release.")
// We make a strong guarantee that models using these operators will have the
// same numerics across different machines. Therefore, we do not provide a
// fallback path and rather fail loudly if we cannot run FBGEMM.
TORCH_CHECK(
false, "This PyTorch installation was not built with FBGEMM operators");
}
Tensor fbgemm_linear_fp16_weight(
const Tensor& input,
const Tensor& packed_weight,

View File

@ -220,6 +220,8 @@ static void check_argmax_argmin(
const char* name,
const Tensor& self,
const std::optional<int64_t>& dim) {
TORCH_CHECK(!self.is_complex(), name, ": does not support complex input");
TORCH_CHECK(!(self.scalar_type() == kBool), name, ": does not support bool input");
if (dim.has_value()) {
auto dim_ = maybe_wrap_dim(dim.value(), self.dim());
native::zero_numel_check_dims(self, dim_, name);

View File

@ -59,6 +59,8 @@ TORCH_META_FUNC(topk)
"selected index k out of range");
int64_t sliceSize = self.dim() == 0 ? 1 : self.size(dim);
TORCH_CHECK(k >= 0 && k <= sliceSize, "k not in range for dimension");
TORCH_CHECK(!self.is_complex(), " topk does not support complex dtypes on CPU");
TORCH_CHECK(!(self.scalar_type() == kBool), "topk does not support bool dtypes on CPU");
// Build the output size, which is the dim being selected set to
// size k
@ -74,11 +76,7 @@ TORCH_META_FUNC2(sort, stable)
(const Tensor& self, std::optional<bool> stable, int64_t dim, bool descending) {
maybe_wrap_dim(dim, self.dim());
const auto self_dtype = self.dtype();
TORCH_CHECK_VALUE(
self_dtype != ScalarType::ComplexFloat &&
self_dtype != ScalarType::ComplexDouble,
"Sort currently does not support complex dtypes on CPU.");
TORCH_CHECK(!self.is_complex(), " Sort does not support complex dtypes on CPU");
// See issue: https://github.com/pytorch/pytorch/issues/65863
// Strides should be dense, so as not to allocate too much memory.

View File

@ -226,8 +226,9 @@ C10_LAUNCH_BOUNDS_1(num_threads())
__global__ void vectorized_elementwise_kernel(int N, func_t f, array_t data) {
using traits = function_traits<func_t>;
constexpr auto io_size = calc_io_size<func_t>();
#ifdef __gfx942__
constexpr int tws = (io_size >= 2) ? 8 : 16;
#if defined(USE_ROCM) && defined(__gfx942__)
// Similar check in launch_vectorized_kernel() as well. Both should be in sync.
constexpr int tws = 16;
#else
constexpr int tws = elems_per_thread<io_size>();
#endif
@ -296,7 +297,8 @@ static inline void launch_vectorized_kernel(
int vec_size = memory::can_vectorize_up_to<func_t>(data);
c10::DeviceIndex curDevice = -1;
AT_CUDA_CHECK(c10::cuda::GetDevice(&curDevice));
int tws = at::detail::getCUDAHooks().isGPUArch({"gfx942"}, curDevice) ? ((io_size >= 2) ? 8 : 16) : elems_per_thread<io_size>();
// Similar check in vectorized_elementwise_kernel() as well. Both should be in sync.
int tws = at::detail::getCUDAHooks().isGPUArch({"gfx942"}, curDevice) ? 16 : elems_per_thread<io_size>();
#else
using cpp_type = typename function_traits<func_t>::result_type;
const uint16_t max_vec_size = memory::can_vectorize_up_to<func_t>(data);

View File

@ -282,6 +282,14 @@ __device__ __forceinline__ void opportunistic_fastAtomicAdd(
}
// not coalesced, so now let's try to capture lane-matches...
if (numel > 16 /*<-heuristic threshold*/ * 64 ) {
// well shucks, unlikely to capture same-dest atomics in a wave.
// fall back to direct fastAtomic...
fastAtomicAdd(self_ptr, index, numel, value, true);
return;
}
// __activemask() -- finds the set of threads in the warp that are about to perform atomicAdd
// __match_any_sync() -- returns bit mask of the threads that have same dest addr
auto mask = __match_any_sync(__activemask(), (int64_t)dst);

View File

@ -209,6 +209,10 @@ struct ReduceConfig {
int values_per_thread() const {
return div_up(num_inputs, step_input);
}
int mock_values_per_thread(int parallelism) {
return div_up(num_inputs, step_input * parallelism);
}
};
std::ostream& operator<<(std::ostream& out, const ReduceConfig& config);
@ -1058,7 +1062,7 @@ ReduceConfig setReduceConfig(const TensorIterator& iter){
// In such case, values in each loaded vector always correspond to different outputs.
if (fastest_moving_stride == sizeof(scalar_t)) {
#ifdef USE_ROCM
if (reduction_on_fastest_striding_dimension && dim0 > 128 && iter.num_reduce_dims() == 1) {
if (reduction_on_fastest_striding_dimension && dim0 >= 128 && iter.num_reduce_dims() == 1) {
#else
if (reduction_on_fastest_striding_dimension && dim0 > 128 && iter.num_reduce_dims() == 1 && vt0 >= input_vec_size) {
#endif
@ -1166,8 +1170,17 @@ ReduceConfig setReduceConfig(const TensorIterator& iter){
else if (config.ctas_per_output < 16)
config.ctas_per_output = 1;
bool is_channel_last = iter.tensor_base(1).is_contiguous(at::MemoryFormat::ChannelsLast);
if (iter.ndim() == 3 && !reduction_on_fastest_striding_dimension && !is_channel_last)
if (iter.ndim() == 3 && !reduction_on_fastest_striding_dimension && !is_channel_last) {
config.ctas_per_output = 4;
int vpt = config.values_per_thread();
// Capping the number of values per thread to 2048 for now
// based on known use cases.
while (vpt >= 2048) {
config.ctas_per_output *= 2;
// Computes the new values per thread without side effects
vpt = config.mock_values_per_thread(config.ctas_per_output);
}
}
#endif
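// Worked example of the capping loop (made-up sizes): with num_inputs = 2^20
// and step_input = 128, values_per_thread() = 8192 >= 2048, so ctas_per_output
// doubles from 4 to 8 and mock_values_per_thread(8) = div_up(2^20, 128 * 8)
// = 1024 < 2048, ending the loop after one iteration.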
if (config.ctas_per_output > 1) {
config.input_mult[2] = config.split_input(config.ctas_per_output);

View File

@ -1304,7 +1304,7 @@ at::Tensor _convert_weight_to_int4pack_cuda(
constexpr int32_t kKTileSize = 16;
// GPT-FAST assumes nTileSize of 8 for quantized weight tensor.
// See https://github.com/pytorch-labs/gpt-fast/blob/091515ab5b06f91c0d6a3b92f9c27463f738cc9b/quantize.py#L510
// See https://github.com/meta-pytorch/gpt-fast/blob/091515ab5b06f91c0d6a3b92f9c27463f738cc9b/quantize.py#L510
// Torch dynamo also requires the torch ops to have the same output shape for each device.
// See https://github.com/pytorch/pytorch/blob/ec284d3a74ec1863685febd53687d491fd99a161/torch/_meta_registrations.py#L3263
constexpr int32_t kNTileSizeTensor = 8;

View File

@ -148,6 +148,56 @@ namespace fe = cudnn_frontend;
#define MAX_MHA_DIM 4
// Whether we will use ragged offsets in the dense (non-nested) path
// to avoid recompilation
bool use_ragged_in_dense(
const Tensor& q,
const Tensor& k,
const Tensor& v,
const Tensor& o,
bool has_bias) {
static bool flag =
c10::utils::check_env("TORCH_CUDNN_SDPA_AVOID_RECOMPILE") == true;
if (!flag) {
return flag;
}
TORCH_WARN_ONCE(
"TORCH_CUDNN_SDPA_AVOID_RECOMPILE=1 is currently experimental. "
"Please report any issues to https://github.com/pytorch/pytorch/issues.");
if (has_bias) {
TORCH_WARN_ONCE(
"TORCH_CUDNN_SDPA_AVOID_RECOMPILE=1 only works without bias."
"Consider using the is_causal hint instead of bias for causal masking."
"Falling back to regular dense case, which may trigger excessive recompilation.");
return !has_bias;
}
bool all_bshd = q.dim() == 4 && q.transpose(1, 2).is_contiguous() &&
k.dim() == 4 && k.transpose(1, 2).is_contiguous() && v.dim() == 4 &&
v.transpose(1, 2).is_contiguous() && o.dim() == 4 &&
o.transpose(1, 2).is_contiguous();
if (!all_bshd) {
TORCH_WARN_ONCE(
"TORCH_CUDNN_SDPA_AVOID_RECOMPILE=1 only works with Q, K, V, and output in BSHD memory layout,"
"e.g., Q, K, V must be allocated with torch.randn((B, S, H, D).transpose(1, 2)."
"Falling back to regualr dense case, which may trigger excessive recompilation.");
}
return all_bshd;
}
int roundup_power2(int dim) {
if (!dim) {
return 1;
}
dim--;
dim |= dim >> 1;
dim |= dim >> 2;
dim |= dim >> 4;
dim |= dim >> 8;
dim |= dim >> 16;
dim++;
return dim;
}
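// Sanity check of the bit-smearing trick above (illustrative values):
//   roundup_power2(0)  == 1    roundup_power2(5)  == 8
//   roundup_power2(64) == 64   roundup_power2(65) == 128
// Each shift-or copies the highest set bit into every lower position, so the
// decrement/increment pair lands on the next power of two.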
struct MHAParams {
c10::DeviceIndex device_id;
fe::DataType_t dataType;
@ -171,6 +221,7 @@ struct MHAParams {
// might be redundant if we take 0 dim/stride
// as signaling no-bias
bool has_attn_bias;
bool use_ragged;
};
void setMHAParams(
@ -228,6 +279,20 @@ void setMHAParams(
std::copy(k.strides().begin(), k.strides().end(), params.k_stride.begin());
std::copy(v.sizes().begin(), v.sizes().end(), params.v_dim.begin());
std::copy(v.strides().begin(), v.strides().end(), params.v_stride.begin());
bool use_ragged = use_ragged_in_dense(q, k, v, q, params.has_attn_bias);
params.use_ragged = use_ragged;
if (use_ragged) {
// ignore B - stride in BSHD (THD) avoid-recompile
params.q_stride[0] = INT_MAX;
params.k_stride[0] = INT_MAX;
params.v_stride[0] = INT_MAX;
// fix seqlen to rounded value
params.s_q = roundup_power2(params.s_q);
params.s_kv = roundup_power2(params.s_kv);
params.q_dim[2] = roundup_power2(params.q_dim[2]);
params.k_dim[2] = roundup_power2(params.k_dim[2]);
params.v_dim[2] = roundup_power2(params.v_dim[2]);
}
// uninit is OK as the struct is memset 0'd
if (params.has_attn_bias) {
std::copy(
@ -277,15 +342,29 @@ struct MHACacheKeyWrapper : ParamsWrapper<MHAParams> {
template <typename T, typename KeyType>
struct MHAGraphCache {
std::unordered_map<KeyType, T, ParamsWrapperHash<KeyType>> engine_cache;
int count = 0;
int hits = 0;
// no mutexes here as caches are now thread local for v8, can also return a
// pointer to the Execution Plan if we know it will not be invalidated by
// another thread
T* find(const KeyType& key) {
static bool flag =
c10::utils::check_env("TORCH_CUDNN_SDPA_CACHE_DEBUG") == true;
if (flag && count) {
TORCH_WARN(
"SDPA Cache Called ",
count,
" times. Hit rate: ",
100 * hits / count,
"%");
}
count++;
auto it = engine_cache.find(key);
if (it == engine_cache.end()) {
return nullptr;
}
hits++;
return &(it->second);
}
@ -402,6 +481,25 @@ auto build_graph(
.set_is_inference(return_softmaxstats == false)
.set_causal_mask(is_causal)
.set_attn_scale(attn_scale);
if (use_ragged_in_dense(q, k, v, o, attn_bias.has_value())) {
auto SEQ_LEN_Q_ =
mha_graph->tensor(fe::graph::Tensor_attributes()
.set_uid(SEQ_LEN_Q)
.set_name("Seq_q")
.set_dim({b, 1, 1, 1})
.set_stride({1, 1, 1, 1})
.set_data_type(fe::DataType_t::INT32));
auto SEQ_LEN_KV_ =
mha_graph->tensor(fe::graph::Tensor_attributes()
.set_uid(SEQ_LEN_KV)
.set_name("Seq_kv")
.set_dim({b, 1, 1, 1})
.set_stride({1, 1, 1, 1})
.set_data_type(fe::DataType_t::INT32));
scaled_dot_product_flash_attention_options.set_seq_len_q(SEQ_LEN_Q_)
.set_seq_len_kv(SEQ_LEN_KV_)
.set_padding_mask(true);
}
if (dropout_probability != 0.0f) {
auto seed = mha_graph->tensor(fe::graph::Tensor_attributes()
.set_uid(SEED)
@ -425,23 +523,11 @@ auto build_graph(
dropout_probability, seed, offset);
}
auto Q_ = mha_graph->tensor(
fe::graph::Tensor_attributes()
.set_uid(Q)
.set_name("Q")
.set_dim(q.sizes().vec())
.set_stride(fixSizeOneDimStrideSDPA(q.sizes(), q.strides().vec())));
fe::graph::Tensor_attributes().set_uid(Q).set_name("Q"));
auto K_ = mha_graph->tensor(
fe::graph::Tensor_attributes()
.set_uid(K)
.set_name("K")
.set_dim(k.sizes().vec())
.set_stride(fixSizeOneDimStrideSDPA(k.sizes(), k.strides().vec())));
fe::graph::Tensor_attributes().set_uid(K).set_name("K"));
auto V_ = mha_graph->tensor(
fe::graph::Tensor_attributes()
.set_uid(V)
.set_name("V")
.set_dim(v.sizes().vec())
.set_stride(fixSizeOneDimStrideSDPA(v.sizes(), v.strides().vec())));
fe::graph::Tensor_attributes().set_uid(V).set_name("V"));
std::optional<std::shared_ptr<fe::graph::Tensor_attributes>> bias;
if (attn_bias.has_value()) {
bias =
@ -455,12 +541,90 @@ auto build_graph(
auto [O_, Stats] =
mha_graph->sdpa(Q_, K_, V_, scaled_dot_product_flash_attention_options);
O_->set_uid(O);
O_->set_output(true).set_dim(o.sizes().vec()).set_stride(o.strides().vec());
O_->set_uid(O).set_output(true);
if (Stats) {
Stats->set_uid(LSE);
Stats->set_output(true).set_data_type(fe::DataType_t::FLOAT);
Stats->set_uid(LSE)
.set_output(true)
.set_data_type(fe::DataType_t::FLOAT)
.set_stride(softmaxstats.strides().vec());
}
if (use_ragged_in_dense(q, k, v, o, attn_bias.has_value())) {
auto RAG_Q_OFF_ =
mha_graph->tensor(fe::graph::Tensor_attributes()
.set_uid(RAG_Q_OFF)
.set_name("cum_seq_q")
.set_dim({b + 1, 1, 1, 1})
.set_stride({1, 1, 1, 1})
.set_data_type(fe::DataType_t::INT32));
auto RAG_K_OFF_ =
mha_graph->tensor(fe::graph::Tensor_attributes()
.set_uid(RAG_K_OFF)
.set_name("cum_seq_k")
.set_dim({b + 1, 1, 1, 1})
.set_stride({1, 1, 1, 1})
.set_data_type(fe::DataType_t::INT32));
auto RAG_V_OFF_ =
mha_graph->tensor(fe::graph::Tensor_attributes()
.set_uid(RAG_V_OFF)
.set_name("cum_seq_v")
.set_dim({b + 1, 1, 1, 1})
.set_stride({1, 1, 1, 1})
.set_data_type(fe::DataType_t::INT32));
auto RAG_O_OFF_ =
mha_graph->tensor(fe::graph::Tensor_attributes()
.set_uid(RAG_O_OFF)
.set_name("cum_seq_o")
.set_dim({b + 1, 1, 1, 1})
.set_stride({1, 1, 1, 1})
.set_data_type(fe::DataType_t::INT32));
auto RAG_STATS_OFF_ =
mha_graph->tensor(fe::graph::Tensor_attributes()
.set_uid(RAG_LSE_OFF)
.set_name("cum_seq_stats")
.set_dim({b + 1, 1, 1, 1})
.set_stride({1, 1, 1, 1})
.set_data_type(fe::DataType_t::INT32));
O_->set_ragged_offset(RAG_O_OFF_);
Q_->set_ragged_offset(RAG_Q_OFF_);
K_->set_ragged_offset(RAG_K_OFF_);
V_->set_ragged_offset(RAG_V_OFF_);
auto qsizevec = q.sizes().vec();
auto ksizevec = k.sizes().vec();
auto vsizevec = v.sizes().vec();
auto osizevec = o.sizes().vec();
qsizevec[2] = roundup_power2(qsizevec[2]);
ksizevec[2] = roundup_power2(ksizevec[2]);
vsizevec[2] = roundup_power2(vsizevec[2]);
osizevec[2] = roundup_power2(osizevec[2]);
// we checked for BSHD contig., set fake strides as cuDNN will complain
// if e.g., a ragged dim is smaller than a non-ragged one:
// consider HBSD tensor where H is 1
Q_->set_dim(qsizevec).set_stride(
{INT_MAX, qsizevec[3], qsizevec[1] * qsizevec[3], 1});
K_->set_dim(ksizevec).set_stride(
{INT_MAX, ksizevec[3], ksizevec[1] * ksizevec[3], 1});
V_->set_dim(vsizevec).set_stride(
{INT_MAX, vsizevec[3], vsizevec[1] * vsizevec[3], 1});
O_->set_dim(osizevec).set_stride(
{INT_MAX, osizevec[3], osizevec[1] * osizevec[3], 1});
if (Stats) {
Stats->set_ragged_offset(RAG_STATS_OFF_);
auto statssizevec = softmaxstats.sizes().vec();
statssizevec[2] = roundup_power2(statssizevec[2]);
Stats->set_dim(statssizevec);
}
} else {
Q_->set_dim(q.sizes().vec())
.set_stride(fixSizeOneDimStrideSDPA(q.sizes(), q.strides().vec()));
K_->set_dim(k.sizes().vec())
.set_stride(fixSizeOneDimStrideSDPA(k.sizes(), k.strides().vec()));
V_->set_dim(v.sizes().vec())
.set_stride(fixSizeOneDimStrideSDPA(v.sizes(), v.strides().vec()));
O_->set_dim(o.sizes().vec())
.set_stride(fixSizeOneDimStrideSDPA(o.sizes(), o.strides().vec()));
if (Stats) {
Stats->set_dim(softmaxstats.sizes().vec());
}
}
AT_CUDNN_FRONTEND_CHECK(mha_graph->validate());
@ -566,7 +730,7 @@ auto build_graph_nestedtensor(
auto q_strides = q.strides();
auto k_strides = k.strides();
auto v_strides = v.strides();
// NB: cuDNN API shape is transposed
// NB: cuDNN API shape is transposed: we pass it nominally as HTD
constexpr int strideidx0 = 1;
constexpr int strideidx1 = 0;
constexpr int strideidx2 = 2;
@ -724,21 +888,32 @@ auto build_graph_backward(
.set_name("CUDNN_SDPA_BACKWARD")
.set_causal_mask(is_causal)
.set_attn_scale(attn_scale);
auto Q_ = mha_graph->tensor(fe::graph::Tensor_attributes()
.set_uid(Q)
.set_name("Q")
.set_dim(q.sizes().vec())
.set_stride(q.strides().vec()));
auto K_ = mha_graph->tensor(fe::graph::Tensor_attributes()
.set_uid(K)
.set_name("K")
.set_dim(k.sizes().vec())
.set_stride(k.strides().vec()));
auto V_ = mha_graph->tensor(fe::graph::Tensor_attributes()
.set_uid(V)
.set_name("V")
.set_dim(v.sizes().vec())
.set_stride(v.strides().vec()));
if (use_ragged_in_dense(q, k, v, o, attn_bias.has_value())) {
auto SEQ_LEN_Q_ =
mha_graph->tensor(fe::graph::Tensor_attributes()
.set_uid(SEQ_LEN_Q)
.set_name("Seq_q")
.set_dim({b, 1, 1, 1})
.set_stride({1, 1, 1, 1})
.set_data_type(fe::DataType_t::INT32));
auto SEQ_LEN_KV_ =
mha_graph->tensor(fe::graph::Tensor_attributes()
.set_uid(SEQ_LEN_KV)
.set_name("Seq_kv")
.set_dim({b, 1, 1, 1})
.set_stride({1, 1, 1, 1})
.set_data_type(fe::DataType_t::INT32));
sdpa_backward_options.set_seq_len_q(SEQ_LEN_Q_)
.set_seq_len_kv(SEQ_LEN_KV_)
.set_padding_mask(true);
}
auto Q_ = mha_graph->tensor(
fe::graph::Tensor_attributes().set_uid(Q).set_name("Q"));
auto K_ = mha_graph->tensor(
fe::graph::Tensor_attributes().set_uid(K).set_name("K"));
auto V_ = mha_graph->tensor(
fe::graph::Tensor_attributes().set_uid(V).set_name("V"));
std::optional<std::shared_ptr<fe::graph::Tensor_attributes>> bias;
if (attn_bias.has_value()) {
bias =
@ -770,31 +945,108 @@ auto build_graph_backward(
: fe::DataType_t::INT64));
sdpa_backward_options.set_dropout(dropout_probability, seed, offset);
}
auto O_ = mha_graph->tensor(fe::graph::Tensor_attributes()
.set_uid(O)
.set_name("O")
.set_dim(o.sizes().vec())
.set_stride(o.strides().vec()));
auto O_ = mha_graph->tensor(
fe::graph::Tensor_attributes().set_uid(O).set_name("O"));
auto Stats = mha_graph->tensor(fe::graph::Tensor_attributes()
.set_uid(LSE)
.set_name("Stats")
.set_dim(softmaxstats.sizes().vec())
.set_stride(softmaxstats.strides().vec())
.set_data_type(fe::DataType_t::FLOAT));
auto Do = mha_graph->tensor(fe::graph::Tensor_attributes()
.set_uid(DO)
.set_name("DO")
.set_dim(dO.sizes().vec())
.set_stride(dO.strides().vec()));
auto Do = mha_graph->tensor(
fe::graph::Tensor_attributes().set_uid(DO).set_name("DO"));
auto [Dq, Dk, Dv] = mha_graph->sdpa_backward(
Q_, K_, V_, O_, Do, Stats, sdpa_backward_options);
Dq->set_uid(DQ);
Dq->set_output(true).set_dim(dQ.sizes().vec()).set_stride(dQ.strides().vec());
Dk->set_uid(DK);
Dk->set_output(true).set_dim(dK.sizes().vec()).set_stride(dK.strides().vec());
Dv->set_uid(DV);
Dv->set_output(true).set_dim(dV.sizes().vec()).set_stride(dV.strides().vec());
Dq->set_uid(DQ).set_output(true);
Dk->set_uid(DK).set_output(true);
Dv->set_uid(DV).set_output(true);
if (use_ragged_in_dense(q, k, v, o, attn_bias.has_value())) {
auto RAG_Q_OFF_ =
mha_graph->tensor(fe::graph::Tensor_attributes()
.set_uid(RAG_Q_OFF)
.set_name("cum_seq_q")
.set_dim({b + 1, 1, 1, 1})
.set_stride({1, 1, 1, 1})
.set_data_type(fe::DataType_t::INT32));
auto RAG_K_OFF_ =
mha_graph->tensor(fe::graph::Tensor_attributes()
.set_uid(RAG_K_OFF)
.set_name("cum_seq_k")
.set_dim({b + 1, 1, 1, 1})
.set_stride({1, 1, 1, 1})
.set_data_type(fe::DataType_t::INT32));
auto RAG_V_OFF_ =
mha_graph->tensor(fe::graph::Tensor_attributes()
.set_uid(RAG_V_OFF)
.set_name("cum_seq_v")
.set_dim({b + 1, 1, 1, 1})
.set_stride({1, 1, 1, 1})
.set_data_type(fe::DataType_t::INT32));
auto RAG_O_OFF_ =
mha_graph->tensor(fe::graph::Tensor_attributes()
.set_uid(RAG_O_OFF)
.set_name("cum_seq_o")
.set_dim({b + 1, 1, 1, 1})
.set_stride({1, 1, 1, 1})
.set_data_type(fe::DataType_t::INT32));
auto RAG_STATS_OFF_ =
mha_graph->tensor(fe::graph::Tensor_attributes()
.set_uid(RAG_LSE_OFF)
.set_name("cum_seq_stats")
.set_dim({b + 1, 1, 1, 1})
.set_stride({1, 1, 1, 1})
.set_data_type(fe::DataType_t::INT32));
O_->set_ragged_offset(RAG_O_OFF_);
Q_->set_ragged_offset(RAG_Q_OFF_);
K_->set_ragged_offset(RAG_K_OFF_);
V_->set_ragged_offset(RAG_V_OFF_);
Dq->set_ragged_offset(RAG_Q_OFF_);
Dk->set_ragged_offset(RAG_K_OFF_);
Dv->set_ragged_offset(RAG_V_OFF_);
Do->set_ragged_offset(RAG_O_OFF_);
auto qsizevec = q.sizes().vec();
auto ksizevec = k.sizes().vec();
auto vsizevec = v.sizes().vec();
auto osizevec = o.sizes().vec();
qsizevec[2] = roundup_power2(qsizevec[2]);
ksizevec[2] = roundup_power2(ksizevec[2]);
vsizevec[2] = roundup_power2(vsizevec[2]);
osizevec[2] = roundup_power2(osizevec[2]);
// see corresponding section in the forward about the hardcoding
// of strides here
Q_->set_dim(qsizevec).set_stride(
{INT_MAX, qsizevec[3], qsizevec[1] * qsizevec[3], 1});
K_->set_dim(ksizevec).set_stride(
{INT_MAX, ksizevec[3], ksizevec[1] * ksizevec[3], 1});
V_->set_dim(vsizevec).set_stride(
{INT_MAX, vsizevec[3], vsizevec[1] * vsizevec[3], 1});
O_->set_dim(osizevec).set_stride(
{INT_MAX, osizevec[3], osizevec[1] * osizevec[3], 1});
// should be identical to their non-d counterparts
Dq->set_dim(qsizevec).set_stride(
{INT_MAX, qsizevec[3], qsizevec[1] * qsizevec[3], 1});
Dk->set_dim(ksizevec).set_stride(
{INT_MAX, ksizevec[3], ksizevec[1] * ksizevec[3], 1});
Dv->set_dim(vsizevec).set_stride(
{INT_MAX, vsizevec[3], vsizevec[1] * vsizevec[3], 1});
Do->set_dim(osizevec).set_stride(
{INT_MAX, osizevec[3], osizevec[1] * osizevec[3], 1});
Stats->set_ragged_offset(RAG_STATS_OFF_);
auto statssizevec = softmaxstats.sizes().vec();
statssizevec[2] = roundup_power2(statssizevec[2]);
Stats->set_dim(statssizevec);
} else {
O_->set_dim(o.sizes().vec()).set_stride(o.strides().vec());
Q_->set_dim(q.sizes().vec()).set_stride(q.strides().vec());
K_->set_dim(k.sizes().vec()).set_stride(k.strides().vec());
V_->set_dim(v.sizes().vec()).set_stride(v.strides().vec());
Dq->set_dim(dQ.sizes().vec()).set_stride(dQ.strides().vec());
Dk->set_dim(dK.sizes().vec()).set_stride(dK.strides().vec());
Dv->set_dim(dV.sizes().vec()).set_stride(dV.strides().vec());
Do->set_dim(dO.sizes().vec()).set_stride(dO.strides().vec());
Stats->set_dim(softmaxstats.sizes().vec());
}
AT_CUDNN_FRONTEND_CHECK(mha_graph->validate());
AT_CUDNN_FRONTEND_CHECK(mha_graph->build_operation_graph(handle));
AT_CUDNN_FRONTEND_CHECK(
@ -1066,6 +1318,47 @@ void run_cudnn_SDP_fprop(
Tensor& o,
Tensor& dropoutseed,
Tensor& dropoutoffset) {
// do nothing if we got 0-element tensors
if (!q.numel() || !k.numel() || !v.numel()) {
return;
}
Tensor seqlen_q, seqlen_kv;
Tensor rag_off_q, rag_off_k, rag_off_v, rag_off_o, rag_off_lse;
if (!o.defined()) {
// q is passed to us in BHSD dim order
alloc_with_matching_layout(q, o, {b, h, s_q, d_v});
}
bool use_ragged = use_ragged_in_dense(q, k, v, o, attn_bias.has_value());
if (return_softmaxstats && !softmaxstats.defined()) {
// TODO(eqy): investigate why cuDNN doesn't like BSH layout softmaxstats
if (!use_ragged) {
softmaxstats = at::empty({b, h, s_q, 1}, q.options().dtype(kFloat));
} else {
softmaxstats =
at::empty({b, s_q, h, 1}, q.options().dtype(kFloat)).transpose(1, 2);
}
}
if (use_ragged) {
seqlen_q = at::full({b, 1, 1, 1}, s_q, q.options().dtype(kInt));
seqlen_kv = at::full({b, 1, 1, 1}, s_kv, q.options().dtype(kInt));
auto cum_seqlen_q = at::full({b + 1, 1, 1, 1}, s_q, q.options().dtype(kInt))
.cumsum(0, kInt)
.add_(-s_q);
auto cum_seqlen_kv =
at::full({b + 1, 1, 1, 1}, s_kv, q.options().dtype(kInt))
.cumsum(0, kInt)
.add_(-s_kv);
rag_off_q = cum_seqlen_q.mul(q.stride(-2));
rag_off_k = cum_seqlen_kv.mul(k.stride(-2));
rag_off_v = cum_seqlen_kv.mul(v.stride(-2));
rag_off_o = cum_seqlen_q.mul(o.stride(-2));
if (return_softmaxstats) {
rag_off_lse = cum_seqlen_q.mul(softmaxstats.stride(-2));
}
}
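// Worked example (b = 2, s_q = 4, values chosen for illustration):
//   at::full({3, 1, 1, 1}, 4)  -> [4, 4, 4]
//   .cumsum(0, kInt)           -> [4, 8, 12]
//   .add_(-4)                  -> [0, 4, 8]   cumulative sequence starts
// Multiplying by q.stride(-2) then turns sequence starts into element
// offsets into the dense BSHD buffer, which is what cuDNN's ragged
// offsets expect.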
const auto dprops = at::cuda::getCurrentDeviceProperties();
auto _dropoutseed = dropoutseed;
auto _dropoutoffset = dropoutoffset;
@ -1076,21 +1369,10 @@ void run_cudnn_SDP_fprop(
}
cudnnHandle_t handle = getCudnnHandle();
if (!o.defined()) {
// q is passed to us in BHSD dim order
alloc_with_matching_layout(q, o, {b, h, s_q, d_v});
}
if (return_softmaxstats && !softmaxstats.defined()) {
// TODO(eqy): verify that this is correct
softmaxstats = at::empty({b, h, s_q}, q.options().dtype(kFloat));
}
// do nothing if we got 0-element tensors
if (!q.numel() || !k.numel() || !v.numel()) {
return;
}
// NB: The key initialization will round up sequence length, stride data etc.
// if use_ragged_in_dense is enabled (to allow multiple sequence lengths to
// reuse the same cached value/graph)
auto key = MHACacheKeyWrapper(
b,
h,
@ -1147,6 +1429,17 @@ void run_cudnn_SDP_fprop(
variant_pack[SEED] = _dropoutseed.data_ptr();
variant_pack[OFFSET] = _dropoutoffset.data_ptr();
}
if (use_ragged_in_dense(q, k, v, o, attn_bias.has_value())) {
variant_pack[SEQ_LEN_Q] = seqlen_q.data_ptr();
variant_pack[SEQ_LEN_KV] = seqlen_kv.data_ptr();
variant_pack[RAG_Q_OFF] = rag_off_q.data_ptr();
variant_pack[RAG_K_OFF] = rag_off_k.data_ptr();
variant_pack[RAG_V_OFF] = rag_off_v.data_ptr();
variant_pack[RAG_O_OFF] = rag_off_o.data_ptr();
if (return_softmaxstats) {
variant_pack[RAG_LSE_OFF] = rag_off_lse.data_ptr();
}
}
auto workspace_size = mha_graph->get_workspace_size();
auto workspace_ptr =
c10::cuda::CUDACachingAllocator::get()->allocate(workspace_size);
@ -1278,6 +1571,9 @@ void run_cudnn_SDP_bprop(
!softmaxstats.numel()) {
return;
}
Tensor seqlen_q, seqlen_kv;
Tensor rag_off_q, rag_off_k, rag_off_v, rag_off_o, rag_off_lse;
auto dprops = at::cuda::getCurrentDeviceProperties();
auto _dropoutseed = dropoutseed;
auto _dropoutoffset = dropoutoffset;
@ -1304,10 +1600,28 @@ void run_cudnn_SDP_bprop(
"with matching strides...");
#else
const auto innermost_dO_stride = dO.strides()[dO.strides().size() - 1];
if (innermost_dO_stride != 1) {
if (innermost_dO_stride != 1 ||
use_ragged_in_dense(q, k, v, o, attn_bias.has_value())) {
permute_to_matching_layout(o, dO_);
}
#endif
if (use_ragged_in_dense(q, k, v, o, attn_bias.has_value())) {
seqlen_q = at::full({b, 1, 1, 1}, s_q, q.options().dtype(kInt));
seqlen_kv = at::full({b, 1, 1, 1}, s_kv, q.options().dtype(kInt));
auto cum_seqlen_q = at::full({b + 1, 1, 1, 1}, s_q, q.options().dtype(kInt))
.cumsum(0, kInt)
.add_(-s_q);
auto cum_seqlen_kv =
at::full({b + 1, 1, 1, 1}, s_kv, q.options().dtype(kInt))
.cumsum(0, kInt)
.add_(-s_kv);
rag_off_q = cum_seqlen_q.mul(q.stride(-2));
rag_off_k = cum_seqlen_kv.mul(k.stride(-2));
rag_off_v = cum_seqlen_kv.mul(v.stride(-2));
rag_off_o = cum_seqlen_q.mul(o.stride(-2));
rag_off_lse = cum_seqlen_q.mul(softmaxstats.stride(-2));
}
cudnnHandle_t handle = getCudnnHandle();
auto key = MHACacheKeyWrapper(
b,
@ -1372,6 +1686,16 @@ void run_cudnn_SDP_bprop(
if (attn_bias.has_value()) {
variant_pack[BIAS] = attn_bias.value().data_ptr();
}
if (use_ragged_in_dense(q, k, v, o, attn_bias.has_value())) {
variant_pack[SEQ_LEN_Q] = seqlen_q.data_ptr();
variant_pack[SEQ_LEN_KV] = seqlen_kv.data_ptr();
variant_pack[RAG_Q_OFF] = rag_off_q.data_ptr();
variant_pack[RAG_K_OFF] = rag_off_k.data_ptr();
variant_pack[RAG_V_OFF] = rag_off_v.data_ptr();
variant_pack[RAG_O_OFF] = rag_off_o.data_ptr();
variant_pack[RAG_LSE_OFF] = rag_off_lse.data_ptr();
}
auto workspace_size = mha_graph->get_workspace_size();
auto workspace_ptr =
c10::cuda::CUDACachingAllocator::get()->allocate(workspace_size);

View File

@ -762,7 +762,7 @@ Tensor miopen_convolution_forward(
auto memory_format = at::MemoryFormat::Contiguous;
if (miopen_conv_use_channels_last(*input, *weight)) {
memory_format = (weight->ndimension() == 5) ? /*at::MemoryFormat::ChannelsLast3d*/at::MemoryFormat::Contiguous : at::MemoryFormat::ChannelsLast;
memory_format = (weight->ndimension() == 5) ? at::MemoryFormat::ChannelsLast3d : at::MemoryFormat::ChannelsLast;
}
Tensor output_t = at::detail::empty_cuda(
@ -870,7 +870,7 @@ Tensor miopen_depthwise_convolution_forward(
auto memory_format = at::MemoryFormat::Contiguous;
if (miopen_conv_use_channels_last(*input, *weight)) {
memory_format = (weight->ndimension() == 5) ? /*at::MemoryFormat::ChannelsLast3d*/at::MemoryFormat::Contiguous : at::MemoryFormat::ChannelsLast;
memory_format = (weight->ndimension() == 5) ? at::MemoryFormat::ChannelsLast3d : at::MemoryFormat::ChannelsLast;
}
Tensor output_t = at::detail::empty_cuda(
@ -1070,7 +1070,7 @@ Tensor miopen_depthwise_convolution_backward_weight(
auto memory_format = at::MemoryFormat::Contiguous;
if (miopen_conv_use_channels_last(*input, *grad_output)) {
memory_format = (input->ndimension() == 5) ? /*at::MemoryFormat::ChannelsLast3d*/at::MemoryFormat::Contiguous : at::MemoryFormat::ChannelsLast;
memory_format = (input->ndimension() == 5) ? at::MemoryFormat::ChannelsLast3d : at::MemoryFormat::ChannelsLast;
}
Tensor grad_output_contig_t = grad_output->contiguous(memory_format);
@ -1123,7 +1123,7 @@ Tensor miopen_convolution_backward_weight(
auto memory_format = at::MemoryFormat::Contiguous;
if (miopen_conv_use_channels_last(*input, *grad_output)) {
memory_format = (input->ndimension() == 5) ? /*at::MemoryFormat::ChannelsLast3d*/at::MemoryFormat::Contiguous : at::MemoryFormat::ChannelsLast;
memory_format = (input->ndimension() == 5) ? at::MemoryFormat::ChannelsLast3d : at::MemoryFormat::ChannelsLast;
}
Tensor grad_output_contig_t = grad_output->contiguous(memory_format);
@ -1196,7 +1196,7 @@ std::tuple<at::Tensor,at::Tensor,at::Tensor> miopen_convolution_transpose_backwa
IntArrayRef padding, IntArrayRef output_padding, IntArrayRef stride, IntArrayRef dilation, int64_t groups,
bool benchmark, bool deterministic, std::array<bool,3> output_mask) {
Tensor grad_output = grad_output_t.contiguous();
Tensor grad_output = grad_output_t.contiguous(input.suggest_memory_format());
Tensor grad_input, grad_weight, grad_bias;
if (output_mask[0]) {
@ -1276,7 +1276,7 @@ Tensor miopen_convolution_backward_input(
auto memory_format = at::MemoryFormat::Contiguous;
if (miopen_conv_use_channels_last(*grad_output, *weight)) {
memory_format = (weight->ndimension() == 5) ? /*at::MemoryFormat::ChannelsLast3d*/at::MemoryFormat::Contiguous : at::MemoryFormat::ChannelsLast;
memory_format = (weight->ndimension() == 5) ? at::MemoryFormat::ChannelsLast3d : at::MemoryFormat::ChannelsLast;
}
Tensor grad_input_t = at::detail::empty_cuda(
@ -1383,7 +1383,7 @@ Tensor miopen_depthwise_convolution_backward_input(
auto memory_format = at::MemoryFormat::Contiguous;
if (miopen_conv_use_channels_last(*grad_output, *weight)) {
memory_format = (weight->ndimension() == 5) ? /*at::MemoryFormat::ChannelsLast3d*/at::MemoryFormat::Contiguous : at::MemoryFormat::ChannelsLast;
memory_format = (weight->ndimension() == 5) ? at::MemoryFormat::ChannelsLast3d : at::MemoryFormat::ChannelsLast;
}
Tensor grad_input_t = at::detail::empty_cuda(
@ -1446,7 +1446,7 @@ std::tuple<at::Tensor,at::Tensor,at::Tensor> miopen_depthwise_convolution_backwa
IntArrayRef padding, IntArrayRef stride, IntArrayRef dilation, int64_t groups,
bool benchmark, bool deterministic, std::array<bool,3> output_mask) {
Tensor grad_output = grad_output_t.contiguous();
Tensor grad_output = grad_output_t.contiguous(input.suggest_memory_format());
Tensor grad_input, grad_weight, grad_bias;
if (output_mask[0]) {

View File

@ -1,3 +1,4 @@
#include <ATen/Context.h>
#include <ATen/native/mkldnn/xpu/detail/oneDNN.h>
#include <ATen/native/transformers/attention.h>
#include <ATen/native/transformers/sdp_utils.h>
@ -49,7 +50,7 @@ bool check_no_grad(sdp::sdp_params const& params, bool debug) {
return !any_inputs_require_grad || !gradmode_enabled;
}
bool use_overrideable_xpu(sdp::sdp_params const& params, bool debug) {
bool can_use_overrideable_attention(sdp::sdp_params const& params, bool debug) {
constexpr auto supported_dtypes = c10::array_of<at::ScalarType>(
at::kFloat, at::kBFloat16, at::kHalf); // double is not supported
@ -73,6 +74,42 @@ bool use_overrideable_xpu(sdp::sdp_params const& params, bool debug) {
return sdp::check_tensor_dtype(params, supported_dtypes, debug);
}
bool can_use_flash_attention(sdp::sdp_params const& params, bool debug) {
// Currently, XPU falls back from flash attention to the overrideable backend
return can_use_overrideable_attention(params, debug);
}
bool can_use_cudnn_attention(sdp::sdp_params const& params, bool debug) {
if (debug) {
TORCH_WARN("XPU don't support SDPA cudnn attention backend.");
}
return false;
}
bool can_use_mem_efficient_attention(sdp::sdp_params const& params, bool debug) {
if (debug) {
TORCH_WARN("XPU doesn't support the SDPA memory-efficient attention backend.");
}
return false;
}
bool priority_order_init = false;
std::array<sdp::SDPBackend, sdp::num_backends> priority_order(
sdp::sdp_params const& params) {
if (!priority_order_init) {
priority_order_init = true;
const std::vector<int64_t> priority_order = {
static_cast<int64_t>(at::SDPBackend::overrideable),
static_cast<int64_t>(at::SDPBackend::math),
static_cast<int64_t>(at::SDPBackend::flash_attention),
static_cast<int64_t>(at::SDPBackend::efficient_attention),
static_cast<int64_t>(at::SDPBackend::cudnn_attention)};
at::globalContext().setSDPPriorityOrder(priority_order);
}
return at::globalContext().sDPPriorityOrder();
}
sdp::SDPBackend select_sdp_backend_xpu(sdp::sdp_params const& kernel_params) {
// This function defines the priority order of the different sdp backends
// 1. Flash Attention
@ -85,20 +122,16 @@ sdp::SDPBackend select_sdp_backend_xpu(sdp::sdp_params const& kernel_params) {
}
// Get ideal kernel ordering
const std::array<sdp::SDPBackend, 3> priority_order{
sdp::SDPBackend::overrideable,
sdp::SDPBackend::math,
sdp::SDPBackend::flash_attention,
};
const auto ordering = priority_order(kernel_params);
// Because TORCH_CHECK checks whether the condition is true, we negate debug so
// that the statements are printed when debug is true
bool print_debug = false;
for (auto& backend : priority_order) {
for (auto& backend : ordering) {
switch (backend) {
case sdp::SDPBackend::overrideable:
if (ctx.userEnabledOverrideableSDP() &&
use_overrideable_xpu(kernel_params, print_debug)) {
can_use_overrideable_attention(kernel_params, print_debug)) {
return sdp::SDPBackend::overrideable;
}
break;
@ -109,25 +142,43 @@ sdp::SDPBackend select_sdp_backend_xpu(sdp::sdp_params const& kernel_params) {
break;
case sdp::SDPBackend::flash_attention:
if (ctx.userEnabledFlashSDP() &&
use_overrideable_xpu(kernel_params, print_debug)) {
TORCH_WARN(
"Flash Attention is not supported on XPU, falling back to overrideable kernel.");
can_use_flash_attention(kernel_params, print_debug)) {
TORCH_WARN_ONCE(
"SDPA Flash Attention backend is not supported on XPU, falling back to OVERRIDEABLE backend.");
return sdp::SDPBackend::overrideable;
}
break;
case sdp::SDPBackend::cudnn_attention:
if (ctx.userEnabledCuDNNSDP() &&
can_use_cudnn_attention(kernel_params, print_debug)) {
TORCH_CHECK(false, "Invalid backend");
}
break;
case sdp::SDPBackend::efficient_attention:
if (ctx.userEnabledMemEfficientSDP() &&
can_use_mem_efficient_attention(kernel_params, print_debug)) {
TORCH_CHECK(false, "Invalid backend");
}
break;
default:
TORCH_CHECK(false, "Invalid backend");
}
}
// If we have gotten to this point then two things have happened:
// 1. use_overrideable_xpu did not satisfy the constraints to be ran
// 1. can_use_overrideable_attention did not satisfy the constraints to be run
// 2. The user has explicitly disabled the math kernel
// We then re-run the kernel checks with debug enabled to print out the
// reason why the kernel was not selected
print_debug = true;
TORCH_WARN("OneDNN kernel not used because:");
use_overrideable_xpu(kernel_params, print_debug);
TORCH_WARN("Flash attention kernel not used because:");
can_use_flash_attention(kernel_params, print_debug);
TORCH_WARN("Overrideable attention kernel not used because:");
can_use_overrideable_attention(kernel_params, print_debug);
TORCH_WARN("CuDNN attention kernel not used because:");
can_use_cudnn_attention(kernel_params, print_debug);
TORCH_WARN("Memory Efficient attention kernel not used because:");
can_use_mem_efficient_attention(kernel_params, print_debug);
TORCH_CHECK(!print_debug, "No available kernel. Aborting execution.")
return sdp::SDPBackend::error;
}

View File

@ -0,0 +1,25 @@
#pragma once
#include <c10/metal/common.h>
#ifdef __METAL__
enum class GridSamplerInterpolation { Bilinear, Nearest, Bicubic };
enum class GridSamplerPadding { Zeros, Border, Reflection };
#else
#include <ATen/native/GridSamplerUtils.h>
using at::native::GridSamplerInterpolation;
using at::native::GridSamplerPadding;
#endif
template <unsigned N = 5, typename idx_type_t = int32_t>
struct GridSamplerParams {
int32_t sampler_dims;
::c10::metal::array<idx_type_t, N> output_sizes;
::c10::metal::array<idx_type_t, N> output_strides;
::c10::metal::array<idx_type_t, N> input_sizes;
::c10::metal::array<idx_type_t, N> input_strides;
::c10::metal::array<idx_type_t, N> grid_sizes;
::c10::metal::array<idx_type_t, N> grid_strides;
GridSamplerInterpolation interpolation_mode;
GridSamplerPadding padding_mode;
bool align_corners;
};

View File

@ -0,0 +1,329 @@
#include <ATen/native/mps/kernels/GridSampler.h>
#include <c10/metal/utils.h>
#include <metal_array>
#include <metal_stdlib>
using namespace metal;
using namespace c10::metal;
struct GridSamplerOffsets {
int32_t output;
int32_t input;
int32_t grid;
GridSamplerOffsets() : output(0), input(0), grid(0) {}
};
// Find offsets into the tensors that this thread will operate on,
// based on the thread ID.
static GridSamplerOffsets find_grid_sampler_offsets(
constant int32_t* output_sizes,
constant int32_t* output_strides,
constant int32_t* input_sizes,
constant int32_t* input_strides,
constant int32_t* grid_sizes,
constant int32_t* grid_strides,
int32_t sampler_dims,
uint tid) {
auto dims = sampler_dims + 2;
auto output_idx = static_cast<int32_t>(tid);
GridSamplerOffsets offsets;
for (auto dim = dims - 1; dim >= 0; dim--) {
auto dim_idx = output_idx % output_sizes[dim];
output_idx = output_idx / output_sizes[dim];
// Select the output element that this thread will calculate.
// output shape:
// 2 sampler dims: (N, C, Hout, Wout)
// 3 sampler dims: (N, C, Dout, Hout, Wout)
offsets.output += output_strides[dim] * dim_idx;
// Select the batch and channel for the input.
// input shape:
// 2 sampler dims: (N, C, Hin, Win)
// 3 sampler dims: (N, C, Din, Hin, Win)
if (dim < 2) {
offsets.input += input_strides[dim] * dim_idx;
}
// Select the grid coordinates for the output element.
// grid shape:
// 2 sampler dims: (N, Hout, Wout, 2)
// 3 sampler dims: (N, Dout, Hout, Wout, 3)
if (dim == 0) {
offsets.grid += grid_strides[dim] * dim_idx;
} else if (dim >= 2) {
offsets.grid += grid_strides[dim - 1] * dim_idx;
}
}
return offsets;
}
// Mod function which gives positive output when `a` is negative
static int32_t mod(int32_t a, int32_t b) {
auto r = a % b;
return r + (r < 0 ? b : 0);
}
// Sentinel index value to indicate zero padding
constant int32_t IDX_ZERO = -1;
// Apply padding to an index into the input
static int32_t pad_input_index(
int32_t idx,
int32_t input_size,
GridSamplerPadding padding_mode,
bool align_corners) {
int32_t idx_padded = idx;
if (padding_mode == GridSamplerPadding::Zeros) {
idx_padded = (idx < 0) ? IDX_ZERO : idx_padded;
idx_padded = (idx >= input_size) ? IDX_ZERO : idx_padded;
} else if (padding_mode == GridSamplerPadding::Border) {
idx_padded = (idx < 0) ? 0 : idx_padded;
idx_padded = (idx >= input_size) ? input_size - 1 : idx_padded;
} else if (padding_mode == GridSamplerPadding::Reflection) {
auto scale_length = align_corners ? (input_size - 1) : input_size;
auto idx_mod = mod(idx, scale_length);
auto idx_mod_reverse = (input_size - 1) - idx_mod;
bool is_reverse = (abs(idx - idx_mod) / scale_length) % 2 == 1;
idx_padded = is_reverse ? idx_mod_reverse : idx_mod;
}
return idx_padded;
}
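// Hand-checked reflection examples (input_size = 5, align_corners = true,
// so scale_length = 4; values picked for illustration):
//   idx =  5: mod(5, 4)  = 1, reflection count |5 - 1| / 4 = 1 (odd) -> 4 - 1 = 3
//   idx = -1: mod(-1, 4) = 3, reflection count |-1 - 3| / 4 = 1 (odd) -> 4 - 3 = 1
// Both land on the mirror image of the out-of-range index across the border.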
template <int32_t dims, typename T>
T get_tensor_val(
constant T* input,
constant int32_t* input_strides,
int32_t indices[dims]) {
bool found_idx_zero = false;
int32_t offset = 0;
for (auto dim = 0; dim < dims; dim++) {
auto idx = indices[dim];
found_idx_zero = found_idx_zero || (idx == IDX_ZERO);
offset += (found_idx_zero ? 0 : idx) * input_strides[dim];
}
return found_idx_zero ? 0 : input[offset];
}
// This function performs 3D linear interpolation for one value. One way to
// think of how this works is to imagine a unit cube where each corner of the
// cube has one scalar value associated with it. Inside the cube, the values
// change linearly, so the gradient is constant. The values associated with each
// corner are given by the `input`, indexed at all eight different combinations
// of the `left_indices` and `right_indices`. Given a 3D coordinate anywhere
// within the cube, specified by the `scales` argument, we must calculate the
// value associated with that position.
template <typename T>
T interpolate_linear_3d(
constant T* input,
constant int32_t* input_strides,
int32_t left_indices[3],
int32_t right_indices[3],
opmath_t<T> scales[3]) {
int32_t a_idx[3] = {left_indices[0], left_indices[1], left_indices[2]};
int32_t b_idx[3] = {left_indices[0], left_indices[1], right_indices[2]};
int32_t c_idx[3] = {left_indices[0], right_indices[1], left_indices[2]};
int32_t d_idx[3] = {left_indices[0], right_indices[1], right_indices[2]};
int32_t e_idx[3] = {right_indices[0], left_indices[1], left_indices[2]};
int32_t f_idx[3] = {right_indices[0], left_indices[1], right_indices[2]};
int32_t g_idx[3] = {right_indices[0], right_indices[1], left_indices[2]};
int32_t h_idx[3] = {right_indices[0], right_indices[1], right_indices[2]};
auto a =
static_cast<opmath_t<T>>(get_tensor_val<3>(input, input_strides, a_idx));
auto b =
static_cast<opmath_t<T>>(get_tensor_val<3>(input, input_strides, b_idx));
auto c =
static_cast<opmath_t<T>>(get_tensor_val<3>(input, input_strides, c_idx));
auto d =
static_cast<opmath_t<T>>(get_tensor_val<3>(input, input_strides, d_idx));
auto e =
static_cast<opmath_t<T>>(get_tensor_val<3>(input, input_strides, e_idx));
auto f =
static_cast<opmath_t<T>>(get_tensor_val<3>(input, input_strides, f_idx));
auto g =
static_cast<opmath_t<T>>(get_tensor_val<3>(input, input_strides, g_idx));
auto h =
static_cast<opmath_t<T>>(get_tensor_val<3>(input, input_strides, h_idx));
auto scale0_right = scales[0];
auto scale1_right = scales[1];
auto scale2_right = scales[2];
auto scale0_left = 1 - scale0_right;
auto scale1_left = 1 - scale1_right;
auto scale2_left = 1 - scale2_right;
return static_cast<T>(
scale0_left * scale1_left * scale2_left * a +
scale0_left * scale1_left * scale2_right * b +
scale0_left * scale1_right * scale2_left * c +
scale0_left * scale1_right * scale2_right * d +
scale0_right * scale1_left * scale2_left * e +
scale0_right * scale1_left * scale2_right * f +
scale0_right * scale1_right * scale2_left * g +
scale0_right * scale1_right * scale2_right * h);
}
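// Equivalently (an illustrative identity, not a code change): the 8-term sum
// above is three nested 1-D lerps, with lerp(x, y, t) = x + t * (y - x):
//   interp = lerp(lerp(lerp(a, b, t2), lerp(c, d, t2), t1),
//                 lerp(lerp(e, f, t2), lerp(g, h, t2), t1), t0)
// where t0/t1/t2 are scales[0..2]; expanding the lerps recovers each
// scaleX_left/scaleX_right coefficient.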
// Calculates a single output element.
// `input` shape:
// 2 sampler dims: (Hin, Win)
// 3 sampler dims: (Din, Hin, Win)
// `coords` values:
// 2 sampler dims: (Wcoord, Hcoord)
// 3 sampler dims: (Wcoord, Hcoord, Dcoord)
template <typename T>
void grid_sampler_single_element(
device T* output,
constant T* input,
constant T* coords,
int32_t dims,
constant int32_t* input_sizes,
constant int32_t* input_strides,
GridSamplerInterpolation interpolation_mode,
GridSamplerPadding padding_mode,
bool align_corners) {
int32_t left_indices[3];
int32_t right_indices[3];
opmath_t<T> scales[3];
// For each dimension, find the pair of indices in the corresponding dimension
// of `input` which surround the grid coordinate in that dimension. We'll do
// this by mapping different coordinate spaces onto each other. There are
// basically three different coordinate spaces to keep in mind:
//
// * aligned grid space
// - `-1` refers to the leftmost input value.
// - `1` refers to the rightmost input value.
//
// * unaligned grid space
// - `-1` refers to the midpoint between the leftmost input value and
// a padding value to the left of that.
// - `1` refers to the midpoint between the rightmost input value and
// a padding value to the right of that.
//
// * input index space
// - `n` refers to the n-th value of the input.
// - `0` refers to the leftmost input value.
// - `N-1` refers to the rightmost input value.
//
// If `align_corners == False`, then the coordinates are in unaligned grid
// space, and we will map it onto aligned grid space. If `align_corners ==
// True`, then coordinates are already in aligned grid space.
//
// Then we will map unaligned grid space onto input index space, making it
// relatively simple to find the two input indices that surround the
// coordinate.
for (auto coord_dim = 0; coord_dim < dims; coord_dim++) {
auto input_dim = dims - coord_dim - 1;
auto input_size = input_sizes[input_dim];
auto coord = static_cast<opmath_t<T>>(coords[coord_dim]);
// Interpret nan as -1
coord = isnan(coord) ? -1 : coord;
if (!align_corners) {
// Map unaligned grid space to aligned grid space
auto corner_alignment_factor = static_cast<opmath_t<T>>(input_size) /
static_cast<opmath_t<T>>(input_size - 1);
coord = coord * corner_alignment_factor;
}
// Map aligned grid space to input index space
coord = (coord + 1) * (static_cast<opmath_t<T>>(input_size - 1) / 2);
// Get the input indices surrounding the coordinate, apply padding to them,
// and obtain the scaling factor between the two for interpolation.
auto left_idx = static_cast<int32_t>(floor(coord));
auto right_idx = static_cast<int32_t>(ceil(coord));
left_indices[input_dim] =
pad_input_index(left_idx, input_size, padding_mode, align_corners);
right_indices[input_dim] =
pad_input_index(right_idx, input_size, padding_mode, align_corners);
auto scale = coord - left_idx;
if (interpolation_mode == GridSamplerInterpolation::Nearest) {
// TODO: For some reason, rounding the scale to 0 or 1 and then using
// linear interpolation seems to work perfectly with zero padding mode,
// but we get flaky failures with border and reflection padding modes.
// Need to investigate and fix it.
scale = (scale <= 0.5) ? 0 : 1;
}
scales[input_dim] = scale;
}
// Now that we have the bounding indices and scale factor for each dimension
// of the input, we can interpolate.
if (dims == 3) {
*output = interpolate_linear_3d(
input, input_strides, left_indices, right_indices, scales);
}
}
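The per-dimension coordinate mapping can be reproduced in isolation. The host-side C++ sketch below (illustrative names, not part of the shader) applies the same two steps, unaligned-to-aligned rescaling followed by the aligned-to-index affine map, and shows where grid coordinate -1 lands for a size-4 dimension:

#include <cmath>
#include <cstdio>

static float grid_coord_to_index(float coord, int input_size, bool align_corners) {
  if (std::isnan(coord)) coord = -1.0f;  // same nan handling as the kernel
  if (!align_corners) {
    // unaligned grid space -> aligned grid space
    coord *= static_cast<float>(input_size) / static_cast<float>(input_size - 1);
  }
  // aligned grid space -> input index space
  return (coord + 1.0f) * (static_cast<float>(input_size - 1) / 2.0f);
}

int main() {
  // For a size-4 dimension, coord -1 lands half a pixel outside the input when
  // align_corners is false (-0.5) and exactly on index 0 when it is true.
  std::printf("%f %f\n",
              grid_coord_to_index(-1.0f, 4, /*align_corners=*/false),
              grid_coord_to_index(-1.0f, 4, /*align_corners=*/true));
  return 0;
}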
template <typename T>
kernel void grid_sampler(
device T* output [[buffer(0)]],
constant T* input [[buffer(1)]],
constant T* grid [[buffer(2)]],
constant GridSamplerParams<5>& params [[buffer(3)]],
uint tid [[thread_position_in_grid]]) {
auto output_sizes = params.output_sizes.data();
auto output_strides = params.output_strides.data();
auto input_sizes = params.input_sizes.data();
auto input_strides = params.input_strides.data();
auto grid_sizes = params.grid_sizes.data();
auto grid_strides = params.grid_strides.data();
auto sampler_dims = params.sampler_dims;
auto offsets = find_grid_sampler_offsets(
output_sizes,
output_strides,
input_sizes,
input_strides,
grid_sizes,
grid_strides,
sampler_dims,
tid);
output += offsets.output;
input += offsets.input;
auto coords = grid + offsets.grid;
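// Skip the leading batch and channel dims so only the sampler (spatial) dims are passed below.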
input_sizes += 2;
input_strides += 2;
auto interpolation_mode = params.interpolation_mode;
auto padding_mode = params.padding_mode;
auto align_corners = params.align_corners;
grid_sampler_single_element(
output,
input,
coords,
sampler_dims,
input_sizes,
input_strides,
interpolation_mode,
padding_mode,
align_corners);
}
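find_grid_sampler_offsets is defined elsewhere in the kernel library and is not shown in this diff; the sketch below only illustrates the usual size/stride arithmetic such a helper performs (an assumption about its behaviour, not a copy of it): a flat thread id is peeled into per-dimension coordinates from the innermost dimension outwards, and each coordinate is multiplied by the corresponding stride.

#include <array>
#include <cstdint>
#include <cstdio>

// Generic strided-offset computation for a flat element index.
static int32_t flat_index_to_offset(uint32_t tid,
                                    const std::array<int32_t, 5>& sizes,
                                    const std::array<int32_t, 5>& strides,
                                    int32_t ndim) {
  int32_t offset = 0;
  for (int32_t dim = ndim - 1; dim >= 0; dim--) {
    int32_t coord = tid % sizes[dim];
    tid /= sizes[dim];
    offset += coord * strides[dim];
  }
  return offset;
}

int main() {
  // A contiguous (2, 3, 4) tensor: flat index 13 is element (1, 0, 1) -> offset 13.
  std::array<int32_t, 5> sizes{2, 3, 4, 1, 1};
  std::array<int32_t, 5> strides{12, 4, 1, 0, 0};
  std::printf("%d\n", flat_index_to_offset(13, sizes, strides, 3));
  return 0;
}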
#define REGISTER_GRID_SAMPLER_OP(DTYPE) \
template [[host_name("grid_sampler_" #DTYPE)]] \
kernel void grid_sampler<DTYPE>( \
device DTYPE * output [[buffer(0)]], \
constant DTYPE * input [[buffer(1)]], \
constant DTYPE * grid [[buffer(2)]], \
constant GridSamplerParams<5> & params [[buffer(3)]], \
uint tid [[thread_position_in_grid]]);
REGISTER_GRID_SAMPLER_OP(float);
REGISTER_GRID_SAMPLER_OP(half);
REGISTER_GRID_SAMPLER_OP(bfloat);
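For reference, each REGISTER_GRID_SAMPLER_OP line instantiates one named entry point per dtype; the float case expands to the specialisation below, and its host_name is exactly what the host side later requests via "grid_sampler_" + scalarToMetalTypeString(input):

template [[host_name("grid_sampler_float")]]
kernel void grid_sampler<float>(
    device float* output [[buffer(0)]],
    constant float* input [[buffer(1)]],
    constant float* grid [[buffer(2)]],
    constant GridSamplerParams<5>& params [[buffer(3)]],
    uint tid [[thread_position_in_grid]]);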


@@ -1,7 +1,10 @@
#define TORCH_ASSERT_ONLY_METHOD_OPERATORS
#include <ATen/mps/MPSProfiler.h>
#include <ATen/native/GridSamplerUtils.h>
#include <ATen/native/Pool.h>
#include <ATen/native/mps/MPSGraphVenturaOps.h>
#include <ATen/native/mps/OperationUtils.h>
#include <ATen/native/mps/kernels/GridSampler.h>
#ifndef AT_PER_OPERATOR_HEADERS
#include <ATen/Functions.h>
@@ -9,9 +12,17 @@
#else
#include <ATen/ops/grid_sampler_2d.h>
#include <ATen/ops/grid_sampler_2d_native.h>
#include <ATen/ops/grid_sampler_3d_native.h>
#endif
namespace at::native {
#ifndef PYTORCH_JIT_COMPILE_SHADERS
static auto& lib = mps::MetalShaderLibrary::getBundledLibrary();
#else
#include <ATen/native/mps/GridSampler_metallib.h>
#endif
namespace mps {
static void grid_sampler_2d_mps_impl(Tensor& output,
const Tensor& input,
@@ -120,6 +131,96 @@ static void grid_sampler_2d_mps_impl(Tensor& output,
runMPSGraph(stream, cachedGraph->graph(), feeds, outputPlaceholder);
}
}
static void grid_sampler_template(Tensor& output,
const Tensor& input,
const Tensor& grid,
int64_t _interpolation_mode,
int64_t _padding_mode,
bool align_corners,
int32_t sampler_dims,
const std::string& op_name) {
check_grid_sampler_common(input, grid);
switch (sampler_dims) {
case 2:
check_grid_sampler_2d(input, grid);
break;
case 3:
check_grid_sampler_3d(input, grid, _interpolation_mode);
break;
default:
TORCH_INTERNAL_ASSERT(false, "Only 2D and 3D sampling are supported, but got: ", sampler_dims);
}
TORCH_CHECK(input.scalar_type() == grid.scalar_type(),
"expected input and grid to have the same type, but got ",
input.scalar_type(),
" and ",
grid.scalar_type());
auto interpolation_mode = static_cast<GridSamplerInterpolation>(_interpolation_mode);
auto padding_mode = static_cast<GridSamplerPadding>(_padding_mode);
switch (interpolation_mode) {
case GridSamplerInterpolation::Bilinear:
break;
case GridSamplerInterpolation::Nearest:
TORCH_CHECK(false, op_name, ": Unsupported Nearest interpolation");
break;
case GridSamplerInterpolation::Bicubic:
TORCH_CHECK(false, op_name, ": Unsupported Bicubic interpolation");
break;
default:
TORCH_CHECK(false, op_name, ": Unrecognised interpolation mode: ", _interpolation_mode);
}
switch (padding_mode) {
case GridSamplerPadding::Zeros:
case GridSamplerPadding::Border:
case GridSamplerPadding::Reflection:
break;
default:
TORCH_CHECK(false, op_name, ": Unrecognised Padding Mode: ", _padding_mode);
}
auto input_size = input.sizes();
auto grid_size = grid.sizes();
output.resize_({input_size[0], input_size[1], grid_size[1], grid_size[2], grid_size[3]}, MemoryFormat::Contiguous);
auto dims = input.dim();
GridSamplerParams<5> params;
params.sampler_dims = sampler_dims;
params.padding_mode = padding_mode;
params.interpolation_mode = interpolation_mode;
params.align_corners = align_corners;
for (const auto dim : c10::irange(dims)) {
params.output_sizes[dim] = safe_downcast<int32_t, int64_t>(output.size(dim));
params.output_strides[dim] = safe_downcast<int32_t, int64_t>(output.stride(dim));
params.input_sizes[dim] = safe_downcast<int32_t, int64_t>(input.size(dim));
params.input_strides[dim] = safe_downcast<int32_t, int64_t>(input.stride(dim));
params.grid_sizes[dim] = safe_downcast<int32_t, int64_t>(grid.size(dim));
params.grid_strides[dim] = safe_downcast<int32_t, int64_t>(grid.stride(dim));
}
auto num_threads = output.numel();
MPSStream* mpsStream = getCurrentMPSStream();
dispatch_sync_with_rethrow(mpsStream->queue(), ^() {
@autoreleasepool {
id<MTLComputeCommandEncoder> computeEncoder = mpsStream->commandEncoder();
auto pso = lib.getPipelineStateForFunc("grid_sampler_" + scalarToMetalTypeString(input));
getMPSProfiler().beginProfileKernel(pso, op_name, {input, grid});
[computeEncoder setComputePipelineState:pso];
mtl_setArgs(computeEncoder, output, input, grid, params);
mtl_dispatch1DJob(computeEncoder, pso, num_threads);
getMPSProfiler().endProfileKernel(pso);
}
});
}
} // namespace mps
Tensor grid_sampler_2d_mps(const Tensor& input,
@@ -135,4 +236,21 @@ Tensor grid_sampler_2d_mps(const Tensor& input,
return output;
}
Tensor grid_sampler_3d_mps(const Tensor& input,
const Tensor& grid,
int64_t interpolation_mode,
int64_t padding_mode,
bool align_corners) {
auto output = at::empty({0}, input.options(), MemoryFormat::Contiguous);
mps::grid_sampler_template(output,
input,
grid,
interpolation_mode,
padding_mode,
align_corners,
/*sampler_dims=*/3,
/*op_name=*/"grid_sampler_3d");
return output;
}
} // namespace at::native
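As a hedged end-to-end sketch of how the new 3-D path is exercised (assuming the standard libtorch C++ frontend and an MPS-enabled build; this snippet is not part of the change itself): a 5-D input plus an (N, Dout, Hout, Wout, 3) grid dispatches to grid_sampler_3d_mps, and the output shape follows the resize_ above, i.e. (N, C, Dout, Hout, Wout).

#include <torch/torch.h>
#include <iostream>

int main() {
  auto device = torch::Device(torch::kMPS);
  auto input = torch::randn({1, 2, 3, 4, 5}, device);        // (N, C, Din, Hin, Win)
  auto grid = torch::rand({1, 2, 2, 2, 3}, device) * 2 - 1;  // (N, Dout, Hout, Wout, 3) in [-1, 1]
  namespace F = torch::nn::functional;
  auto out = F::grid_sample(input, grid,
                            F::GridSampleFuncOptions()
                                .mode(torch::kBilinear)
                                .padding_mode(torch::kZeros)
                                .align_corners(false));
  std::cout << out.sizes() << std::endl;  // expected: [1, 2, 2, 2, 2]
  return 0;
}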


@@ -2931,6 +2931,7 @@
dispatch:
CPU: grid_sampler_3d_cpu
CUDA: grid_sampler_3d_cuda
MPS: grid_sampler_3d_mps
autogen: grid_sampler_3d.out
# `grid_sampler_3d_backward` takes in `output_mask` to optimize performance for
@@ -3447,8 +3448,12 @@
- func: fbgemm_linear_fp16_weight_fp32_activation(Tensor input, Tensor packed_weight, Tensor? bias) -> Tensor
- func: fbgemm_linear_fp16_weight_fp32_activation.out(Tensor input, Tensor packed_weight, Tensor? bias, Tensor(a!) output) -> Tensor
- func: fbgemm_linear_fp16_weight(Tensor input, Tensor packed_weight, Tensor bias) -> Tensor
- func: fbgemm_linear_fp16_weight.out(Tensor input, Tensor packed_weight, Tensor bias, Tensor(a!) output) -> Tensor
- func: fbgemm_pack_quantized_matrix(Tensor input) -> Tensor
- func: fbgemm_pack_quantized_matrix.KN(Tensor input, int K, int N) -> Tensor
@@ -7462,7 +7467,7 @@
- func: indices(Tensor(a) self) -> Tensor(a)
variants: method
dispatch:
SparseCPU, SparseCUDA, SparseMeta: indices_sparse
SparseCPU, SparseCUDA, SparseMPS, SparseMeta: indices_sparse
CompositeExplicitAutograd: indices_default
device_check: NoCheck
device_guard: False
@@ -7470,7 +7475,7 @@
- func: values(Tensor(a) self) -> Tensor(a)
variants: method
dispatch:
SparseCPU, SparseCUDA, SparseMeta: values_sparse
SparseCPU, SparseCUDA, SparseMPS, SparseMeta: values_sparse
SparseCsrCPU, SparseCsrCUDA, SparseCsrMeta: values_sparse_csr
NestedTensorCPU, NestedTensorHPU, NestedTensorCUDA: values_nested
CompositeExplicitAutograd: values_default


@@ -196,9 +196,17 @@ C10_LAUNCH_BOUNDS_1(num_threads())
__global__ void coalesceValuesKernel(
int64_t *segment_offsets, int64_t *value_indices,
Dtype *values, Dtype *newValues,
int64_t nnz, int64_t newNnz, int64_t stride) {
int64_t nnz, int64_t newNnz,
#ifdef USE_ROCM
int64_t nsegments,
#endif
int64_t stride) {
int seg = blockIdx.x * 4 + threadIdx.y;
#ifdef USE_ROCM
int64_t seg = (blockIdx.x * gridDim.y + blockIdx.y) * 4 + threadIdx.y;
#else
int64_t seg = blockIdx.x * 4 + threadIdx.y;
#endif
// Number of values processed by each thread (grain size)
const int SZ = 4;
@@ -207,7 +215,11 @@ __global__ void coalesceValuesKernel(
const int newValueRow = seg * stride;
const int begin = segment_offsets[seg];
const int end = (seg < newNnz - 1) ? segment_offsets[seg + 1] : nnz;
#ifdef USE_ROCM
const int startFeature = threadIdx.x + blockIdx.z * nsegments * SZ;
#else
const int startFeature = threadIdx.x + blockIdx.y * blockDim.x * SZ;
#endif
Acctype tmp[SZ];
#pragma unroll
for (int ii = 0; ii < SZ; ii++) {
@@ -250,9 +262,17 @@ C10_LAUNCH_BOUNDS_1(C10_WARP_SIZE*4)
__global__ void coalesceValuesKernel(
int64_t *segment_offsets, int64_t *value_indices,
bool *values, bool *newValues,
int64_t nnz, int64_t newNnz, int64_t stride) {
int64_t nnz, int64_t newNnz,
#ifdef USE_ROCM
int64_t nsegments,
#endif
int64_t stride) {
int seg = blockIdx.x * 4 + threadIdx.y;
#ifdef USE_ROCM
int64_t seg = (blockIdx.x * gridDim.y + blockIdx.y) * 4 + threadIdx.y;
#else
int64_t seg = blockIdx.x * 4 + threadIdx.y;
#endif
// Number of values processed by each thread (grain size)
const int SZ = 4;
@@ -261,7 +281,11 @@ __global__ void coalesceValuesKernel(
const int newValueRow = seg * stride;
const int begin = segment_offsets[seg];
const int end = (seg < newNnz - 1) ? segment_offsets[seg + 1] : nnz;
#ifdef USE_ROCM
const int startFeature = threadIdx.x + blockIdx.z * nsegments * SZ;
#else
const int startFeature = threadIdx.x + blockIdx.y * blockDim.x * SZ;
#endif
bool tmp[SZ];
#pragma unroll
for (int ii = 0; ii < SZ; ii++) {

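The ROCm-only change above spreads the segment blocks over a 2-D (blockIdx.x, blockIdx.y) launch and recovers the flat segment index as blockIdx.x * gridDim.y + blockIdx.y. A small host-side C++ sketch of that round trip (illustrative values only; the real grid dimensions are computed in the launch code shown in the next hunk):

#include <cstdint>
#include <cstdio>

int main() {
  // Mirror of the launch arithmetic: segment blocks are capped at 4096 per
  // grid.y "batch"; extra batches go into grid.x.
  const int64_t BATCHING_SEGMENT = 4096;
  int64_t nsegment_blocks = 10000;  // illustrative, e.g. ceil_div(newNnz, SZ)
  int64_t s_batch = (nsegment_blocks + BATCHING_SEGMENT - 1) / BATCHING_SEGMENT;
  int64_t grid_y = (s_batch == 1) ? nsegment_blocks : BATCHING_SEGMENT;
  // The kernel's `blockIdx.x * gridDim.y + blockIdx.y` recovers the flat block index.
  for (int64_t flat = 0; flat < nsegment_blocks; flat += 3777) {
    int64_t bx = flat / grid_y;
    int64_t by = flat % grid_y;
    std::printf("flat=%lld -> (x=%lld, y=%lld) -> %lld\n",
                (long long)flat, (long long)bx, (long long)by,
                (long long)(bx * grid_y + by));
  }
  return 0;
}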

@@ -106,8 +106,34 @@ SparseTensor _coalesce_sparse_cuda(const SparseTensor& self) {
values = values.contiguous();
int64_t stride = c10::multiply_integers(values.sizes().slice(1));
int warp_size = at::cuda::warp_size();
#ifdef USE_ROCM
const int64_t BATCHING_SEGMENT = 4096;
int64_t nsegments = ceil_div(newNnz, (int64_t) SZ);
int64_t s_batch = ceil_div(nsegments, BATCHING_SEGMENT);
dim3 grid(s_batch, (s_batch == 1) ? nsegments : BATCHING_SEGMENT, ceil_div(stride, (int64_t) warp_size*SZ));
#else
dim3 grid(ceil_div(newNnz, (int64_t) SZ), ceil_div(stride, (int64_t) warp_size*SZ));
#endif
dim3 block(warp_size, SZ);
#ifdef USE_ROCM
// The whole dispatch section must be duplicated; otherwise it does not compile on Windows
AT_DISPATCH_ALL_TYPES_AND_COMPLEX_AND4(
at::ScalarType::ComplexHalf, at::ScalarType::Half, at::ScalarType::BFloat16, at::ScalarType::Bool,
values.scalar_type(), "coalesce_sparse_cuda", [&] {
using cuda_accscalar_t = acc_type<scalar_t, /* is_cuda */ true>;
apply::coalesceValuesKernel<scalar_t, cuda_accscalar_t><<<grid, block, 0, stream>>>(
uniqueOffsets.data_ptr<int64_t>(),
origIndices.data_ptr<int64_t>(),
values.data_ptr<scalar_t>(),
newValues.data_ptr<scalar_t>(),
nnz,
newNnz,
nsegments,
stride
);
C10_CUDA_KERNEL_LAUNCH_CHECK();
});
#else
AT_DISPATCH_ALL_TYPES_AND_COMPLEX_AND4(
at::ScalarType::ComplexHalf, at::ScalarType::Half, at::ScalarType::BFloat16, at::ScalarType::Bool,
values.scalar_type(), "coalesce_sparse_cuda", [&] {
@@ -123,6 +149,7 @@ SparseTensor _coalesce_sparse_cuda(const SparseTensor& self) {
);
C10_CUDA_KERNEL_LAUNCH_CHECK();
});
#endif
}
// this grid-strided version is slower but probably more flexible
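Plugging concrete numbers into the launch configuration above shows the effect of the batching. This is a rough sketch only: SZ = 4 and a 64-wide warp are assumptions about the ROCm build, and ceil_div here is a local stand-in for the helper used in the file.

#include <cstdint>
#include <cstdio>

static int64_t ceil_div(int64_t a, int64_t b) { return (a + b - 1) / b; }

int main() {
  const int64_t SZ = 4, warp_size = 64, BATCHING_SEGMENT = 4096;
  int64_t newNnz = 1'000'000, stride = 128;

  // Old launch: all segment blocks in grid.x.
  std::printf("old grid = (%lld, %lld)\n",
              (long long)ceil_div(newNnz, SZ),
              (long long)ceil_div(stride, warp_size * SZ));

  // New ROCm launch: segment blocks batched across grid.x and grid.y.
  int64_t nsegments = ceil_div(newNnz, SZ);
  int64_t s_batch = ceil_div(nsegments, BATCHING_SEGMENT);
  std::printf("new grid = (%lld, %lld, %lld)\n",
              (long long)s_batch,
              (long long)((s_batch == 1) ? nsegments : BATCHING_SEGMENT),
              (long long)ceil_div(stride, warp_size * SZ));
  return 0;
}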

Some files were not shown because too many files have changed in this diff.