Compare commits

...

188 Commits

Author SHA1 Message Date
8c3364a810 Update on "test"
cc voznesenskym penguinwu EikanWang jgong5 Guobing-Chen XiaobingSuper zhuhaozhe blzheng wenzhe-nrv jiayisunx ipiszy chenyang78 kadeng muchulee8 amjames chauhang aakhundov coconutruben

[ghstack-poisoned]
2025-11-13 11:06:55 -08:00
0bf7353f8d Update base for Update on "test"
cc voznesenskym penguinwu EikanWang jgong5 Guobing-Chen XiaobingSuper zhuhaozhe blzheng wenzhe-nrv jiayisunx ipiszy chenyang78 kadeng muchulee8 amjames chauhang aakhundov coconutruben

[ghstack-poisoned]
2025-11-13 11:06:55 -08:00
e5eb89e111 remove allocation of new unbacked symbols during mod eval (#167123)
When executing code like torch._check(numel % newsize == 0, ...), we previously allocated a new unbacked symbol due to #113165. However, this allocation is no longer necessary and can cause issues due to inconsistent behavior when tracing torch._check multiple times.

In particular, the allocation can lead to a memo disaster where the previously allocated symbol is returned instead of a new one, causing unexpected behavior.

This PR removes the unnecessary allocation, ensuring consistent behavior and avoiding potential issues. The change is validated by the following code, which now compiles without issues:
```
import torch

def fn(x):
    i0 = x.nonzero().size(0)
    y = torch.zeros((i0, 192))
    return y.view([12, -1, 192])
with torch._dynamo.config.patch({"capture_dynamic_output_shape_ops": True}):
    torch.compile(fn, fullgraph=True)(torch.ones((12,)))
```

By removing this unnecessary allocation, we simplify the code and avoid potential issues.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167123
Approved by: https://github.com/Lucaskabela
2025-11-13 18:52:41 +00:00
b5e0e6932a Correctly populate storage offset in DTensor constructor (#167597)
The storage offset always matches the local offset because you never have a rank-dependent offset (your shard may be different, but your view into it will always be the same across all ranks!)

Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167597
Approved by: https://github.com/malfet
ghstack dependencies: #166868, #166867, #167076
2025-11-13 18:26:11 +00:00
6ea779188c [DebugMode] torch.hash_tensor option (#167486)
Adds `torch.hash_tensor` (#154149) as a tensor hashing variant; allows a tuple of hashes in log annotations for more info (e.g. `with DebugMode.log_tensor_hashes(hash_fn=["norm", "hash_tensor"]): ...`)

Also fixes some corner cases around norm hashing (preserves NaNs/infs, avoids erroring on smaller dtypes).
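
A minimal usage sketch, assuming the `DebugMode` API quoted above (the import path, module, and input are illustrative placeholders):

```python
import torch
from torch.utils._debug_mode import DebugMode  # import path is an assumption

mod = torch.nn.Linear(8, 8)
x = torch.randn(4, 8)

# Log both hash variants per op; annotations then carry a tuple of hashes.
with DebugMode() as debug_mode, DebugMode.log_tensor_hashes(hash_fn=["norm", "hash_tensor"]):
    mod(x)
print(debug_mode.debug_string())
```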

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167486
Approved by: https://github.com/xmfan
2025-11-13 17:46:09 +00:00
460c7e196c Handle only a Tensor for IntList parsing (#167606)
Fixes https://github.com/pytorch/pytorch/issues/167562

Authored with Claude Code

Signed-off-by: Edward Yang <ezyang@meta.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/167606
Approved by: https://github.com/colesbury
2025-11-13 17:39:38 +00:00
7aac506cdc Revert "[precompile] Integrate AOTI as a backend. (#167338)"
This reverts commit 273babeec3c6211f30b806797f35a6e9c47c737f.

Reverted https://github.com/pytorch/pytorch/pull/167338 on behalf of https://github.com/jeanschmidt due to seems to be breaking internal tests and builds, see D86919103 ([comment](https://github.com/pytorch/pytorch/pull/167338#issuecomment-3528950888))
2025-11-13 17:39:03 +00:00
374ee9e867 Fix missing thrust includes (#167450)
CCCL recently dropped a ton of transitive includes that blew up thrust compile times

That means we need to include what we use

Fixes build issues found in internal CI

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167450
Approved by: https://github.com/Skylion007, https://github.com/Aidyn-A
2025-11-13 17:02:43 +00:00
698aa0f3e5 [MPS] sparse_mask_projection (#166260)
Implements sparse mask projection. I'm aware that SparseMPSTensorMath needs some refactoring, which I'll do in a follow-up PR

Pull Request resolved: https://github.com/pytorch/pytorch/pull/166260
Approved by: https://github.com/malfet

Co-authored-by: Nikita Shulga <2453524+malfet@users.noreply.github.com>
2025-11-13 17:01:54 +00:00
eqy
d3ca4a3a4f [CUDA][64-bit indexing] Handle 64-bit outer dim cumsum case (#167326)
For #167086, same change more or less as #143696

Let's see if CI wants a large tensor test decorator

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167326
Approved by: https://github.com/ngimel, https://github.com/malfet

Co-authored-by: Nikita Shulga <2453524+malfet@users.noreply.github.com>
2025-11-13 17:00:00 +00:00
c940b1fbbc address DDE in matmul decomp (#166541)
Address https://github.com/pytorch/pytorch/issues/165081
Pull Request resolved: https://github.com/pytorch/pytorch/pull/166541
Approved by: https://github.com/mlazos
2025-11-13 16:41:35 +00:00
4de24bcc56 [Fix XPU typo] Fix a comment typo of FindSYCLToolkit.cmake (#165884)
The character U+ff1a ":" could be confused with the ASCII character U+003a ":", which is more common in source code.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/165884
Approved by: https://github.com/cyyever, https://github.com/guangyey, https://github.com/EikanWang
2025-11-13 12:32:48 +00:00
f2d0a472ef [xpu][feature] Add XPU support on torch.accelerator.get_memory_info (#162564)
# Motivation
Support XPU for `torch.accelerator.get_memory_info`.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/162564
Approved by: https://github.com/albanD
ghstack dependencies: #156812
2025-11-13 11:03:17 +00:00
9ae0ecec7d Introduce a new API torch.accelerator.get_memory_info (#156812)
# Motivation
`torch.cuda.mem_get_info` and `torch.xpu.mem_get_info` are widely used in other popular repos, such as
- 076313bd09/python/sglang/srt/utils.py (L378)
- 7ecc2d7f39/src/accelerate/utils/modeling.py (L822)
- 7ba34b1241/vllm/worker/worker.py (L150)

This PR introduces a unified API `torch.accelerator.get_memory_info` to cover this scenario.
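
A hedged usage sketch, assuming the new API mirrors `torch.cuda.mem_get_info` and returns a `(free, total)` tuple in bytes for the current accelerator:

```python
import torch

if torch.accelerator.is_available():
    free, total = torch.accelerator.get_memory_info()
    print(f"free: {free / 2**30:.2f} GiB / total: {total / 2**30:.2f} GiB")
```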

Pull Request resolved: https://github.com/pytorch/pytorch/pull/156812
Approved by: https://github.com/albanD
2025-11-13 11:01:39 +00:00
ce4f31f662 [OpenReg][Feat][Docs] Enrich hook implementation and add focused documentation (#165980)
## Summary
This PR enriches the implementation of `OpenRegHooks.h` and adds focused documentation for `OpenReg` hooks.

## Key Changes
- A new document: `docs/source/accelerator/hooks.md`
- New `OpenReg` hooks like `isBuilt()`, `isAvailable()` and so on...

Pull Request resolved: https://github.com/pytorch/pytorch/pull/165980
Approved by: https://github.com/fffrog

Co-authored-by: Jiawei Li <ljw1101.vip@gmail.com>
2025-11-13 08:36:18 +00:00
2c846bb614 [xpu][test] port embedding indexing and native_mha test files for Intel GPU (#165886)
This PR ports test_indexing, test_native_mha and test_embedding to Intel GPU.
We enable Intel GPU with the following methods, trying our best to keep the original code style:

- Use torch.accelerator for generic GPU handling.
- Skip cases with known issues when running on XPU.
- Use torch.nn.attention.sdpa_kernel() in place of torch.backends.cuda.sdp_kernel() for Intel GPU, as the latter is deprecated and Intel XPU does not support it (see the sketch after this list).
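
A minimal sketch of that replacement, using the device-generic context manager (shapes and backend choice are arbitrary):

```python
import torch
import torch.nn.functional as F
from torch.nn.attention import sdpa_kernel, SDPBackend

q = k = v = torch.randn(2, 4, 128, 64)

# Device-generic replacement for the deprecated torch.backends.cuda.sdp_kernel()
with sdpa_kernel([SDPBackend.MATH]):
    out = F.scaled_dot_product_attention(q, k, v)
```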

Pull Request resolved: https://github.com/pytorch/pytorch/pull/165886
Approved by: https://github.com/guangyey, https://github.com/albanD
2025-11-13 08:17:23 +00:00
8c86ccfbc9 [DebugMode] .show_stack_trace inline (#167589)
Shows inline stack traces, with `.debug_string(show_stack_trace=True)`. For bwd ops we use `.fwd_stack_trace` when available.
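
A minimal sketch of the API (the `DebugMode` import path and the module/input are placeholders):

```python
import torch
from torch.utils._debug_mode import DebugMode  # import path is an assumption

mod, inp = torch.nn.Linear(4, 4), torch.randn(4, 4)
with DebugMode() as debug_mode:
    mod(inp).sum().backward()
print(debug_mode.debug_string(show_stack_trace=True))  # per-op inline stack traces
```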

Needs some improvement for:
- backwards: not all dispatch calls run under an autograd node, so some just have generic traces (e.g. `loss.backward()`)
- compiled regions: stack trace isn't very meaningful to start (e.g. points to codegened line)

Sample for test_nn_module (fwd + bwd):
```
    # File: /data/users/pianpwk/pytorch/test/distributed/tensor/debug/test_debug_mode.py:396 in forward, code: return self.l2(self.l1(x))
    aten::t(t: f32[4, 4])
    aten::addmm(t: f32[4], t: f32[4, 4], t: f32[4, 4])
    aten::t(t: f32[4, 4])
    aten::addmm(t: f32[4], t: f32[4, 4], t: f32[4, 4])

    # File: /data/users/pianpwk/pytorch/test/distributed/tensor/debug/test_debug_mode.py:405 in forward, code: return self.xyz(self.abc(x))
    aten::t(t: f32[4, 4])
    aten::addmm(t: f32[4], t: f32[4, 4], t: f32[4, 4])

    # File: /data/users/pianpwk/pytorch/test/distributed/tensor/debug/test_debug_mode.py:429 in test_nn_module, code: out = mod(inp).sum()
    aten::sum(t: f32[4, 4])

    # File: /data/users/pianpwk/pytorch/test/distributed/tensor/debug/test_debug_mode.py:430 in test_nn_module, code: out.backward()
    aten::ones_like(t: f32[], pin_memory=False, memory_format=torch.preserve_format)

    # File: /data/users/pianpwk/pytorch/test/distributed/tensor/debug/test_debug_mode.py:429 in test_nn_module, code: out = mod(inp).sum()
    aten::expand(t: f32[], [4, 4])

    # File: /data/users/pianpwk/pytorch/test/distributed/tensor/debug/test_debug_mode.py:405 in forward, code: return self.xyz(self.abc(x))
    aten::t(t: f32[4, 4])
    aten::mm(t: f32[4, 4], t: f32[4, 4])
    aten::t(t: f32[4, 4])
    aten::mm(t: f32[4, 4], t: f32[4, 4])
    aten::t(t: f32[4, 4])
    aten::sum.dim_IntList(t: f32[4, 4], [0], True)
    aten::view(t: f32[1, 4], [4])

    # File: /data/users/pianpwk/pytorch/test/distributed/tensor/debug/test_debug_mode.py:430 in test_nn_module, code: out.backward()
    aten::detach(t: f32[4])

    # File: /data/users/pianpwk/pytorch/test/distributed/tensor/debug/test_debug_mode.py:405 in forward, code: return self.xyz(self.abc(x))
    aten::t(t: f32[4, 4])

    # File: /data/users/pianpwk/pytorch/test/distributed/tensor/debug/test_debug_mode.py:430 in test_nn_module, code: out.backward()
    aten::detach(t: f32[4, 4])

    # File: /data/users/pianpwk/pytorch/test/distributed/tensor/debug/test_debug_mode.py:396 in forward, code: return self.l2(self.l1(x))
    aten::t(t: f32[4, 4])
    aten::mm(t: f32[4, 4], t: f32[4, 4])
    aten::t(t: f32[4, 4])
    aten::mm(t: f32[4, 4], t: f32[4, 4])
    aten::t(t: f32[4, 4])
    aten::sum.dim_IntList(t: f32[4, 4], [0], True)
    aten::view(t: f32[1, 4], [4])

    # File: /data/users/pianpwk/pytorch/test/distributed/tensor/debug/test_debug_mode.py:430 in test_nn_module, code: out.backward()
    aten::detach(t: f32[4])

    # File: /data/users/pianpwk/pytorch/test/distributed/tensor/debug/test_debug_mode.py:396 in forward, code: return self.l2(self.l1(x))
    aten::t(t: f32[4, 4])

    # File: /data/users/pianpwk/pytorch/test/distributed/tensor/debug/test_debug_mode.py:430 in test_nn_module, code: out.backward()
    aten::detach(t: f32[4, 4])

    # File: /data/users/pianpwk/pytorch/test/distributed/tensor/debug/test_debug_mode.py:396 in forward, code: return self.l2(self.l1(x))
    aten::t(t: f32[4, 4])
    aten::mm(t: f32[4, 4], t: f32[4, 4])
    aten::t(t: f32[4, 4])
    aten::sum.dim_IntList(t: f32[4, 4], [0], True)
    aten::view(t: f32[1, 4], [4])

    # File: /data/users/pianpwk/pytorch/test/distributed/tensor/debug/test_debug_mode.py:430 in test_nn_module, code: out.backward()
    aten::detach(t: f32[4])

    # File: /data/users/pianpwk/pytorch/test/distributed/tensor/debug/test_debug_mode.py:396 in forward, code: return self.l2(self.l1(x))
    aten::t(t: f32[4, 4])

    # File: /data/users/pianpwk/pytorch/test/distributed/tensor/debug/test_debug_mode.py:430 in test_nn_module, code: out.backward()
    aten::detach(t: f32[4, 4])
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167589
Approved by: https://github.com/yushangdi
2025-11-13 08:15:27 +00:00
8f96e7bc1d Only remove_noop in pre_grad passes if remove_noop is not in the remove_passes_list (#167479)
Summary: Only remove_noop in pre_grad passes if remove_noop is not in the remove_passes_list

Test Plan:
Tested as part of lowering for ss_omni_exp model.

f825774360

Unit Tests were run and succeeded as well!

Differential Revision: D86694854

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167479
Approved by: https://github.com/mlazos
2025-11-13 07:27:31 +00:00
782fc3c72b [DTensor] Add CPU instruction count benchmark for dispatch (#167394)
Following the example from #149932 and the doc in
[README.md](benchmarks/dynamo/pr_time_benchmarks/README.md):

`cd benchmarks/dynamo/pr_time_benchmarks`
`PYTHONPATH=./:../../../ python benchmarks/dtensor.py a`

Currently outputs:

```
collecting instruction count for dtensor_dispatch_detach
instruction count for iteration 0 is 14919468
instruction count for iteration 1 is 136283
instruction count for iteration 2 is 133750
instruction count for iteration 3 is 133757
instruction count for iteration 4 is 133751
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167394
Approved by: https://github.com/laithsakka
2025-11-13 06:54:08 +00:00
1a67403fc6 Move MemPool out of c10 and into ATen. (#167506)
Necessary to allow CachingHostAllocator, which sits in ATen, to
allocate its memory to a memory pool.

Otherwise, we would have a circular dependency, where libtorch_cuda.so
depends upon libc10_cuda.so, but libc10_cuda.so's MemPool object
references CachingHostAllocator symbols in libtorch_cuda.so.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167506
Approved by: https://github.com/ngimel, https://github.com/malfet
2025-11-13 06:18:29 +00:00
3d801a4c01 DTensor fast path: port return_and_correct_aliasing and inplace/out checks (#167475)
This seems to generate a several-microsecond performance improvement in the detach benchmark I've been using.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167475
Approved by: https://github.com/ezyang
ghstack dependencies: #167051, #166372, #166808
2025-11-13 06:11:38 +00:00
2034ca99ae extend C++ DTensor fast path to local operator dispatch (#166808)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/166808
Approved by: https://github.com/ezyang
ghstack dependencies: #167051, #166372
2025-11-13 06:11:38 +00:00
480b4ff882 Avoid creating Python OpSchema in the DTensor dispatch fast path (#166372)
All we need to do is move a few checks around.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/166372
Approved by: https://github.com/ezyang
ghstack dependencies: #167051
2025-11-13 06:11:30 +00:00
f570e589da Add C++ fast path for DTensor.__torch_dispatch__ (#167051)
This patches the `__torch_dispatch__` machinery to detect DTensor and hand over control to a C++ fast path. Unlike #166370 and #166369 (which added a DTensor dispatch key and are intended to be replaced by this PR), this approach fundamentally *is* `__torch_dispatch__`, hopefully sidestepping all manner of thorny "does it work just like `__torch_dispatch__`?" that came up during development and review of #166370.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167051
Approved by: https://github.com/ezyang
2025-11-13 06:11:22 +00:00
f9851af59b Add Attention ops to CI (#165915)
This pull request introduces a new attention operator microbenchmark workflow to the CI system, enabling automated benchmarking and reporting for attention-related operations. The main changes include adding a new GitHub Actions workflow, to add attention benchmarks to the existing Pytorch operator microbenchmark [dashboard](https://hud.pytorch.org/benchmark/v3/dashboard/pytorch_operator_microbenchmark?renderGroupId=main&time.start=2025-10-27T00%3A00%3A00.000Z&time.end=2025-10-29T01%3A00%3A00.000Z&filters.device=cuda&filters.arch=NVIDIA+A100-SXM4-40GB&filters.deviceName=cuda%7C%7CNVIDIA+A100-SXM4-40GB&filters.operatorName=&lcommit.commit=665df0bc7288996d638fcc3da750f8cb2addd6d0&lcommit.workflow_id=18888994873&lcommit.date=2025-10-29T00%3A00%3A00Z&lcommit.branch=refs%2Ftags%2Fciflow%2Fop-benchmark%2F165915&rcommit.commit=665df0bc7288996d638fcc3da750f8cb2addd6d0&rcommit.workflow_id=18888994873&rcommit.date=2025-10-29T00%3A00%3A00Z&rcommit.branch=refs%2Ftags%2Fciflow%2Fop-benchmark%2F165915&lbranch=refs%2Ftags%2Fciflow%2Fop-benchmark%2F165915&rbranch=refs%2Ftags%2Fciflow%2Fop-benchmark%2F165915)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/165915
Approved by: https://github.com/jbschlosser
2025-11-13 05:30:04 +00:00
eeebf9f664 [dynamo] [3.14] Update broken numpy test (#167681)
This is related to upgrading numpy versions, not 3.14 specifically.  See https://github.com/numpy/numpy/pull/27148
Pull Request resolved: https://github.com/pytorch/pytorch/pull/167681
Approved by: https://github.com/williamwen42
ghstack dependencies: #167619
2025-11-13 04:27:55 +00:00
d9a50bf9a8 [dynamo] [3.14] Support np._CopyMode (#167619)
Upgrading scipy to 1.16 introduced errors related to the `copy` parameter of
`np.array`. Add special handling for `np._CopyMode.IF_NEEDED`; it is not
handled strictly correctly, but it matches the existing behavior when `copy=None`.
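
A hedged sketch of the construct being special-cased (`np._CopyMode` is a real numpy enum; treating `IF_NEEDED` like `copy=None` is the approximation noted above):

```python
import numpy as np
import torch

def f(x):
    return np.array(x, copy=np._CopyMode.IF_NEEDED) * 2

print(f(np.ones(3)))                  # plain numpy
print(torch.compile(f)(np.ones(3)))   # should also work under compile after this change
```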

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167619
Approved by: https://github.com/williamwen42
2025-11-13 04:27:55 +00:00
2984331c87 [inductor][NFC][2/X] extract do_autotuning/autotune/benchmark from AlgorithmSelectorCache.__call__ (#167489)
Summary: see https://github.com/pytorch/pytorch/pull/167487 for context

Test Plan: CI

Differential Revision: D86714833

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167489
Approved by: https://github.com/aorenste
2025-11-13 03:29:39 +00:00
4046999f1c Update on "test"
cc voznesenskym penguinwu EikanWang jgong5 Guobing-Chen XiaobingSuper zhuhaozhe blzheng wenzhe-nrv jiayisunx ipiszy chenyang78 kadeng muchulee8 amjames chauhang aakhundov coconutruben

[ghstack-poisoned]
2025-11-12 19:24:27 -08:00
9b68682df2 [ROCm] Enable several DISABLED issues (#167183)
Profiler:
Fixes #166422

Default:
Fixes #165386
Fixes #145019
Fixes #145069
Fixes #165295
Fixes #165294
Fixes #165093
Fixes #164235
Fixes #164194
Fixes #164193
Fixes #155217
Fixes #163918
Fixes #163917
Fixes #155235
Fixes #122352
Fixes #121576
Fixes #121806
Fixes #104366

Inductor:
Fixes #164337
Fixes #148523
Fixes #115002
Fixes #111066
Fixes #107774

Distributed:
Fixes #161612
Fixes #161502
Fixes #161459
Fixes #161402
Fixes #155711
Fixes #152201
Fixes #152367
Fixes #152349
Fixes #152168
Fixes #152169
Fixes #151153
Fixes #151077
Fixes #112815

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167183
Approved by: https://github.com/jeffdaily
2025-11-13 02:50:35 +00:00
8f5f89c9a0 Revert "Fix thread safety in getCurrentCUDABlasHandle and getCUDABlasLtWorkspace (#167248)"
This reverts commit 537167aa1e50a4379dca244163aaf369ed8e5161.

Reverted https://github.com/pytorch/pytorch/pull/167248 on behalf of https://github.com/pytorch-auto-revert due to Reverted automatically by pytorch's autorevert, to avoid this behaviour add the tag autorevert: disable ([comment](https://github.com/pytorch/pytorch/pull/167248#issuecomment-3524925727))
2025-11-13 02:46:35 +00:00
8919f69362 [Inductor][2/2] Decouple flags for optimization and debug symbols (#167575)
Summary:
What: Decouple flags for compile (unoptimized build) and symbols (optimized build)
Why: Reduce confusion around naming and usage

Test Plan: Unit test & CI

Differential Revision: D86683526

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167575
Approved by: https://github.com/jansel, https://github.com/hl475
2025-11-13 00:59:15 +00:00
19c867873a [opaque obj] Add attribute support (#167230)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/167230
Approved by: https://github.com/zou3519
ghstack dependencies: #163284, #163714, #163936
2025-11-13 00:35:20 +00:00
e3dadb1d36 [opaque obj] torch.compile support (#163936)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/163936
Approved by: https://github.com/zou3519
ghstack dependencies: #163284, #163714
2025-11-13 00:35:20 +00:00
c9b09a31e8 [opaque obj] Allow non-effectful scriptobjs (#163714)
Fixes functionalization so that we can run ops using ScriptObjects w/o needing effects. Previously we would run into an error when running functionalization on the TorchBindOpOverloads.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/163714
Approved by: https://github.com/zou3519
ghstack dependencies: #163284
2025-11-13 00:35:20 +00:00
35571fe94b [effects] Add register_effectful_op (#163284)
Refactored register_effectful_op to return a handler, matching how fake kernels are registered. This makes it easier to deregister effects.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/163284
Approved by: https://github.com/zou3519
2025-11-13 00:35:20 +00:00
485f2b607a ProxyTorchDispatchMode: Decomposing missing sympy.SymExpr should handle constant literals (#167585)
The previous work to decompose missing sympy.SymExpr (#164717) handled combinations of sub-nodes (like `s1*s2`) but I forgot to handle explicit literals (like `2*s2`).

Added a unit test based on the report.

Fixes T244632748

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167585
Approved by: https://github.com/bobrenjc93
2025-11-13 00:27:10 +00:00
0c5d5c7e9a [dynamo][invoke_subgraph] Do not restore side effects on invoke_subgraph (#167446)
Adds a test that checks non-proxyable outputs. Also adds a test that currently fails, to be fixed later.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167446
Approved by: https://github.com/zou3519
ghstack dependencies: #167438, #167442
2025-11-13 00:16:40 +00:00
5f98a0363a [dynamo] Make HintsWrapperHigherOrderVariable follow wrap semantics (#167442)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/167442
Approved by: https://github.com/zou3519
ghstack dependencies: #167438
2025-11-13 00:16:40 +00:00
2d739001d3 [dynamo] speculate_subgraph_with_auto_output_flattening (#167438)
Summary

This PR refactors the wrap higher-order operator infrastructure in PyTorch's Dynamo to introduce automatic output flattening for subgraph speculation. The key change is the addition of `speculate_subgraph_with_auto_output_flattening()`, which separates the output variable trackers (VTs) that Dynamo continues tracing with from the actual FX graph outputs.

Key Changes

New `speculate_subgraph_with_auto_output_flattening()` function

- Introduces a new approach for handling HOPs (Higher-Order Operators) that are just "subgraph placeholders", i.e. the HOP essentially just runs the subgraph with inputs (e.g., invoke_subgraph, activation checkpointing, autograd.Function)
- Disentangles output VTs from graph outputs: allows the subgraph to return complex Python objects (like custom user-defined objects containing tensors) while only registering tensor/symint VTs as actual FX graph outputs
- Mirrors typical Dynamo processing, where VTs can "run ahead" for continued tracing while the graph is a side data structure

Benefits

1. Handles non-proxyable outputs: supports HOPs that return custom Python objects containing tensors
2. Cleaner separation of concerns: output VTs for continued tracing vs. graph outputs for FX representation
3. More flexible: returns graph_output_vts instead of treespec, giving more control over what becomes a graph output

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167438
Approved by: https://github.com/zou3519
2025-11-13 00:16:40 +00:00
273babeec3 [precompile] Integrate AOTI as a backend. (#167338)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167338
Approved by: https://github.com/jamesjwu
2025-11-13 00:02:26 +00:00
a76dd6b7c6 [MPS] SparseMps mv op (#166708)
Should be merged after #166561
Pull Request resolved: https://github.com/pytorch/pytorch/pull/166708
Approved by: https://github.com/malfet
2025-11-12 22:44:29 +00:00
2fa18d1545 [export] Codemod more tests to use dynamo_graph_capture_for_export (#167663)
Summary:
as title.

Test Plan:
CI

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167663
Approved by: https://github.com/tugsbayasgalan
2025-11-12 22:44:18 +00:00
537167aa1e Fix thread safety in getCurrentCUDABlasHandle and getCUDABlasLtWorkspace (#167248)
Summary:
getCurrentCUDABlasHandle() and getCUDABlasLtWorkspace() use static mutable maps that are not protected from concurrent read-and-write. This leads to crashes.

This diff adds mutexes to synchronize access to the static maps.

Test Plan:
Use a GPU OD, run multi-threaded tests with TSAN:
```
buck test fbcode//mode/dev-tsan fbcode//caffe2:cuda_cublas_handle_pool_test  -- --stress-runs 100
```
https://www.internalfb.com/intern/testinfra/testrun/14355223937501118

TSAN: P2026731804

Differential Revision: D86316117

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167248
Approved by: https://github.com/Skylion007, https://github.com/malfet
2025-11-12 22:43:56 +00:00
0dac408f43 MatMul - fix folding logic (#166891)
Summary:
The folding logic in matmul decides between decomposing to BMM or folding + MM.

The current common training path for a 3D * 2D matmul: the library will always fold, since Tensor1 and Tensor2 BOTH require grad, so we fold because Tensor2 has grad. But the reasoning isn't really sound; it was done as a memory optimization, and it also happens to be generally the same or more performant.

However, in chemistry / molecular modeling it is common to directly calculate forces as the derivative of energy (i.e. dl/dX, but NOT dl/dW) in inference. This exposed a bug where only 1 of the 2 tensors requires grad, and we may choose NOT to fold, resulting in a 30% regression due to the suboptimal BMM decomposition of torch.nn.Linear (which calls into matmul).

I actually think that even in cases where we need either dl/dX or dl/dW, we should be folding when working with inputs of [B, M, N] and weights of [N, K]. It is strictly better for memory and the same or faster when you consider both forward + backward runtime, and M's that are not multiples of 8 are particularly brutally slow using BMM vs MM.

Also, the compiler out of the box could not solve this issue, which raises another concern (this was actually highlighted 2 years ago in comments, but still seems to be the case today: https://github.com/pytorch/pytorch/issues/118548#issuecomment-1919528910)
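
A hedged repro sketch of the inference/force case described above, where only the activation needs a gradient (shapes are arbitrary):

```python
import torch

x = torch.randn(8, 128, 64, requires_grad=True)  # [B, M, N]: we want dl/dX
w = torch.randn(64, 32, requires_grad=False)     # [N, K]:   no dl/dW needed

y = torch.matmul(x, w)  # 3D x 2D: fold to a single MM vs. decompose to BMM
y.sum().backward()      # only x.grad is populated
```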

Differential Revision: D86128493

Pull Request resolved: https://github.com/pytorch/pytorch/pull/166891
Approved by: https://github.com/ngimel
2025-11-12 22:18:03 +00:00
158e72427b [torch] Update caffe2/c10/cuda to build under CUDA 13 (#167534)
Summary:
Update caffe2/c10/cuda to build under CUDA 13

As of CUDA 13, cudaMemAdvise() has been updated to take a `cudaMemLocation` argument instead of an `int` device id.

This is needed for building FBGEMM_GPU under CUDA 13 (see D86372925)

Test Plan:
```
# Default build
buck build  @//mode/opt fbcode//caffe2/c10/cuda:cuda

# CUDA 13 build
buck build  @//mode/opt -c fbcode.arch=aarch64 -c fbcode.nvcc_arch=b200 -c fbcode.platform010_cuda_version=13.0  fbcode//caffe2/c10/cuda:cuda

# AMD build
buck build --flagfile fbcode//mode/dev-nosan-amd-gpu fbcode//caffe2/c10/cuda:cuda
```

Reviewed By: atalman

Differential Revision: D86578286

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167534
Approved by: https://github.com/seemethere
2025-11-12 22:12:40 +00:00
0184ef291d [inductor][NFC][1/X] extract create_no_valid_choices from AlgorithmSelectorCache.__call__ (#167487)
Summary:
What: moves `create_no_valid_choices` out of `AlgorithmSelectorCache.__call__` and into the body of `AlgorithmSelectorCache`
Why: nested function definitions make it harder to understand what `AlgorithmSelectorCache.__call__` is doing, on top of making patching/testing/etc more difficult

Test Plan: CI

Differential Revision: D86712921

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167487
Approved by: https://github.com/aorenste
2025-11-12 22:03:37 +00:00
2ca428c721 [CD] Preload libnvrtc-builtins.so (#167614)
This fixes a regression introduced by https://github.com/pytorch/pytorch/pull/167046
that causes CuDNN SDPA to fail with the actionable `cuDNN Frontend error: [cudnn_frontend] Error: No valid execution plans built.` error.

Change `cuda_libs` from dict to list, and add `test_sdpa` regression test to binary smoke tests

Fixes https://github.com/pytorch/pytorch/issues/167602
Pull Request resolved: https://github.com/pytorch/pytorch/pull/167614
Approved by: https://github.com/Aidyn-A, https://github.com/atalman, https://github.com/nWEIdia
2025-11-12 21:50:13 +00:00
1311385f9d Revert "fix failure of exporting compiled model with nested dynamic shapes (#166358)"
This reverts commit 416421c7c455e3befb0772fcc3379661a24aff71.

Reverted https://github.com/pytorch/pytorch/pull/166358 on behalf of https://github.com/jeanschmidt due to seems to be breaking internal signals, see D86790405, @angelayi may you help the author get this change landed? ([comment](https://github.com/pytorch/pytorch/pull/166358#issuecomment-3524052822))
2025-11-12 21:46:38 +00:00
5f0a5b8f87 Revert "Use stable topological sort in fuse_by_partitions (#167397)"
This reverts commit 7886070fc5cdbc9b51b7e2b6432c80ccae01c4fc.

Reverted https://github.com/pytorch/pytorch/pull/167397 on behalf of https://github.com/jeanschmidt due to seems to be breaking executorch signals internally, see D86780724 ([comment](https://github.com/pytorch/pytorch/pull/167397#issuecomment-3523992343))
2025-11-12 21:26:57 +00:00
74e85c6944 Add TORCH_BOX helper for STABLE_TORCH_LIBRARY_IMPL (#167582)
Implementation greatly adapted from @lw's https://github.com/pytorch/pytorch/pull/163505. TORCH_BOX is the StableIValue version of `make_boxed_from_unboxed_functor`.

The differences:
- uses headeronly concepts
- adds an unbox type mapping to support user kernels taking in torch::headeronly::HeaderOnlyArrayRef<T> (by calling to<std::vector<T>> in those cases)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167582
Approved by: https://github.com/swolchok
ghstack dependencies: #167386
2025-11-12 20:29:21 +00:00
a6a0379b9c [caffe2] Address -Wswitch-default warnings in headers (#167563)
Summary: Improve compatibility with projects that have -Wswitch-default errors/warnings enabled by suppressing those errors/warnings in caffe2 headers.

Test Plan: CI Pass

Differential Revision: D86785451

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167563
Approved by: https://github.com/shoumikhin
2025-11-12 19:53:43 +00:00
a95eee68d9 [user-streams] Add API for accessing current stream given a device (#167620)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/167620
Approved by: https://github.com/xuanzhang816
2025-11-12 19:32:07 +00:00
2ad70c9446 [CI] manually gen json for segfaults (#167250)
Segfaults don't generate XML, so we get no info about them in ClickHouse, the XML, or the JSON; this manually generates a report and uploads it to S3 to be ingested.

At some point, some of the existing test-report code should be changed to just use the JSON that gets uploaded in the job.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/167250
Approved by: https://github.com/huydhn
2025-11-12 19:16:12 +00:00
bc09a84150 Hide all symbols (except stable/headeronly/shim) if TORCH_STABLE_ONLY is defined (#167496)
Fixes https://github.com/pytorch/pytorch/issues/161660

This extends the `TORCH_STABLE_ONLY` stopgap added in https://github.com/pytorch/pytorch/pull/161658

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167496
Approved by: https://github.com/janeyx99
ghstack dependencies: #167495
2025-11-12 19:15:52 +00:00
760c901c9a [torch] Update caffe2/torch/csrc to build under CUDA 13 (#167401)
Summary:
Update caffe2/torch/csrc to build under CUDA 13.

As of CUDA 13, CCCL v3 is the default, and as such, nvToolsExt.h has been moved to  nvtx3/nvtx3.hpp.

This is needed for building FBGEMM_GPU under CUDA 13 (see D86372925)

Test Plan:
```
# Default build
buck build --flagfile fbcode//mode/dev-nosan fbcode//caffe2:_C_impl
buck build --flagfile fbcode//mode/dev-nosan fbcode//caffe2:_C_impl_cuda

# CUDA 13 build
buck build  @//mode/opt -c fbcode.arch=aarch64 -c fbcode.nvcc_arch=b200 -c fbcode.platform010_cuda_version=13.0  fbcode//caffe2:_C_impl
buck build  @//mode/opt -c fbcode.arch=aarch64 -c fbcode.nvcc_arch=b200 -c fbcode.platform010_cuda_version=13.0  fbcode//caffe2:_C_impl_cuda
```

Differential Revision: D86517946

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167401
Approved by: https://github.com/Skylion007
2025-11-12 18:54:45 +00:00
d105e3a198 [dynamo][DebugMode] mask python keys in dispatch_key_set guard checks (#164992)
I found that running any compiled function under DebugMode more than once will trigger recompilations, e.g. with the really simple modified test case in `test_compile`:
```
[0/1] [__recompiles] Recompiling function f in /data/users/pianpwk/ptclone/pytorch/test/distributed/tensor/debug/test_debug_mode.py:268
[0/1] [__recompiles]     triggered by the following guard failure(s):
[0/1] [__recompiles]     - 0/0:
[0/2] [__recompiles] Recompiling function f in /data/users/pianpwk/ptclone/pytorch/test/distributed/tensor/debug/test_debug_mode.py:268
[0/2] [__recompiles]     triggered by the following guard failure(s):
[0/2] [__recompiles]     - 0/1:
[0/2] [__recompiles]     - 0/0:
```

Digging deeper, the guard failures were due to TENSOR_MATCH guards failing on dispatch key set checks (seemingly on the Python dispatch key):
5a1fbf45ad/torch/csrc/dynamo/guards.cpp (L199-L203)

This seems to be due to the `ignore_compile_internals=True` flag on custom dispatch modes being on, which causes these modes to "hide" themselves during compilation, so dynamo guards on the Python dispatch key being off.

The (maybe imperfect) solution is to mask out the Python keys for guard comparisons, when `_is_in_any_mode_without_ignore_compile_internals` is False.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/164992
Approved by: https://github.com/williamwen42
2025-11-12 18:15:07 +00:00
ed79693706 [ROCm][CI] dynamo benchmark repvgg_a2 is flaky (#167660)
Update dynamo results due to flaky model
https://github.com/pytorch/pytorch/actions/runs/19283051320/job/55139788014

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167660
Approved by: https://github.com/jeffdaily

Co-authored-by: Jeff Daily <jeff.daily@amd.com>
2025-11-12 17:41:19 +00:00
10a1578408 Revert "Update Kineto Submodule (#167343)"
This reverts commit c7007e758478fcac4ed9bb0479d73d6e397e8b8a.

Reverted https://github.com/pytorch/pytorch/pull/167343 on behalf of https://github.com/jeffdaily due to causing ROCm distributed jobs to time out ([comment](https://github.com/pytorch/pytorch/pull/167343#issuecomment-3523053342))
2025-11-12 17:23:28 +00:00
bdb37536be Add Tests (#167392)
Need to wait for:
https://github.com/Dao-AILab/flash-attention/pull/1998 to land

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167392
Approved by: https://github.com/jbschlosser
ghstack dependencies: #167348
2025-11-12 17:13:36 +00:00
dd7a45abc0 [5/N] Use Python 3.10 typing (#167449)
This PR applies new Union and Optional typing syntax to some files.
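
For illustration, the kind of change this series applies (a hypothetical signature, not taken from the changed files):

```python
from __future__ import annotations

# Before: Optional[int] and Union[str, bytes]
# After (PEP 604 / Python 3.10 syntax):
def decode(data: str | bytes, limit: int | None = None) -> list[str]:
    text = data.decode() if isinstance(data, bytes) else data
    return text.split()[:limit]
```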

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167449
Approved by: https://github.com/albanD
2025-11-12 17:05:31 +00:00
7557e38e32 [ROCm] hipSPARSELt support - Update cuda_to_hip_mappings.py (#167335)
Modified cuda_to_hip_mappings.py to map cuSPARSELt headers and types to their hipSPARSELt counterparts, improving compatibility and functionality for ROCm users.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167335
Approved by: https://github.com/jeffdaily, https://github.com/Skylion007
2025-11-12 17:04:43 +00:00
c5d91d9e3e Revert "Introduce a new API torch.accelerator.get_memory_info (#156812)"
This reverts commit abf31db2cc039ee299337bad6f7f11577c877481.

Reverted https://github.com/pytorch/pytorch/pull/156812 on behalf of https://github.com/jeanschmidt due to seems to be breaking 1000s of internal build rules, see D86638790 ([comment](https://github.com/pytorch/pytorch/pull/156812#issuecomment-3522729156))
2025-11-12 16:15:06 +00:00
a32832682c Revert "[xpu][feature] Add XPU support on torch.accelerator.get_memory_info (#162564)"
This reverts commit 3cfbf98ea9d937d23f3700168b22706c957308ce.

Reverted https://github.com/pytorch/pytorch/pull/162564 on behalf of https://github.com/jeanschmidt due to seems to be breaking 1000s of internal build rules, see D86638790 ([comment](https://github.com/pytorch/pytorch/pull/156812#issuecomment-3522729156))
2025-11-12 16:15:06 +00:00
4f6aae35fd Revert "[MPS] SparseMps mv op (#166708)"
This reverts commit 406719c3daf84b4ecec98134ef3ad6ca953b86c4.

Reverted https://github.com/pytorch/pytorch/pull/166708 on behalf of https://github.com/jeanschmidt due to breaks internal signals, see D86606212 ([comment](https://github.com/pytorch/pytorch/pull/166708#issuecomment-3522720186))
2025-11-12 16:12:11 +00:00
4cff8b5e07 Add option to disable applying side effects in dynamo (#167239)
There are two motivating use cases for this change:
1) export (when we trace pytree calls into a graph, we don't want to accidentally trace the side effect bytecode which will pollute the initial state) -> We want to warn about side effects and don't want to actually apply them
2) VLLM -> They want to detect side effects and error out.

We implement this with two configs: one controls whether we apply side effects (yes by default), and the other controls the severity for detected side effects (warning for export, error for vLLM). We intentionally ignore input side effects, because they are captured in the graph and export would never trace the actual dynamo graph module when tracing the pytree calls.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167239
Approved by: https://github.com/williamwen42, https://github.com/anijain2305
2025-11-12 15:36:47 +00:00
4714eb7021 Update dynamic_inductor_timm_training.csv (#167609)
These tests were failing since they were added in
https://github.com/pytorch/pytorch/pull/165381

Evidence: scroll back in HUD, on that commit they were
failing.

I'm going to (1) set the accuracy to get CI green and (2) file an issue
for this.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167609
Approved by: https://github.com/choijon5, https://github.com/desertfire
2025-11-12 15:15:46 +00:00
780e32524c Move XPUEvent to c10 (#158336)
# Motivation
Move `XPUEvent` to `c10/xpu` to keep it consistent with `XPUStream`, which is already in `c10/xpu`. The most important thing is that we will leverage `XPUEvent` in our caching allocator instead of a raw SYCL event.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/158336
Approved by: https://github.com/EikanWang, https://github.com/albanD
2025-11-12 11:29:42 +00:00
6bf51de533 harden backed_size_oblivious and broadcast_shapes (#167232)
We probably need something similar for expand

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167232
Approved by: https://github.com/ColinPeppler
2025-11-12 09:30:24 +00:00
36c43b4f1f test
[ghstack-poisoned]
2025-11-11 23:08:24 -08:00
d33d125c94 [inductor] Remove output copy_ for pallas backend in some cases (#167516)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/167516
Approved by: https://github.com/oulgen
2025-11-12 06:18:35 +00:00
dc8bb52f77 Inductor Lite Mode (#167115)
This PR introduces inductor lite mode for opt-in optimizations and numeric correctness guarantees.

Different from the default mode, which applies all possible fusions, lite mode gives control back to the user and provides guarantees on numeric correctness. Specifically, this mode:

- **Fallback by Default**: Fall back for ALL nodes by default, unless users explicitly mark a node for inductor fusion.
- **Selective Decomposition**: Skip decomposition for all nodes except user-marked nodes.
- **Regional inductor compile**
- Skip dead code elimination
- Skip buffer reuse
- Skip reorder passes, such as reorder for peak memory, reorder for compute/comm overlap, and reorder_for_reducing_graph_partitions.
- Skip all pre-grad, joint-graph, and post-grad passes.

## Example: Flex Attention

```python
import torch
import torch.fx.traceback as fx_traceback
from torch.nn.attention.flex_attention import create_block_mask, flex_attention

def _squared(score, b, h, m, n):
    return score * score

def mask_mod(b, h, q, k):
    return q >= 0

a, b = 12, 64
block_mask = create_block_mask(mask_mod, None, None, a * b, a * b, device="cuda")

def fn(x):
    x = torch.sin(x)
    with fx_traceback.annotate({"compile_with_inductor": 0}):
        x = flex_attention(x, x, x, block_mask=block_mask, score_mod=_squared)
    return torch.cos(x)

x = torch.randn(1, 1, a * b, b, dtype=torch.bfloat16, device="cuda", requires_grad=True)

opt_fn = torch.compile(fn, mode="lite", fullgraph=True,)
opt_fn(x)
```

[code diff](https://www.internalfb.com/intern/diffing/?paste_number=2027441476)

[default mode tlp](https://manifold.edge.x2p.facebook.net/v0/read/tree/logs/.tmpYAzDxX/index.html?bucketName=tlparse_reports&apiKey=tlparse_reports-key&withPayload=1&timeoutMsec=10000) vs [lite mode tlp](https://manifold.edge.x2p.facebook.net/v0/read/tree/logs/.tmpnnuh1W/index.html?bucketName=tlparse_reports&apiKey=tlparse_reports-key&withPayload=1&timeoutMsec=10000)

## Numerics

Inductor lite mode provides bitwise equivalence with `aot_eager` backend on torchtitan llama3-8b and DeepSeek v3. https://github.com/pytorch/torchtitan/pull/2005

close: #167012

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167115
Approved by: https://github.com/ezyang
2025-11-12 05:36:26 +00:00
9997e853e9 [DebugMode] record triton kernels, run-to-run determinism checks (#167028)
Following up on https://github.com/pytorch/pytorch/pull/166348, extends DebugMode to capture inductor triton kernels at runtime, and adds an API for checking run-to-run determinism based on tensor hashes.

The workflow looks something like...
```python
# do 1st run with hashes, get logs
with DebugMode() as debug_mode, DebugMode.log_tensor_hashes():
    compiled_model(*inputs)
logs1 = debug_mode.logs

# do 2nd run
with DebugMode() as debug_mode, DebugMode.log_tensor_hashes():
    compiled_model(*inputs)
logs2 = debug_mode.logs

# returns list of calls w/ mismatched outputs
mismatches = DebugMode.check_hash_mismatches(logs1, logs2)
```

Example dump off a smaller version of @drisspg's FlexAttention fwd+bwd determinism tests [script](https://gist.github.com/pianpwk/f65cc63811d12853709dcc77d7eb69f1) (without forced reduction order):
```
cfg: TestConfig(name='Standard', B=2, Hq=32, Hkv=32, Q=2048, KV=2048, Dqk=128, Dv=128)
DETERMINISM: fwd: True, bwd_q: False, bwd_k: False, bwd_v: True

$$$ DEBUG MODE DUMP $$$  (this is what the logs look like)

    [triton] triton_tem_fused_0(arg_Q=t: bf16[2, 32, 2048, 128], arg_K=t: bf16[2, 32, 2048, 128], arg_V=t: bf16[2, 32, 2048, 128], arg_LSE=t: f32[2, 32, 2048], arg_MAX=t: f32[2, 32, 2048], arg_KV_NUM_BLKS=t: i32[2, 32, 16], arg_KV_IDX=t: i32[2, 32, 16, 16], arg_FULL_KV_NUM_BLKS=t: i32[2, 32, 16], arg_FULL_KV_IDX=t: i32[2, 32, 16, 16], out_ptr0=t: bf16[2, 32, 2048, 128])
    # post-kernel hashes: {arg_Q: 13385916.068706088, arg_K: 13389356.409105342, arg_V: 13384993.48412523, arg_LSE: 1347168.9026973695, arg_MAX: 81775.3811062593, arg_KV_NUM_BLKS: 1024.0, arg_KV_IDX: 122880.0, arg_FULL_KV_NUM_BLKS: 7680.0, arg_FULL_KV_IDX: 122880.0, out_ptr0: 924917.7918248245}

    [triton] triton_per_fused_zeros_0(in_ptr0=t: bf16[2, 32, 2048, 128], in_ptr1=t: bf16[2, 32, 2048, 128], out_ptr1=t: f32[2, 32, 2048], xnumel=131072, r0_numel=128)
    # post-kernel hashes: {in_ptr0: 924917.7918248245, in_ptr1: 13389213.797377996, out_ptr1: 81775.38106592931}

    [triton] triton_tem_fused_zeros_1(arg_Q=t: bf16[2, 32, 2048, 128], arg_K=t: bf16[2, 32, 2048, 128], arg_V=t: bf16[2, 32, 2048, 128], arg_LSE=t: f32[2, 32, 2048], arg_DELTA=t: f32[2, 32, 2048], arg_DO=t: bf16[2, 32, 2048, 128], arg_DQ=t: bf16[2, 32, 2048, 128], arg_DV=t: bf16[2, 32, 2048, 128], arg_KV_NUM_BLKS=t: i32[2, 32, 16], arg_KV_IDX=t: i32[2, 32, 16, 16], arg_Q_NUM_BLKS=t: i32[2, 32, 16], arg_Q_IDX=t: i32[2, 32, 16, 16], arg_FULL_KV_NUM_BLKS=t: i32[2, 32, 16], arg_FULL_KV_IDX=t: i32[2, 32, 16, 16], arg_FULL_Q_NUM_BLKS=t: i32[2, 32, 16], arg_FULL_Q_IDX=t: i32[2, 32, 16, 16], out_ptr0=t: bf16[2, 32, 2048, 128])
    # post-kernel hashes: {arg_Q: 13385916.068706088, arg_K: 13389356.409105342, arg_V: 13384993.48412523, arg_LSE: 1347168.9026973695, arg_DELTA: 81775.38106592931, arg_DO: 13389213.797377996, arg_DQ: 874474.8084187683, arg_DV: 727742.3138379117, arg_KV_NUM_BLKS: 1024.0, arg_KV_IDX: 122880.0, arg_Q_NUM_BLKS: 1024.0, arg_Q_IDX: 122880.0, arg_FULL_KV_NUM_BLKS: 7680.0, arg_FULL_KV_IDX: 122880.0, arg_FULL_Q_NUM_BLKS: 7680.0, arg_FULL_Q_IDX: 122880.0, out_ptr0: 700542.3431890717}

$$$ MISMATCHES $$$
mismatch: {'call_type': 'triton kernel', 'call': 'triton_tem_fused_0', 'arg_name': 'arg_MAX', 'pytree_path': None, 'hash1': 0.0, 'hash2': 81775.3811062593, 'rel_diff': 1.0, 'is_input_hash': False}  # I guess this one is misleading? not sure if I'm doing something wrong with waiting for kernel results
mismatch: {'call_type': 'triton kernel', 'call': 'triton_per_fused_zeros_0', 'arg_name': 'out_ptr1', 'pytree_path': None, 'hash1': 81775.3811062593, 'hash2': 81775.38106592931, 'rel_diff': 4.931801261646669e-10, 'is_input_hash': False}
mismatch: {'call_type': 'triton kernel', 'call': 'triton_tem_fused_zeros_1', 'arg_name': 'arg_DELTA', 'pytree_path': None, 'hash1': 81775.3811062593, 'hash2': 81775.38106592931, 'rel_diff': 4.931801261646669e-10, 'is_input_hash': False}
mismatch: {'call_type': 'triton kernel', 'call': 'triton_tem_fused_zeros_1', 'arg_name': 'arg_DQ', 'pytree_path': None, 'hash1': 874474.8097136207, 'hash2': 874474.8084187683, 'rel_diff': 1.480720012120795e-09, 'is_input_hash': False}
mismatch: {'call_type': 'triton kernel', 'call': 'triton_tem_fused_zeros_1', 'arg_name': 'out_ptr0', 'pytree_path': None, 'hash1': 700542.3488049245, 'hash2': 700542.3431890717, 'rel_diff': 8.016435812581196e-09, 'is_input_hash': False}
```

note: current hash implementation is basically tensor norm, so tensor closeness -> hash closeness. This is likely to change soon, e.g. maybe to `torch.hash_tensor` (https://github.com/pytorch/pytorch/pull/154149) by default

Sample paste diff between log dumps from 2 runs:
<img width="1665" height="445" alt="Screenshot 2025-11-05 at 11 27 24 PM" src="https://github.com/user-attachments/assets/41402e37-f50b-4a9e-a17c-bb98b5917076" />

Another case where running this for FSDP2 on Llama3-8B, helped narrow down divergence b/w aot_eager <-> inductor, to inductor's FWD RMSNorm kernels: P2027003180

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167028
Approved by: https://github.com/v0i0
2025-11-12 05:21:07 +00:00
2a09f6e02e [4/N] Use Python 3.10 typing (#167458)
This PR applies new Union and Optional typing syntax to some files.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167458
Approved by: https://github.com/albanD
2025-11-12 05:15:40 +00:00
bf380fbd4c [vision hash update] update the pinned vision hash (#167491)
This PR is auto-generated nightly by [this action](https://github.com/pytorch/pytorch/blob/main/.github/workflows/nightly.yml).
Update the pinned vision hash.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/167491
Approved by: https://github.com/pytorchbot
2025-11-12 04:40:02 +00:00
148fd9a522 [audio hash update] update the pinned audio hash (#167490)
This PR is auto-generated nightly by [this action](https://github.com/pytorch/pytorch/blob/main/.github/workflows/nightly.yml).
Update the pinned audio hash.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/167490
Approved by: https://github.com/pytorchbot
2025-11-12 04:31:31 +00:00
7bb8d8c200 [ROCm][CI] Add trunk-rocm-mi300.yml to test new MI3xx CI capacity (#167587)
This adds a workflow to run full set of UTs on default and distributed configs on ROCm MI3xx CI runners, to _eventually_ assess if the CI capacity can handle the PR-based workload for trunk.yml. The plan was to keep this workflow in unstable as we test out this new CI capacity, so it wouldn't impact PR merges. However, since upstream maintainers have indicated that, as of today, even unstable workflows will block PR merges, we are going with branch push-based triggers to at least pipeclean this workflow on the new CI capacity.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167587
Approved by: https://github.com/jeffdaily
2025-11-12 03:44:15 +00:00
5ce4a8b49f Revert "fix wrong accuracy_status when exception. (#165731)"
This reverts commit bfcdbd0a970e5ce08cecd0aa33dd389819f0ec4f.

Reverted https://github.com/pytorch/pytorch/pull/165731 on behalf of https://github.com/zou3519 due to broke inductor periodic ([comment](https://github.com/pytorch/pytorch/pull/165731#issuecomment-3519743601))
2025-11-12 03:36:27 +00:00
7dd56474f2 [annotation] Skip copying custom meta for gradient accumulation nodes; tag with is_gradient_acc=True (#167572)
The seq_nr doesn't always increment for gradient accumulation nodes, and they might be copying annotations from forward nodes.

I'm just going to skip copying the custom meta for any gradient accumulation nodes and give them a special tag e.g. node.meta["is_gradient_acc"]=True

Example repro for deepseek torchtitan (without using DTensor): https://gist.github.com/yushangdi/aae13ea382732f31d0fdfb3ffeda12c8

(side note: if you want some more hints on these gradient acc node: 1) they have torch.ops.aten.add.Tensor op, not add.default. 2) they have the highest seq_nr(s) )

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167572
Approved by: https://github.com/mlazos
2025-11-12 03:35:57 +00:00
3260bf3b19 [export] stop gap strict export v2 enable and testing. (#167236)
Summary:
Added a new flag called "use_legacy_dynamo_graph_capture", which defaults to True and is only False with the updated test_strict_export_v2.py.

In addition to this flag, we also use the legacy tracer when the following features are used:
1. dynamic shape
2. preserve module call signature
3. retracing
4. draft mode

Test Plan:
test_strict_export_v2.py

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167236
Approved by: https://github.com/tugsbayasgalan
2025-11-12 03:33:40 +00:00
05c6a06b2b Add FA4 to sdpa (#167348)
# Summary
See title ;)

## Design

Currently, once you install there is no going back in the same Python process; this need not be the case, cc @mikaylagawarecki's work on being able to grab the original impl. I'll leave that for a follow-up.

Okay, I added an open registration mechanism, but I really want the backends to be discoverable, so there's some weird typing, but we get:
<img width="523" height="197" alt="Screenshot 2025-11-07 at 3 30 32 PM" src="https://github.com/user-attachments/assets/586de943-bbed-40cf-abd1-131f747a4cf1" />

## Overheads:
<img width="799" height="735" alt="Screenshot 2025-11-07 at 2 35 04 PM" src="https://github.com/user-attachments/assets/f9217f31-3e42-4816-8fb3-29ea8b49d735" />
First call to forward -> majority of time is spent in jit for FA

First call to backward: ~3 sec. Interestingly, it doesn't appear that with_stack gets events in the backward loop @albanD is this expected?
<img width="948" height="385" alt="Screenshot 2025-11-07 at 2 35 50 PM" src="https://github.com/user-attachments/assets/a40bacd0-3fb0-4bd8-b33e-bec8fb3f36c0" />

Getting from the PyTorch op to the impl is about 43 us, which is dwarfed by other CPU overheads
<img width="1227" height="649" alt="Screenshot 2025-11-07 at 2 37 41 PM" src="https://github.com/user-attachments/assets/51da0615-facd-41e1-a6e2-fb7778079ab6" />

Just invoking the jit object from CuTe DSL is 100s of us
<img width="545" height="414" alt="Screenshot 2025-11-07 at 2 38 19 PM" src="https://github.com/user-attachments/assets/d20345a0-6c47-4dcb-892f-9ef9894a1cf5" />

### Example usage
```Py
#!/usr/bin/env python3

"""Minimal FA4 smoke test for scaled dot product attention."""

from __future__ import annotations

import sys
from jsonargparse import CLI

import torch
import torch.nn.functional as F
from torch.nn.attention import (
    install_flash_attention_impl,
    sdpa_kernel,
    SDPBackend,
)

def _map_dtype(kind: str) -> torch.dtype:
    return torch.bfloat16 if kind == "bf16" else torch.float16

# To infinity and beyond
install_flash_attention_impl("FA4")

@sdpa_kernel([SDPBackend.FLASH_ATTENTION])
def main(
    module_path: str = "flash_attn.cute.interface",
    batch: int = 4,
    seq: int = 81292,
    heads: int = 16,
    head_dim: int = 128,
    device: int = 0,
    dtype: str = "bf16"
    ) -> None:
    if not torch.cuda.is_available():
        sys.exit("CUDA is required for FA4 smoke testing")
    torch.cuda.set_device(device)
    dtype = _map_dtype(dtype)
    generator = torch.Generator(device="cuda").manual_seed(0)
    q = torch.randn(
        batch,
        heads,
        seq,
        head_dim,
        device="cuda",
        dtype=dtype,
        requires_grad=True,
        generator=generator,
    )
    k = torch.randn(
        batch,
        heads,
        seq,
        head_dim,
        device="cuda",
        dtype=dtype,
        requires_grad=True,
        generator=generator,
    )
    v = torch.randn(
        batch,
        heads,
        seq,
        head_dim,
        device="cuda",
        dtype=dtype,
        requires_grad=True,
        generator=generator,
    )
    from transformer_nuggets.utils.benchmark import profiler
    with profiler("sdpa_FA4", with_stack=False):
        for _ in range(3):
            out = F.scaled_dot_product_attention(q, k, v, attn_mask=None, dropout_p=0.0, is_causal=False)
            loss = out.real.sum()
            loss.backward()
    print("Scaled dot product attention output norm:", out.norm().item())
    print("dq norm:", q.grad.norm().item())

if __name__ == "__main__":
    CLI(main)
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167348
Approved by: https://github.com/albanD, https://github.com/malfet
2025-11-12 03:29:07 +00:00
25e9d8124c Revert "Use c7i.2xlarge for B200 build (#167078)"
This reverts commit bb3748346484d49ace45dcc92b72c12b2ba30d98.

Reverted https://github.com/pytorch/pytorch/pull/167078 on behalf of https://github.com/zxiiro due to This seems to be breaking build when compile is not using sscache. Needs more investigation. ([comment](https://github.com/pytorch/pytorch/pull/167078#issuecomment-3519717750))
2025-11-12 03:22:48 +00:00
bc882f8284 Support Python 3.14 Lazy Function Annotations on FX graph (#167573)
This was interesting to debug:
(1) Python 3.14 ships with a lazy way of retrieving annotations. The annotations field can be a callable that lazily evaluates them.
(2) On the dynamo side, `SET_FUNCTION_ATTRIBUTE` needs to handle an extra flag value (0x10).
(3) The decorator `functools.wraps`, used extensively in the codebase (e.g. `make_fx`, `functionalize`), doesn't copy the `__annotations__` attribute by default. To correctly retrieve an annotation, we need to walk the chain of `__wrapped__` objects and retrieve the attribute from the first function. Fortunately, there are functions in the stdlib to do this (sketched below).
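
A minimal sketch of point (3) using stdlib helpers (illustrative only; `resolve_annotations` is a hypothetical name, not the function used in this PR):

```python
import inspect

def resolve_annotations(fn):
    # inspect.unwrap follows fn.__wrapped__ to the innermost callable;
    # inspect.get_annotations then retrieves its annotations.
    return inspect.get_annotations(inspect.unwrap(fn))
```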

Fixes:
```
'test/functorch/test_eager_transforms.py::TestFunctionalizeCPU::test_functionalize_fx_out_op_cpu', 'test/functorch/test_eager_transforms.py::TestFunctionalizeCPU::test_functionalize_fx_transpose_simple_cpu', 'test/functorch/test_eager_transforms.py::TestFunctionalizeCPU::test_functionalize_optional_tensorlist2_cpu', 'test/functorch/test_eager_transforms.py::TestFunctionalizeCPU::test_functionalize_fx_multi_out_op_cpu', 'test/functorch/test_eager_transforms.py::TestFunctionalizeCPU::test_functionalize_fx_reapply_views_simple_cpu', 'test/functorch/test_eager_transforms.py::TestFunctionalizeCPU::test_functionalize_fx_simple_cpu', 'test/functorch/test_eager_transforms.py::TestFunctionalizeCPU::test_functionalize_nonfunctional_output_cpu', 'test/functorch/test_eager_transforms.py::TestFunctionalizeCPU::test_functionalize_opt_tensor_list_cpu', 'test/functorch/test_eager_transforms.py::TestFunctionalizeCPU::test_functionalize_optional_tensorlist1_cpu'
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167573
Approved by: https://github.com/williamwen42
2025-11-12 02:55:57 +00:00
edd365ed4a [MemTracker] Fix: Remove monkey patching DTensor dispatch (#167580)
Fixes `MemTracker` for #167051

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167580
Approved by: https://github.com/anshul-si
2025-11-12 02:51:40 +00:00
1366a2fa55 [ROCm][CI] Enable uploading of artifacts from docker-builds.yml (#167379)
Needed for https://github.com/pytorch/pytorch/pull/167554 so that we can enable docker caching for ROCm MI3xx runners

Replaces https://github.com/pytorch/pytorch/pull/167378 (by filing from pytorch/pytorch branch so OIDC login doesn't fail due to forked repo)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167379
Approved by: https://github.com/jeffdaily
2025-11-12 02:24:05 +00:00
91f0c5a9da [simplefsdp] add manual bucketing pass (#165487)
As titled, this PR adds a manual bucketing pass to SimpleFSDP. Users will need to pass the FQNs they want to bucket together via `module_bucket_plans`. Then, `_manual_bucket_collectives` gets the nodes of the subgraphs corresponding to each `bucket_module` and buckets the bucketable (FSDP-style) AG/RS together. `_manual_reorder_graph` reorders them for overlapping.

For detailed performance, see this torchtitan PR: https://github.com/pytorch/torchtitan/pull/1881.

There are a few todo items listed in the torchtitan PR. Let's start with this PR, which implements FSDP+TP+llama3 manual bucketing. I will fix/add the rest in follow-up PRs.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/165487
Approved by: https://github.com/ezyang
2025-11-12 02:18:34 +00:00
67390692c5 [ROCm][CI] Restrict docker-cache-rocm.yml to main/release branches (#167593)
Follow-up to https://github.com/pytorch/pytorch/pull/167554

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167593
Approved by: https://github.com/jeffdaily
2025-11-12 02:08:23 +00:00
1debfd44fd Revert "Add FA4 to sdpa (#167348)"
This reverts commit cdf0a9c21f7c27298a5bc71620206353125c5494.

Reverted https://github.com/pytorch/pytorch/pull/167348 on behalf of https://github.com/malfet due to Looks like it broke lint? ([comment](https://github.com/pytorch/pytorch/pull/167348#issuecomment-3519549113))
2025-11-12 02:05:30 +00:00
cdf0a9c21f Add FA4 to sdpa (#167348)
# Summary
See title ;)

## Design

Currently, once you install an implementation there is no going back in the same Python process; this need not be the case (cc @mikaylagawarecki's work on being able to grab the original impl). I'll leave that for a follow-up.

Okay, I added an open registration mechanism, but I really want the backends to be discoverable, so there's some weird typing; what we get:
<img width="523" height="197" alt="Screenshot 2025-11-07 at 3 30 32 PM" src="https://github.com/user-attachments/assets/586de943-bbed-40cf-abd1-131f747a4cf1" />

## Overheads:
<img width="799" height="735" alt="Screenshot 2025-11-07 at 2 35 04 PM" src="https://github.com/user-attachments/assets/f9217f31-3e42-4816-8fb3-29ea8b49d735" />
First call to forward -> majority of time is spent in jit for FA

First call to backward: ~3 sec. Interestingly, it doesn't appear that with_stack gets events in the backward loop; @albanD, is this expected?
<img width="948" height="385" alt="Screenshot 2025-11-07 at 2 35 50 PM" src="https://github.com/user-attachments/assets/a40bacd0-3fb0-4bd8-b33e-bec8fb3f36c0" />

Getting from the PyTorch op to the impl is about 43 us, which is dwarfed by other CPU overheads.
<img width="1227" height="649" alt="Screenshot 2025-11-07 at 2 37 41 PM" src="https://github.com/user-attachments/assets/51da0615-facd-41e1-a6e2-fb7778079ab6" />

Just invoking the JIT object from the CuTe DSL takes hundreds of microseconds.
<img width="545" height="414" alt="Screenshot 2025-11-07 at 2 38 19 PM" src="https://github.com/user-attachments/assets/d20345a0-6c47-4dcb-892f-9ef9894a1cf5" />

### Example usage
```Py
#!/usr/bin/env python3

"""Minimal FA4 smoke test for scaled dot product attention."""

from __future__ import annotations

import sys
from jsonargparse import CLI

import torch
import torch.nn.functional as F
from torch.nn.attention import (
    install_flash_attention_impl,
    sdpa_kernel,
    SDPBackend,
)

def _map_dtype(kind: str) -> torch.dtype:
    return torch.bfloat16 if kind == "bf16" else torch.float16

# To infinity and beyond
install_flash_attention_impl("FA4")

@sdpa_kernel([SDPBackend.FLASH_ATTENTION])
def main(
    module_path: str = "flash_attn.cute.interface",
    batch: int = 4,
    seq: int = 81292,
    heads: int = 16,
    head_dim: int = 128,
    device: int = 0,
    dtype: str = "bf16"
    ) -> None:
    if not torch.cuda.is_available():
        sys.exit("CUDA is required for FA4 smoke testing")
    torch.cuda.set_device(device)
    dtype = _map_dtype(dtype)
    generator = torch.Generator(device="cuda").manual_seed(0)
    q = torch.randn(
        batch,
        heads,
        seq,
        head_dim,
        device="cuda",
        dtype=dtype,
        requires_grad=True,
        generator=generator,
    )
    k = torch.randn(
        batch,
        heads,
        seq,
        head_dim,
        device="cuda",
        dtype=dtype,
        requires_grad=True,
        generator=generator,
    )
    v = torch.randn(
        batch,
        heads,
        seq,
        head_dim,
        device="cuda",
        dtype=dtype,
        requires_grad=True,
        generator=generator,
    )
    from transformer_nuggets.utils.benchmark import profiler
    with profiler("sdpa_FA4", with_stack=False):
        for _ in range(3):
            out = F.scaled_dot_product_attention(q, k, v, attn_mask=None, dropout_p=0.0, is_causal=False)
            loss = out.real.sum()
            loss.backward()
    print("Scaled dot product attention output norm:", out.norm().item())
    print("dq norm:", q.grad.norm().item())

if __name__ == "__main__":
    CLI(main)
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167348
Approved by: https://github.com/albanD
2025-11-12 01:07:59 +00:00
115016f1a2 [Device Mesh][ez] Clean up unused parameters and duplicate codes (#167581)
While refactoring the code, I found that we re-init `_flatten_mapping` and still keep `_flatten_mesh_list` in the code, which is no longer needed. Let's remove it.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167581
Approved by: https://github.com/fegin
2025-11-12 00:59:32 +00:00
971e6ca434 fix sym_size_, sym_stride lowering (#167565)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/167565
Approved by: https://github.com/bobrenjc93, https://github.com/Microve, https://github.com/Skylion007
ghstack dependencies: #167345
2025-11-12 00:53:36 +00:00
e8d411e7f7 FSDPMemTracker fix with multihander hooks. (#165662)
Fixes #164663

## Issue
A torch model with multiple layers that is wrapped with fsdp2 registers pre- and post-forward hooks as a group using `_MultiHandler`. This becomes an issue in the tracker's context manager, where the hooks are reset and replaced. The hooks all use the same fsdp state pointer, so one reset resets them all. As a result, when the output layer was modified with new pre- and post-forward hooks, it deleted the previous layer's initialization, causing a `KeyError` for the Norm layer since it no longer exists.

## The Fix
Check whether there are multiple `_MultiHandler` objects and `RemoveHandler` objects and only execute the remove hook once.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/165662
Approved by: https://github.com/sanketpurandare
2025-11-11 23:49:36 +00:00
2e5233d7bd Revert "Support AC in default partitioner when functionalization is enabled (#166610)"
This reverts commit de773364be041ca7fd2dcaf35ca15c093fc9370b.

Reverted https://github.com/pytorch/pytorch/pull/166610 on behalf of https://github.com/soulitzer due to breaking internal tests ([comment](https://github.com/pytorch/pytorch/pull/166610#issuecomment-3519047226))
2025-11-11 23:01:09 +00:00
514dd96376 Remove --no-use-pep517 flag (#167096)
In pip 25.3 and newer, use of --no-use-pep517 has been removed (https://pip.pypa.io/en/stable/news/). In builds with pip 25.2, a warning message notes:

> DEPRECATION: Building 'torchvision' using the legacy setup.py bdist_wheel mechanism, which will be removed in a future version. pip 25.3 will enforce this behaviour change. A possible replacement is to use the standardized build interface by setting the `--use-pep517` option, (possibly combined with `--no-build-isolation`), or adding a `pyproject.toml` file to the source tree of 'torchvision'. Discussion can be found at https://github.com/pypa/pip/issues/6334

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167096
Approved by: https://github.com/atalman
2025-11-11 23:00:35 +00:00
9ae62fcc18 [ROCm][CI] dynamo benchmarks update ci expected accuracy (#167574)
repvgg_a2 IMPROVED: accuracy=pass, expected=fail_accuracy

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167574
Approved by: https://github.com/jeffdaily

Co-authored-by: Jeff Daily <jeff.daily@amd.com>
2025-11-11 22:54:55 +00:00
ae71b0e163 Fix typo in torch._refs (#167310)
This should be a typo, but it doesn't raise an error because the inner function splits the string into `a` and `,`, and the `,` case check is skipped.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167310
Approved by: https://github.com/eellison
2025-11-11 22:31:09 +00:00
5b6ff8148d Revert "[ARM] Improve LLM performance & mem usage using int4-bf16 KleidiAI kernels (#158250)"
This reverts commit 402c46503002f98ccfc023a733081fb0719223a1.

Reverted https://github.com/pytorch/pytorch/pull/158250 on behalf of https://github.com/izaitsevfb due to Broke some torch.compile jobs ([comment](https://github.com/pytorch/pytorch/pull/158250#issuecomment-3518944863))
2025-11-11 22:27:51 +00:00
1f7e4343e7 [ROCm][CI] Add docker-cache-rocm.yml to test MI3xx CI docker caching (#167554)
* Trigger this workflow on every completed run of `docker-builds.yml`
* Uses `ubuntu-latest` for downloading artifacts from `docker-build` workflow run
* Uses `linux.rocm.gfx942.docker-cache` to cache docker images as tarballs for MI3xx CI

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167554
Approved by: https://github.com/jeffdaily
2025-11-11 21:32:22 +00:00
b21856f5fc Revert "[DebugMode] record triton kernels, run-to-run determinism checks (#167028)"
This reverts commit 259ba0ecabd809edd35d12b4f992777cb5923b68.

Reverted https://github.com/pytorch/pytorch/pull/167028 on behalf of https://github.com/pytorch-auto-revert due to Reverted automatically by pytorch's autorevert, to avoid this behaviour add the tag autorevert: disable ([comment](https://github.com/pytorch/pytorch/pull/167028#issuecomment-3518811298))
2025-11-11 21:31:12 +00:00
259ba0ecab [DebugMode] record triton kernels, run-to-run determinism checks (#167028)
Following up on https://github.com/pytorch/pytorch/pull/166348, extends DebugMode to capture inductor triton kernels at runtime, and adds an API for checking run-to-run determinism based on tensor hashes.

The workflow looks something like...
```python
# do 1st run with hashes, get logs
with DebugMode() as debug_mode, DebugMode.log_tensor_hashes():
    compiled_model(*inputs)
logs1 = debug_mode.logs

# do 2nd run
with DebugMode() as debug_mode, DebugMode.log_tensor_hashes():
    compiled_model(*inputs)
logs2 = debug_mode.logs

# returns list of calls w/ mismatched outputs
mismatches = DebugMode.check_hash_mismatches(logs1, logs2)
```

Example dump from a smaller version of @drisspg's FlexAttention fwd+bwd determinism tests [script](https://gist.github.com/pianpwk/f65cc63811d12853709dcc77d7eb69f1) (without forced reduction order):
```
cfg: TestConfig(name='Standard', B=2, Hq=32, Hkv=32, Q=2048, KV=2048, Dqk=128, Dv=128)
DETERMINISM: fwd: True, bwd_q: False, bwd_k: False, bwd_v: True

$$$ DEBUG MODE DUMP $$$  (this is what the logs look like)

    [triton] triton_tem_fused_0(arg_Q=t: bf16[2, 32, 2048, 128], arg_K=t: bf16[2, 32, 2048, 128], arg_V=t: bf16[2, 32, 2048, 128], arg_LSE=t: f32[2, 32, 2048], arg_MAX=t: f32[2, 32, 2048], arg_KV_NUM_BLKS=t: i32[2, 32, 16], arg_KV_IDX=t: i32[2, 32, 16, 16], arg_FULL_KV_NUM_BLKS=t: i32[2, 32, 16], arg_FULL_KV_IDX=t: i32[2, 32, 16, 16], out_ptr0=t: bf16[2, 32, 2048, 128])
    # post-kernel hashes: {arg_Q: 13385916.068706088, arg_K: 13389356.409105342, arg_V: 13384993.48412523, arg_LSE: 1347168.9026973695, arg_MAX: 81775.3811062593, arg_KV_NUM_BLKS: 1024.0, arg_KV_IDX: 122880.0, arg_FULL_KV_NUM_BLKS: 7680.0, arg_FULL_KV_IDX: 122880.0, out_ptr0: 924917.7918248245}

    [triton] triton_per_fused_zeros_0(in_ptr0=t: bf16[2, 32, 2048, 128], in_ptr1=t: bf16[2, 32, 2048, 128], out_ptr1=t: f32[2, 32, 2048], xnumel=131072, r0_numel=128)
    # post-kernel hashes: {in_ptr0: 924917.7918248245, in_ptr1: 13389213.797377996, out_ptr1: 81775.38106592931}

    [triton] triton_tem_fused_zeros_1(arg_Q=t: bf16[2, 32, 2048, 128], arg_K=t: bf16[2, 32, 2048, 128], arg_V=t: bf16[2, 32, 2048, 128], arg_LSE=t: f32[2, 32, 2048], arg_DELTA=t: f32[2, 32, 2048], arg_DO=t: bf16[2, 32, 2048, 128], arg_DQ=t: bf16[2, 32, 2048, 128], arg_DV=t: bf16[2, 32, 2048, 128], arg_KV_NUM_BLKS=t: i32[2, 32, 16], arg_KV_IDX=t: i32[2, 32, 16, 16], arg_Q_NUM_BLKS=t: i32[2, 32, 16], arg_Q_IDX=t: i32[2, 32, 16, 16], arg_FULL_KV_NUM_BLKS=t: i32[2, 32, 16], arg_FULL_KV_IDX=t: i32[2, 32, 16, 16], arg_FULL_Q_NUM_BLKS=t: i32[2, 32, 16], arg_FULL_Q_IDX=t: i32[2, 32, 16, 16], out_ptr0=t: bf16[2, 32, 2048, 128])
    # post-kernel hashes: {arg_Q: 13385916.068706088, arg_K: 13389356.409105342, arg_V: 13384993.48412523, arg_LSE: 1347168.9026973695, arg_DELTA: 81775.38106592931, arg_DO: 13389213.797377996, arg_DQ: 874474.8084187683, arg_DV: 727742.3138379117, arg_KV_NUM_BLKS: 1024.0, arg_KV_IDX: 122880.0, arg_Q_NUM_BLKS: 1024.0, arg_Q_IDX: 122880.0, arg_FULL_KV_NUM_BLKS: 7680.0, arg_FULL_KV_IDX: 122880.0, arg_FULL_Q_NUM_BLKS: 7680.0, arg_FULL_Q_IDX: 122880.0, out_ptr0: 700542.3431890717}

$$$ MISMATCHES $$$
mismatch: {'call_type': 'triton kernel', 'call': 'triton_tem_fused_0', 'arg_name': 'arg_MAX', 'pytree_path': None, 'hash1': 0.0, 'hash2': 81775.3811062593, 'rel_diff': 1.0, 'is_input_hash': False}  # I guess this one is misleading? not sure if I'm doing something wrong with waiting for kernel results
mismatch: {'call_type': 'triton kernel', 'call': 'triton_per_fused_zeros_0', 'arg_name': 'out_ptr1', 'pytree_path': None, 'hash1': 81775.3811062593, 'hash2': 81775.38106592931, 'rel_diff': 4.931801261646669e-10, 'is_input_hash': False}
mismatch: {'call_type': 'triton kernel', 'call': 'triton_tem_fused_zeros_1', 'arg_name': 'arg_DELTA', 'pytree_path': None, 'hash1': 81775.3811062593, 'hash2': 81775.38106592931, 'rel_diff': 4.931801261646669e-10, 'is_input_hash': False}
mismatch: {'call_type': 'triton kernel', 'call': 'triton_tem_fused_zeros_1', 'arg_name': 'arg_DQ', 'pytree_path': None, 'hash1': 874474.8097136207, 'hash2': 874474.8084187683, 'rel_diff': 1.480720012120795e-09, 'is_input_hash': False}
mismatch: {'call_type': 'triton kernel', 'call': 'triton_tem_fused_zeros_1', 'arg_name': 'out_ptr0', 'pytree_path': None, 'hash1': 700542.3488049245, 'hash2': 700542.3431890717, 'rel_diff': 8.016435812581196e-09, 'is_input_hash': False}
```

note: current hash implementation is basically tensor norm, so tensor closeness -> hash closeness. This is likely to change soon, e.g. maybe to `torch.hash_tensor` (https://github.com/pytorch/pytorch/pull/154149) by default

Sample paste diff between log dumps from 2 runs:
<img width="1665" height="445" alt="Screenshot 2025-11-05 at 11 27 24 PM" src="https://github.com/user-attachments/assets/41402e37-f50b-4a9e-a17c-bb98b5917076" />

In another case, running this for FSDP2 on Llama3-8B helped narrow down a divergence between aot_eager and inductor to inductor's FWD RMSNorm kernels: P2027003180

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167028
Approved by: https://github.com/v0i0
2025-11-11 20:37:53 +00:00
051f1fe8e3 Revert "[ROCm][CI] Update docker-cache-mi300.yml to test MI300 CI docker caching (#167554)"
This reverts commit ee387c43feada1cc2049b42a970ec4e2f12f210e.

Reverted https://github.com/pytorch/pytorch/pull/167554 on behalf of https://github.com/jithunnair-amd due to workflow had failure 'Unexpected input(s) 'run_id'' ([comment](https://github.com/pytorch/pytorch/pull/167554#issuecomment-3518642191))
2025-11-11 20:34:44 +00:00
ee387c43fe [ROCm][CI] Update docker-cache-mi300.yml to test MI300 CI docker caching (#167554)
Trigger this workflow on every completed run of `docker-builds.yml` and run on `ubuntu-latest` so it doesn't queue infinitely for `rocm-docker` label

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167554
Approved by: https://github.com/jeffdaily
2025-11-11 19:49:00 +00:00
3a944661d6 Cpython test_math.FMATests (#167217)
Resolves issues running the dynamo cpython math.fma tests.

Although math.fma is enabled to perform a multiply-add in dynamo, torch.addcmul is currently used, which doesn't guarantee the fused multiply-add the user requested. It was decided not to use the inductor fma prim, as it would break the contract of using only aten/core IR in dynamo output; otherwise export=True may have issues.
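
A small illustration (assuming Python 3.13+, where `math.fma` is available) of why the substitution matters: `math.fma(a, b, c)` computes `a*b + c` with a single rounding, while an unfused multiply-then-add rounds twice and can differ in the low bits; `torch.addcmul` expresses the same arithmetic but does not guarantee the fusion:

```python
import math
import torch

a, b, c = 1e16, 1.0 + 2.0**-30, -1e16

fused = math.fma(a, b, c)   # single rounding of the exact a*b + c
unfused = a * b + c         # two roundings; the product loses its low bits first

# dynamo currently maps math.fma to addcmul, which may or may not fuse:
t64 = lambda v: torch.tensor(v, dtype=torch.float64)
addcmul = torch.addcmul(t64(c), t64(a), t64(b)).item()

print(fused, unfused, addcmul)  # fused and unfused differ on this input
```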

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167217
Approved by: https://github.com/guilhermeleobas
2025-11-11 19:26:18 +00:00
56034074ca Revert "[Inductor] Naive foreach autotune support (#162053)"
This reverts commit 6c5db82584bf71f5b1db3b598bbd00f44140c28d.

Reverted https://github.com/pytorch/pytorch/pull/162053 on behalf of https://github.com/mlazos due to Sorry, there's an internal slowdown due to the extra triton configs you added ([comment](https://github.com/pytorch/pytorch/pull/162053#issuecomment-3518423369))
2025-11-11 19:23:40 +00:00
8def619bbe [user-streams] wait_stream op (#167512)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/167512
Approved by: https://github.com/williamwen42
ghstack dependencies: #167510, #167511
2025-11-11 19:18:03 +00:00
61883a5787 [user-streams] Allow new streams to be created and registered during compilation (#167511)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/167511
Approved by: https://github.com/williamwen42
ghstack dependencies: #167510
2025-11-11 19:18:03 +00:00
d8ada1ee76 [user-streams] Allow new events to be created and registered during compilation (#167510)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/167510
Approved by: https://github.com/williamwen42
2025-11-11 19:18:03 +00:00
fe841a1db4 [DeviceMesh] Log DeviceMesh.__init__ usage (#167375)
Adds (meta-internal-only) API usage logging for DeviceMesh creation.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167375
Approved by: https://github.com/fduwjj
ghstack dependencies: #167374
2025-11-11 19:15:47 +00:00
b65829b84f [DTensor] Log API usage metrics for DTensor and DeviceMesh (#167374)
Logging propagate_op_sharding_non_cached is a compromise between
 - logging in DTensor.__init__ to catch ALL DTensor usage
 - sparing the overhead in a latency-sensitive region like DTensor.__init__
 - and the fact that 'real' DTensor usage should incur at least one call to sharding propagation

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167374
Approved by: https://github.com/zpcore
2025-11-11 19:15:47 +00:00
b0e0ae97ba include thrust/distance.h explicitly in cuda sparse softmax (#167436)
`thrust::distance` is defined there
Pull Request resolved: https://github.com/pytorch/pytorch/pull/167436
Approved by: https://github.com/Skylion007
2025-11-11 19:10:55 +00:00
f44a1ddcb2 Revert "[ROCm][CI] Update docker-cache-mi300.yml to test MI300 CI docker caching (#167554)"
This reverts commit 184e2cbc89570e1bf466b15d70fc36ed71be0eb9.

Reverted https://github.com/pytorch/pytorch/pull/167554 on behalf of https://github.com/jithunnair-amd due to Need to fix lint ([comment](https://github.com/pytorch/pytorch/pull/167554#issuecomment-3518382341))
2025-11-11 19:09:45 +00:00
184e2cbc89 [ROCm][CI] Update docker-cache-mi300.yml to test MI300 CI docker caching (#167554)
Trigger this workflow on every completed run of `docker-builds.yml` and run on `ubuntu-latest` so it doesn't queue infinitely for `rocm-docker` label

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167554
Approved by: https://github.com/jeffdaily
2025-11-11 19:07:19 +00:00
416421c7c4 fix failure of exporting compiled model with nested dynamic shapes (#166358)
## Problems
When exporting a compiled model with nested inputs like the one below:
```python
import torch
from torch.export import export, Dim

def test_export_compiled_model_with_nested_dynamic_shapes():
   """Test exporting a compiled model with nested dict inputs and dynamic shapes."""
   print("Running test_export_compiled_model_with_nested_dynamic_shapes...")

   class M(torch.nn.Module):
       def forward(self, data_batch):
           return data_batch["a1"] + data_batch["a2"]

   m = M()
   compiled_m = torch.compile(m)
   example_args = ({
       "a1": torch.ones(3, 3),
       "a2": torch.ones(3, 3),
   },)
   dynamic_shapes = ({
       "a1": {0: Dim.DYNAMIC},
       "a2": {0: Dim.DYNAMIC},
   },)

   try:
       ep = export(compiled_m, example_args, dynamic_shapes=dynamic_shapes, strict=True)
       gm = ep.module()
       result_exported = gm(*example_args)
       result_compiled = compiled_m(*example_args)

       assert torch.allclose(result_exported, result_compiled), "Results don't match!"
       print("✓ test_export_compiled_model_with_nested_dynamic_shapes PASSED")
       return True
   except Exception as e:
       print(f"✗ test_export_compiled_model_with_nested_dynamic_shapes FAILED")
       print(f"Error: {e}")
       import traceback
       traceback.print_exc()
       return False

def test_export_compiled_model_with_kwargs_dynamic_shapes():
   """Test exporting a compiled model with kwargs and dynamic shapes."""
   print("\nRunning test_export_compiled_model_with_kwargs_dynamic_shapes...")

   class M(torch.nn.Module):
       def forward(self, a1, a2):
           return a1 + a2

   m = M()
   compiled_m = torch.compile(m)
   example_args = ()
   example_kwargs = {
       "a1": torch.ones(3, 3),
       "a2": torch.ones(3, 3),
   }
   dynamic_shapes = {
       "a1": {0: Dim.DYNAMIC},
       "a2": {0: Dim.DYNAMIC},
   }

   try:
       ep = export(compiled_m, example_args, kwargs=example_kwargs, dynamic_shapes=dynamic_shapes, strict=True)
       gm = ep.module()
       result_exported = gm(**example_kwargs)
       result_compiled = compiled_m(**example_kwargs)

       assert torch.allclose(result_exported, result_compiled), "Results don't match!"
       print("✓ test_export_compiled_model_with_kwargs_dynamic_shapes PASSED")
       return True
   except Exception as e:
       print(f"✗ test_export_compiled_model_with_kwargs_dynamic_shapes FAILED")
       print(f"Error: {e}")
       import traceback
       traceback.print_exc()
       return False

if __name__ == "__main__":
   print("Testing export of compiled models with dynamic shapes\n")
   print("=" * 70)

   results = []
   results.append(test_export_compiled_model_with_nested_dynamic_shapes())
   results.append(test_export_compiled_model_with_kwargs_dynamic_shapes())

   print("\n" + "=" * 70)
   print(f"\nResults: {sum(results)}/{len(results)} tests passed")

   if all(results):
       print("✓ All tests passed!")
   else:
       print("✗ Some tests failed")
       exit(1)
```

It will report
```
======================================================================
Running test_export_compiled_model_with_nested_dynamic_shapes...
✗ test_export_compiled_model_with_nested_dynamic_shapes FAILED
Error: Detected mismatch between the structure of `inputs` and `dynamic_shapes`: `inputs[0]` is a <class 'tuple'>, but `dynamic_shapes[0]` is a <class 'dict'>
For more information about this error, see: https://pytorch.org/docs/main/generated/exportdb/index.html#dynamic-shapes-validation

The error above occurred when calling torch.export.export. If you would like to view some more information about this error, and get a list of all other errors that may occur in your export call, you can replace your `export()` call with `draft_export()`.
Traceback (most recent call last):
  File "/home/chzhu/infinitrain/build/infinitrain/environments/development-venv/lib/python3.10/site-packages/torch/export/dynamic_shapes.py", line 614, in _tree_map_with_path
    return tree_map_with_path(f, tree, *dynamic_shapes, is_leaf=is_leaf)
  File "/home/chzhu/infinitrain/build/infinitrain/environments/development-venv/lib/python3.10/site-packages/torch/utils/_pytree.py", line 2055, in tree_map_with_path
    all_keypath_leaves = keypath_leaves + [treespec.flatten_up_to(r) for r in rests]
  File "/home/chzhu/infinitrain/build/infinitrain/environments/development-venv/lib/python3.10/site-packages/torch/utils/_pytree.py", line 2055, in <listcomp>
    all_keypath_leaves = keypath_leaves + [treespec.flatten_up_to(r) for r in rests]
  File "/home/chzhu/infinitrain/build/infinitrain/environments/development-venv/lib/python3.10/site-packages/torch/utils/_pytree.py", line 1188, in flatten_up_to
    helper(self, tree, subtrees)
  File "/home/chzhu/infinitrain/build/infinitrain/environments/development-venv/lib/python3.10/site-packages/torch/utils/_pytree.py", line 1185, in helper
    helper(subspec, subtree, subtrees)
  File "/home/chzhu/infinitrain/build/infinitrain/environments/development-venv/lib/python3.10/site-packages/torch/utils/_pytree.py", line 1141, in helper
    raise ValueError(
ValueError: Node type mismatch; expected <class 'tuple'>, but got <class 'dict'>.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/chzhu/infinitrain/test_exprot.py", line 25, in test_export_compiled_model_with_nested_dynamic_shapes
    ep = export(compiled_m, example_args, dynamic_shapes=dynamic_shapes, strict=True)
  File "/home/chzhu/infinitrain/build/infinitrain/environments/development-venv/lib/python3.10/site-packages/torch/export/__init__.py", line 311, in export
    raise e
  File "/home/chzhu/infinitrain/build/infinitrain/environments/development-venv/lib/python3.10/site-packages/torch/export/__init__.py", line 277, in export
    return _export(
  File "/home/chzhu/infinitrain/build/infinitrain/environments/development-venv/lib/python3.10/site-packages/torch/export/_trace.py", line 1163, in wrapper
    raise e
  File "/home/chzhu/infinitrain/build/infinitrain/environments/development-venv/lib/python3.10/site-packages/torch/export/_trace.py", line 1129, in wrapper
    ep = fn(*args, **kwargs)
  File "/home/chzhu/infinitrain/build/infinitrain/environments/development-venv/lib/python3.10/site-packages/torch/export/exported_program.py", line 124, in wrapper
    return fn(*args, **kwargs)
  File "/home/chzhu/infinitrain/build/infinitrain/environments/development-venv/lib/python3.10/site-packages/torch/export/_trace.py", line 2255, in _export
    ep = _export_for_training(
  File "/home/chzhu/infinitrain/build/infinitrain/environments/development-venv/lib/python3.10/site-packages/torch/export/_trace.py", line 1163, in wrapper
    raise e
  File "/home/chzhu/infinitrain/build/infinitrain/environments/development-venv/lib/python3.10/site-packages/torch/export/_trace.py", line 1129, in wrapper
    ep = fn(*args, **kwargs)
  File "/home/chzhu/infinitrain/build/infinitrain/environments/development-venv/lib/python3.10/site-packages/torch/export/exported_program.py", line 124, in wrapper
    return fn(*args, **kwargs)
  File "/home/chzhu/infinitrain/build/infinitrain/environments/development-venv/lib/python3.10/site-packages/torch/export/_trace.py", line 2071, in _export_for_training
    export_artifact = export_func(
  File "/home/chzhu/infinitrain/build/infinitrain/environments/development-venv/lib/python3.10/site-packages/torch/export/_trace.py", line 1415, in _strict_export
    gm_torch_level = _export_to_torch_ir(
  File "/home/chzhu/infinitrain/build/infinitrain/environments/development-venv/lib/python3.10/site-packages/torch/export/_trace.py", line 785, in _export_to_torch_ir
    _check_dynamic_shapes(combined_args, dynamic_shapes)
  File "/home/chzhu/infinitrain/build/infinitrain/environments/development-venv/lib/python3.10/site-packages/torch/export/dynamic_shapes.py", line 1031, in _check_dynamic_shapes
    _tree_map_with_path(check_shape, combined_args, dynamic_shapes, tree_name="inputs")
  File "/home/chzhu/infinitrain/build/infinitrain/environments/development-venv/lib/python3.10/site-packages/torch/export/dynamic_shapes.py", line 686, in _tree_map_with_path
    _compare(tree_spec, other_tree_spec, [])
  File "/home/chzhu/infinitrain/build/infinitrain/environments/development-venv/lib/python3.10/site-packages/torch/export/dynamic_shapes.py", line 677, in _compare
    _compare(
  File "/home/chzhu/infinitrain/build/infinitrain/environments/development-venv/lib/python3.10/site-packages/torch/export/dynamic_shapes.py", line 652, in _compare
    raise_mismatch_error(
  File "/home/chzhu/infinitrain/build/infinitrain/environments/development-venv/lib/python3.10/site-packages/torch/export/dynamic_shapes.py", line 634, in raise_mismatch_error
    raise UserError(
torch._dynamo.exc.UserError: Detected mismatch between the structure of `inputs` and `dynamic_shapes`: `inputs[0]` is a <class 'tuple'>, but `dynamic_shapes[0]` is a <class 'dict'>
For more information about this error, see: https://pytorch.org/docs/main/generated/exportdb/index.html#dynamic-shapes-validation

The error above occurred when calling torch.export.export. If you would like to view some more information about this error, and get a list of all other errors that may occur in your export call, you can replace your `export()` call with `draft_export()`.

Running test_export_compiled_model_with_kwargs_dynamic_shapes...
✗ test_export_compiled_model_with_kwargs_dynamic_shapes FAILED
Error: When `dynamic_shapes` is specified as a dict, its top-level keys must be the arg names ['kwargs'] of `inputs`, but here they are ['a1', 'a2']. Since here `inputs` is a list/tuple enclosing a single dict, maybe you just forgot to enclose `dynamic_shapes` in a list/tuple?
For more information about this error, see: https://pytorch.org/docs/main/generated/exportdb/index.html#dynamic-shapes-validation

The error above occurred when calling torch.export.export. If you would like to view some more information about this error, and get a list of all other errors that may occur in your export call, you can replace your `export()` call with `draft_export()`.
Traceback (most recent call last):
  File "/home/chzhu/infinitrain/test_exprot.py", line 62, in test_export_compiled_model_with_kwargs_dynamic_shapes
    ep = export(compiled_m, example_args, kwargs=example_kwargs, dynamic_shapes=dynamic_shapes, strict=True)
  File "/home/chzhu/infinitrain/build/infinitrain/environments/development-venv/lib/python3.10/site-packages/torch/export/__init__.py", line 311, in export
    raise e
  File "/home/chzhu/infinitrain/build/infinitrain/environments/development-venv/lib/python3.10/site-packages/torch/export/__init__.py", line 277, in export
    return _export(
  File "/home/chzhu/infinitrain/build/infinitrain/environments/development-venv/lib/python3.10/site-packages/torch/export/_trace.py", line 1163, in wrapper
    raise e
  File "/home/chzhu/infinitrain/build/infinitrain/environments/development-venv/lib/python3.10/site-packages/torch/export/_trace.py", line 1129, in wrapper
    ep = fn(*args, **kwargs)
  File "/home/chzhu/infinitrain/build/infinitrain/environments/development-venv/lib/python3.10/site-packages/torch/export/exported_program.py", line 124, in wrapper
    return fn(*args, **kwargs)
  File "/home/chzhu/infinitrain/build/infinitrain/environments/development-venv/lib/python3.10/site-packages/torch/export/_trace.py", line 2255, in _export
    ep = _export_for_training(
  File "/home/chzhu/infinitrain/build/infinitrain/environments/development-venv/lib/python3.10/site-packages/torch/export/_trace.py", line 1163, in wrapper
    raise e
  File "/home/chzhu/infinitrain/build/infinitrain/environments/development-venv/lib/python3.10/site-packages/torch/export/_trace.py", line 1129, in wrapper
    ep = fn(*args, **kwargs)
  File "/home/chzhu/infinitrain/build/infinitrain/environments/development-venv/lib/python3.10/site-packages/torch/export/exported_program.py", line 124, in wrapper
    return fn(*args, **kwargs)
  File "/home/chzhu/infinitrain/build/infinitrain/environments/development-venv/lib/python3.10/site-packages/torch/export/_trace.py", line 2071, in _export_for_training
    export_artifact = export_func(
  File "/home/chzhu/infinitrain/build/infinitrain/environments/development-venv/lib/python3.10/site-packages/torch/export/_trace.py", line 1415, in _strict_export
    gm_torch_level = _export_to_torch_ir(
  File "/home/chzhu/infinitrain/build/infinitrain/environments/development-venv/lib/python3.10/site-packages/torch/export/_trace.py", line 785, in _export_to_torch_ir
    _check_dynamic_shapes(combined_args, dynamic_shapes)
  File "/home/chzhu/infinitrain/build/infinitrain/environments/development-venv/lib/python3.10/site-packages/torch/export/dynamic_shapes.py", line 1007, in _check_dynamic_shapes
    raise UserError(
torch._dynamo.exc.UserError: When `dynamic_shapes` is specified as a dict, its top-level keys must be the arg names ['kwargs'] of `inputs`, but here they are ['a1', 'a2']. Since here `inputs` is a list/tuple enclosing a single dict, maybe you just forgot to enclose `dynamic_shapes` in a list/tuple?
For more information about this error, see: https://pytorch.org/docs/main/generated/exportdb/index.html#dynamic-shapes-validation

The error above occurred when calling torch.export.export. If you would like to view some more information about this error, and get a list of all other errors that may occur in your export call, you can replace your `export()` call with `draft_export()`.

======================================================================
```
## Torch Version
(reproducible on the nightly version)

## Other Behavior
The model exports correctly when we test without compiling it:
```python
import torch
from torch.export import export, Dim

def test_export_compiled_model_with_nested_dynamic_shapes():
   """Test exporting a compiled model with nested dict inputs and dynamic shapes."""
   print("Running test_export_compiled_model_with_nested_dynamic_shapes...")

   class M(torch.nn.Module):
       def forward(self, data_batch):
           return data_batch["a1"] + data_batch["a2"]

   m = M()
   example_args = ({
       "a1": torch.ones(3, 3),
       "a2": torch.ones(3, 3),
   },)
   dynamic_shapes = ({
       "a1": {0: Dim.DYNAMIC},
       "a2": {0: Dim.DYNAMIC},
   },)

   try:
       ep = export(m, example_args, dynamic_shapes=dynamic_shapes, strict=True)
       gm = ep.module()
       result_exported = gm(*example_args)
       result_compiled = m(*example_args)

       assert torch.allclose(result_exported, result_compiled), "Results don't match!"
       print("✓ test_export_compiled_model_with_nested_dynamic_shapes PASSED")
       return True
   except Exception as e:
       print(f"✗ test_export_compiled_model_with_nested_dynamic_shapes FAILED")
       print(f"Error: {e}")
       import traceback
       traceback.print_exc()
       return False

def test_export_compiled_model_with_kwargs_dynamic_shapes():
   """Test exporting a compiled model with kwargs and dynamic shapes."""
   print("\nRunning test_export_compiled_model_with_kwargs_dynamic_shapes...")

   class M(torch.nn.Module):
       def forward(self, a1, a2):
           return a1 + a2

   m = M()
   example_args = ()
   example_kwargs = {
       "a1": torch.ones(3, 3),
       "a2": torch.ones(3, 3),
   }
   dynamic_shapes = {
       "a1": {0: Dim.DYNAMIC},
       "a2": {0: Dim.DYNAMIC},
   }

   try:
       ep = export(m, example_args, kwargs=example_kwargs, dynamic_shapes=dynamic_shapes, strict=True)
       gm = ep.module()
       result_exported = gm(**example_kwargs)
       result_compiled = m(**example_kwargs)

       assert torch.allclose(result_exported, result_compiled), "Results don't match!"
       print("✓ test_export_compiled_model_with_kwargs_dynamic_shapes PASSED")
       return True
   except Exception as e:
       print(f"✗ test_export_compiled_model_with_kwargs_dynamic_shapes FAILED")
       print(f"Error: {e}")
       import traceback
       traceback.print_exc()
       return False

if __name__ == "__main__":
   print("Testing export of compiled models with dynamic shapes\n")
   print("=" * 70)

   results = []
   results.append(test_export_compiled_model_with_nested_dynamic_shapes())
   results.append(test_export_compiled_model_with_kwargs_dynamic_shapes())

   print("\n" + "=" * 70)
   print(f"\nResults: {sum(results)}/{len(results)} tests passed")

   if all(results):
       print("✓ All tests passed!")
   else:
       print("✗ Some tests failed")
       exit(1)

```
## Root Cause

This is caused by a side effect of torch.compile(model): when the model is compiled, the input signature becomes (*args, **kwargs) automatically. In the above example, `data_batch` will be added into `args` in combined_args [here](dc011d3203/torch/export/dynamic_shapes.py (L720)), and it will look like
```
{'args': ({'a1': tensor([[1., 1., 1.]... 1., 1.]]), 'a2': tensor([[1., 1., 1.]... 1., 1.]])},)}
```
Without compiling, the combined args will look like
```
{'data_batch': {'a1': tensor([[1., 1., 1.],
        [1., 1., 1.],
        [1., 1., 1.]]), 'a2': tensor([[1., 1., 1.],
        [1., 1., 1.],
        [1., 1., 1.]])}}

```
This causes the mismatch when we use tree_map to match the dynamic shapes with the input args.

The error is also reproducible when we set up kwargs as example args (see the 2nd test above).
## Fix
Proposed fix: In [_combine_args](dc011d3203/torch/export/dynamic_shapes.py (L720)) we explicitly flatten out the kwargs and args into combined args.
## Side Effects
There are 2 existing tests that assume this behavior and
1. add `args` explicitly to dynamic shapes
2. wrap args into nested format in dynamic_shapes

I have modified those tests so that args and dynamic_shapes are in a consistent format.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/166358
Approved by: https://github.com/angelayi
2025-11-11 19:04:58 +00:00
bd99ae3315 [Docs] Add warning that torch.export.load uses pickle (#167557)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/167557
Approved by: https://github.com/zhxchen17, https://github.com/angelayi
2025-11-11 18:47:14 +00:00
ce8672c24f Fix use of TORCH_CHECK in torch/csrc/stable (#167495)
Tested by the above PR

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167495
Approved by: https://github.com/janeyx99
ghstack dependencies: #166579, #166694, #166695, #167362
2025-11-11 17:58:30 +00:00
402c465030 [ARM] Improve LLM performance & mem usage using int4-bf16 KleidiAI kernels (#158250)
Co-authored-by: Nikhil Gupta [nikhil.gupta2@arm.com](mailto:nikhil.gupta2@arm.com)

This PR enables the use of KleidiAI INT4 kernels that directly produce BF16 outputs within PyTorch to boost LLM prefill & decode performance

**This change improves decode throughput by ~15% and reduces the memory required for model inference by 50%**

### Benchmark Setup
```
Model: meta-llama/Llama-3.1-8B
Test Platform: Neoverse V2
```
### Detailed Results

| Metric                           | With `--compile`         | Without `--compile`      |
|----------------------------------|---------------------------|---------------------------|
| Quantization Scheme              | INT4 symmetric channelwise | INT4 symmetric channelwise |
| Input Precision                  | BF16                      | BF16                      |
| Number of Layers Quantized       | 32                        | 32                        |
| Average Compression Ratio        | 87.49%                    | 87.49%                    |
| Total Quantization Time (s)      | 9.62                      | 10.32                     |
| Compile Time (First) (s)         | 134.48                    | 1.69                      |
| Compile Time (Second) (s)        | 80.44                     | 1.60                      |
| Compile Time (Subsequent) (s)    | 0.19                      | 0.22                      |
| Prefill Tokens                   | 54                        | 54                        |
| Decoded Tokens                   | 33                        | 33                        |
| Prefill Time (s)                 | 0.19                      | 0.22                      |
| Decode Time (s)                  | 0.76                      | 1.38                      |
| E2E Generation Time (s)          | 0.95                      | 1.60                      |
| Prefill Throughput (tokens/s)    | 288.13                    | 249.91                    |
| Decode Throughput (tokens/s)     | 43.42                     | 23.83                     |
Pull Request resolved: https://github.com/pytorch/pytorch/pull/158250
Approved by: https://github.com/malfet, https://github.com/aditew01, https://github.com/fadara01

Co-authored-by: Nikhil Gupta <nikhil.gupta2@arm.com>
Co-authored-by: Nikita Shulga <2453524+malfet@users.noreply.github.com>
2025-11-11 17:50:22 +00:00
573a79fffa [OpenReg] Initialize device stream states for all devices in initOpenRegStreamsOnce (#167528)
Fixes #167527

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167528
Approved by: https://github.com/fffrog
2025-11-11 16:53:22 +00:00
4945180468 Add empty tensor check for _pad_packed_sequence (#167521)
This prevents a null pointer dereference.

Fixes https://github.com/pytorch/pytorch/issues/149622
Pull Request resolved: https://github.com/pytorch/pytorch/pull/167521
Approved by: https://github.com/albanD
2025-11-11 16:46:13 +00:00
1df723e6f5 [inductor] Fix constant creation (#167398)
We ran into this issue when debugging inductor-lite. Calling `torch.tensor` within a fake mode (which is the case inside inductor) will create a FakeTensor, which then gets used as a constant within inductor.
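
A minimal sketch (outside inductor, just to show the mechanism) of the pitfall described above:

```python
import torch
from torch._subclasses.fake_tensor import FakeTensorMode

# While a fake mode is active, factory calls such as torch.tensor produce
# FakeTensors, i.e. tensors carrying metadata but no real data.
with FakeTensorMode():
    t = torch.tensor([1.0, 2.0, 3.0])

print(type(t))  # <class 'torch._subclasses.fake_tensor.FakeTensor'>
```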

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167398
Approved by: https://github.com/eellison, https://github.com/BoyuanFeng
2025-11-11 16:30:46 +00:00
f9b81e23e4 [ROCm] Disable group gemm CK path when composable kernel (CK) is not enabled (#167403)
For ROCm builds without CK support, ensure use_fast_path is false so that the CK path is not triggered, since CK is currently not available in this configuration.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167403
Approved by: https://github.com/Skylion007, https://github.com/ScottTodd, https://github.com/jeffdaily
2025-11-11 16:15:51 +00:00
ffe6cc39c7 [inductor] Optimize cold compile time when cudagraphs-partition is enabled (#167132)
Summary: When cudagraphs-partition is enabled, we have seen an increase of cold compile time in the vllm benchmark (see https://github.com/vllm-project/vllm/issues/27080). After some profiling, we found Triton compilation time increased the most. Further investigation reveals it was caused by duplicated Triton kernels not being shared among different partitions. This PR fixes the issue by reusing the Triton kernel source code cache at the top-level PythonWrapperCodegen.

In theory we could further reduce the compilation time by completely skipping compilation of duplicated partitions. That can come as a future improvement.

Some vllm benchmarking data:

```
VLLM_USE_STANDALONE_COMPILE=0 VLLM_DISABLE_COMPILE_CACHE=1 vllm bench latency -O.cudagraph_mode=PIECEWISE -O.use_inductor_graph_partition=True --model meta-llama/Meta-Llama-3.1-8B
```
Before:
```
torch.compile takes 69.18 s in total
```
After:
```
torch.compile takes 26.81 s in total
```

As a reference, this is the compile time when turning off inductor graph partition. Looks like we still have some gap to close.
```
VLLM_USE_STANDALONE_COMPILE=0 VLLM_DISABLE_COMPILE_CACHE=1 vllm bench latency -O.cudagraph_mode=PIECEWISE -O.use_inductor_graph_partition=False --model meta-llama/Meta-Llama-3.1-8B

torch.compile takes 19.41 s in total
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167132
Approved by: https://github.com/eellison
ghstack dependencies: #167131
2025-11-11 15:54:31 +00:00
db1f3f6901 [inductor] Only generate compile-time auto-tuning block in the main graph (#167131)
Summary: When cudagraphs partition and autotune_at_compile_time are enabled, currently each subgraph generates its own auto-tuning code block and they are run one by one. This PR improves this by generating only one auto-tuning code block at the main graph level and executing it once to auto-tune all the kernels.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167131
Approved by: https://github.com/eellison
2025-11-11 15:54:31 +00:00
43041f0a43 Remove superflous/misplaced TestFailure specs (#165989)
The tests are in the class `TestInductorDynamic`, which isn't affected by the `test_failures` dict; that dict is only used as an argument to `copy_tests` for the `CommonTemplate` defined in another file.

So those have no effect.

Idea: Enhance `copy_tests` to detect unused keys

Pull Request resolved: https://github.com/pytorch/pytorch/pull/165989
Approved by: https://github.com/benjaminglass1, https://github.com/ezyang
2025-11-11 15:36:43 +00:00
dc00842b81 [ROCm][CI] trigger magma build with gfx950 for ROCm7.1 (#167390)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/167390
Approved by: https://github.com/jeffdaily
2025-11-11 15:17:37 +00:00
f1a129a6d0 Clarify that crashes/OOB accesses and not security threats (#167519)
Added note on crashes and out of bounds access in PyTorch.

Addresses https://github.com/pytorch/pytorch/issues/166881#issuecomment-3513245388

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167519
Approved by: https://github.com/albanD
2025-11-11 15:14:51 +00:00
fad48ffa62 [ROCm][CI] Match workflow names with workflow file names (#167483)
Fixes issue with uploading artifacts, which was inadvertently disabled for some renamed workflows via https://github.com/pytorch/pytorch/pull/167225

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167483
Approved by: https://github.com/jeffdaily
2025-11-11 14:45:44 +00:00
3e7a66fae1 [BugFix][Refactor] fix several instances which use f = open(...) without a corresponding f.close() (#167423)
This pattern can lead to file descriptor leaks, which can cause resource exhaustion or other unpredictable issues. See the sketch below.
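
A minimal before/after sketch of the pattern being fixed (the file name is just illustrative):

```python
# Leak-prone: if read() raises, close() is never reached and the file
# descriptor stays open until the garbage collector gets around to it.
f = open("example.log")
data = f.read()
f.close()

# Preferred: the context manager closes the file even if an exception is
# raised inside the block.
with open("example.log") as f:
    data = f.read()
```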

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167423
Approved by: https://github.com/cyyever, https://github.com/Skylion007
2025-11-11 11:27:59 +00:00
5f0a563dc8 [pallas backend] implement complex indexing (#167493)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/167493
Approved by: https://github.com/jansel
ghstack dependencies: #167426
2025-11-11 10:32:37 +00:00
678915d5f1 Update Arm copyright dates in LICENSE file (#167529)
Arm has made contributions to PyTorch this year; however, the top-level `LICENSE` file has not yet been updated to reflect this, which this PR addresses.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/167529
Approved by: https://github.com/mlazos
2025-11-11 10:25:08 +00:00
daed97afff [Inductor] fix CppTile2DKernel for fp8 datatype (#167451)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/167451
Approved by: https://github.com/Xia-Weiwen, https://github.com/jansel
2025-11-11 09:25:14 +00:00
53947adb1f [Inductor] optimize the heuristics of sum reduction (#163144)
Fix https://github.com/pytorch/pytorch/issues/151400.
**Summary:**
Optimize the heuristics of sum reduction: reduce the chunk size of cascade sum to improve numerical stability.
I ran the Inductor benchmark with this PR on CPU, and no performance regression was seen.
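
For intuition, here is a standalone sketch (not the inductor `CascadeSumHelper` implementation) of why summing in smaller chunks and then combining the partial sums is more accurate than one long running sum in low precision:

```python
import numpy as np

def running_sum_f32(xs):
    # One long accumulator: once it grows large, each small addend loses bits.
    s = np.float32(0.0)
    for x in xs:
        s = np.float32(s + x)
    return s

def cascade_sum_f32(xs, chunk=4096):
    # Sum each chunk separately, then sum the per-chunk partial sums.
    partials = np.array(
        [running_sum_f32(xs[i:i + chunk]) for i in range(0, len(xs), chunk)],
        dtype=np.float32,
    )
    return running_sum_f32(partials)

xs = np.full(1 << 20, 0.1, dtype=np.float32)
exact = float(xs.astype(np.float64).sum())
print(abs(float(running_sum_f32(xs)) - exact))  # noticeably larger error
print(abs(float(cascade_sum_f32(xs)) - exact))  # much closer to the exact sum
```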

**Example:**
Take https://github.com/pytorch/pytorch/issues/151400 as an example:
```
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch._inductor import config

config.fallback_random = True
torch.set_grad_enabled(False)
torch.manual_seed(0)

class Model(torch.nn.Module):

    def __init__(self):
        super().__init__()

    def forward(self, x):
        vec = x.flatten()
        vec_one = torch.ones_like(vec)
        x = torch.outer(vec, vec_one)
        return torch.mean(x, dim=1)

model = Model()

x = torch.randn(3, 8, 64, 64)  # error will be amplified as the input tensor gets larger

inputs = [x]

def run_test(model, inputs, backend):
    if backend != "eager":
        model = torch.compile(model, backend=backend)
    torch.manual_seed(0)
    output = model(*inputs)
    return output

output = run_test(model, inputs, 'eager')
c_output = run_test(model, inputs, 'inductor')
fp64 = run_test(model.to(dtype=torch.float64), [inputs[0].to(dtype=torch.float64)], 'eager')

print(torch.allclose(output, c_output, rtol=1e-3, atol=1e-3))
print(torch.max(torch.abs(c_output - output)))
print(torch._dynamo.utils.same(output, c_output, fp64))

```

**logs:**
- Before
```
False
tensor(0.0052)
False
```

- After
```
True
tensor(0.0004)
True
```

**Generated code:**
- Before
```
cpp_fused_mean_mul_ones_like_view_0 = async_compile.cpp_pybinding(['float*', 'const float*'], '''
#include <torch/csrc/inductor/cpp_prefix.h>
extern "C"  void  kernel(float* in_out_ptr0,
                       const float* in_ptr0)
{
    auto out_ptr0 = in_out_ptr0;
    #pragma omp parallel num_threads(240)
    {
        int tid = omp_get_thread_num();
        {
            #pragma omp for
            for(int64_t x0=static_cast<int64_t>(0L); x0<static_cast<int64_t>(98304L); x0+=static_cast<int64_t>(16L))
            {
                {
                    float tmp_acc0 = 0;
                    at::vec::Vectorized<float> tmp_acc0_vec = at::vec::Vectorized<float>(0);
                    for(int64_t x1=static_cast<int64_t>(0L); x1<static_cast<int64_t>(98304L); x1+=static_cast<int64_t>(1L))
                    {
                        {
                            if(C10_LIKELY(x0 >= static_cast<int64_t>(0) && x0 < static_cast<int64_t>(98304L)))
                            {
                                auto tmp0 = at::vec::Vectorized<float>::loadu(in_ptr0 + static_cast<int64_t>(x0), static_cast<int64_t>(16));
                                auto tmp1 = static_cast<float>(1.0);
                                auto tmp2 = at::vec::Vectorized<float>(tmp1);
                                auto tmp3 = tmp0 * tmp2;
                                tmp_acc0_vec = tmp_acc0_vec + tmp3;
                            }
                        }
                    }
                    if(C10_LIKELY(x0 >= static_cast<int64_t>(0) && x0 < static_cast<int64_t>(98304L)))
                    {
                        tmp_acc0_vec.store(out_ptr0 + static_cast<int64_t>(x0));
                    }
                }
                {
                    if(C10_LIKELY(x0 >= static_cast<int64_t>(0) && x0 < static_cast<int64_t>(98304L)))
                    {
                        auto tmp0 = at::vec::Vectorized<float>::loadu(out_ptr0 + static_cast<int64_t>(x0), static_cast<int64_t>(16));
                        auto tmp1 = static_cast<float>(98304.0);
                        auto tmp2 = at::vec::Vectorized<float>(tmp1);
                        auto tmp3 = tmp0 / tmp2;
                        tmp3.store(in_out_ptr0 + static_cast<int64_t>(x0));
                    }
                }
            }
        }
    }
}
''')

async_compile.wait(globals())
del async_compile

class Runner:
    def __init__(self, partitions):
        self.partitions = partitions

    def recursively_apply_fns(self, fns):
        new_callables = []
        for fn, c in zip(fns, self.partitions):
            new_callables.append(fn(c))
        self.partitions = new_callables

    def call(self, args):
        arg0_1, = args
        args.clear()
        assert_size_stride(arg0_1, (3, 8, 64, 64), (32768, 4096, 64, 1))
        buf0 = empty_strided_cpu((98304, ), (1, ), torch.float32)
        buf1 = buf0; del buf0  # reuse
        # [Provenance debug handles] cpp_fused_mean_mul_ones_like_view_0:1
        cpp_fused_mean_mul_ones_like_view_0(buf1, arg0_1)
        del arg0_1
        return (buf1, )
```

- After
```
cpp_fused_mean_mul_ones_like_view_0 = async_compile.cpp_pybinding(['float*', 'const float*'], '''
#include <torch/csrc/inductor/cpp_prefix.h>
extern "C"  void  kernel(float* in_out_ptr0,
                       const float* in_ptr0)
{
    auto out_ptr0 = in_out_ptr0;
    #pragma omp parallel num_threads(240)
    {
        int tid = omp_get_thread_num();
        {
            #pragma omp for
            for(int64_t x0=static_cast<int64_t>(0L); x0<static_cast<int64_t>(98304L); x0+=static_cast<int64_t>(16L))
            {
                {
                    float tmp_acc0 = 0;
                    at::vec::Vectorized<float> tmp_acc0_vec = at::vec::Vectorized<float>(0);
                    at::vec::Vectorized<float> masked_tmp_acc0_vec = at::vec::Vectorized<float>(0);
                    CascadeSumHelper<float, 4096> scalar_cascade_helper0(static_cast<int64_t>(98304L));
                    CascadeSumHelper<at::vec::Vectorized<float>, 4096> cascade_helper0(static_cast<int64_t>(98304L));
                    CascadeSumHelper<at::vec::Vectorized<float>, 4096> masked_cascade_helper0(static_cast<int64_t>(0L));
                    for(int64_t x1=static_cast<int64_t>(0L); x1<static_cast<int64_t>(98304L); x1+=static_cast<int64_t>(1L))
                    {
                        {
                            if(C10_LIKELY(x0 >= static_cast<int64_t>(0) && x0 < static_cast<int64_t>(98304L)))
                            {
                                auto tmp0 = at::vec::Vectorized<float>::loadu(in_ptr0 + static_cast<int64_t>(x0), static_cast<int64_t>(16));
                                auto tmp1 = static_cast<float>(1.0);
                                auto tmp2 = at::vec::Vectorized<float>(tmp1);
                                auto tmp3 = tmp0 * tmp2;
                                tmp_acc0_vec = cascade_sum_combine(tmp3, &cascade_helper0);
                            }
                        }
                    }
                    tmp_acc0 = cascade_sum_final(&scalar_cascade_helper0);
                    tmp_acc0_vec = cascade_sum_final(&cascade_helper0);
                    masked_tmp_acc0_vec = cascade_sum_final(&masked_cascade_helper0);
                    if(C10_LIKELY(x0 >= static_cast<int64_t>(0) && x0 < static_cast<int64_t>(98304L)))
                    {
                        tmp_acc0_vec = tmp_acc0_vec + masked_tmp_acc0_vec;
                        tmp_acc0_vec.store(out_ptr0 + static_cast<int64_t>(x0));
                    }
                }
                {
                    if(C10_LIKELY(x0 >= static_cast<int64_t>(0) && x0 < static_cast<int64_t>(98304L)))
                    {
                        auto tmp0 = at::vec::Vectorized<float>::loadu(out_ptr0 + static_cast<int64_t>(x0), static_cast<int64_t>(16));
                        auto tmp1 = static_cast<float>(98304.0);
                        auto tmp2 = at::vec::Vectorized<float>(tmp1);
                        auto tmp3 = tmp0 / tmp2;
                        tmp3.store(in_out_ptr0 + static_cast<int64_t>(x0));
                    }
                }
            }
        }
    }
}
''')

async_compile.wait(globals())
del async_compile

class Runner:
    def __init__(self, partitions):
        self.partitions = partitions

    def recursively_apply_fns(self, fns):
        new_callables = []
        for fn, c in zip(fns, self.partitions):
            new_callables.append(fn(c))
        self.partitions = new_callables

    def call(self, args):
        arg0_1, = args
        args.clear()
        assert_size_stride(arg0_1, (3, 8, 64, 64), (32768, 4096, 64, 1))
        buf0 = empty_strided_cpu((98304, ), (1, ), torch.float32)
        buf1 = buf0; del buf0  # reuse
        # [Provenance debug handles] cpp_fused_mean_mul_ones_like_view_0:1
        cpp_fused_mean_mul_ones_like_view_0(buf1, arg0_1)
        del arg0_1
        return (buf1, )
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/163144
Approved by: https://github.com/CaoE, https://github.com/mingfeima, https://github.com/jansel
2025-11-11 09:25:00 +00:00
c297b02f12 [DTensor] statically_known_true for slice strategy (#166990)
Avoids data-dependent errors for out-of-bounds & redundant slice checks.

The sharding logic that immediately depends on this only checks for redundant slices, and is saying: "it's safe to reuse the input placements if a) the slicing dimension isn't sharded, or b) the slice is redundant, so just pretend this op didn't happen".

This has a slight effect on output placements when a slice is performed on a sharded dim and dynamic shapes are involved (size/start/end/step). Now, if the slice isn't obviously redundant, we won't immediately consider the input placements valid (even if they could be for very particular runtime shapes), and instead select strategies valid for the general case - in this case, I guess, unsharding the slicing dim.

For backed symbols, we could choose to recompile when the redundant case is hit, by switching to `guard_or_false`, but it's not obvious how desirable this is.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/166990
Approved by: https://github.com/laithsakka
2025-11-11 08:04:09 +00:00
bd24774f50 [XPU][Test] Enable XPU tests in inductor/test_analysis.py (#166840)
This PR enables XPU devices in test_analysis.py.

For performance reasons, it skips some slow tests; the full scope can be enabled by using:

```
export PYTORCH_TEST_WITH_SLOW=1
```

**PR Stack:**

- https://github.com/pytorch/pytorch/pull/166840 : This PR enables the tests and ignores the ones that failed
- https://github.com/pytorch/pytorch/pull/166839 : This one fixes the bug and enables the full test scope for XPU

**Some skipped test time:**

```
test_augment_trace_against_flop_counter_maxat0_xpu_float16 [49.0863s]
test_augment_trace_against_flop_counter_maxat0_xpu_float32 [18.2268s]
test_augment_trace_against_flop_counter_maxat1_xpu_float16 [85.6549s]
test_augment_trace_against_flop_counter_maxat1_xpu_float32 [329.0832s]
test_augment_trace_against_flop_counter_maxat2_xpu_float16 [24.4825s]
test_augment_trace_against_flop_counter_maxat2_xpu_float32 [19.0688s]
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/166840
Approved by: https://github.com/guangyey, https://github.com/jansel
2025-11-11 07:49:07 +00:00
525eb9fab9 Fix command injection vulnerability in PCH compilation (#167502)
Fixed a command injection vulnerability in PreCompiled Header (PCH) compilation where extra_cflags were passed to subprocess with shell=True, allowing arbitrary command execution through malicious compiler flags.

Changed subprocess.check_output(pch_cmd, shell=True) to use shlex.split() to safely parse the command without shell interpretation. This prevents shell metacharacters (;, |, &, etc.) in extra_cflags from being executed as shell commands.
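
A minimal sketch of the difference, with an illustrative command string (not the real `pch_cmd` built by `torch.utils.cpp_extension`):

```python
# Illustrative command with a malicious flag smuggled in via extra_cflags:
import shlex

pch_cmd = "g++ -x c++-header pch.h -o pch.h.gch -O2; touch /tmp/pwned"

# shell=True would have executed "touch /tmp/pwned" as a second shell command.
# shlex.split() keeps everything as plain arguments handed to g++ instead:
print(shlex.split(pch_cmd))
# ['g++', '-x', 'c++-header', 'pch.h', '-o', 'pch.h.gch', '-O2;', 'touch', '/tmp/pwned']
```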

Added test case test_pch_command_injection that verifies:
1. PCH compilation attempts with malicious payloads in extra_cflags
2. Shell commands embedded in flags are not executed
3. Exploit file is not created, proving no shell execution occurred

Note: On RHEL/Fedora and other systems with versioned GCC compilers, the test depends on https://github.com/pytorch/pytorch/pull/167501 being merged first, otherwise the test will be skipped due to GCC detection issues.

Fixes #167480

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167502
Approved by: https://github.com/malfet
2025-11-11 07:27:41 +00:00
7886070fc5 Use stable topological sort in fuse_by_partitions (#167397)
legalize_graph() performs a topo sort that shuffles the nodes in a global way, making the result unpredictable.
We should avoid this in graph passes in general.

This problem was discovered when testing regional_inductor: a single fused region triggers the global reordering.
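
For reference, a minimal stable topological sort sketch (illustrative only, not `legalize_graph` or the FX pass itself): among nodes whose dependencies are already satisfied, the original order is preserved.

```python
def stable_topo_sort(nodes, deps):
    """nodes: list in original order; deps: {node: set of prerequisite nodes}."""
    emitted, out = set(), []
    while len(out) < len(nodes):
        for n in nodes:  # always scan in original order -> result is stable
            if n not in emitted and deps.get(n, set()) <= emitted:
                out.append(n)
                emitted.add(n)
                break
        else:
            raise ValueError("cycle detected")
    return out

print(stable_topo_sort(["a", "b", "c", "d"], {"c": {"a"}, "d": {"b", "c"}}))
# ['a', 'b', 'c', 'd'] -- independent nodes keep their original relative order
```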

Before
https://www.internalfb.com/intern/diffing/?before_paste_number=2029217728&after_paste_number=2029218006&regex_remove_pattern=&enable_regex_remove=0&strip_empty_lines=0&line_wrap=0&selected_tab=plain_diff

After
https://www.internalfb.com/intern/diffing/?paste_number=2029162294&regex_remove_pattern=&enable_regex_remove=0&strip_empty_lines=0&line_wrap=0&selected_tab=plain_diff

Left is gm before regional_inductor, right is after.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167397
Approved by: https://github.com/ezyang
2025-11-11 07:14:02 +00:00
87d17e9dee [pallas backend] Implementing Strided/Scatter Access (#167426)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/167426
Approved by: https://github.com/yarongmu-google, https://github.com/jansel
2025-11-11 06:32:25 +00:00
53422e6bc8 [MPS] Add mechanism for reporting asserts from kernels (#166615)
Allocate an ErrorMessages buffer associated with the MPSStream and introduce a `c10::metal::report_error` method (and a `TORCH_REPORT_ERROR` macro) that can be used to preserve up to `c10::metal::error_message_count` messages

Add test that detects those

As a result, attempting to run something like
```python
import torch
x=torch.rand(10, 1, 10, device='mps')
y=x[:, [1]]
torch.mps.synchronize()
```
will raise `torch.AcceleratorError: index 1 is out of bounds for dimension 0 with size 1`

Fixes https://github.com/pytorch/pytorch/issues/111669
Pull Request resolved: https://github.com/pytorch/pytorch/pull/166615
Approved by: https://github.com/manuelcandales, https://github.com/dcci
ghstack dependencies: #167444, #167445
2025-11-11 06:28:14 +00:00
c34b743eac [Dynamo] Support for xor (#166065)
Add missing support for xor (and maybe some other binary ops later on)

Fixes #146688

Pull Request resolved: https://github.com/pytorch/pytorch/pull/166065
Approved by: https://github.com/ezyang
2025-11-11 05:44:08 +00:00
db250fa895 Revert "Expose THPVariable_Wrap() with a type argument (#167488)"
This reverts commit 52a6b5a4cc9f938b9cda102fb506fd0e4b32ecad.

Reverted https://github.com/pytorch/pytorch/pull/167488 on behalf of https://github.com/pytorch-auto-revert due to Reverted automatically by pytorch's autorevert, to avoid this behaviour add the tag autorevert: disable ([comment](https://github.com/pytorch/pytorch/pull/167488#issuecomment-3515070469))
2025-11-11 05:39:40 +00:00
52231a7974 show current env before running lint (#166860)
There seems to be some discrepancy between CI and local runs for Pyrefly, so logging these to be able to check for different dependency versions
Pull Request resolved: https://github.com/pytorch/pytorch/pull/166860
Approved by: https://github.com/janeyx99
2025-11-11 05:28:31 +00:00
cf71c53eae Fix check_compiler_is_gcc to detect versioned GCC compilers (#167501)
The function was only returning True for compilers named 'c++', but failed to detect g++, gcc, g++-13, g++-14, etc. This fixes the detection to work for any GCC variant by checking for both COLLECT_GCC and 'gcc version' in the compiler output.

The previous implementation used os.path.basename() on the resolved compiler path and only checked if it exactly matched 'c++'. This caused false negatives for versioned GCC installations and direct g++ usage.

The fix simplifies the logic: if both COLLECT_GCC is present in the output (indicating GCC toolchain) and 'gcc version' appears in the version string, it's GCC.
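
A hedged sketch of that check (the real helper in `torch.utils.cpp_extension` may differ in details):

```python
# Sketch: a GCC toolchain prints COLLECT_GCC and a "gcc version" banner to stderr.
import subprocess

def compiler_is_gcc(compiler: str) -> bool:
    proc = subprocess.run([compiler, "-v"], capture_output=True, text=True)
    output = proc.stdout + proc.stderr
    return "COLLECT_GCC" in output and "gcc version" in output

# compiler_is_gcc("g++-13") -> True on a system where GCC 13 is installed
```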

Fixes #167499

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167501
Approved by: https://github.com/ezyang, https://github.com/malfet
2025-11-11 05:14:05 +00:00
f9caae42ed [MPS] Move dispatch_sync_with_rethrow to MPSStream (#167445)
And wrap copy-sync dispatches with the rethrow-wrapped method

Needed if an exception could be raised during the sync, for example when surfacing async errors
Pull Request resolved: https://github.com/pytorch/pytorch/pull/167445
Approved by: https://github.com/Skylion007, https://github.com/manuelcandales, https://github.com/dcci
ghstack dependencies: #167444
2025-11-11 05:03:50 +00:00
52a6b5a4cc Expose THPVariable_Wrap() with a type argument (#167488)
For torchdistx, which is only *mostly* dead

Differential Revision: D86712979

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167488
Approved by: https://github.com/soulitzer
2025-11-11 04:56:50 +00:00
94f6f79e27 [3/N] Use Python 3.10 typing (#167431)
This PR applies new Union and Optional typing syntax to some files.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167431
Approved by: https://github.com/ezyang
2025-11-11 04:40:05 +00:00
5676de1157 [PT2] Supply index to fake tensors on mtia device (#167457)
Summary:
When PT2 sees an operation like `torch.empty_strided(8, device="cuda")` it returns a fake tensor on a specific device index, such as `"cuda:0"`. This is implemented via a list of multi-index devices in the fake tensor constructor. If a device supports indices but is not in this list, we can hit fake tensor propagation errors as `"cuda"` is not considered to be the same device as `"cuda:0"`.

This PR adds the `"mtia"` device to this list, to resolve some fake tensor propagation errors we're seeing with the full Inductor backend. (Internal task: T243176905)
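
A hedged illustration of the index normalization being described, using CUDA since it is already in the list (exact internals and availability constraints may differ):

```python
# Sketch: under fake tensor propagation an index-less device string is
# normalized to an indexed one, so "cuda" and "cuda:0" agree (same for "mtia").
import torch
from torch._subclasses import FakeTensorMode

with FakeTensorMode():
    t = torch.empty_strided((8,), (1,), device="cuda")
    print(t.device)  # cuda:0
```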

Test Plan: Tests are stacked on the internal diff D86605248.

Reviewed By: StellarrZ

Differential Revision: D86605248

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167457
Approved by: https://github.com/eellison, https://github.com/mlazos
2025-11-11 04:31:28 +00:00
2ca0b3f70a [simplefsdp] fix autobucketing pass that takes comm op as input (#167484)
Fix for issue: https://github.com/pytorch/torchtitan/issues/2004

The root cause is that we are scheduling comm ops that are used as inputs to the bwd graph.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167484
Approved by: https://github.com/eellison
2025-11-11 04:09:02 +00:00
b06453c7cf Make PT2 compile backprop through custom op without autograd key a hard error (#166367)
Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/166367
Approved by: https://github.com/bdhirsh
2025-11-11 03:16:30 +00:00
f0fa39a7e4 Revert "[inductor, 3.14] fix itertools.product pickle error in test_cpu_repro (#167382)"
This reverts commit 5320ca3725c4ccf2811c211b48af1ddebb2b471f.

Reverted https://github.com/pytorch/pytorch/pull/167382 on behalf of https://github.com/jeanschmidt due to merged stack seems to have introduced regressions on windows: test/test_dataloader.py::TestDataLoaderPersistentWorkers::test_worker_init_fn_forkserver [GH job link](https://github.com/pytorch/pytorch/actions/runs/19245763893/job/55023427916) [HUD commit link](fe0bb7cf60) ([comment](https://github.com/pytorch/pytorch/pull/167382#issuecomment-3514782788))
2025-11-11 02:57:09 +00:00
b5142f74f9 Revert "[inductor, 3.14] catch pickle.PicklingError exceptions (#167383)"
This reverts commit ad7db3617ec5cc3aa384bd4408fcfbc2acac1a98.

Reverted https://github.com/pytorch/pytorch/pull/167383 on behalf of https://github.com/jeanschmidt due to merged stack seems to have introduced regressions on windows: test/test_dataloader.py::TestDataLoaderPersistentWorkers::test_worker_init_fn_forkserver [GH job link](https://github.com/pytorch/pytorch/actions/runs/19245763893/job/55023427916) [HUD commit link](fe0bb7cf60) ([comment](https://github.com/pytorch/pytorch/pull/167382#issuecomment-3514782788))
2025-11-11 02:57:09 +00:00
a14452bfce Revert "[dynamo, 3.14] enable dynamo in 3.14 (#167384)"
This reverts commit 17e70ae459c45d85ef77afa4d19efe5f8b44f573.

Reverted https://github.com/pytorch/pytorch/pull/167384 on behalf of https://github.com/jeanschmidt due to merged stack seems to have introduced regressions on windows: test/test_dataloader.py::TestDataLoaderPersistentWorkers::test_worker_init_fn_forkserver [GH job link](https://github.com/pytorch/pytorch/actions/runs/19245763893/job/55023427916) [HUD commit link](fe0bb7cf60) ([comment](https://github.com/pytorch/pytorch/pull/167382#issuecomment-3514782788))
2025-11-11 02:57:09 +00:00
619f329a4b Revert "[3.14, dataloader] handle forkserver default mp start method in 3.14 (#167387)"
This reverts commit cf63b212e330836c2be92bef903f5a5d0dc2c7e9.

Reverted https://github.com/pytorch/pytorch/pull/167387 on behalf of https://github.com/jeanschmidt due to merged stack seems to have introduced regressions on windows: test/test_dataloader.py::TestDataLoaderPersistentWorkers::test_worker_init_fn_forkserver [GH job link](https://github.com/pytorch/pytorch/actions/runs/19245763893/job/55023427916) [HUD commit link](fe0bb7cf60) ([comment](https://github.com/pytorch/pytorch/pull/167382#issuecomment-3514782788))
2025-11-11 02:57:09 +00:00
7a48db0809 Revert "[export, 3.14] handle patching methods with functools.partial correctly in non-strict export (#167396)"
This reverts commit fe0bb7cf6001532b14bba14d686baa1ff0b98de0.

Reverted https://github.com/pytorch/pytorch/pull/167396 on behalf of https://github.com/jeanschmidt due to merged stack seems to have introduced regressions on windows: test/test_dataloader.py::TestDataLoaderPersistentWorkers::test_worker_init_fn_forkserver [GH job link](https://github.com/pytorch/pytorch/actions/runs/19245763893/job/55023427916) [HUD commit link](fe0bb7cf60) ([comment](https://github.com/pytorch/pytorch/pull/167382#issuecomment-3514782788))
2025-11-11 02:57:08 +00:00
406f2943d2 Revert "Rework PyObject preservation (#166342)"
This reverts commit 6ca8cc6edf30b5ca882d4871af617e674b6cdd47.

Reverted https://github.com/pytorch/pytorch/pull/166342 on behalf of https://github.com/jeanschmidt due to seems to have introduced test/test_reductions.py::TestReductionsCPU::test_dim_reduction_fns_fn_name_var_cpu_int8 [GH job link](https://github.com/pytorch/pytorch/actions/runs/19247187935/job/55027440149) [HUD commit link](6ca8cc6edf) ([comment](https://github.com/pytorch/pytorch/pull/166342#issuecomment-3514771276))
2025-11-11 02:54:00 +00:00
c3bc56c8b4 [xpu][fix] Format XPU c10 and aten code (#167298)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/167298
Approved by: https://github.com/cyyever, https://github.com/albanD, https://github.com/Skylion007
2025-11-11 02:07:37 +00:00
b2be4d24c0 [DTensor] Make ExplicitRedistributeContext strict/non-strict mode (#167370)
Also support nesting, enable/disable, and make the class use a
thread-local for storage so independent threads do not confuse each
other.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167370
Approved by: https://github.com/ezyang
ghstack dependencies: #166593
2025-11-11 01:19:16 +00:00
8d5cceeb6a [torchbench][optimus] Add backend optimus (#167357)
Summary: `--optimus [all | vertical_opt | horizontal_opt]` will kick off inductor compile with different fusion strategies.

Test Plan:
TorchBench Runner:

```
$ buck2 run mode/opt //pytorch/benchmark:run -- customized_optimus_illustrative -t train -d cuda
GPU Time per batch:   56.254 milliseconds
CPU Wall Time per batch:  56.326 milliseconds
CPU Wall Time:        56.326 milliseconds
Time to first batch:          420.0777 ms
GPU 0 Peak Memory:              0.0695 GB
CPU Peak Memory:              359.6362 GB
```

PT2 Benchmark Runner (comparing with eager):

```
buck2 run mode/opt //pytorch/benchmark:pt2 -- --only customized_optimus_illustrative  --performance --training --inductor

running benchmark: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 30/30 [00:02<00:00, 14.37it/s]
4.509x
```

eager latency: ~56 ms
inductor latency: ~11 ms

Optimus backend:

```
$ buck2 run mode/opt //pytorch/benchmark:pt2 -- --only customized_optimus_illustrative --performance --training --optimus all
11.02923508733511 ms, 13.884015614166856 ms, 0.794x
```

```
$ buck2 run mode/opt //pytorch/benchmark:pt2 -- --only customized_optimus_illustrative --performance --training --optimus vertical_opt
12.47156853787601 ms, 10.699485195800662 ms, 1.166x
```

```
$ buck2 run mode/opt //pytorch/benchmark:pt2 -- --only customized_optimus_illustrative --performance --training --optimus horizontal_opt
11.078484123572707 ms, 10.797873372212052 ms, 1.026x
```

optimus latency ~10 ms

Differential Revision: D86524903

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167357
Approved by: https://github.com/mengluy0125
2025-11-11 00:35:30 +00:00
f6331192b4 Enable Doc builds for: Minor Releases RCs. Minor and Patch Releases final RC (#167478)
Enable Doc builds for
1. Minor Releases RCs
2. Minor and Patch Releases final RC

This is done to prevent publishing doc for patch releases when building rcs.
See:
https://github.com/pytorch/docs/pull/57

Followup after: https://github.com/pytorch/pytorch/pull/153973
Pull Request resolved: https://github.com/pytorch/pytorch/pull/167478
Approved by: https://github.com/svekars, https://github.com/seemethere
2025-11-11 00:34:08 +00:00
f8d408d24a [CP] Correctly compile create_cp_block_mask (#167153)
Currently we re-compile create_block_mask every time, which is not very efficient, and the global compilation also causes some issues. This PR compiles create_block_mask lazily and does it only once.
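
A minimal sketch of the "compile lazily, once" pattern (hypothetical helper names, not the actual context-parallel code):

```python
# Sketch: defer torch.compile of create_block_mask to first use and cache it.
import functools

import torch
from torch.nn.attention.flex_attention import create_block_mask

@functools.lru_cache(maxsize=1)
def _compiled_create_block_mask():
    return torch.compile(create_block_mask)

def create_cp_block_mask_lazily(*args, **kwargs):
    return _compiled_create_block_mask()(*args, **kwargs)
```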

Fixes https://github.com/pytorch/pytorch/issues/167064

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167153
Approved by: https://github.com/drisspg, https://github.com/XilunWu
2025-11-11 00:03:06 +00:00
5a85b6eaf8 Migrate TypeTraits, TypeList, Metaprogramming to torch:: headeronly (#167386)
Taking over #163634; adding tests/headeronly APIs

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167386
Approved by: https://github.com/albanD, https://github.com/mikaylagawarecki
2025-11-11 00:02:20 +00:00
e3d6896d08 [MTIAGraph][Pytorch][3/n] Implement mtia_graph python wrapper in pytorch (#166964)
- Add python module `mtia_graph.py`, which is a wrapper on top of the c++ logic implemented in previous PRs/diffs
- Add python level integration tests

[Doc](https://docs.google.com/document/d/1Q3xdZAIqhBvuy2HxGDfJyXVmxYXUEeYSZSwsp7bcJF8/edit?tab=t.osb46a42t6wb)

Differential Revision: [D84673488](https://our.internmc.facebook.com/intern/diff/D84673488/)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/166964
Approved by: https://github.com/malfet
2025-11-10 23:49:20 +00:00
9d9e7c7b1c [Pytorch] Extend OSS conversion benchmarks (#167099)
Summary: We are extending the OSS conversion benchmarks to include all combinations between types

Test Plan: CI

Differential Revision: D86315975

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167099
Approved by: https://github.com/mcfi
2025-11-10 23:36:57 +00:00
4c3721fe70 allow sym_stride, and sym_size lowering in inductor to return ints (#167345)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/167345
Approved by: https://github.com/eellison
2025-11-10 23:29:23 +00:00
8ef4099313 Revert "Add min/max support for barebones uint types (#166813)"
This reverts commit 9ffc480c5a928eaccb4ac0e1755a1c596674d884.

Reverted https://github.com/pytorch/pytorch/pull/166813 on behalf of https://github.com/jeanschmidt due to It was reverted internally 6 days ago, but not reverted on OSS, this is causing conflicts ([comment](https://github.com/pytorch/pytorch/pull/166813#issuecomment-3514328895))
2025-11-10 23:25:22 +00:00
de773364be Support AC in default partitioner when functionalization is enabled (#166610)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/166610
Approved by: https://github.com/SherlockNoMad
ghstack dependencies: #166536
2025-11-10 23:09:01 +00:00
47da714b8b [inductor][determinism] type errors + use odc to dump imc on exit (#167136)
Summary: fix some type errors; also, instead of manually creating a filelock when dumping dcache's imc to a file, we simply use an odc (since this is the intended behavior of odc anyway)

Test Plan:
```
buck test fbcode//mode/opt caffe2/test/inductor:caching
```

Differential Revision: D86345594

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167136
Approved by: https://github.com/aorenste
2025-11-10 22:51:03 +00:00
69ab1f93e4 Add shim for at::get_num_threads (#167362)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/167362
Approved by: https://github.com/janeyx99
ghstack dependencies: #166579, #166694, #166695
2025-11-10 22:21:14 +00:00
232baa33b3 Redo add parallel_for to torch/csrc/stable (#166695)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/166695
Approved by: https://github.com/malfet
ghstack dependencies: #166579, #166694
2025-11-10 22:21:14 +00:00
6f0182495f Add stable::Tensor.device() (#166694)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/166694
Approved by: https://github.com/janeyx99
ghstack dependencies: #166579
2025-11-10 22:21:14 +00:00
7da82b84e2 Add torch::stable::Device (#166579)
Prior to this PR, the IValue <-> StableIValue conversion for `DeviceObjType` (aka c10::Device) was to pack it into the leading bits of the StableIValue (which is a uint64_t)

After this PR, the IValue <-> StableIValue conversion for `DeviceObjType` expects DeviceType to be packed into the upper 32 bits of StableIValue and DeviceIndex to be packed into the lower 32 bits
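
For illustration, the packing described above in Python (sketch only; the real conversion lives in the C++ shim):

```python
# Pack DeviceType into the upper 32 bits and DeviceIndex into the lower 32 bits
# of a 64-bit StableIValue.
def pack_device(device_type: int, device_index: int) -> int:
    return ((device_type & 0xFFFFFFFF) << 32) | (device_index & 0xFFFFFFFF)

def unpack_device(packed: int) -> tuple[int, int]:
    return packed >> 32, packed & 0xFFFFFFFF

assert unpack_device(pack_device(1, 3)) == (1, 3)  # e.g. (CUDA, index 3)
```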

Pull Request resolved: https://github.com/pytorch/pytorch/pull/166579
Approved by: https://github.com/janeyx99
2025-11-10 22:21:14 +00:00
cda7604434 [ez] Remove spammy deprecation log (#167470)
"
/packages/pytorch_latest_sixlib_conda/conda/lib/python3.12/site-packages/torch/_dynamo/variables/user_defined.py:1815: FutureWarning: `isinstance(treespec, LeafSpec)` is deprecated, use `isinstance(treespec, TreeSpec) and treespec.is_leaf()` instead.
  return ctor(*args, **kwargs)"

is too spammy

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167470
Approved by: https://github.com/tugsbayasgalan
2025-11-10 21:49:23 +00:00
6ca8cc6edf Rework PyObject preservation (#166342)
Make the PyObject preservation scheme thread-safe with free threaded (nogil) Python. The general idea is:

* Python Tensor and Storage objects always hold a strong reference to their underlying c10 object
* c10 objects hold a strong reference to their Python objects if there's at least one other reference to the c10 object

This is implemented in `intrusive_ptr`:

* The top most bit (`kHasPyObject`) from the weakref count is now used to indicate if the `intrusive_ptr_target` has an associated PyObject. So `kHasPyObject` is one bit, the weakref count is now 31 bits and the strong refcount remains 32 bits.
* When the reference count increases from one to two and `kHasPyObject` is set, we incref the associated Python object to ensure that it's kept alive.
* When the reference count decreases from two to one (i.e., there are no C++ references to the `intrusive_ptr_target` other than the one from the Python object), we decref the associated Python object to break the cycle.

Other benefits:

* We can delete a lot of the copypasta from Python internal `subtype_dealloc`
* This fixes the weakref and GC bugs we had in the previous scheme. Python weakrefs on Tensors and Storages should just work as expected now.

Risks:

* Extra branch for reference count operations on `intrusive_ptr<TensorImpl>`, `intrusive_ptr<StorageImpl>`, and the generic `intrusive_ptr<intrusive_ptr_target>` even when we're not using Python.
* It's a big change
Pull Request resolved: https://github.com/pytorch/pytorch/pull/166342
Approved by: https://github.com/albanD
2025-11-10 21:47:53 +00:00
bb37483464 Use c7i.2xlarge for B200 build (#167078)
The build system is oversized for what is necessary. Reduce the size to optimize costs. The default workflow runner is `linux.c7i.2xlarge` so we are just removing the runner definition in the workflow so that it uses the default.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167078
Approved by: https://github.com/nWEIdia, https://github.com/seemethere
2025-11-10 21:45:45 +00:00
2751b1d3c3 Support repr on user defined objects (#167372)
Fixes: https://github.com/pytorch/pytorch/issues/167369

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167372
Approved by: https://github.com/anijain2305
2025-11-10 21:06:37 +00:00
fe0bb7cf60 [export, 3.14] handle patching methods with functools.partial correctly in non-strict export (#167396)
Note: dynamo is not affected by this since patching class methods is not supported right now.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167396
Approved by: https://github.com/angelayi
ghstack dependencies: #167382, #167383, #167384, #167387
2025-11-10 20:52:05 +00:00
cf63b212e3 [3.14, dataloader] handle forkserver default mp start method in 3.14 (#167387)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/167387
Approved by: https://github.com/malfet
ghstack dependencies: #167382, #167383, #167384
2025-11-10 20:52:05 +00:00
17e70ae459 [dynamo, 3.14] enable dynamo in 3.14 (#167384)
dynamo tests are passing in the CI PR above - so we could probably just enable dynamo right now.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167384
Approved by: https://github.com/Skylion007, https://github.com/mlazos
ghstack dependencies: #167382, #167383
2025-11-10 20:52:05 +00:00
ad7db3617e [inductor, 3.14] catch pickle.PicklingError exceptions (#167383)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/167383
Approved by: https://github.com/aorenste
ghstack dependencies: #167382
2025-11-10 20:52:04 +00:00
5320ca3725 [inductor, 3.14] fix itertools.product pickle error in test_cpu_repro (#167382)
`inductor/test_cpu_cpp_wrapper` was failing since it was attempting to pickle `itertools.product`, which is no longer picklable in 3.14. We work around this by eagerly generating a list.
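
Sketch of the workaround with illustrative parameters:

```python
# Before (fails to pickle on 3.14): params = itertools.product(dtypes, flags)
# After: materialize eagerly so params is a plain, picklable list.
import itertools

dtypes, flags = ["float32", "bfloat16"], [True, False]
params = list(itertools.product(dtypes, flags))
```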

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167382
Approved by: https://github.com/atalman, https://github.com/malfet
2025-11-10 20:52:04 +00:00
3e4faca130 [torch.export] Refactor placeholder_naming_pass to reduce CCN (#166600)
Summary: Reduced CCN from 37 to 28 of placeholder_naming_pass method

Test Plan: Existing tests

Differential Revision: D85820388

Pull Request resolved: https://github.com/pytorch/pytorch/pull/166600
Approved by: https://github.com/angelayi
2025-11-10 20:44:18 +00:00
0c2f206ded Typo fix - baddbmm_strategy (#166963)
This is called via registration with a decorator, so the function is not called directly. For clarity, add the "b" for "batch" to the function name.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/166963
Approved by: https://github.com/janeyx99
2025-11-10 20:35:42 +00:00
6cf21fa331 Fix -ffunction-sections, -fdata-sections not being added on aarch64. (#166407)
Preferred solution to #166380

Changes:

- Moved summary print to bottom of CMakeLists.txt
- Fixed the problem that 'add_compile_options' should be called before targets are defined, so opted for `append_cxx_flag_if_supported` and `append_c_flag_if_supported` (new).
- Added extra verbosity so it can be seen when the linker script is added.

(Unfortunately, the linker script has to be added per-target rather than globally due to ninja/cmake dependency tracking.)

Also move summary print to bottom of CMakeLists.txt and improve logging
Pull Request resolved: https://github.com/pytorch/pytorch/pull/166407
Approved by: https://github.com/Aidyn-A, https://github.com/atalman
2025-11-10 20:32:08 +00:00
cdc8460f2c Use c7i.2xlarge for H100 build (#167466)
The build system may be oversized for what is necessary. Reduce the size to optimize costs. The default workflow runner is linux.c7i.2xlarge so we are just removing the runner definition in the workflow so that it uses the default.

Relates to pytorch/test-infra#7175.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167466
Approved by: https://github.com/seemethere
2025-11-10 20:20:54 +00:00
86130aa2ca Fix flaky memory profiler test [2] (#167268)
Fixes #167037

Move the module definition outside of the unit test so when we run the unit test multiple times, the module is not re-compiled.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/167268
Approved by: https://github.com/angelayi
2025-11-10 19:51:38 +00:00
9491830c79 move subgraph_has_impure_ops from node.is_impure into const_fold to unblock production (#167443)
Summary:
https://github.com/pytorch/pytorch/pull/166609 updates `node.is_impure` to consider a submodule as impure if the submodule contains an impure node. This in turn changes the behavior of `graph.eliminate_dead_code()`, which does not eliminate nodes with side effects; see the [pytorch documentation](https://docs.pytorch.org/docs/stable/fx.html#torch.fx.Graph.eliminate_dead_code)
> Remove all dead code from the graph, based on each node’s number of users, and whether the nodes have any side effects.

While it is correct that a submodule containing side-effectful ops is itself side-effectful and should not be dead-code eliminated, some customers rely on dead code elimination to remove submodules that contain impure ops, which was the behavior before the #166609 fix.

Due to production environment constraints, we have to revert https://github.com/pytorch/pytorch/pull/166609 and move the side-effectful submodule check logic to `const_fold.py`, which will correctly **not** const-fold a submodule that contains impure ops.

NOTE: other call sites that use `node.is_impure()` to make decisions are still incorrectly eliminating side-effectful submodules, but we can't safely change that today.

## This pr
- move `_subgraph_has_impure_op` into `fx/experimental/const_fold.py` and use it to check and prevent const-folding an impure submodule (see the sketch after this list)
- added a note in `node.is_impure` to highlight the incorrect behavior and context in case people go looking in the future.
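
A hedged sketch of the shape of that check (not the exact helper):

```python
# Sketch: a call_module node counts as impure if its subgraph has any impure node.
import torch.fx as fx

def _subgraph_has_impure_op(gm: fx.GraphModule, node: fx.Node) -> bool:
    if node.op != "call_module":
        return False
    submod = gm.get_submodule(node.target)
    return isinstance(submod, fx.GraphModule) and any(
        n.is_impure() for n in submod.graph.nodes
    )
```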

Test Plan: run test_fx_const_fold and all tests pass

Differential Revision: D86641994

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167443
Approved by: https://github.com/jfix71
2025-11-10 19:29:54 +00:00
04a85b4c21 [compile-on-one-rank] Step 1: DeviceId (#166680)
Add a "--virtual-local-rank" mode to torchrun. When used instead of passing the
local rank in LOCAL_RANK it uses a LOCAL_RANK of "0" and adjusts
CUDA_VISIBLE_DEVICES to reflect the desired GPU index.
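
A minimal sketch of the environment mapping being described (hypothetical helper, not torchrun's actual implementation):

```python
# Sketch: expose a single visible GPU and report LOCAL_RANK=0 to the worker.
def virtual_local_rank_env(local_rank: int) -> dict[str, str]:
    return {
        "LOCAL_RANK": "0",
        "CUDA_VISIBLE_DEVICES": str(local_rank),
    }

print(virtual_local_rank_env(3))
# {'LOCAL_RANK': '0', 'CUDA_VISIBLE_DEVICES': '3'}
```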

Testing:
(tweaked run_train.sh to use `--log-dir`)
```
export NGPU=8
export CONFIG_FILE="./torchtitan/models/llama3/train_configs/debug_model.toml"
with-proxy ./run_train.sh --model.name compiler_toolkit.llama3 --compile.enable --parallelism.data_parallel_shard_degree=2 --parallelism.tensor_parallel_degree=4
```

And then comparing ranks:

Without --virtual-local-rank there are a lot of differences like:
```
 [rank#]:        mul_1: "f32[8, 512, 256]" = torch.ops.aten.mul.Tensor(mul, view_9);  mul = None
-[rank#]:        _to_copy_3: "bf16[8, 512, 256]" = torch.ops.aten._to_copy.default(mul_1, dtype = torch.bfloat16, layout = torch.strided, device = device(type='cuda', index=0));  mul_1 = None
+[rank#]:        _to_copy_3: "bf16[8, 512, 256]" = torch.ops.aten._to_copy.default(mul_1, dtype = torch.bfloat16, layout = torch.strided, device = device(type='cuda', index=1));  mul_1 = None
 [rank#]:        detach: "f32[8, 512, 1]" = torch.ops.aten.detach.default(rsqrt);  rsqrt = None
```

With --virtual-local-rank, those differences go away.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/166680
Approved by: https://github.com/ezyang
2025-11-10 18:47:31 +00:00
a4437d76f0 Add some labeler rules that used to be in the autolabel bot (#167330)
See https://github.com/pytorch/test-infra/pull/7446 for the paths

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167330
Approved by: https://github.com/huydhn
2025-11-10 18:38:42 +00:00
3ea829a337 Fix torch.cond HOP device in inductor (#167354)
Fixes #166918

The output may not be on the same device as the predicate.
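
A hedged repro sketch of that situation (assumes a CUDA device is available; the actual regression test is `test_output_on_different_device`):

```python
# Sketch: CPU predicate, CUDA branch outputs -- the compiled graph must place
# the output on the branch device, not the predicate device.
import torch

def true_fn(x):
    return x.sin()

def false_fn(x):
    return x.cos()

x = torch.randn(4, device="cuda")
pred = torch.tensor(True)  # stays on CPU

out = torch.compile(lambda p, t: torch.cond(p, true_fn, false_fn, (t,)))(pred, x)
print(out.device)  # cuda:0
```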

```
python test/inductor/test_control_flow.py -k test_output_on_different_device
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167354
Approved by: https://github.com/ydwu4, https://github.com/zou3519
2025-11-10 18:19:38 +00:00
3966b5ad05 [BE] Fix out-of-bounds index_put in test_mps.py (#167444)
Discovered while enabling assertions on out-of-bounds accesses. Otherwise the test fails with
```
ERROR: test_sdpa_mask_fp16_L6_S17_NH23_HS121 (__main__.TestSDPA.test_sdpa_mask_fp16_L6_S17_NH23_HS121)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/Users/malfet/git/pytorch/pytorch/torch/testing/_internal/common_utils.py", line 3334, in wrapper
    method(*args, **kwargs)
    ~~~~~~^^^^^^^^^^^^^^^^^
  File "/Users/malfet/git/pytorch/pytorch/build/../test/test_mps.py", line 9494, in test_sdpa_mask_fp16_L6_S17_NH23_HS121
    self._test_sdpa_mask(torch.float16, 7, 17, 23, 121)
    ~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/malfet/git/pytorch/pytorch/build/../test/test_mps.py", line 9478, in _test_sdpa_mask
    y_ref = F.scaled_dot_product_attention(q.cpu(), k.cpu(), v.cpu(), attn_mask=mask.cpu(), dropout_p=0.0, is_causal=False)
                                           ~~~~~^^
torch.AcceleratorError: index out of range

```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/167444
Approved by: https://github.com/Skylion007, https://github.com/manuelcandales
2025-11-10 18:19:28 +00:00
442 changed files with 14629 additions and 4513 deletions

View File

@ -30,7 +30,6 @@ into a tarball, with the following structure:
More specifically, `build_magma.sh` copies over the relevant files from the `package_files` directory depending on the ROCm version.
Outputted binaries should be in the `output` folder.
## Pushing
Packages can be uploaded to an S3 bucket using:

View File

@ -96,7 +96,6 @@ function pip_build_and_install() {
python3 -m pip wheel \
--no-build-isolation \
--no-deps \
--no-use-pep517 \
-w "${wheel_dir}" \
"${build_target}"
fi
@ -308,6 +307,28 @@ function install_torchao() {
pip_build_and_install "git+https://github.com/pytorch/ao.git@${commit}" dist/ao
}
function install_flash_attn_cute() {
echo "Installing FlashAttention CuTe from GitHub..."
# Grab latest main til we have a pinned commit
local flash_attn_commit
flash_attn_commit=$(git ls-remote https://github.com/Dao-AILab/flash-attention.git HEAD | cut -f1)
# Clone the repo to a temporary directory
rm -rf flash-attention-build
git clone --depth 1 --recursive https://github.com/Dao-AILab/flash-attention.git flash-attention-build
pushd flash-attention-build
git checkout "${flash_attn_commit}"
# Install only the 'cute' sub-directory
pip_install -e flash_attn/cute/
popd
# remove the local repo
rm -rf flash-attention-build
echo "FlashAttention CuTe installation complete."
}
function print_sccache_stats() {
echo 'PyTorch Build Statistics'
sccache --show-stats

View File

@ -100,6 +100,337 @@ def check_lib_statically_linked_libstdc_cxx_abi_symbols(lib: str) -> None:
)
def _compile_and_extract_symbols(
cpp_content: str, compile_flags: list[str], exclude_list: list[str] | None = None
) -> list[str]:
"""
Helper to compile a C++ file and extract all symbols.
Args:
cpp_content: C++ source code to compile
compile_flags: Compilation flags
exclude_list: List of symbol names to exclude. Defaults to ["main"].
Returns:
List of all symbols found in the object file (excluding those in exclude_list).
"""
import subprocess
import tempfile
if exclude_list is None:
exclude_list = ["main"]
with tempfile.TemporaryDirectory() as tmpdir:
tmppath = Path(tmpdir)
cpp_file = tmppath / "test.cpp"
obj_file = tmppath / "test.o"
cpp_file.write_text(cpp_content)
result = subprocess.run(
compile_flags + [str(cpp_file), "-o", str(obj_file)],
capture_output=True,
text=True,
timeout=60,
)
if result.returncode != 0:
raise RuntimeError(f"Compilation failed: {result.stderr}")
symbols = get_symbols(str(obj_file))
# Return all symbol names, excluding those in the exclude list
return [name for _addr, _stype, name in symbols if name not in exclude_list]
def check_stable_only_symbols(install_root: Path) -> None:
"""
Test TORCH_STABLE_ONLY and TORCH_TARGET_VERSION by compiling test code and comparing symbol counts.
This approach tests:
1. WITHOUT macros -> many torch symbols exposed
2. WITH TORCH_STABLE_ONLY -> zero torch symbols (all hidden)
3. WITH TORCH_TARGET_VERSION -> zero torch symbols (all hidden)
4. WITH both macros -> zero torch symbols (all hidden)
"""
include_dir = install_root / "include"
assert include_dir.exists(), f"Expected {include_dir} to be present"
test_cpp_content = """
// Main torch C++ API headers
#include <torch/torch.h>
#include <torch/all.h>
// ATen tensor library
#include <ATen/ATen.h>
// Core c10 headers (commonly used)
#include <c10/core/Device.h>
#include <c10/core/DeviceType.h>
#include <c10/core/ScalarType.h>
#include <c10/core/TensorOptions.h>
#include <c10/util/Optional.h>
int main() { return 0; }
"""
base_compile_flags = [
"g++",
"-std=c++17",
f"-I{include_dir}",
f"-I{include_dir}/torch/csrc/api/include",
"-c", # Compile only, don't link
]
# Compile WITHOUT any macros
symbols_without = _compile_and_extract_symbols(
cpp_content=test_cpp_content,
compile_flags=base_compile_flags,
)
# We expect constexpr symbols, inline functions used by other headers etc.
# to produce symbols
num_symbols_without = len(symbols_without)
print(f"Found {num_symbols_without} symbols without any macros defined")
assert num_symbols_without != 0, (
"Expected a non-zero number of symbols without any macros"
)
# Compile WITH TORCH_STABLE_ONLY (expect 0 symbols)
compile_flags_with_stable_only = base_compile_flags + ["-DTORCH_STABLE_ONLY"]
symbols_with_stable_only = _compile_and_extract_symbols(
cpp_content=test_cpp_content,
compile_flags=compile_flags_with_stable_only,
)
num_symbols_with_stable_only = len(symbols_with_stable_only)
assert num_symbols_with_stable_only == 0, (
f"Expected no symbols with TORCH_STABLE_ONLY macro, but found {num_symbols_with_stable_only}"
)
# Compile WITH TORCH_TARGET_VERSION (expect 0 symbols)
compile_flags_with_target_version = base_compile_flags + [
"-DTORCH_TARGET_VERSION=1"
]
symbols_with_target_version = _compile_and_extract_symbols(
cpp_content=test_cpp_content,
compile_flags=compile_flags_with_target_version,
)
num_symbols_with_target_version = len(symbols_with_target_version)
assert num_symbols_with_target_version == 0, (
f"Expected no symbols with TORCH_TARGET_VERSION macro, but found {num_symbols_with_target_version}"
)
# Compile WITH both macros (expect 0 symbols)
compile_flags_with_both = base_compile_flags + [
"-DTORCH_STABLE_ONLY",
"-DTORCH_TARGET_VERSION=1",
]
symbols_with_both = _compile_and_extract_symbols(
cpp_content=test_cpp_content,
compile_flags=compile_flags_with_both,
)
num_symbols_with_both = len(symbols_with_both)
assert num_symbols_with_both == 0, (
f"Expected no symbols with both macros, but found {num_symbols_with_both}"
)
def check_stable_api_symbols(install_root: Path) -> None:
"""
Test that stable API headers still expose symbols with TORCH_STABLE_ONLY.
The torch/csrc/stable/c/shim.h header is tested in check_stable_c_shim_symbols
"""
include_dir = install_root / "include"
assert include_dir.exists(), f"Expected {include_dir} to be present"
stable_dir = include_dir / "torch" / "csrc" / "stable"
assert stable_dir.exists(), f"Expected {stable_dir} to be present"
stable_headers = list(stable_dir.rglob("*.h"))
if not stable_headers:
raise RuntimeError("Could not find any stable headers")
includes = []
for header in stable_headers:
rel_path = header.relative_to(include_dir)
includes.append(f"#include <{rel_path.as_posix()}>")
includes_str = "\n".join(includes)
test_stable_content = f"""
{includes_str}
int main() {{ return 0; }}
"""
compile_flags = [
"g++",
"-std=c++17",
f"-I{include_dir}",
f"-I{include_dir}/torch/csrc/api/include",
"-c",
"-DTORCH_STABLE_ONLY",
]
symbols_stable = _compile_and_extract_symbols(
cpp_content=test_stable_content,
compile_flags=compile_flags,
)
num_symbols_stable = len(symbols_stable)
print(f"Found {num_symbols_stable} symbols in torch/csrc/stable")
assert num_symbols_stable > 0, (
f"Expected stable headers to expose symbols with TORCH_STABLE_ONLY, "
f"but found {num_symbols_stable} symbols"
)
def check_headeronly_symbols(install_root: Path) -> None:
"""
Test that header-only utility headers still expose symbols with TORCH_STABLE_ONLY.
"""
include_dir = install_root / "include"
assert include_dir.exists(), f"Expected {include_dir} to be present"
# Find all headers in torch/headeronly
headeronly_dir = include_dir / "torch" / "headeronly"
assert headeronly_dir.exists(), f"Expected {headeronly_dir} to be present"
headeronly_headers = list(headeronly_dir.rglob("*.h"))
if not headeronly_headers:
raise RuntimeError("Could not find any headeronly headers")
# Filter out platform-specific headers that may not compile everywhere
platform_specific_keywords = [
"cpu/vec",
]
filtered_headers = []
for header in headeronly_headers:
rel_path = header.relative_to(include_dir).as_posix()
if not any(
keyword in rel_path.lower() for keyword in platform_specific_keywords
):
filtered_headers.append(header)
includes = []
for header in filtered_headers:
rel_path = header.relative_to(include_dir)
includes.append(f"#include <{rel_path.as_posix()}>")
includes_str = "\n".join(includes)
test_headeronly_content = f"""
{includes_str}
int main() {{ return 0; }}
"""
compile_flags = [
"g++",
"-std=c++17",
f"-I{include_dir}",
f"-I{include_dir}/torch/csrc/api/include",
"-c",
"-DTORCH_STABLE_ONLY",
]
symbols_headeronly = _compile_and_extract_symbols(
cpp_content=test_headeronly_content,
compile_flags=compile_flags,
)
num_symbols_headeronly = len(symbols_headeronly)
print(f"Found {num_symbols_headeronly} symbols in torch/headeronly")
assert num_symbols_headeronly > 0, (
f"Expected headeronly headers to expose symbols with TORCH_STABLE_ONLY, "
f"but found {num_symbols_headeronly} symbols"
)
def check_aoti_shim_symbols(install_root: Path) -> None:
"""
Test that AOTI shim headers still expose symbols with TORCH_STABLE_ONLY.
"""
include_dir = install_root / "include"
assert include_dir.exists(), f"Expected {include_dir} to be present"
# There are no constexpr symbols etc., so we need to actually use functions
# so that some symbols are found.
test_shim_content = """
#include <torch/csrc/inductor/aoti_torch/c/shim.h>
int main() {
int32_t (*fp1)() = &aoti_torch_device_type_cpu;
int32_t (*fp2)() = &aoti_torch_dtype_float32;
(void)fp1; (void)fp2;
return 0;
}
"""
compile_flags = [
"g++",
"-std=c++17",
f"-I{include_dir}",
f"-I{include_dir}/torch/csrc/api/include",
"-c",
"-DTORCH_STABLE_ONLY",
]
symbols_shim = _compile_and_extract_symbols(
cpp_content=test_shim_content,
compile_flags=compile_flags,
)
num_symbols_shim = len(symbols_shim)
assert num_symbols_shim > 0, (
f"Expected shim headers to expose symbols with TORCH_STABLE_ONLY, "
f"but found {num_symbols_shim} symbols"
)
def check_stable_c_shim_symbols(install_root: Path) -> None:
"""
Test that stable C shim headers still expose symbols with TORCH_STABLE_ONLY.
"""
include_dir = install_root / "include"
assert include_dir.exists(), f"Expected {include_dir} to be present"
# Check if the stable C shim exists
stable_shim = include_dir / "torch" / "csrc" / "stable" / "c" / "shim.h"
if not stable_shim.exists():
raise RuntimeError("Could not find stable c shim")
# There are no constexpr symbols etc., so we need to actually use functions
# so that some symbols are found.
test_stable_shim_content = """
#include <torch/csrc/stable/c/shim.h>
int main() {
// Reference stable C API functions to create undefined symbols
AOTITorchError (*fp1)(const char*, uint32_t*, int32_t*) = &torch_parse_device_string;
AOTITorchError (*fp2)(uint32_t*) = &torch_get_num_threads;
(void)fp1; (void)fp2;
return 0;
}
"""
compile_flags = [
"g++",
"-std=c++17",
f"-I{include_dir}",
f"-I{include_dir}/torch/csrc/api/include",
"-c",
"-DTORCH_STABLE_ONLY",
]
symbols_stable_shim = _compile_and_extract_symbols(
cpp_content=test_stable_shim_content,
compile_flags=compile_flags,
)
num_symbols_stable_shim = len(symbols_stable_shim)
assert num_symbols_stable_shim > 0, (
f"Expected stable C shim headers to expose symbols with TORCH_STABLE_ONLY, "
f"but found {num_symbols_stable_shim} symbols"
)
def check_lib_symbols_for_abi_correctness(lib: str) -> None:
print(f"lib: {lib}")
cxx11_symbols = grep_symbols(lib, LIBTORCH_CXX11_PATTERNS)
@ -129,6 +460,13 @@ def main() -> None:
check_lib_symbols_for_abi_correctness(libtorch_cpu_path)
check_lib_statically_linked_libstdc_cxx_abi_symbols(libtorch_cpu_path)
# Check symbols when TORCH_STABLE_ONLY is defined
check_stable_only_symbols(install_root)
check_stable_api_symbols(install_root)
check_headeronly_symbols(install_root)
check_aoti_shim_symbols(install_root)
check_stable_c_shim_symbols(install_root)
if __name__ == "__main__":
main()

View File

@ -353,6 +353,17 @@ def test_linalg(device="cpu") -> None:
torch.linalg.svd(A)
def test_sdpa(device="cpu", dtype=torch.float16) -> None:
"""Regression test for https://github.com/pytorch/pytorch/issues/167602
Without nvrtc_builtins on CuDNN-9.13 on CUDA-13 fails with ` No valid execution plans built.`
"""
print(f"Testing SDPA on {device} using type {dtype}")
k, q, v = torch.rand(3, 1, 16, 77, 64, dtype=dtype, device=device).unbind(0)
attn = torch.rand(1, 1, 77, 77, dtype=dtype, device=device)
rc = torch.nn.functional.scaled_dot_product_attention(q, k, v, attn)
assert rc.isnan().any().item() is False
def smoke_test_compile(device: str = "cpu") -> None:
supported_dtypes = [torch.float16, torch.float32, torch.float64]
@ -489,10 +500,12 @@ def main() -> None:
smoke_test_conv2d()
test_linalg()
test_numpy()
test_sdpa()
if is_cuda_system:
test_linalg("cuda")
test_cuda_gds_errors_captured()
test_sdpa("cuda")
if options.package == "all":
smoke_test_modules()

View File

@ -344,8 +344,18 @@ test_python_smoke() {
}
test_python_smoke_b200() {
# Targeted smoke tests for B200 - staged approach to avoid too many failures
time python test/run_test.py --include test_matmul_cuda test_scaled_matmul_cuda inductor/test_fp8 $PYTHON_TEST_EXTRA_OPTION --upload-artifacts-while-running
# Targeted smoke tests for B200 including FlashAttention CuTe coverage
install_flash_attn_cute
time python test/run_test.py \
--include \
test_matmul_cuda \
test_scaled_matmul_cuda \
inductor/test_fp8 \
nn/attention/test_fa4 \
nn/attention/test_open_registry \
inductor/test_flex_flash \
$PYTHON_TEST_EXTRA_OPTION \
--upload-artifacts-while-running
assert_git_not_dirty
}
@ -1670,6 +1680,22 @@ test_operator_microbenchmark() {
done
}
test_attention_microbenchmark() {
TEST_REPORTS_DIR=$(pwd)/test/test-reports
mkdir -p "$TEST_REPORTS_DIR"
TEST_DIR=$(pwd)
# Install attention-gym dependency
echo "Installing attention-gym..."
python -m pip install git+https://github.com/meta-pytorch/attention-gym.git@main
pip show triton
cd "${TEST_DIR}"/benchmarks/transformer
$TASKSET python score_mod.py --config configs/config_basic.yaml \
--output-json-for-dashboard "${TEST_REPORTS_DIR}/attention_microbenchmark.json"
}
if ! [[ "${BUILD_ENVIRONMENT}" == *libtorch* || "${BUILD_ENVIRONMENT}" == *-bazel-* ]]; then
(cd test && python -c "import torch; print(torch.__config__.show())")
(cd test && python -c "import torch; print(torch.__config__.parallel_info())")
@ -1727,6 +1753,8 @@ elif [[ "${TEST_CONFIG}" == *operator_benchmark* ]]; then
fi
elif [[ "${TEST_CONFIG}" == *operator_microbenchmark* ]]; then
test_operator_microbenchmark
elif [[ "${TEST_CONFIG}" == *attention_microbenchmark* ]]; then
test_attention_microbenchmark
elif [[ "${TEST_CONFIG}" == *inductor_distributed* ]]; then
test_inductor_distributed
elif [[ "${TEST_CONFIG}" == *inductor-halide* ]]; then

View File

@ -63,7 +63,7 @@ self-hosted-runner:
- linux.rocm.gpu.gfx942.1
- linux.rocm.gpu.gfx942.2
- linux.rocm.gpu.gfx942.4
- rocm-docker
- linux.rocm.gfx942.docker-cache
# Org wise AWS `mac2.metal` runners (2020 Mac mini hardware powered by Apple silicon M1 processors)
- macos-m1-stable
- macos-m1-14

View File

@ -1 +1 @@
ad5816f0eee1c873df1b7d371c69f1f811a89387
07b6cbde121417a70e4dc871adb6d27030e0ce3f

View File

@ -1 +1 @@
ccb801b88af136454798b945175c4c87e636ac33
acccf86477759b2d3500f1ae1be065f7b1e409ec

13
.github/labeler.yml vendored
View File

@ -165,3 +165,16 @@
- torch/_inductor/kernel/mm.py
- test/inductor/test_max_autotune.py
- third_party/fbgemm
"ciflow/mps":
- aten/src/ATen/mps/**
- aten/src/ATen/native/mps/**
- torch/_inductor/codegen/mps.py
- test/test_mps.py
- test/inductor/test_mps_basic.py
"ciflow/h100-symm-mem":
- torch/csrc/distributed/c10d/symm_mem/**
- torch/distributed/_symmetric_memory/**
- test/distributed/**/*mem*
- test/distributed/**/*mem*/**

View File

@ -34,6 +34,9 @@ python3 torch/utils/data/datapipes/gen_pyi.py
# Also check generated pyi files
find torch -name '*.pyi' -exec git add --force -- "{}" +
# Print current environment
python3 -m pip freeze
RC=0
# Run lintrunner on all files
if ! lintrunner --force-color --tee-json=lint.json ${ADDITIONAL_LINTRUNNER_ARGS} 2> /dev/null; then

View File

@ -0,0 +1,73 @@
name: attention_op_microbenchmark
on:
push:
tags:
- ciflow/op-benchmark/*
workflow_dispatch:
schedule:
# Run at 06:00 UTC everyday
- cron: 0 7 * * *
concurrency:
group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref_name }}-${{ github.ref_type == 'branch' && github.sha }}-${{ github.event_name == 'workflow_dispatch' }}-${{ github.event_name == 'schedule' }}
cancel-in-progress: true
permissions:
id-token: write
contents: read
jobs:
attn-microbenchmark-build:
if: github.repository_owner == 'pytorch'
uses: ./.github/workflows/_linux-build.yml
with:
runner: linux.12xlarge.memory
build-environment: linux-jammy-cuda12.8-py3.10-gcc9-sm80
docker-image-name: ci-image:pytorch-linux-jammy-cuda12.8-cudnn9-py3-gcc11
cuda-arch-list: '8.0 9.0'
test-matrix: |
{ include: [
{ config: "attention_microbenchmark_test", shard: 1, num_shards: 1, runner: "linux.aws.a100" },
{ config: "attention_microbenchmark_test", shard: 1, num_shards: 1, runner: "linux.aws.h100" },
]}
secrets: inherit
attn-microbenchmark-test:
name: attn-microbenchmark-test
uses: ./.github/workflows/_linux-test.yml
needs: attn-microbenchmark-build
with:
timeout-minutes: 500
build-environment: linux-jammy-cuda12.8-py3.10-gcc9-sm80
docker-image: ${{ needs.attn-microbenchmark-build.outputs.docker-image }}
test-matrix: ${{ needs.attn-microbenchmark-build.outputs.test-matrix }}
secrets: inherit
# B200 runner
opmicrobenchmark-build-b200:
if: github.repository_owner == 'pytorch'
name: opmicrobenchmark-build-b200
uses: ./.github/workflows/_linux-build.yml
with:
runner: linux.12xlarge.memory
build-environment: linux-jammy-cuda12.8-py3.10-gcc9-sm100
docker-image-name: ci-image:pytorch-linux-jammy-cuda12.8-cudnn9-py3-gcc11
cuda-arch-list: '10.0'
test-matrix: |
{ include: [
{ config: "operator_microbenchmark_test", shard: 1, num_shards: 1, runner: "linux.dgx.b200" },
]}
secrets: inherit
opmicrobenchmark-test-b200:
name: opmicrobenchmark-test-b200
uses: ./.github/workflows/_linux-test.yml
needs: opmicrobenchmark-build-b200
with:
timeout-minutes: 500
build-environment: linux-jammy-cuda12.8-py3.10-gcc9-sm100
docker-image: ${{ needs.opmicrobenchmark-build-b200.outputs.docker-image }}
test-matrix: ${{ needs.opmicrobenchmark-build-b200.outputs.test-matrix }}
aws-role-to-assume: arn:aws:iam::308535385114:role/gha_workflow_s3_and_ecr_read_only
secrets: inherit

View File

@ -119,6 +119,22 @@ jobs:
with:
docker-image: ${{ steps.build-docker-image.outputs.docker-image }}
- name: Generate output
if: contains(matrix.docker-image-name, 'rocm')
id: generate_output
run: |
docker_image_name="${{ matrix.docker-image-name }}"
docker_image_tag="${{ steps.build-docker-image.outputs.docker-image }}"
echo "${docker_image_name}=${docker_image_tag}" >> docker-builds-output-${docker_image_name}.txt
- name: Upload artifacts
uses: actions/upload-artifact@v4.4.0
if: contains(matrix.docker-image-name, 'rocm')
with:
name: docker-builds-artifacts-${{ matrix.docker-image-name }}
retention-days: 14
path: ./docker-builds-output-${{ matrix.docker-image-name }}.txt
- uses: nick-fields/retry@7152eba30c6575329ac0576536151aca5a72780e # v3.0.0
name: Push to https://ghcr.io/
id: push-to-ghcr-io

View File

@ -1,55 +0,0 @@
name: docker-cache-mi300
on:
# run every 6 hours
schedule:
- cron: 0 0,6,12,18 * * *
workflow_dispatch:
concurrency:
group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.sha }}-${{ github.event_name }}
cancel-in-progress: true
permissions:
id-token: write
contents: read
jobs:
docker-cache:
if: github.repository_owner == 'pytorch'
runs-on: rocm-docker
steps:
- name: Checkout PyTorch
uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
with:
no-sudo: true
- name: configure aws credentials
id: aws_creds
uses: aws-actions/configure-aws-credentials@ececac1a45f3b08a01d2dd070d28d111c5fe6722 # v4.1.0
with:
role-to-assume: arn:aws:iam::308535385114:role/gha_workflow_s3_and_ecr_read_only
aws-region: us-east-1
role-duration-seconds: 18000
- name: Login to Amazon ECR
id: login-ecr
continue-on-error: false
uses: aws-actions/amazon-ecr-login@062b18b96a7aff071d4dc91bc00c4c1a7945b076 # v2.0.1
- name: Calculate docker image
id: calculate-docker-image
uses: pytorch/test-infra/.github/actions/calculate-docker-image@main
with:
docker-image-name: ci-image:pytorch-linux-jammy-rocm-n-py3
push: false
- name: Pull docker image
uses: pytorch/test-infra/.github/actions/pull-docker-image@main
with:
docker-image: ${{ steps.calculate-docker-image.outputs.docker-image }}
- name: Tar and upload to S3 bucket
run: |
sudo docker save -o ~/docker-data/pytorch/pytorch_docker_image.tar ${{ steps.calculate-docker-image.outputs.docker-image }}
sudo rclone copy -P --s3-upload-concurrency 64 --s3-chunk-size 200M --s3-upload-cutoff 300M ~/docker-data/pytorch/pytorch_docker_image.tar oci:pytorchbucket0002/pytorch_docker_image --progress

105
.github/workflows/docker-cache-rocm.yml vendored Normal file
View File

@ -0,0 +1,105 @@
name: docker-cache-rocm
on:
workflow_run:
workflows: [docker-builds]
branches: [main, release]
types:
- completed
workflow_dispatch:
concurrency:
group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.sha }}-${{ github.event_name }}
cancel-in-progress: true
permissions:
id-token: write
contents: read
actions: read
jobs:
download-docker-builds-artifacts:
if: github.repository_owner == 'pytorch'
name: download-docker-builds-artifacts
runs-on: ubuntu-latest
outputs:
pytorch-linux-jammy-rocm-n-py3: ${{ steps.process-artifacts.outputs.pytorch-linux-jammy-rocm-n-py3 }}
pytorch-linux-noble-rocm-n-py3: ${{ steps.process-artifacts.outputs.pytorch-linux-noble-rocm-n-py3 }}
pytorch-linux-jammy-rocm-n-py3-benchmarks: ${{ steps.process-artifacts.outputs.pytorch-linux-jammy-rocm-n-py3-benchmarks }}
steps:
- name: Download artifacts
uses: actions/download-artifact@v4.1.7
with:
run-id: ${{ github.event.workflow_run.id }}
path: ./docker-builds-artifacts
merge-multiple: true
github-token: ${{ secrets.GITHUB_TOKEN }}
- name: Process artifacts
id: process-artifacts
run: |
ls -R ./docker-builds-artifacts
cat ./docker-builds-artifacts/*txt >> "${GITHUB_OUTPUT}"
cat "${GITHUB_OUTPUT}"
docker-cache:
if: github.repository_owner == 'pytorch'
needs: download-docker-builds-artifacts
strategy:
fail-fast: false
matrix:
runner: [linux.rocm.gfx942.docker-cache]
docker-image: [
"${{ needs.download-docker-builds-artifacts.outputs.pytorch-linux-jammy-rocm-n-py3 }}",
"${{ needs.download-docker-builds-artifacts.outputs.pytorch-linux-noble-rocm-n-py3 }}",
"${{ needs.download-docker-builds-artifacts.outputs.pytorch-linux-jammy-rocm-n-py3-benchmarks }}"
]
runs-on: "${{ matrix.runner }}"
steps:
- name: debug
run: |
JSON_STRINGIFIED="${{ toJSON(needs.download-docker-builds-artifacts.outputs) }}"
echo "Outputs of download-docker-builds-artifacts job: ${JSON_STRINGIFIED}"
- name: configure aws credentials
id: aws_creds
uses: aws-actions/configure-aws-credentials@ececac1a45f3b08a01d2dd070d28d111c5fe6722 # v4.1.0
with:
role-to-assume: arn:aws:iam::308535385114:role/gha_workflow_s3_and_ecr_read_only
aws-region: us-east-1
role-duration-seconds: 18000
- name: Login to Amazon ECR
id: login-ecr
continue-on-error: false
uses: aws-actions/amazon-ecr-login@062b18b96a7aff071d4dc91bc00c4c1a7945b076 # v2.0.1
- name: Generate ghrc.io tag
id: ghcr-io-tag
run: |
ecr_image="${{ matrix.docker-image }}"
ghcr_image="ghcr.io/pytorch/ci-image:${ecr_image##*:}"
echo "ghcr_image=${ghcr_image}" >> "$GITHUB_OUTPUT"
- name: Pull docker image
uses: pytorch/test-infra/.github/actions/pull-docker-image@main
with:
docker-image: ${{ steps.ghcr-io-tag.outputs.ghcr_image }}
- name: Save as tarball
run: |
docker_image_tag=${{ matrix.docker-image }}
docker_image_tag="${docker_image_tag#*:}" # Remove everything before and including first ":"
docker_image_tag="${docker_image_tag%-*}" # Remove everything after and including last "-"
ref_name=${{ github.event.workflow_run.head_branch }}
if [[ $ref_name =~ "release/" ]]; then
ref_suffix="release"
elif [[ $ref_name == "main" ]]; then
ref_suffix="main"
else
echo "Unexpected branch in ref_name: ${ref_name}" && exit 1
fi
docker tag ${{ steps.ghcr-io-tag.outputs.ghcr_image }} ${{ matrix.docker-image }}
# mv is atomic operation, so we use intermediate tar.tmp file to prevent read-write contention
docker save -o ~/pytorch-data/docker/${docker_image_tag}.tar.tmp ${{ matrix.docker-image }}
mv ~/pytorch-data/docker/${docker_image_tag}.tar.tmp ~/pytorch-data/docker/${docker_image_tag}_${ref_suffix}.tar

View File

@ -37,7 +37,6 @@ jobs:
needs: get-label-type
with:
runner_prefix: "${{ needs.get-label-type.outputs.label-type }}"
runner: "linux.c7i.12xlarge"
build-environment: linux-jammy-cuda12.8-py3.10-gcc11-sm90-dist
docker-image-name: ci-image:pytorch-linux-jammy-cuda12.8-cudnn9-py3-gcc11
cuda-arch-list: '9.0'

View File

@ -1,4 +1,4 @@
name: inductor-rocm
name: inductor-rocm-mi200
on:
schedule:

View File

@ -5,9 +5,11 @@ on:
- cron: 0 0 * * *
push:
tags:
# NOTE: Doc build pipelines should only get triggered on release candidate builds
# Release candidate tags look like: v1.11.0-rc1
- v[0-9]+.[0-9]+.[0-9]+-rc[0-9]+
# NOTE: Doc build pipelines should only get triggered on:
# Major or minor release candidate builds
- v[0-9]+.[0-9]+.0+-rc[0-9]+
# Final RC for major, minor and patch releases
- v[0-9]+.[0-9]+.[0-9]+
- ciflow/nightly/*
workflow_dispatch:

View File

@ -1,4 +1,4 @@
name: rocm
name: rocm-mi200
on:
push:

View File

@ -5,7 +5,9 @@
# Flow:
# 1. Builds PyTorch with CUDA 12.8+ and sm100 architecture for B200
# 2. Runs smoke tests on linux.dgx.b200 runner
# 3. Tests executed are defined in .ci/pytorch/test.sh -> test_python_smoke() function
# 3. Tests executed are defined in .ci/pytorch/test.sh -> test_python_smoke_b200() function
# - Includes matmul, scaled_matmul, FP8, and FlashAttention CuTe tests
# - FlashAttention CuTe DSL is installed as part of test execution
#
# Triggered by:
# - Pull requests modifying this workflow file

View File

@ -41,7 +41,6 @@ jobs:
needs: get-label-type
with:
runner_prefix: "${{ needs.get-label-type.outputs.label-type }}"
runner: linux.12xlarge.memory
build-environment: linux-jammy-cuda12.8-py3.10-gcc11-sm90
docker-image-name: ci-image:pytorch-linux-jammy-cuda12.8-cudnn9-py3-gcc11
cuda-arch-list: '9.0'

.github/workflows/trunk-rocm-mi300.yml (new file, 83 lines)
View File

@ -0,0 +1,83 @@
name: trunk-rocm-mi300
on:
push:
branches:
- main
- release/*
workflow_dispatch:
schedule:
- cron: 29 8 * * * # about 1:29am PDT
concurrency:
group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref_name }}-${{ github.ref_type == 'branch' && github.sha }}-${{ github.event_name == 'workflow_dispatch' }}-${{ github.event_name == 'schedule' }}
cancel-in-progress: true
permissions:
id-token: write
contents: read
jobs:
llm-td:
if: github.repository_owner == 'pytorch'
name: before-test
uses: ./.github/workflows/llm_td_retrieval.yml
permissions:
id-token: write
contents: read
target-determination:
name: before-test
uses: ./.github/workflows/target_determination.yml
needs: llm-td
permissions:
id-token: write
contents: read
get-label-type:
name: get-label-type
uses: pytorch/pytorch/.github/workflows/_runner-determinator.yml@main
if: ${{ (github.event_name != 'schedule' || github.repository == 'pytorch/pytorch') && github.repository_owner == 'pytorch' }}
with:
triggering_actor: ${{ github.triggering_actor }}
issue_owner: ${{ github.event.pull_request.user.login || github.event.issue.user.login }}
curr_branch: ${{ github.head_ref || github.ref_name }}
curr_ref_type: ${{ github.ref_type }}
linux-jammy-rocm-py3_10-build:
name: linux-jammy-rocm-py3.10
uses: ./.github/workflows/_linux-build.yml
needs: get-label-type
with:
runner_prefix: "${{ needs.get-label-type.outputs.label-type }}"
build-environment: linux-jammy-rocm-py3.10
docker-image-name: ci-image:pytorch-linux-jammy-rocm-n-py3
sync-tag: rocm-build
test-matrix: |
{ include: [
{ config: "default", shard: 1, num_shards: 6, runner: "linux.rocm.gpu.gfx942.1.b" },
{ config: "default", shard: 2, num_shards: 6, runner: "linux.rocm.gpu.gfx942.1.b" },
{ config: "default", shard: 3, num_shards: 6, runner: "linux.rocm.gpu.gfx942.1.b" },
{ config: "default", shard: 4, num_shards: 6, runner: "linux.rocm.gpu.gfx942.1.b" },
{ config: "default", shard: 5, num_shards: 6, runner: "linux.rocm.gpu.gfx942.1.b" },
{ config: "default", shard: 6, num_shards: 6, runner: "linux.rocm.gpu.gfx942.1.b" },
{ config: "distributed", shard: 1, num_shards: 3, runner: "linux.rocm.gpu.gfx942.4.b" },
{ config: "distributed", shard: 2, num_shards: 3, runner: "linux.rocm.gpu.gfx942.4.b" },
{ config: "distributed", shard: 3, num_shards: 3, runner: "linux.rocm.gpu.gfx942.4.b" },
]}
secrets: inherit
linux-jammy-rocm-py3_10-test:
permissions:
id-token: write
contents: read
name: linux-jammy-rocm-py3.10
uses: ./.github/workflows/_rocm-test.yml
needs:
- linux-jammy-rocm-py3_10-build
- target-determination
with:
build-environment: linux-jammy-rocm-py3.10
docker-image: ${{ needs.linux-jammy-rocm-py3_10-build.outputs.docker-image }}
test-matrix: ${{ needs.linux-jammy-rocm-py3_10-build.outputs.test-matrix }}
secrets: inherit

View File

@ -5,6 +5,7 @@ on:
workflows:
- pull
- trunk
- trunk-rocm-mi300
- periodic
- periodic-rocm-mi200
- periodic-rocm-mi300

View File

@ -186,6 +186,8 @@ include_patterns = [
'aten/src/ATen/native/nested/cuda/*.h',
'aten/src/ATen/native/nested/*.cpp',
'aten/src/ATen/native/nested/*.h',
'aten/src/ATen/xpu/**/*.h',
'aten/src/ATen/xpu/**/*.cpp',
'c10/**/*.cpp',
'c10/**/*.h',
'torch/*.h',

View File

@ -736,6 +736,44 @@ if(NOT DEFINED USE_BLAS)
set(USE_BLAS ON)
endif()
# Prioritized Text Linker Optimization
if(USE_PRIORITIZED_TEXT_FOR_LD)
set(LINKER_SCRIPT_FILE_IN "${CMAKE_SOURCE_DIR}/cmake/prioritized_text.txt")
set(LINKER_SCRIPT_FILE_OUT "${CMAKE_SOURCE_DIR}/cmake/linker_script.ld")
execute_process(
COMMAND ${Python_EXECUTABLE}
${CMAKE_SOURCE_DIR}/tools/setup_helpers/generate_linker_script.py
--filein "${LINKER_SCRIPT_FILE_IN}"
--fout "${LINKER_SCRIPT_FILE_OUT}"
RESULT_VARIABLE _gen_result
OUTPUT_VARIABLE _gen_output
ERROR_VARIABLE _gen_error
)
if(NOT _gen_result EQUAL 0)
message(FATAL_ERROR
"Failed to generate linker script:\n${_gen_output}\n${_gen_error}")
endif()
append_cxx_flag_if_supported("-ffunction-sections" CMAKE_CXX_FLAGS)
append_cxx_flag_if_supported("-fdata-sections" CMAKE_CXX_FLAGS)
append_c_flag_if_supported("-ffunction-sections" CMAKE_C_FLAGS)
append_c_flag_if_supported("-fdata-sections" CMAKE_C_FLAGS)
set(CMAKE_SHARED_LINKER_FLAGS "${CMAKE_SHARED_LINKER_FLAGS} -T${LINKER_SCRIPT_FILE_OUT}")
set(CMAKE_MODULE_LINKER_FLAGS "${CMAKE_MODULE_LINKER_FLAGS} -T${LINKER_SCRIPT_FILE_OUT}")
else()
if(LINUX AND CPU_AARCH64)
message(WARNING [[
It is strongly recommended to enable linker script optimization for all AArch64 Linux builds.
To do so please export USE_PRIORITIZED_TEXT_FOR_LD=1
]])
endif()
endif()
# Build libtorch mobile library, which contains ATen/TH ops and native support
# for TorchScript model, but doesn't contain not-yet-unified caffe2 ops;
if(INTERN_BUILD_MOBILE)
@ -1402,9 +1440,6 @@ if(BUILD_JNI)
add_subdirectory(android/pytorch_android)
endif()
include(cmake/Summary.cmake)
caffe2_print_configuration_summary()
# Parse custom debug info
if(DEFINED USE_CUSTOM_DEBINFO)
string(REPLACE ";" " " SOURCE_FILES "${USE_CUSTOM_DEBINFO}")
@ -1444,56 +1479,5 @@ if(BUILD_BUNDLE_PTXAS AND USE_CUDA)
DESTINATION "${CMAKE_INSTALL_BINDIR}")
endif()
if(USE_PRIORITIZED_TEXT_FOR_LD)
add_compile_options(
$<$<COMPILE_LANGUAGE:C,CXX>:-ffunction-sections>
$<$<COMPILE_LANGUAGE:C,CXX>:-fdata-sections>
)
set(LINKER_SCRIPT_FILE_OUT "${CMAKE_SOURCE_DIR}/cmake/linker_script.ld")
set(LINKER_SCRIPT_FILE_IN "${CMAKE_SOURCE_DIR}/cmake/prioritized_text.txt")
add_custom_command(
OUTPUT "${LINKER_SCRIPT_FILE_OUT}"
COMMAND ${Python_EXECUTABLE} ${CMAKE_SOURCE_DIR}/tools/setup_helpers/generate_linker_script.py --filein "${LINKER_SCRIPT_FILE_IN}" --fout "${LINKER_SCRIPT_FILE_OUT}"
DEPENDS ${CMAKE_SOURCE_DIR}/tools/setup_helpers/generate_linker_script.py "${LINKER_SCRIPT_FILE_IN}"
COMMENT "Generating prioritized text linker files"
VERBATIM
)
add_custom_target(generate_linker_script DEPENDS "${LINKER_SCRIPT_FILE_OUT}")
if(BUILD_PYTHON)
set(LINKER_OPT_TARGETS torch_python)
endif()
if(NOT BUILD_LIBTORCHLESS)
list(APPEND LINKER_OPT_TARGETS torch_cpu c10)
if(USE_CUDA)
list(APPEND LINKER_OPT_TARGETS torch_cuda c10_cuda)
endif()
if(USE_XPU)
list(APPEND LINKER_OPT_TARGETS torch_xpu c10_xpu)
endif()
if(USE_ROCM)
list(APPEND LINKER_OPT_TARGETS torch_hip c10_hip)
endif()
endif()
foreach(tgt IN LISTS LINKER_OPT_TARGETS)
if(TARGET ${tgt})
add_dependencies("${tgt}" generate_linker_script)
target_link_options_if_supported(${tgt} "-T,${LINKER_SCRIPT_FILE_OUT}")
set_property(TARGET ${tgt} APPEND PROPERTY LINK_DEPENDS "${LINKER_SCRIPT_FILE_OUT}")
else()
message(WARNING "Requested target '${tgt}' for linker script optimization was not found.")
endif()
endforeach()
else()
if(LINUX AND CPU_AARCH64)
message(WARNING [[
It is strongly recommended to enable linker script optimization for all AArch64 Linux builds.
To do so please export USE_PRIORITIZED_TEXT_FOR_LD=1
]])
endif()
endif()
include(cmake/Summary.cmake)
caffe2_print_configuration_summary()

View File

@ -37,7 +37,7 @@ Copyright (c) 2024 Tri Dao.
All rights reserved.
All contributions by Arm:
Copyright (c) 2021, 2023-2024 Arm Limited and/or its affiliates
Copyright (c) 2021, 2023-2025 Arm Limited and/or its affiliates
All contributions from Caffe:
Copyright(c) 2013, 2014, 2015, the respective contributors

View File

@ -18,6 +18,8 @@ Please report security issues using https://github.com/pytorch/pytorch/security/
All reports submitted through the security advisories mechanism will **either be made public or dismissed by the team within 90 days of the submission**. If an advisory has been closed on the grounds that it is not a security issue, please do not hesitate to create a [new issue](https://github.com/pytorch/pytorch/issues/new?template=bug-report.yml), as it is still likely a valid issue within the framework.
**Note on crashes and out of bounds access**: PyTorch is a computational framework that performs operations on behalf of the caller. Like many low-level libraries, PyTorch generally does not validate all inputs to every function—the responsibility for providing valid arguments lies with the calling code. While crashes and out of bounds memory access should be reported as bugs, they are generally not considered security vulnerabilities in PyTorch's threat model.
Please refer to the following page for our responsible disclosure policy, reward guidelines, and those things that should not be reported:
https://www.facebook.com/whitehat

View File

@ -18,6 +18,8 @@
#include <unordered_set>
#include <utility>
C10_DIAGNOSTIC_PUSH_AND_IGNORED_IF_DEFINED("-Wswitch-default")
namespace torch {
class TORCH_API CustomClassHolder : public c10::intrusive_ptr_target {};
namespace jit {
@ -1630,4 +1632,6 @@ struct TORCH_API WeakOrStrongTypePtr {
} // namespace c10
C10_DIAGNOSTIC_POP()
#include <ATen/core/ivalue_inl.h> // IWYU pragma: keep

View File

@ -29,6 +29,8 @@
#include <c10/util/intrusive_ptr.h>
#include <c10/util/irange.h>
C10_DIAGNOSTIC_PUSH_AND_IGNORED_IF_DEFINED("-Wswitch-default")
namespace torch {
namespace jit {
struct Function;
@ -2567,3 +2569,5 @@ TypePtr IValue::type() const {
}
} // namespace c10
C10_DIAGNOSTIC_POP()

View File

@ -11,6 +11,8 @@
#include <sleef.h>
#endif
C10_DIAGNOSTIC_PUSH_AND_IGNORED_IF_DEFINED("-Wswitch-default")
// Sleef offers vectorized versions of some transcendentals
// such as sin, cos, tan etc..
// However for now opting for STL, since we are not building
@ -650,3 +652,5 @@ inline Vectorized<float> Vectorized<float>::erf() const {
} // namespace CPU_CAPABILITY
} // namespace at::vec
C10_DIAGNOSTIC_POP()

View File

@ -1,6 +1,7 @@
#include <ATen/cuda/CUDAGeneratorImpl.h>
#include <ATen/cuda/CUDAGraph.h>
#include <ATen/cuda/Exceptions.h>
#include <ATen/cuda/MemPool.h>
#include <ATen/Functions.h>
#include <c10/cuda/CUDAFunctions.h>
@ -13,7 +14,7 @@ static bool _cuda_graphs_debug = false;
MempoolId_t graph_pool_handle() {
// Sets just the second value, to distinguish it from MempoolId_ts created from
// cudaStreamGetCaptureInfo id_s in capture_begin.
return c10::cuda::MemPool::graph_pool_handle();
return at::cuda::MemPool::graph_pool_handle();
}
/**
@ -90,7 +91,7 @@ void CUDAGraph::capture_begin(MempoolId_t pool/*=0*/, cudaStreamCaptureMode capt
} else {
// User did not ask us to share a mempool. Create graph pool handle using is_user_created=false.
// Sets just the first value, to distinguish it from MempoolId_ts created by graph_pool_handle().
mempool_id_ = c10::cuda::MemPool::graph_pool_handle(false);
mempool_id_ = at::cuda::MemPool::graph_pool_handle(false);
TORCH_INTERNAL_ASSERT(mempool_id_.first > 0);
}

View File

@ -0,0 +1,69 @@
#include <ATen/core/CachingHostAllocator.h>
#include <ATen/cuda/MemPool.h>
namespace at::cuda {
// uid_ is incremented when a user creates a MemPool,
// for example: using graph_pool_handle() or c10::cuda::MemPool().
//
// uuid_ is incremented when CUDAGraph creates a MemPool
// as a result of a user not providing a pool.
//
// MempoolId_t of {0, 0} is used to denote when no MemPool has been
// passed to a function, either by user or CUDAGraphs. For example,
// default value of MempoolId_t for capture_begin function is {0, 0}.
// That's why uid_ and uuid_ start at 1.
std::atomic<CaptureId_t> MemPool::uid_{1};
std::atomic<CaptureId_t> MemPool::uuid_{1};
MemPool::MemPool(
CUDACachingAllocator::CUDAAllocator* allocator,
bool is_user_created,
bool use_on_oom)
: allocator_(allocator), is_user_created_(is_user_created) {
if (is_user_created_) {
id_ = {0, uid_++};
} else {
id_ = {uuid_++, 0};
}
device_ = c10::cuda::current_device();
CUDACachingAllocator::createOrIncrefPool(device_, id_, allocator);
if (use_on_oom) {
CUDACachingAllocator::setUseOnOOM(device_, id_);
}
}
MemPool::~MemPool() {
// TORCH_INTERNAL_ASSERT(use_count() == 1);
// We used to assert that TORCH_INTERNAL_ASSERT(use_count() == 1);
// However, this assertion is not true if a memory pool is shared
// with a cuda graph. That CUDAGraph will increase the use count
// until it is reset.
CUDACachingAllocator::releasePool(device_, id_);
c10::cuda::CUDACachingAllocator::emptyCache(id_);
}
MempoolId_t MemPool::id() {
return id_;
}
CUDACachingAllocator::CUDAAllocator* MemPool::allocator() {
return allocator_;
}
int MemPool::use_count() {
return CUDACachingAllocator::getPoolUseCount(device_, id_);
}
c10::DeviceIndex MemPool::device() {
return device_;
}
MempoolId_t MemPool::graph_pool_handle(bool is_user_created) {
if (is_user_created) {
return {0, uid_++};
}
return {uuid_++, 0};
}
} // namespace at::cuda
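
A minimal sketch (not from the diff) of the MempoolId_t convention described in the comment above: user-created pools are tagged in the second slot, graph-created pools in the first, and {0, 0} means "no pool". The helper name mempool_id_convention is hypothetical; only graph_pool_handle() from the declarations above is assumed.
```
#include <ATen/cuda/MemPool.h>
#include <cassert>

// Hypothetical illustration of the id encoding; not part of the change above.
void mempool_id_convention() {
  // User-created pools: {0, uid}.
  at::cuda::MempoolId_t user_id =
      at::cuda::MemPool::graph_pool_handle(/*is_user_created=*/true);
  assert(user_id.first == 0 && user_id.second > 0);

  // Pools created implicitly by CUDAGraph: {uuid, 0}.
  at::cuda::MempoolId_t graph_id =
      at::cuda::MemPool::graph_pool_handle(/*is_user_created=*/false);
  assert(graph_id.first > 0 && graph_id.second == 0);

  // {0, 0} is reserved to mean "no pool was passed".
}
```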

View File

@ -0,0 +1,44 @@
#pragma once
#include <c10/core/Allocator.h>
#include <c10/cuda/CUDACachingAllocator.h>
namespace at::cuda {
// Keep BC only
using c10::CaptureId_t;
using c10::MempoolId_t;
// MemPool represents a pool of memory in a caching allocator. Currently,
// it's just the ID of the pool object maintained in the CUDACachingAllocator.
//
// An allocator pointer can be passed to the MemPool to define how the
// allocations should be done in the pool. For example: using a different
// system allocator such as ncclMemAlloc.
struct TORCH_CUDA_CPP_API MemPool {
MemPool(
c10::cuda::CUDACachingAllocator::CUDAAllocator* allocator = nullptr,
bool is_user_created = true,
bool use_on_oom = false);
MemPool(const MemPool&) = delete;
MemPool(MemPool&&) = default;
MemPool& operator=(const MemPool&) = delete;
MemPool& operator=(MemPool&&) = default;
~MemPool();
MempoolId_t id();
c10::cuda::CUDACachingAllocator::CUDAAllocator* allocator();
int use_count();
c10::DeviceIndex device();
static MempoolId_t graph_pool_handle(bool is_user_created = true);
private:
static std::atomic<CaptureId_t> uid_;
static std::atomic<CaptureId_t> uuid_;
c10::cuda::CUDACachingAllocator::CUDAAllocator* allocator_;
bool is_user_created_;
MempoolId_t id_;
c10::DeviceIndex device_;
};
} // namespace at::cuda
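
A hedged usage sketch based only on the declarations above: constructing an at::cuda::MemPool registers a pool for the current device with the caching allocator, and the destructor releases it. It requires a CUDA device at runtime; the function name is illustrative.
```
#include <ATen/cuda/MemPool.h>
#include <iostream>

void mempool_usage_sketch() {
  // Defaults: no custom allocator, user-created, not used on OOM.
  at::cuda::MemPool pool;

  at::cuda::MempoolId_t id = pool.id();  // {0, uid} for user-created pools
  std::cout << "pool id: {" << id.first << ", " << id.second << "}"
            << ", device: " << static_cast<int>(pool.device())
            << ", use_count: " << pool.use_count() << std::endl;
}  // ~MemPool releases the pool and empties its cached blocks
```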

View File

@ -55,14 +55,6 @@ struct numeric_limits<int8_t> {
static inline __host__ __device__ int8_t upper_bound() { return INT8_MAX; }
};
template <>
struct numeric_limits<uint16_t> {
static inline __host__ __device__ uint16_t lowest() { return 0; }
static inline __host__ __device__ uint16_t max() { return UINT16_MAX; }
static inline __host__ __device__ uint16_t lower_bound() { return 0; }
static inline __host__ __device__ uint16_t upper_bound() { return UINT16_MAX; }
};
template <>
struct numeric_limits<int16_t> {
static inline __host__ __device__ int16_t lowest() { return INT16_MIN; }
@ -71,14 +63,6 @@ struct numeric_limits<int16_t> {
static inline __host__ __device__ int16_t upper_bound() { return INT16_MAX; }
};
template <>
struct numeric_limits<uint32_t> {
static inline __host__ __device__ uint32_t lowest() { return 0; }
static inline __host__ __device__ uint32_t max() { return UINT32_MAX; }
static inline __host__ __device__ uint32_t lower_bound() { return 0; }
static inline __host__ __device__ uint32_t upper_bound() { return UINT32_MAX; }
};
template <>
struct numeric_limits<int32_t> {
static inline __host__ __device__ int32_t lowest() { return INT32_MIN; }
@ -87,21 +71,6 @@ struct numeric_limits<int32_t> {
static inline __host__ __device__ int32_t upper_bound() { return INT32_MAX; }
};
template <>
struct numeric_limits<uint64_t> {
#ifdef _MSC_VER
static inline __host__ __device__ uint64_t lowest() { return 0; }
static inline __host__ __device__ uint64_t max() { return _UI64_MAX; }
static inline __host__ __device__ uint64_t lower_bound() { return 0; }
static inline __host__ __device__ uint64_t upper_bound() { return _UI64_MAX; }
#else
static inline __host__ __device__ uint64_t lowest() { return 0; }
static inline __host__ __device__ uint64_t max() { return UINT64_MAX; }
static inline __host__ __device__ uint64_t lower_bound() { return 0; }
static inline __host__ __device__ uint64_t upper_bound() { return UINT64_MAX; }
#endif
};
template <>
struct numeric_limits<int64_t> {
#ifdef _MSC_VER

View File

@ -440,7 +440,7 @@ bool MPSHeapAllocatorImpl::release_cached_buffers() {
// we need to release the lock temporarily as synchronizing may cause deadlock with completion handlers.
m_mutex.unlock();
auto stream = getDefaultMPSStream();
dispatch_sync(stream->queue(), ^() {
dispatch_sync_with_rethrow(stream->queue(), ^() {
stream->synchronize(SyncType::COMMIT_AND_WAIT);
});
m_mutex.lock();

View File

@ -110,6 +110,9 @@ class TORCH_API MPSStream {
return _stream;
}
MTLBuffer_t getErrorBuffer();
void checkLastError();
private:
Stream _stream;
MTLCommandQueue_t _commandQueue = nil;
@ -121,6 +124,8 @@ class TORCH_API MPSStream {
dispatch_queue_t _serialQueue = nullptr;
// CommitAndContinue is enabled by default
bool _enableCommitAndContinue = true;
// Buffer that contains last raised error
MTLBuffer_t _errorBuffer = nil;
// use synchronize() to access any of these commit functions outside MPSStream
void commit();
@ -155,4 +160,7 @@ class TORCH_API MPSStreamImpl {
MPSStreamImpl();
};
#ifdef __OBJC__
void dispatch_sync_with_rethrow(dispatch_queue_t queue, void (^block)());
#endif
} // namespace at::mps

View File

@ -3,13 +3,13 @@
#include <ATen/mps/MPSAllocatorInterface.h>
#include <ATen/mps/MPSProfiler.h>
#include <ATen/mps/MPSStream.h>
#include <c10/metal/error.h>
@interface MPSGraphExecutionDescriptor ()
@property(readwrite, atomic) BOOL enableCommitAndContinue;
@end
namespace at::mps {
//-----------------------------------------------------------------
// MPSStream
//-----------------------------------------------------------------
@ -30,6 +30,10 @@ MPSStream::MPSStream(Stream stream) : _stream(stream) {
// Choose level which optimizes for GPU
_compilationDescriptor.optimizationLevel = MPSGraphOptimizationLevel0;
_executionDescriptor.compilationDescriptor = _compilationDescriptor;
_errorBuffer = [MPSDevice::getInstance()->device() newBufferWithLength:sizeof(c10::metal::ErrorMessages)
options:MTLResourceStorageModeShared];
std::memset([_errorBuffer contents], 0, 1024);
}
MPSStream::~MPSStream() {
@ -38,6 +42,8 @@ MPSStream::~MPSStream() {
[_executionDescriptor release];
[_compilationDescriptor release];
_executionDescriptor = nil;
[_errorBuffer release];
_errorBuffer = nil;
_compilationDescriptor = nil;
assert(_commandBuffer == nil);
@ -104,6 +110,7 @@ void MPSStream::commitAndWait() {
[_prevCommandBuffer waitUntilCompleted];
[_prevCommandBuffer release];
_prevCommandBuffer = nil;
checkLastError();
}
if (_commandBuffer) {
@ -111,6 +118,7 @@ void MPSStream::commitAndWait() {
[_commandBuffer waitUntilCompleted];
[_commandBuffer release];
_commandBuffer = nil;
checkLastError();
}
}
@ -153,7 +161,7 @@ void MPSStream::fill(id<MTLBuffer> buffer, uint8_t value, size_t length, size_t
if (length == 0) {
return;
}
dispatch_sync(_serialQueue, ^() {
dispatch_sync_with_rethrow(_serialQueue, ^() {
@autoreleasepool {
endKernelCoalescing();
id<MTLBlitCommandEncoder> blitEncoder = [commandBuffer() blitCommandEncoder];
@ -183,7 +191,7 @@ void MPSStream::copy(id<MTLBuffer> srcBuffer,
size_t dstOffset,
uint64_t profileId,
SyncType syncType) {
dispatch_sync(_serialQueue, ^() {
dispatch_sync_with_rethrow(_serialQueue, ^() {
@autoreleasepool {
endKernelCoalescing();
id<MTLBlitCommandEncoder> blitEncoder = [commandBuffer() blitCommandEncoder];
@ -236,7 +244,7 @@ void MPSStream::executeMPSGraph(MPSGraph* mpsGraph, NSDictionary* feeds, NSDicti
auto& profiler = getMPSProfiler();
const bool isGraphProfilingEnabled = profiler.isOperationProfilingEnabled();
dispatch_sync(_serialQueue, ^() {
dispatch_sync_with_rethrow(_serialQueue, ^() {
endKernelCoalescing();
if (isGraphProfilingEnabled) {
// this function call is only relevant for interval-based Signposts
@ -266,6 +274,24 @@ void MPSStream::executeMPSGraph(MPSGraph* mpsGraph, NSDictionary* feeds, NSDicti
});
}
id<MTLBuffer> MPSStream::getErrorBuffer() {
return _errorBuffer;
}
void MPSStream::checkLastError() {
auto msgs = reinterpret_cast<c10::metal::ErrorMessages*>([_errorBuffer contents]);
if (!msgs) {
return;
}
const auto& msg = msgs->msg[0];
unsigned int count = 0;
std::swap(count, msgs->count);
if (!count) {
return;
}
throw c10::AcceleratorError({msg.func, msg.file, msg.line}, 1, msg.message);
}
//-----------------------------------------------------------------
// MPSStreamImpl
//-----------------------------------------------------------------
@ -289,4 +315,19 @@ MPSStream* getDefaultMPSStream() {
return MPSStreamImpl::getInstance();
}
// Helper methods
void dispatch_sync_with_rethrow(dispatch_queue_t queue, void (^block)()) {
__block std::optional<std::exception_ptr> block_exception;
dispatch_sync(queue, ^() {
try {
block();
} catch (...) {
block_exception = std::current_exception();
}
});
if (block_exception) {
std::rethrow_exception(*block_exception);
}
}
} // namespace at::mps

View File

@ -1936,7 +1936,7 @@ static bool should_fold(const Tensor& tensor1, const Tensor& tensor2, bool has_o
// We order the tensors. t1 will be the larger tensor
// We can always transpose tensor2 as the dimensions are always >= 1 (precondition from matmul)
// and tensor1_larger iff tensor2.dim() > tensor1.dim(9
// and tensor1_larger iff tensor2.dim() > tensor1.dim()
const auto t1 = tensor1_larger ? MaybeOwned<Tensor>::borrowed(tensor1)
: MaybeOwned<Tensor>::owned(tensor2.mT());
const int64_t dim_t1 = t1->dim();
@ -1948,20 +1948,11 @@ static bool should_fold(const Tensor& tensor1, const Tensor& tensor2, bool has_o
return false;
}
// In this case we *do* incur an extra copy to avoid creating an unnecessarily large tensor in the backward
// Suppose we don't fold here. Let t1.shape = [b, m, n] t2.shape = [n, k] like in a transformer
// t2 will be expanded to a tensor of shape [b, n, k] and then we do t1.bmm(t2_expanded)
// The issue appears in the backward.
// The output gradient g of this operation would have shape [b, m, k]
// The backward wrt. t2 of bmm would be given by t1.mH @ g, which has shape [b, n, k]
// Then, the backward of expand is simply `sum(0)`. As such, we are instantiating a tensor
// of shape [b, n, k] unnecessarily, which may cause a large memory footprint, and in the
// worst case, an OOM
bool t2_requires_grad = tensor1_larger ? tensor2.requires_grad() : tensor1.requires_grad();
if (t2_requires_grad && !has_out) {
// We should be checking !at::GradMode::is_enabled(), but apparently
// this regresses performance in some cases:
// https://github.com/pytorch/pytorch/issues/118548#issuecomment-1916022394
// If we require a gradient, we should fold to minimize backward memory usage - even if this
// leads to a copy in the forward, because the result is needed in the backward.
// The only time we avoid this is when the output is pre-allocated (has_out = True).
bool requires_grad = tensor1.requires_grad() || tensor2.requires_grad();
if (requires_grad && !has_out) {
return true;
}
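
A hedged illustration of the shapes the comments above reason about (a standalone helper, not the internal should_fold path): folding t1 of shape [b, m, n] against t2 of shape [n, k] into a single mm avoids materializing a [b, n, k] expansion of t2 in the backward.
```
#include <ATen/ATen.h>

// Hypothetical helper showing the folded computation; equivalent to t1.matmul(t2).
at::Tensor folded_matmul(const at::Tensor& t1, const at::Tensor& t2) {
  // t1: [b, m, n], t2: [n, k]
  const auto b = t1.size(0), m = t1.size(1), n = t1.size(2);
  auto folded = t1.reshape({b * m, n}).mm(t2);  // [b*m, k]
  return folded.reshape({b, m, t2.size(1)});    // [b, m, k]
}
```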

View File

@ -142,6 +142,7 @@ Tensor _pack_padded_sequence_backward_symint(const Tensor& grad, c10::SymIntArra
std::tuple<Tensor, Tensor> _pad_packed_sequence(const Tensor& data, const Tensor& _batch_sizes, bool batch_first, const Scalar& padding_value, int64_t total_length) {
auto batch_sizes_t = _batch_sizes.contiguous();
checkLongTensor(batch_sizes_t);
TORCH_CHECK(batch_sizes_t.numel() > 0, "batch_sizes can not be empty");
int64_t * batch_sizes = batch_sizes_t.data_ptr<int64_t>();
int64_t max_batch_size = batch_sizes[0];

View File

@ -23,6 +23,7 @@
#include <ATen/ops/_aminmax_native.h>
#include <ATen/ops/_assert_async_native.h>
#include <ATen/ops/_assert_scalar_native.h>
#include <ATen/ops/_async_error_native.h>
#include <ATen/ops/_functional_assert_async_native.h>
#include <ATen/ops/_functional_assert_scalar_native.h>
#include <ATen/ops/_make_per_tensor_quantized_tensor.h>
@ -479,6 +480,14 @@ Tensor isfinite(const Tensor& self) {
});
}
void _async_error(std::string_view msg) {
TORCH_CHECK(0, msg);
}
void _async_error_meta(std::string_view msg) {
// Do NOT error, it's an async error!
}
void _assert_async_cpu(const Tensor& self) {
TORCH_CHECK(
native::is_nonzero(self),

View File

@ -1,6 +1,8 @@
#pragma once
#include <c10/util/Exception.h>
C10_DIAGNOSTIC_PUSH_AND_IGNORED_IF_DEFINED("-Wswitch-default")
namespace at::native {
// Used as an interface between the different BLAS-like libraries
@ -21,3 +23,5 @@ static inline char to_blas(TransposeType trans) {
}
} // namespace at::native
C10_DIAGNOSTIC_POP()

View File

@ -5,7 +5,6 @@
#include <ATen/native/ReduceOpsUtils.h>
#include <ATen/Dispatch.h>
#include <ATen/Dispatch_v2.h>
#include <ATen/Parallel.h>
#include <ATen/TensorIterator.h>
#include <ATen/OpMathType.h>
@ -79,12 +78,12 @@ void min_all_kernel_impl(Tensor& result, const Tensor& input) {
reduce_all_impl<int64_t>(result, input, upper_bound<int64_t>(),
[=](int64_t a, int64_t b) -> int64_t { return min_impl(a, b); });
} else {
AT_DISPATCH_V2(input.scalar_type(), "min_all", AT_WRAP([&] {
AT_DISPATCH_ALL_TYPES_AND2(kHalf, kBFloat16, input.scalar_type(), "min_all", [&] {
using Vec = Vectorized<opmath_type<scalar_t>>;
reduce_all_impl_vec<scalar_t>(result, input, upper_bound<scalar_t>(),
[=] (scalar_t a , scalar_t b) -> scalar_t { return min_impl(a, b); },
[=](Vec a, Vec b) -> Vec { return minimum(a, b); });
}), AT_EXPAND(AT_ALL_TYPES), AT_EXPAND(AT_BAREBONES_UNSIGNED_TYPES), kHalf, kBFloat16);
});
}
}
@ -104,12 +103,12 @@ void max_all_kernel_impl(Tensor& result, const Tensor& input) {
reduce_all_impl<int64_t>(result, input, lower_bound<int64_t>(),
[=](int64_t a, int64_t b) -> int64_t { return max_impl(a, b); });
} else {
AT_DISPATCH_V2(input.scalar_type(), "max_all", AT_WRAP([&] {
AT_DISPATCH_ALL_TYPES_AND2(kHalf, kBFloat16, input.scalar_type(), "max_all", [&] {
using Vec = Vectorized<opmath_type<scalar_t>>;
reduce_all_impl_vec<scalar_t>(result, input, lower_bound<scalar_t>(),
[=] (scalar_t a , scalar_t b) -> scalar_t { return max_impl(a, b); },
[=](Vec a, Vec b) -> Vec { return maximum(a, b); });
}), AT_EXPAND(AT_ALL_TYPES), AT_EXPAND(AT_BAREBONES_UNSIGNED_TYPES), kHalf, kBFloat16);
});
}
}
@ -200,7 +199,7 @@ void aminmax_allreduce_kernel(
}
);
} else {
AT_DISPATCH_V2(input.scalar_type(), "aminmax_cpu", AT_WRAP([&] {
AT_DISPATCH_ALL_TYPES_AND2(kBFloat16, kHalf, input.scalar_type(), "aminmax_cpu", [&] {
using Vec = Vectorized<opmath_type<scalar_t>>;
using scalar_t_pair = std::pair<scalar_t, scalar_t>;
reduce_all_impl_vec_two_outputs<scalar_t>(
@ -215,7 +214,7 @@ void aminmax_allreduce_kernel(
[=](Vec a, Vec b) -> Vec { return minimum(a, b); },
[=](Vec a, Vec b) -> Vec { return maximum(a, b); }
);
}), AT_EXPAND(AT_ALL_TYPES), AT_EXPAND(AT_BAREBONES_UNSIGNED_TYPES), kBFloat16, kHalf);
});
}
}
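
For context, a minimal standalone sketch of the dtype-dispatch pattern these hunks toggle between: the macro switches on the runtime scalar type and instantiates the lambda with scalar_t bound to the concrete C++ type (the V2 form additionally wraps the lambda in AT_WRAP and lists the type sets explicitly). The helper below is hypothetical and assumes a contiguous CPU tensor.
```
#include <ATen/Dispatch.h>
#include <ATen/core/Tensor.h>

// Hypothetical example: accumulate a tensor's elements into a double,
// dispatching over all standard types plus Half and BFloat16.
double sum_as_double(const at::Tensor& t) {
  double acc = 0;
  AT_DISPATCH_ALL_TYPES_AND2(
      at::kHalf, at::kBFloat16, t.scalar_type(), "sum_as_double", [&] {
        const scalar_t* data = t.const_data_ptr<scalar_t>();  // assumes contiguous
        for (int64_t i = 0; i < t.numel(); ++i) {
          acc += static_cast<double>(data[i]);
        }
      });
  return acc;
}
```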

View File

@ -3,7 +3,6 @@
#include <ATen/core/Tensor.h>
#include <ATen/Dispatch.h>
#include <ATen/Dispatch_v2.h>
#include <ATen/OpMathType.h>
#include <ATen/cpu/vec/vec.h>
#include <ATen/cpu/vec/functional.h>
@ -348,35 +347,34 @@ struct MinValuesOps: public at::native::MinOps<scalar_t> {
};
void min_values_kernel_impl(TensorIterator& iter) {
// This case is special because Vectorized<int64_t> does not
// handle upper_bound<int64_t>().
// See: https://github.com/pytorch/pytorch/issues/43254
if (iter.dtype() == kLong || iter.dtype() == kUInt64) {
AT_DISPATCH_V2(iter.dtype(), "min_values_cpu", AT_WRAP([&iter] {
binary_kernel_reduce(
iter,
MinValuesOps<scalar_t>{},
std::pair<scalar_t, int64_t>(upper_bound<scalar_t>(), -1));
}), kLong, kUInt64);
if (iter.dtype() == kLong) {
// This case is special because Vectorized<int64_t> does not
// handle upper_bound<int64_t>().
// See: https://github.com/pytorch/pytorch/issues/43254
using scalar_t = int64_t;
binary_kernel_reduce(
iter,
MinValuesOps<scalar_t>{},
std::pair<scalar_t, int64_t>(upper_bound<scalar_t>(), -1));
return;
}
AT_DISPATCH_V2(iter.dtype(), "min_values_cpu", AT_WRAP([&iter] {
AT_DISPATCH_ALL_TYPES_AND3(kBFloat16, kHalf, kBool, iter.dtype(), "min_values_cpu", [&iter] {
binary_kernel_reduce_vec(
iter,
[](scalar_t a, scalar_t b) -> scalar_t { return min_impl(a, b); },
[](Vectorized<scalar_t> a, Vectorized<scalar_t> b) { return minimum(a, b); },
static_cast<double>(upper_bound<scalar_t>()));
}), AT_EXPAND(AT_ALL_TYPES), AT_EXPAND(AT_BAREBONES_UNSIGNED_TYPES), kBFloat16, kHalf, kBool);
});
}
void max_values_kernel_impl(TensorIterator& iter) {
AT_DISPATCH_V2(iter.dtype(), "max_values_cpu", AT_WRAP([&iter] {
AT_DISPATCH_ALL_TYPES_AND3(kBFloat16, kHalf, kBool, iter.dtype(), "max_values_cpu", [&iter] {
binary_kernel_reduce_vec(
iter,
[](scalar_t a, scalar_t b) -> scalar_t { return max_impl(a, b); },
[](Vectorized<scalar_t> a, Vectorized<scalar_t> b) { return maximum(a, b); },
lower_bound<scalar_t>());
}), AT_EXPAND(AT_ALL_TYPES), AT_EXPAND(AT_BAREBONES_UNSIGNED_TYPES), kBFloat16, kHalf, kBool);
});
}
void argmax_kernel_impl(TensorIterator &iter) {

View File

@ -11,7 +11,6 @@
#include <vector>
#include <ATen/Dispatch.h>
#include <ATen/Dispatch_v2.h>
#include <ATen/Parallel.h>
#include <ATen/NumericUtils.h>
#include <ATen/TensorIterator.h>
@ -107,7 +106,7 @@ void min_kernel_impl(
bool keepdim) {
int64_t self_dim_size = ensure_nonempty_size(self, dim);
AT_DISPATCH_V2(self.scalar_type(), "min_cpu", AT_WRAP([&] {
AT_DISPATCH_ALL_TYPES_AND3(ScalarType::Half, ScalarType::BFloat16, ScalarType::Bool, self.scalar_type(), "min_cpu", [&] {
compare_base_kernel<scalar_t>(result, indice, self, dim, keepdim, [&] (
scalar_t* result_data, int64_t* indice_data,
const scalar_t* self_data, auto self_dim_stride) {
@ -129,7 +128,7 @@ void min_kernel_impl(
*indice_data = index;
}
);
}), AT_EXPAND(AT_ALL_TYPES), AT_EXPAND(AT_BAREBONES_UNSIGNED_TYPES), ScalarType::Half, ScalarType::BFloat16, ScalarType::Bool);
});
}
void max_kernel_impl(
@ -140,7 +139,7 @@ void max_kernel_impl(
bool keepdim) {
int64_t self_dim_size = ensure_nonempty_size(self, dim);
AT_DISPATCH_V2(self.scalar_type(), "max_cpu", AT_WRAP([&] {
AT_DISPATCH_ALL_TYPES_AND3(ScalarType::Half, ScalarType::BFloat16, ScalarType::Bool, self.scalar_type(), "max_cpu", [&] {
compare_base_kernel<scalar_t>(result, indice, self, dim, keepdim, [&] (
scalar_t* result_data, int64_t* indice_data,
const scalar_t* self_data, auto self_dim_stride) {
@ -162,7 +161,7 @@ void max_kernel_impl(
*indice_data = index;
}
);
}), AT_EXPAND(AT_ALL_TYPES), AT_EXPAND(AT_BAREBONES_UNSIGNED_TYPES), ScalarType::Half, ScalarType::BFloat16, ScalarType::Bool);
});
}
void aminmax_kernel(
@ -187,7 +186,7 @@ void aminmax_kernel(
return;
}
AT_DISPATCH_V2(self.scalar_type(), "aminmax_cpu", AT_WRAP([&] {
AT_DISPATCH_ALL_TYPES_AND3(ScalarType::Bool, ScalarType::BFloat16, ScalarType::Half, self.scalar_type(), "aminmax_cpu", [&] {
compare_base_kernel<scalar_t, scalar_t>(min_result, max_result, self, wrap_dim, keepdim, [&] (
scalar_t* min_result_data, scalar_t* max_result_data,
const scalar_t* self_data, auto self_dim_stride) {
@ -210,7 +209,7 @@ void aminmax_kernel(
*max_result_data = max_number;
}
);
}), AT_EXPAND(AT_ALL_TYPES), AT_EXPAND(AT_BAREBONES_UNSIGNED_TYPES), ScalarType::Bool, ScalarType::BFloat16, ScalarType::Half);
});
}
void where_kernel_impl(TensorIterator &iter) {

View File

@ -1,6 +1,7 @@
#pragma once
#include <ATen/native/CompositeRandomAccessorCommon.h>
#include <thrust/swap.h>
#include <thrust/tuple.h>
namespace at { namespace native {

View File

@ -669,9 +669,12 @@ std::optional<c10::ScalarType> out_dtype) {
// _scaled_mm_allowed_device is used here within _grouped_mm_cuda which seems incorrect since scale is not used.
// the _grouped_mm_fallback should be safe for any ROCm GPU since it's just calling typical mm/bmm
bool use_fast_path = false;
// On non-CK systems (with ROCm), make sure use_fast_path stays false
#if defined(USE_ROCM_CK_GEMM)
if (at::detail::getCUDAHooks().isGPUArch({"gfx942", "gfx950"})) {
use_fast_path = true;
}
#endif //USE_ROCM_CK_GEMM
#endif
const auto out_dtype_ = _resolve_grouped_mm_out_dtype(mat_a, mat_b, out_dtype);
Tensor out = create_grouped_gemm_output_tensor(mat_a, mat_b, offs, out_dtype_);
@ -680,7 +683,11 @@ std::optional<c10::ScalarType> out_dtype) {
#ifndef USE_ROCM
at::cuda::detail::bf16bf16_grouped_mm(mat_a, mat_b, offs, bias, out);
#else
#if defined(USE_ROCM_CK_GEMM)
at::hip::detail::group_gemm_ck(mat_a, mat_b, offs, bias, out);
#else
TORCH_WARN("ROCm: Group Gemm through CK not selected.");
#endif //USE_ROCM_CK_GEMM
#endif
} else {
_grouped_mm_fallback(mat_a, mat_b, offs, bias, out_dtype, out);

View File

@ -1,6 +1,5 @@
#define TORCH_ASSERT_NO_OPERATORS
#include <ATen/Dispatch.h>
#include <ATen/Dispatch_v2.h>
#include <ATen/NumericUtils.h>
#include <ATen/native/DispatchStub.h>
#include <ATen/native/ReduceAllOps.h>
@ -29,22 +28,22 @@ void _min_max_values_kernel_cuda_impl(TensorIterator& iter) {
}
void aminmax_allreduce_launch_kernel(TensorIterator& iter) {
AT_DISPATCH_V2(
iter.input_dtype(), "aminmax_all_cuda", AT_WRAP([&] {
AT_DISPATCH_ALL_TYPES_AND3(
kBFloat16, kHalf, kBool, iter.input_dtype(), "aminmax_all_cuda", [&] {
_min_max_values_kernel_cuda_impl<scalar_t>(iter);
}), AT_EXPAND(AT_ALL_TYPES), AT_EXPAND(AT_BAREBONES_UNSIGNED_TYPES), kBFloat16, kHalf, kBool);
});
}
void aminmax_launch_kernel(TensorIterator& iter) {
AT_DISPATCH_V2(
iter.input_dtype(), "aminmax_cuda", AT_WRAP([&]() {
AT_DISPATCH_ALL_TYPES_AND3(
kBFloat16, kHalf, kBool, iter.input_dtype(), "aminmax_cuda", [&]() {
gpu_reduce_kernel<scalar_t, scalar_t>(
iter,
MinMaxOps<scalar_t, scalar_t, int32_t>{},
thrust::pair<scalar_t, scalar_t>(
at::numeric_limits<scalar_t>::upper_bound(),
at::numeric_limits<scalar_t>::lower_bound()));
}), AT_EXPAND(AT_ALL_TYPES), AT_EXPAND(AT_BAREBONES_UNSIGNED_TYPES), kBFloat16, kHalf, kBool);
});
}
} // namespace at::native

View File

@ -1,6 +1,5 @@
#define TORCH_ASSERT_NO_OPERATORS
#include <ATen/Dispatch.h>
#include <ATen/Dispatch_v2.h>
#include <ATen/NumericUtils.h>
#include <ATen/native/DispatchStub.h>
#include <ATen/native/ReduceAllOps.h>
@ -34,27 +33,27 @@ void max_values_kernel_cuda_impl(TensorIterator& iter) {
}
void max_values_kernel_cuda(TensorIterator& iter) {
AT_DISPATCH_V2(
iter.dtype(), "max_values_cuda", AT_WRAP([&]() {
AT_DISPATCH_ALL_TYPES_AND3(
kBFloat16, kHalf, kBool, iter.dtype(), "max_values_cuda", [&]() {
max_values_kernel_cuda_impl<scalar_t>(iter);
}), AT_EXPAND(AT_ALL_TYPES), AT_EXPAND(AT_BAREBONES_UNSIGNED_TYPES), kBFloat16, kHalf, kBool);
});
}
void max_launch_kernel(TensorIterator& iter) {
AT_DISPATCH_V2(
iter.input_dtype(), "max_cuda", AT_WRAP([&]() {
AT_DISPATCH_ALL_TYPES_AND3(
kBFloat16, kHalf, kBool, iter.input_dtype(), "max_cuda", [&]() {
gpu_reduce_kernel<scalar_t, scalar_t>(
iter,
MaxOps<scalar_t>{},
thrust::pair<scalar_t, int64_t>(
at::numeric_limits<scalar_t>::lower_bound(), 0));
}), AT_EXPAND(AT_ALL_TYPES), AT_EXPAND(AT_BAREBONES_UNSIGNED_TYPES), kBFloat16, kHalf, kBool);
});
}
void max_all_launch_kernel(TensorIterator &iter) {
AT_DISPATCH_V2(iter.input_dtype(), "max_all_cuda", AT_WRAP([&] {
AT_DISPATCH_ALL_TYPES_AND3(kBFloat16, kHalf, kBool, iter.input_dtype(), "max_all_cuda", [&] {
max_values_kernel_cuda_impl<scalar_t>(iter);
}), AT_EXPAND(AT_ALL_TYPES), AT_EXPAND(AT_BAREBONES_UNSIGNED_TYPES), kBFloat16, kHalf, kBool);
});
}
REGISTER_DISPATCH(max_values_stub, &max_values_kernel_cuda)

View File

@ -12,7 +12,6 @@
#include <ATen/NumericUtils.h>
#include <ATen/Dispatch.h>
#include <ATen/Dispatch_v2.h>
#include <ATen/NumericUtils.h>
#include <ATen/cuda/NumericLimits.cuh>
@ -34,24 +33,24 @@ void min_values_kernel_cuda_impl(TensorIterator& iter) {
}
void min_values_kernel_cuda(TensorIterator& iter) {
AT_DISPATCH_V2(iter.dtype(), "min_values_cuda", AT_WRAP([&]() {
AT_DISPATCH_ALL_TYPES_AND3(kBFloat16, kHalf, kBool, iter.dtype(), "min_values_cuda", [&]() {
min_values_kernel_cuda_impl<scalar_t>(iter);
}), AT_EXPAND(AT_ALL_TYPES), AT_EXPAND(AT_BAREBONES_UNSIGNED_TYPES), kBFloat16, kHalf, kBool);
});
}
void min_launch_kernel(TensorIterator &iter) {
AT_DISPATCH_V2(iter.input_dtype(), "min_cuda", AT_WRAP([&]() {
AT_DISPATCH_ALL_TYPES_AND3(kBFloat16, kHalf, kBool, iter.input_dtype(), "min_cuda", [&]() {
gpu_reduce_kernel<scalar_t, scalar_t>(
iter,
MinOps<scalar_t>{},
thrust::pair<scalar_t, int64_t>(at::numeric_limits<scalar_t>::upper_bound(), 0));
}), AT_EXPAND(AT_ALL_TYPES), AT_EXPAND(AT_BAREBONES_UNSIGNED_TYPES), kBFloat16, kHalf, kBool);
});
}
void min_all_launch_kernel(TensorIterator &iter) {
AT_DISPATCH_V2(iter.input_dtype(), "min_all_cuda", AT_WRAP([&] {
AT_DISPATCH_ALL_TYPES_AND3(kBFloat16, kHalf, kBool, iter.input_dtype(), "min_all_cuda", [&] {
min_values_kernel_cuda_impl<scalar_t>(iter);
}), AT_EXPAND(AT_ALL_TYPES), AT_EXPAND(AT_BAREBONES_UNSIGNED_TYPES), kBFloat16, kHalf, kBool);
});
}
REGISTER_DISPATCH(min_values_stub, &min_values_kernel_cuda)

View File

@ -267,15 +267,15 @@ void scan_dim_with_indices(const TensorBase& self, const TensorBase& values, con
* outer dimensions, which contains several "inner rows").
* Each thread processes a single inner row at a time.
*/
template<typename scalar_t, class BinaryOp>
template<typename scalar_t, typename index_t, class BinaryOp>
__global__ void tensor_kernel_scan_outer_dim(scalar_t *tgt_, const scalar_t *src_,
const uint32_t num_orows, const uint32_t num_irows, const uint32_t row_size,
const scalar_t init, BinaryOp binary_op)
{
for (uint32_t orow = blockIdx.x; orow < num_orows; orow += gridDim.x) {
for (uint32_t irow = blockIdx.y * blockDim.x + threadIdx.x; irow < num_irows; irow += gridDim.y * blockDim.x) {
const scalar_t *src = src_ + orow * row_size * num_irows + irow;
scalar_t *tgt = tgt_ + orow * row_size * num_irows + irow;
const scalar_t *src = src_ + static_cast<index_t>(orow) * row_size * num_irows + irow;
scalar_t *tgt = tgt_ + (index_t) orow * row_size * num_irows + irow;
scalar_t acc = init;
for (uint32_t col = 0; col < row_size; ++col) {
@ -409,10 +409,15 @@ __host__ void scan_outer_dim(const TensorBase& self, const TensorBase& result,
check_fits_in_unsigned(num_irows, "num_irows");
check_fits_in_unsigned(num_orows, "num_orows");
check_fits_in_unsigned(row_size, "row_size");
tensor_kernel_scan_outer_dim<scalar_t><<<grid, threads, 0, at::cuda::getCurrentCUDAStream()>>>(
if (static_cast<size_t>(num_irows) * num_orows * row_size <= UINT_MAX) {
tensor_kernel_scan_outer_dim<scalar_t, uint32_t><<<grid, threads, 0, at::cuda::getCurrentCUDAStream()>>>(
result.mutable_data_ptr<scalar_t>(), self.const_data_ptr<scalar_t>(),
num_orows, num_irows, row_size, init, binary_op);
} else {
tensor_kernel_scan_outer_dim<scalar_t, size_t><<<grid, threads, 0, at::cuda::getCurrentCUDAStream()>>>(
result.mutable_data_ptr<scalar_t>(), self.const_data_ptr<scalar_t>(),
num_orows, num_irows, row_size, init, binary_op);
}
C10_CUDA_KERNEL_LAUNCH_CHECK();
}
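
A small hedged illustration of the overflow that the new index_t template parameter guards against: with 32-bit indices the offset orow * row_size * num_irows wraps once the product exceeds UINT_MAX, which is why the launcher above falls back to size_t for large tensors.
```
#include <cstdint>
#include <cstdio>

int main() {
  const uint32_t num_irows = 1u << 16;  // 65,536 inner rows
  const uint32_t row_size  = 1u << 17;  // 131,072 columns
  const uint32_t orow      = 1;         // second outer row

  uint32_t off32 = orow * row_size * num_irows;                        // wraps to 0
  uint64_t off64 = static_cast<uint64_t>(orow) * row_size * num_irows; // 2^33

  std::printf("32-bit offset: %u\n64-bit offset: %llu\n",
              off32, static_cast<unsigned long long>(off64));
  return 0;
}
```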

View File

@ -40,8 +40,6 @@ using namespace at::mps;
namespace at::native::mps {
void dispatch_sync_with_rethrow(dispatch_queue_t queue, void (^block)());
struct MPSScalar {
id<MTLBuffer> getMTLBuffer() const {
return __builtin_bit_cast(id<MTLBuffer>, buffer.get());

View File

@ -53,21 +53,6 @@
@end
namespace at::native::mps {
void dispatch_sync_with_rethrow(dispatch_queue_t queue, void (^block)()) {
__block std::optional<std::exception_ptr> block_exception;
dispatch_sync(queue, ^() {
try {
block();
} catch (...) {
block_exception = std::current_exception();
}
});
if (block_exception) {
std::rethrow_exception(*block_exception);
}
}
/**
* Computes distance from lowest to highest element offset in given tensor.
*/

View File

@ -1,4 +1,5 @@
#include <c10/metal/atomic.h>
#include <c10/metal/error.h>
#include <c10/metal/indexing.h>
#include <metal_stdlib>
@ -31,10 +32,24 @@ OffsetT index_apply_indices(
constant IndexAB* indices,
constant int64_t* sizes,
constant int64_t* strides,
uint num_indices) {
uint num_indices,
thread bool& error,
device ErrorMessages* error_buf) {
OffsetT rc = offs.x;
for (uint i = 0; i < num_indices; i++) {
auto idx = indices[i].indexArray[offs.y];
if (idx < -sizes[i] || idx >= sizes[i]) {
TORCH_REPORT_ERROR(
error_buf,
"index ",
idx,
" is out of bounds for dimension ",
i,
" with size ",
sizes[i]);
error = true;
break;
}
if (idx < 0) {
idx += sizes[i];
}
@ -55,6 +70,7 @@ kernel void index_select(
constant int64_t* index_sizes,
constant int64_t* index_strides,
constant uint4& ndim_nindices_numel,
device ErrorMessages* error_buffer,
uint thread_index [[thread_position_in_grid]]) {
const auto ndim = ndim_nindices_numel.x;
const auto num_indices = ndim_nindices_numel.y;
@ -65,8 +81,19 @@ kernel void index_select(
indices_strides,
ndim,
thread_index);
bool error = false;
auto input_offs = index_apply_indices<OffsetT>(
offs.yz, indices, index_sizes, index_strides, num_indices);
offs.yz,
indices,
index_sizes,
index_strides,
num_indices,
error,
error_buffer);
if (error) {
output[offs.x / sizeof(T)] = 0;
return;
}
output[offs.x / sizeof(T)] = input[input_offs / sizeof(T)];
}
@ -82,7 +109,9 @@ inline void index_put_impl(
constant int64_t* index_sizes,
constant int64_t* index_strides,
constant uint4& ndim_nindices_numel,
device ErrorMessages* error_buffer,
uint thread_index) {
bool error = false;
const auto ndim = ndim_nindices_numel.x;
const auto num_indices = ndim_nindices_numel.y;
const auto offs = index_get_offsets(
@ -93,7 +122,16 @@ inline void index_put_impl(
ndim,
thread_index);
auto output_offs = index_apply_indices<OffsetT>(
offs.xz, indices, index_sizes, index_strides, num_indices);
offs.xz,
indices,
index_sizes,
index_strides,
num_indices,
error,
error_buffer);
if (error) {
return;
}
output[output_offs / sizeof(T)] = input[offs.y / sizeof(T)];
}
@ -109,6 +147,7 @@ kernel void index_put(
constant int64_t* index_sizes,
constant int64_t* index_strides,
constant uint4& ndim_nindices_numel,
device ErrorMessages* error_buffer,
uint thread_index [[thread_position_in_grid]]) {
index_put_impl(
output,
@ -121,6 +160,7 @@ kernel void index_put(
index_sizes,
index_strides,
ndim_nindices_numel,
error_buffer,
thread_index);
}
@ -136,6 +176,7 @@ kernel void index_put_serial(
constant int64_t* index_sizes,
constant int64_t* index_strides,
constant uint4& ndim_nindices_numel,
device ErrorMessages* error_buffer,
uint thread_index [[thread_position_in_grid]]) {
(void)thread_index; // Suppress unused variable warning
for (uint idx = 0; idx < ndim_nindices_numel.z; ++idx) {
@ -150,6 +191,7 @@ kernel void index_put_serial(
index_sizes,
index_strides,
ndim_nindices_numel,
error_buffer,
idx);
}
}
@ -166,6 +208,7 @@ kernel void index_put_accumulate(
constant int64_t* index_sizes,
constant int64_t* index_strides,
constant uint4& ndim_nindices_numel,
device ErrorMessages* error_buffer,
uint thread_index [[thread_position_in_grid]]) {
const auto ndim = ndim_nindices_numel.x;
const auto num_indices = ndim_nindices_numel.y;
@ -176,8 +219,18 @@ kernel void index_put_accumulate(
indices_strides,
ndim,
thread_index);
bool error = false;
auto output_offs = index_apply_indices<OffsetT>(
offs.xz, indices, index_sizes, index_strides, num_indices);
offs.xz,
indices,
index_sizes,
index_strides,
num_indices,
error,
error_buffer);
if (error) {
return;
}
AtomicType<T>::atomic_add(
reinterpret_cast<device AtomicType_t<T>*>(output),
output_offs / sizeof(T),
@ -197,6 +250,7 @@ kernel void index_put_accumulate(
constant int64_t* index_sizes, \
constant int64_t* index_strides, \
constant uint4& ndim_nindices_numel, \
device ErrorMessages* error_buffer, \
uint thread_index [[thread_position_in_grid]])
#define REGISTER_INDEX_OP_ALL_DTYPES(OP_NAME) \

View File

@ -220,7 +220,7 @@ Tensor _embedding_bag_dense_backward_mps(const Tensor& output_grad,
auto num_threads = (params.mode == EmbeddingBagMode::MAX) ? output_grad.numel() : num_indices * params.feature_size;
MPSStream* stream = getCurrentMPSStream();
mps::dispatch_sync_with_rethrow(stream->queue(), ^() {
dispatch_sync_with_rethrow(stream->queue(), ^() {
@autoreleasepool {
id<MTLComputeCommandEncoder> computeEncoder = stream->commandEncoder();
auto pipeline_state = lib.getPipelineStateForFunc(fmt::format("embedding_bag_backward_{}_{}",
@ -273,7 +273,7 @@ Tensor _embedding_bag_per_sample_weights_backward_mps(const Tensor& output_grad,
auto num_threads = num_indices * feature_size;
MPSStream* stream = getCurrentMPSStream();
mps::dispatch_sync_with_rethrow(stream->queue(), ^() {
dispatch_sync_with_rethrow(stream->queue(), ^() {
@autoreleasepool {
id<MTLComputeCommandEncoder> computeEncoder = stream->commandEncoder();
auto pipeline_state = lib.getPipelineStateForFunc(fmt::format("embedding_bag_per_sample_weights_backward_{}_{}",

View File

@ -179,7 +179,8 @@ static void dispatch_index_kernel(TensorIteratorBase& iter,
iter.strides(2),
index_size,
index_stride,
ndim_nindiees);
ndim_nindiees,
mpsStream->getErrorBuffer());
mtl_dispatch1DJob(computeEncoder, indexSelectPSO, serial ? 1 : iter.numel());
});
}
@ -299,7 +300,7 @@ static Tensor& nonzero_out_native_mps(const Tensor& self, Tensor& out_) {
MPSStream* stream = getCurrentMPSStream();
using CachedGraph = MPSUnaryCachedGraph;
dispatch_sync(stream->queue(), ^() {
dispatch_sync_with_rethrow(stream->queue(), ^() {
stream->synchronize(SyncType::COMMIT_AND_WAIT);
});
int64_t total_nonzero = at::count_nonzero(self).item<int64_t>();
@ -384,7 +385,7 @@ Tensor& nonzero_out_mps(const Tensor& self, Tensor& out_) {
MPSStream* stream = getCurrentMPSStream();
using CachedGraph = MPSUnaryCachedGraph;
dispatch_sync(stream->queue(), ^() {
dispatch_sync_with_rethrow(stream->queue(), ^() {
stream->synchronize(SyncType::COMMIT_AND_WAIT);
});
int64_t total_nonzero = at::count_nonzero(self).item<int64_t>();

View File

@ -923,7 +923,7 @@ std::tuple<Tensor, Tensor, Tensor> layer_norm_mps(const Tensor& input,
MPSStream* stream = getCurrentMPSStream();
TORCH_CHECK_NOT_IMPLEMENTED(input.scalar_type() != kLong, "Not implemented for long on MPS");
@autoreleasepool {
mps::dispatch_sync_with_rethrow(stream->queue(), ^() {
dispatch_sync_with_rethrow(stream->queue(), ^() {
// which kernel variant to use based on the normalized axis N size
const int N_READS = 4;
auto metalType = mps::scalarToMetalTypeString(input);

View File

@ -192,6 +192,11 @@
CompositeExplicitAutograd: _assert_tensor_metadata
Meta: _assert_tensor_metadata_meta_symint
- func: _async_error(str msg) -> ()
dispatch:
CompositeExplicitAutograd: _async_error
Meta: _async_error_meta
- func: _print(str s) -> ()
dispatch:
CompositeExplicitAutograd: _print
@ -7513,7 +7518,7 @@
- func: _sparse_mask_projection(Tensor self, Tensor mask, bool accumulate_matches=False) -> Tensor
variants: method
dispatch:
SparseCPU, SparseCUDA: sparse_mask_projection
SparseCPU, SparseCUDA, SparseMPS: sparse_mask_projection
autogen: _sparse_mask_projection.out
- func: _to_cpu(Tensor[] tensors) -> Tensor[]

View File

@ -30,10 +30,12 @@
#include <thrust/binary_search.h>
#include <thrust/device_ptr.h>
#include <thrust/distance.h>
#include <thrust/iterator/constant_iterator.h>
#include <thrust/scan.h>
#include <thrust/sequence.h>
#include <thrust/sort.h>
#include <thrust/system/cuda/execution_policy.h>
#include <thrust/iterator/constant_iterator.h>
#include <cuda_runtime_api.h>
#include <cusparse.h>
@ -47,6 +49,7 @@
#include <c10/macros/Macros.h>
#include <thrust/copy.h>
#include <thrust/device_ptr.h>
#include <thrust/distance.h>
#include <thrust/for_each.h>
#include <thrust/functional.h>
#include <thrust/gather.h>

View File

@ -445,6 +445,33 @@ static SparseTensor& mul_out_dense_sparse_mps(
return out;
}
static std::tuple<Tensor, Tensor, int64_t> mps_intersect_binary_search(
const Tensor& A_keys,
const Tensor& B_keys,
int64_t lenA,
int64_t lenB,
bool boolean_flag) {
auto stream = getCurrentMPSStream();
auto outA_idx = at::empty({lenA}, A_keys.options().dtype(at::kLong));
auto outB_idx = at::empty({lenA}, A_keys.options().dtype(at::kLong));
auto counter = at::zeros({1}, A_keys.options().dtype(at::kInt));
dispatch_sync_with_rethrow(stream->queue(), ^() {
@autoreleasepool {
auto pso = lib.getPipelineStateForFunc("intersect_binary_search");
auto enc = stream->commandEncoder();
[enc setComputePipelineState:pso];
mtl_setArgs(enc, A_keys, B_keys, outA_idx, outB_idx, counter,
static_cast<uint32_t>(lenB), boolean_flag);
mtl_dispatch1DJob(enc, pso, static_cast<uint32_t>(lenA));
}
});
const auto match_count = static_cast<int64_t>(counter.item<int32_t>());
return std::make_tuple(std::move(outA_idx), std::move(outB_idx), match_count);
}
SparseTensor& mul_out_sparse_mps(const Tensor& t_, const Tensor& src_, SparseTensor& r_) {
TORCH_CHECK(r_.is_mps(), "mul: expected 'out' to be MPS, but got ", r_.device());
@ -523,22 +550,10 @@ SparseTensor& mul_out_sparse_mps(const Tensor& t_, const Tensor& src_, SparseTen
auto A_keys = A_is_lhs ? lhs_keys : rhs_keys;
auto B_keys = A_is_lhs ? rhs_keys : lhs_keys;
auto outA_idx = at::empty({lenA}, at::device(device).dtype(kLong));
auto outB_idx = at::empty({lenA}, at::device(device).dtype(kLong));
auto counter = at::zeros({1}, at::device(device).dtype(kInt));
auto [outA_idx, outB_idx, M_int64] = mps_intersect_binary_search(
A_keys, B_keys, lenA, lenB, A_is_lhs);
dispatch_sync_with_rethrow(stream->queue(), ^() {
@autoreleasepool {
auto pso = lib.getPipelineStateForFunc("intersect_binary_search");
auto enc = stream->commandEncoder();
[enc setComputePipelineState:pso];
mtl_setArgs(enc, A_keys, B_keys, outA_idx, outB_idx, counter,
static_cast<uint32_t>(lenB), A_is_lhs);
mtl_dispatch1DJob(enc, pso, static_cast<uint32_t>(lenA));
}
});
const uint32_t M = counter.item<int32_t>(); // number of structural matches
const auto M = static_cast<uint32_t>(M_int64); // number of structural matches
r_.resize_as_(lhs);
@ -762,6 +777,14 @@ SparseTensor& add_out_sparse_mps(const SparseTensor& self,
using OptTensor = std::optional<Tensor>;
static Tensor create_sparse_output_values(
const Tensor& template_values,
int64_t output_nnz,
ScalarType dtype) {
auto out_val_sizes = template_values.sizes().vec();
out_val_sizes[0] = output_nnz;
return at::zeros(out_val_sizes, template_values.options().dtype(dtype));
}
static void sparse_mask_apply_out_mps_kernel(
Tensor& result,
@ -783,9 +806,9 @@ static void sparse_mask_apply_out_mps_kernel(
auto src = src_in.coalesce();
auto mask = coalesce_mask ? mask_in.coalesce() : mask_in;
const int64_t src_nnz = src._nnz();
const int64_t mask_nnz = mask._nnz();
const int64_t sd = src.sparse_dim();
const auto src_nnz = src._nnz();
const auto mask_nnz = mask._nnz();
const auto sd = src.sparse_dim();
result.sparse_resize_(mask.sizes(), mask.sparse_dim(), mask.dense_dim());
auto commonDtype = at::result_type(src, mask);
@ -814,53 +837,27 @@ static void sparse_mask_apply_out_mps_kernel(
return;
}
auto mask_indices = mask._indices().contiguous();
auto src_values = src._values().to(commonDtype).contiguous();
auto out_values = create_sparse_output_values(src_values, mask_nnz, commonDtype);
if (src_nnz == 0) {
auto out_indices = mask._indices().contiguous();
auto src_values = src._values().to(commonDtype);
auto out_val_sizes = src_values.sizes().vec();
out_val_sizes[0] = mask_nnz;
auto out_values = at::zeros(out_val_sizes, src_values.options());
alias_into_sparse(result, out_indices, out_values);
alias_into_sparse(result, mask_indices, out_values);
result._coalesced_(mask.is_coalesced());
return;
}
auto mask_indices = mask._indices().contiguous();
auto src_indices = src._indices().contiguous();
auto src_values = src._values().to(commonDtype).contiguous();
auto mask_keys = flatten_indices(mask._indices().contiguous(), mask.sizes().slice(0, sd)).contiguous();
auto src_keys = flatten_indices(src._indices().contiguous(), src.sizes().slice(0, sd)).contiguous();
auto mask_keys = flatten_indices(mask_indices, mask.sizes().slice(0, sd)).contiguous();
auto src_keys = flatten_indices(src_indices, src.sizes().slice(0, sd)).contiguous();
const bool A_is_src = (src_nnz <= mask_nnz);
const int64_t lenA = A_is_src ? src_nnz : mask_nnz;
const int64_t lenB = A_is_src ? mask_nnz : src_nnz;
const auto A_is_src = (src_nnz <= mask_nnz);
const auto lenA = A_is_src ? src_nnz : mask_nnz;
const auto lenB = A_is_src ? mask_nnz : src_nnz;
auto A_keys = A_is_src ? src_keys : mask_keys;
auto B_keys = A_is_src ? mask_keys : src_keys;
const auto device = result.device();
auto stream = getCurrentMPSStream();
auto outA_idx = at::empty({lenA}, at::device(device).dtype(at::kLong));
auto outB_idx = at::empty({lenA}, at::device(device).dtype(at::kLong));
auto counter = at::zeros({1}, at::device(device).dtype(at::kInt));
dispatch_sync_with_rethrow(stream->queue(), ^() {
@autoreleasepool {
auto pso = lib.getPipelineStateForFunc("intersect_binary_search");
auto enc = stream->commandEncoder();
[enc setComputePipelineState:pso];
mtl_setArgs(enc, A_keys, B_keys, outA_idx, outB_idx, counter,
static_cast<uint32_t>(lenB), A_is_src);
mtl_dispatch1DJob(enc, pso, static_cast<uint32_t>(lenA));
}
});
const int64_t M = static_cast<int64_t>(counter.item<int32_t>());
auto out_val_sizes = src_values.sizes().vec();
out_val_sizes[0] = mask_nnz;
auto out_values = at::zeros(out_val_sizes, src_values.options());
auto [outA_idx, outB_idx, M] = mps_intersect_binary_search(
A_keys, B_keys, lenA, lenB, A_is_src);
if (M > 0) {
auto src_match = outA_idx.narrow(0, 0, M);
@ -878,6 +875,70 @@ static void sparse_mask_apply_out_mps_kernel(
result._coalesced_(mask.is_coalesced());
}
static void sparse_mask_projection_out_mps_kernel(
Tensor& result,
const Tensor& lhs,
const Tensor& rhs,
const OptTensor& /*x_hash_opt*/,
bool accumulate_matches) {
TORCH_CHECK(lhs.is_sparse() && rhs.is_sparse(), "sparse_mask_projection: expected sparse COO");
TORCH_CHECK(lhs.is_mps() && rhs.is_mps(), "sparse_mask_projection: expected MPS tensors");
TORCH_CHECK(lhs.sparse_dim() == rhs.sparse_dim(), "sparse_dim mismatch");
auto lhs_c = lhs.coalesce();
auto rhs_c = rhs.coalesce();
const auto sd = lhs_c.sparse_dim();
const auto lhs_nnz = lhs_c._nnz();
const auto rhs_nnz = rhs_c._nnz();
auto commonDtype = at::result_type(lhs_c, rhs_c);
TORCH_CHECK(canCast(commonDtype, result.scalar_type()),
"Can't convert ", commonDtype, " to output ", result.scalar_type());
result.sparse_resize_(lhs.sizes(), lhs.sparse_dim(), lhs.dense_dim());
auto lhs_indices = lhs_c._indices().contiguous();
auto rhs_values = rhs_c._values().to(commonDtype).contiguous();
auto out_values = create_sparse_output_values(rhs_values, lhs_nnz, commonDtype);
if (lhs_nnz > 0 && rhs_nnz > 0) {
auto lhs_keys = flatten_indices(lhs_indices, lhs_c.sizes().slice(0, sd)).contiguous();
auto rhs_keys = flatten_indices(rhs_c._indices().contiguous(), rhs_c.sizes().slice(0, sd)).contiguous();
const auto A_is_lhs = (lhs_nnz <= rhs_nnz);
const auto lenA = A_is_lhs ? lhs_nnz : rhs_nnz;
const auto lenB = A_is_lhs ? rhs_nnz : lhs_nnz;
auto A_keys = A_is_lhs ? lhs_keys : rhs_keys;
auto B_keys = A_is_lhs ? rhs_keys : lhs_keys;
auto [outA_idx, outB_idx, M] = mps_intersect_binary_search(
A_keys, B_keys, lenA, lenB, A_is_lhs);
if (M > 0) {
auto idx_in_A = outA_idx.narrow(0, 0, M);
auto idx_in_B = outB_idx.narrow(0, 0, M);
auto idx_in_lhs = A_is_lhs ? idx_in_A : idx_in_B;
auto idx_in_rhs = A_is_lhs ? idx_in_B : idx_in_A;
const auto view_cols = rhs_values.numel() / std::max<int64_t>(rhs_nnz, 1);
auto rhs_rows = rhs_values.index_select(0, idx_in_rhs).contiguous();
auto rhs_rows_2d = rhs_rows.view({M, view_cols});
auto out_2d = out_values.view({lhs_nnz, view_cols});
if (accumulate_matches) {
out_2d.index_add_(0, idx_in_lhs, rhs_rows_2d);
} else {
out_2d.index_copy_(0, idx_in_lhs, rhs_rows_2d);
}
}
}
alias_into_sparse(result, lhs._indices(), out_values);
result._coalesced_(lhs.is_coalesced());
}
static void sparse_mask_intersection_out_mps_kernel(
Tensor& result,
const Tensor& lhs,
@ -1002,4 +1063,5 @@ Tensor sparse_sparse_matmul_mps(const Tensor& mat1_, const Tensor& mat2_) {
}
REGISTER_MPS_DISPATCH(sparse_mask_intersection_out_stub, &sparse_mask_intersection_out_mps_kernel);
REGISTER_MPS_DISPATCH(sparse_mask_projection_out_stub, &sparse_mask_projection_out_mps_kernel);
} // namespace at::native
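
A hedged CPU-side sketch of what the mps_intersect_binary_search helper computes on the GPU: for each key in the shorter sorted array A, binary-search the longer sorted array B and record the matching positions plus a match count (flattened coalesced sparse indices are sorted, which is what makes the binary search valid). The function below is a hypothetical analogue, not the Metal kernel.
```
#include <algorithm>
#include <cstdint>
#include <tuple>
#include <vector>

std::tuple<std::vector<int64_t>, std::vector<int64_t>, int64_t>
intersect_sorted_keys(const std::vector<int64_t>& A_keys,
                      const std::vector<int64_t>& B_keys) {
  std::vector<int64_t> outA_idx, outB_idx;
  for (int64_t i = 0; i < static_cast<int64_t>(A_keys.size()); ++i) {
    auto it = std::lower_bound(B_keys.begin(), B_keys.end(), A_keys[i]);
    if (it != B_keys.end() && *it == A_keys[i]) {
      outA_idx.push_back(i);                    // match position in A
      outB_idx.push_back(it - B_keys.begin());  // match position in B
    }
  }
  const auto match_count = static_cast<int64_t>(outA_idx.size());
  return {outA_idx, outB_idx, match_count};
}
```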

View File

@ -1,191 +1,3 @@
#pragma once
#include <ATen/xpu/XPUContext.h>
#include <optional>
namespace at::xpu {
/*
XPUEvent is a movable, non-copyable wrapper around a SYCL event. An XPUEvent is
constructed lazily when first recorded. It has a device, and this device is
* acquired from the first recording stream. Later streams that record the event
* must match the same device.
*
* Currently, XPUEvent does NOT support to export an inter-process event from
* another process via inter-process communication(IPC). So it means that
* inter-process communication for event handles between different processes is
* not available. This could impact some applications that rely on cross-process
* synchronization and communication.
*/
struct TORCH_XPU_API XPUEvent {
// Constructors
XPUEvent(bool enable_timing = false) noexcept
: enable_timing_{enable_timing} {}
~XPUEvent() {
if (isCreated()) {
const c10::impl::PyInterpreter* interp = c10::impl::GPUTrace::get_trace();
if (C10_UNLIKELY(interp)) {
(*interp)->trace_gpu_event_deletion(
at::kXPU, reinterpret_cast<uintptr_t>(event_.get()));
}
}
}
XPUEvent(const XPUEvent&) = delete;
XPUEvent& operator=(const XPUEvent&) = delete;
XPUEvent(XPUEvent&& other) = default;
XPUEvent& operator=(XPUEvent&& other) = default;
operator sycl::event&() const {
return event();
}
std::optional<at::Device> device() const {
if (isCreated()) {
return at::Device(at::kXPU, device_index_);
} else {
return std::nullopt;
}
}
inline bool isCreated() const {
return (event_.get() != nullptr);
}
DeviceIndex device_index() const {
return device_index_;
}
sycl::event& event() const {
return *event_;
}
bool query() const {
using namespace sycl::info;
if (!isCreated()) {
return true;
}
return event().get_info<event::command_execution_status>() ==
event_command_status::complete;
}
void record() {
record(getCurrentXPUStream());
}
void recordOnce(const XPUStream& stream) {
if (!isCreated()) {
record(stream);
}
}
void record(const XPUStream& stream) {
if (!isCreated()) {
device_index_ = stream.device_index();
assignEvent(stream.queue());
const c10::impl::PyInterpreter* interp = c10::impl::GPUTrace::get_trace();
if (C10_UNLIKELY(interp)) {
(*interp)->trace_gpu_event_creation(
at::kXPU, reinterpret_cast<uintptr_t>(event_.get()));
}
} else {
TORCH_CHECK(
device_index_ == stream.device_index(),
"Event device ",
device_index_,
" does not match recording stream's device ",
stream.device_index(),
".");
reassignEvent(stream.queue());
}
const c10::impl::PyInterpreter* interp = c10::impl::GPUTrace::get_trace();
if (C10_UNLIKELY(interp)) {
(*interp)->trace_gpu_event_record(
at::kXPU,
reinterpret_cast<uintptr_t>(event_.get()),
reinterpret_cast<uintptr_t>(&stream.queue()));
}
}
void block(const XPUStream& stream) {
if (isCreated()) {
std::vector<sycl::event> event_list{event()};
// Make this stream wait until event_ is completed.
stream.queue().ext_oneapi_submit_barrier(event_list);
const c10::impl::PyInterpreter* interp = c10::impl::GPUTrace::get_trace();
if (C10_UNLIKELY(interp)) {
(*interp)->trace_gpu_event_wait(
at::kXPU,
reinterpret_cast<uintptr_t>(event_.get()),
reinterpret_cast<uintptr_t>(&stream.queue()));
}
}
}
double elapsed_time(const XPUEvent& other) const {
TORCH_CHECK(
isCreated() && other.isCreated(),
"Both events must be recorded before calculating elapsed time.");
TORCH_CHECK(
query() && other.query(),
"Both events must be completed before calculating elapsed time.");
TORCH_CHECK(
enable_timing_ && other.enable_timing_,
"Both events must be created with argument 'enable_timing=True'.");
#if SYCL_COMPILER_VERSION < 20250000
TORCH_CHECK_NOT_IMPLEMENTED(
false,
"elapsed_time of XPUEvent requires PyTorch to be built with SYCL compiler version 2025.0.0 or newer.");
#endif
using namespace sycl::info::event_profiling;
// Block until both of the recorded events are completed.
uint64_t end_time_ns = other.event().get_profiling_info<command_end>();
uint64_t start_time_ns = event().get_profiling_info<command_end>();
// Return the elapsed time in milliseconds.
return 1e-6 *
(static_cast<double>(end_time_ns) - static_cast<double>(start_time_ns));
}
void synchronize() const {
if (isCreated()) {
const c10::impl::PyInterpreter* interp = c10::impl::GPUTrace::get_trace();
if (C10_UNLIKELY(interp)) {
(*interp)->trace_gpu_event_synchronization(
at::kXPU, reinterpret_cast<uintptr_t>(event_.get()));
}
event().wait_and_throw();
}
}
private:
void assignEvent(sycl::queue& queue) {
#if SYCL_COMPILER_VERSION >= 20250000
if (enable_timing_) {
event_ = std::make_unique<sycl::event>(
sycl::ext::oneapi::experimental::submit_profiling_tag(queue));
} else {
event_ = std::make_unique<sycl::event>(queue.ext_oneapi_submit_barrier());
}
#else
event_ = std::make_unique<sycl::event>(queue.ext_oneapi_submit_barrier());
#endif
}
void reassignEvent(sycl::queue& queue) {
event_.reset();
assignEvent(queue);
}
bool enable_timing_ = false;
DeviceIndex device_index_ = -1;
// Only need to track the last event, as events in an in-order queue are
// executed sequentially.
std::unique_ptr<sycl::event> event_;
};
} // namespace at::xpu
#include <c10/xpu/XPUEvent.h>
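The class body above is now included from c10/xpu/XPUEvent.h; this ATen header only forwards to it, and the behaviour described in the deleted comment (lazy event creation on first record, device pinned to the first recording stream, timing gated on enable_timing) is unchanged. A minimal timing sketch from the Python side, assuming the torch.xpu.Event wrapper mirrors the torch.cuda.Event interface and an XPU device is available:

import torch

# Sketch only: time one matmul with XPU events. Assumes torch.xpu.Event
# exposes record / synchronize / elapsed_time like torch.cuda.Event and
# that this build actually has an XPU device.
if torch.xpu.is_available():
    x = torch.randn(1024, 1024, device="xpu")
    start = torch.xpu.Event(enable_timing=True)
    end = torch.xpu.Event(enable_timing=True)

    start.record()           # underlying sycl::event is created lazily here
    y = x @ x
    end.record()
    end.synchronize()        # wait_and_throw() on the recorded event

    print(f"matmul: {start.elapsed_time(end):.3f} ms")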

View File

@ -50,6 +50,7 @@ def check_accuracy(actual_csv, expected_csv, expected_filename):
"mobilenet_v2",
"pytorch_CycleGAN_and_pix2pix",
"pytorch_stargan",
"repvgg_a2",
"resnet152",
"resnet18",
"resnet50",

View File

@ -10,7 +10,7 @@ beit_base_patch16_224,pass,7
convnextv2_nano.fcmae_ft_in22k_in1k,pass,7
convnextv2_nano.fcmae_ft_in22k_in1k,fail_accuracy,7
@ -66,7 +66,7 @@ visformer_small,pass,7
vit_base_patch14_dinov2.lvd142m,pass,7
vit_base_patch14_dinov2.lvd142m,fail_accuracy,7

1 name accuracy graph_breaks
10 mobilenetv2_100 pass 7
11 mobilenetv3_large_100 pass 7
12 mobilevit_s pass 6
13 nfnet_l0 pass 7
14 repvgg_a2 pass 7
15 swin_base_patch4_window7_224 pass 7
16 tf_efficientnet_b0 pass 6

View File

@ -50,7 +50,7 @@ nfnet_l0,pass,7
repvgg_a2,fail_accuracy,7
repvgg_a2,pass,7


View File

@ -952,7 +952,7 @@ def latency_experiment_summary(suite_name, args, model, timings, **kwargs):
first_fields.append(kwargs["tag"])
headers = first_headers + ["speedup", "abs_latency"]
row = first_fields + [float(speedup), median[1] * 1000]
msg = f"{speedup:.3f}x"
msg = f"{median[0] * 1000} ms, {median[1] * 1000} ms, {speedup:.3f}x"
if args.baseline:
headers.extend(
[
@ -1010,7 +1010,7 @@ def latency_experiment_summary(suite_name, args, model, timings, **kwargs):
# Hypothetically you can use this from other places, but it's currently
# inaccessible, and when this assert fails you need to update the
# event_name here to account for the other cases you are using this
assert args.quantization is not None
assert any([args.quantization, args.optimus])
output_signpost(
dict(zip(headers, row)),
args,
@ -2288,11 +2288,9 @@ class BenchmarkRunner:
)
):
is_same = False
except Exception as e:
except Exception:
# Sometimes torch.allclose may throw RuntimeError
exception_string = str(e)
accuracy_status = f"fail_exception: {exception_string}"
return record_status(accuracy_status, dynamo_start_stats=start_stats)
is_same = False
if not is_same:
accuracy_status = "eager_two_runs_differ"
@ -2409,11 +2407,9 @@ class BenchmarkRunner:
force_max_multiplier=force_max_multiplier,
):
is_same = False
except Exception as e:
except Exception:
# Sometimes torch.allclose may throw RuntimeError
exception_string = str(e)
accuracy_status = f"fail_exception: {exception_string}"
return record_status(accuracy_status, dynamo_start_stats=start_stats)
is_same = False
if not is_same:
if self.args.skip_accuracy_check:
@ -2587,6 +2583,9 @@ class BenchmarkRunner:
**experiment_kwargs,
)
# reset dynamo
torch._dynamo.reset()
if self.args.export_aot_inductor:
optimized_model_iter_fn = optimize_ctx
else:
@ -2950,7 +2949,7 @@ class BenchmarkRunner:
status = self.check_tolerance(name, model, example_inputs, optimize_ctx)
print(status)
elif self.args.performance:
if self.args.backend == "torchao":
if self.args.backend in ["torchao", "optimus"]:
status = self.run_performance_test_non_alternate(
name, model, example_inputs, optimize_ctx, experiment, tag
)
@ -3526,6 +3525,12 @@ def parse_args(args=None):
action="store_true",
help="Measure speedup with TorchInductor",
)
group.add_argument(
"--optimus",
choices=["vertical_opt", "horizontal_opt", "all"],
default=None,
help="Measure speedup of Optimus with TorchInductor baseline",
)
group.add_argument(
"--quantization",
choices=[
@ -3783,6 +3788,9 @@ def run(runner, args, original_dir=None):
if args.inductor:
assert args.backend is None
args.backend = "inductor"
if args.optimus:
assert args.backend is None
args.backend = "optimus"
if args.quantization:
assert args.backend is None
args.backend = "torchao"
@ -4067,10 +4075,22 @@ def run(runner, args, original_dir=None):
runner.model_iter_fn = model_iter_fn_and_mark_step
optimize_ctx = torchao_optimize_ctx(args.quantization)
elif args.backend == "optimus":
from .optimus import get_baseline_ctx, get_optimus_optimize_ctx
baseline_ctx = get_baseline_ctx(
nopython=args.nopython, inductor_compile_mode=args.inductor_compile_mode
)
runner.model_iter_fn = baseline_ctx(runner.model_iter_fn)
optimize_ctx = get_optimus_optimize_ctx(
args.optimus, args.nopython, args.inductor_compile_mode
)
else:
optimize_ctx = torch._dynamo.optimize(args.backend, nopython=args.nopython)
experiment = (
speedup_experiment if args.backend != "torchao" else latency_experiment
speedup_experiment
if args.backend not in ["torchao", "optimus"]
else latency_experiment
)
if args.accuracy:
output_filename = f"accuracy_{args.backend}.csv"
@ -4091,7 +4111,12 @@ def run(runner, args, original_dir=None):
if args.only in runner.disable_cudagraph_models:
args.disable_cudagraphs = True
if args.inductor or args.backend == "inductor" or args.export_aot_inductor:
if (
args.inductor
or args.backend == "inductor"
or args.export_aot_inductor
or args.backend == "optimus"
):
inductor_config.triton.cudagraphs = not args.disable_cudagraphs
inductor_config.triton.persistent_reductions = (
not args.disable_persistent_reductions

View File

@ -0,0 +1,62 @@
import functools
import torch
def get_baseline_ctx(nopython, inductor_compile_mode):
return functools.partial(
torch.compile,
backend="inductor",
fullgraph=nopython,
mode=inductor_compile_mode,
)
def get_optimus_optimize_ctx(config, nopython, inductor_compile_mode):
if config == "vertical_opt":
optimus_inductor_config = {
"pre_grad_fusion_options": {
"normalization_pass": {},
"merge_splits_pass": {},
"split_cat_pass": {},
"unbind_stack_pass": {},
"unbind_cat_to_view_pass": {},
}
}
elif config == "horizontal_opt":
optimus_inductor_config = {
"pre_grad_fusion_options": {
"normalization_pass": {},
"batch_linear": {},
"batch_layernorm": {},
},
}
elif config == "all":
optimus_inductor_config = {
"pre_grad_fusion_options": {
"normalization_pass": {},
"batch_linear": {},
"batch_layernorm": {},
"merge_splits_pass": {},
"split_cat_pass": {},
"unbind_stack_pass": {},
"unbind_cat_to_view_pass": {},
},
}
else:
raise RuntimeError(f"Unknown optimus config: {config}")
def _inner(fn):
if "pre_grad_fusion_options" in optimus_inductor_config:
torch._inductor.config.pre_grad_fusion_options = optimus_inductor_config[
"pre_grad_fusion_options"
]
if "post_grad_fusion_options" in optimus_inductor_config:
torch._inductor.config.post_grad_fusion_options = optimus_inductor_config[
"post_grad_fusion_options"
]
return torch.compile(
fn, backend="inductor", fullgraph=nopython, mode=inductor_compile_mode
)
return _inner
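How these two helpers are intended to be combined (an illustrative sketch; the toy model and the top-level import are hypothetical, the harness itself uses a relative import): the baseline compiles the iteration function with plain Inductor, while the Optimus context first installs the selected pre-grad fusion options and then compiles with the same Inductor settings, so any speedup is attributable to the fusion passes.

import torch
import torch.nn as nn

from optimus import get_baseline_ctx, get_optimus_optimize_ctx  # path assumed

model = nn.Sequential(nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 8))
x = torch.randn(32, 64)

def model_iter_fn(mod, inputs):
    return mod(*inputs)

# Baseline: plain Inductor compile of the iteration function.
baseline_fn = get_baseline_ctx(nopython=True, inductor_compile_mode=None)(model_iter_fn)

# Optimus: sets torch._inductor.config.pre_grad_fusion_options for the chosen
# preset, then compiles with the same Inductor settings.
optimus_fn = get_optimus_optimize_ctx("vertical_opt", True, None)(model_iter_fn)

baseline_out = baseline_fn(model, [x])
optimus_out = optimus_fn(model, [x])
torch.testing.assert_close(baseline_out, optimus_out)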

View File

@ -2,6 +2,7 @@ import csv
import os
import re
import sys
from pathlib import Path
# This script takes the logs produced by the benchmark scripts (e.g.,
@ -15,8 +16,7 @@ import sys
# This script is not very well written, feel free to rewrite it as necessary
assert len(sys.argv) == 2
full_log = open(sys.argv[1]).read()
full_log = Path(sys.argv[1]).read_text()
# If the log contains a gist URL, extract it so we can include it in the CSV
gist_url = ""

View File

@ -0,0 +1,62 @@
import sys
from benchmark_base import BenchmarkBase
import torch
from torch.distributed._tensor import DTensor, Replicate
from torch.testing._internal.distributed.fake_pg import FakeStore
class BenchmarkDTensorDispatch(BenchmarkBase):
def __init__(self, operator, world_size) -> None:
super().__init__(
category=f"dtensor_dispatch_{operator}",
device="cuda",
)
self.world_size = world_size
def name(self) -> str:
prefix = f"{self.category()}"
return prefix
def description(self) -> str:
return f"DTensor dispatch time for {self.category()}"
def _prepare_once(self) -> None:
self.mesh = torch.distributed.device_mesh.init_device_mesh(
"cuda", (self.world_size,), mesh_dim_names=("dp",)
)
self.a = DTensor.from_local(
torch.ones(10, 10, device=self.device()), self.mesh, [Replicate()]
)
self.b = DTensor.from_local(
torch.ones(10, 10, device=self.device()), self.mesh, [Replicate()]
)
def _prepare(self) -> None:
pass
class BenchmarkDetach(BenchmarkDTensorDispatch):
def __init__(self, world_size) -> None:
super().__init__(operator="detach", world_size=world_size)
def _work(self) -> None:
self.a.detach()
def main():
world_size = 256
fake_store = FakeStore()
torch.distributed.init_process_group(
"fake", store=fake_store, rank=0, world_size=world_size
)
result_path = sys.argv[1]
BenchmarkDetach(world_size).enable_instruction_count().collect_all().append_results(
result_path
)
torch.distributed.destroy_process_group()
if __name__ == "__main__":
main()

View File

@ -484,24 +484,106 @@ PyTorch,sum,sum_R256_V512_dim0_contiguousTrue_cpu,short,False,50.954394,0.000000
PyTorch,sum,sum_R256_V512_dim0_contiguousFalse_cpu,short,False,57.957757,0.000000
PyTorch,sum,sum_R256_V512_dim1_contiguousTrue_cpu,short,False,53.592068,0.000000
PyTorch,sum,sum_R256_V512_dim1_contiguousFalse_cpu,short,False,51.339726,0.000000
PyTorch,FloatToHalfTensorConversionBenchmark,FloatToHalfTensorConversionBenchmark_M8_N16_cpu,short,False,7.040985,0.000000
PyTorch,FloatToHalfTensorConversionBenchmark,FloatToHalfTensorConversionBenchmark_M8_N64_cpu,short,False,7.168604,0.000000
PyTorch,FloatToHalfTensorConversionBenchmark,FloatToHalfTensorConversionBenchmark_M8_N128_cpu,short,False,7.434442,0.000000
PyTorch,FloatToHalfTensorConversionBenchmark,FloatToHalfTensorConversionBenchmark_M16_N16_cpu,short,False,7.078318,0.000000
PyTorch,FloatToHalfTensorConversionBenchmark,FloatToHalfTensorConversionBenchmark_M16_N64_cpu,short,False,7.426670,0.000000
PyTorch,FloatToHalfTensorConversionBenchmark,FloatToHalfTensorConversionBenchmark_M16_N128_cpu,short,False,7.679027,0.000000
PyTorch,FloatToHalfTensorConversionBenchmark,FloatToHalfTensorConversionBenchmark_M32_N16_cpu,short,False,7.281365,0.000000
PyTorch,FloatToHalfTensorConversionBenchmark,FloatToHalfTensorConversionBenchmark_M32_N64_cpu,short,False,7.682783,0.000000
PyTorch,FloatToHalfTensorConversionBenchmark,FloatToHalfTensorConversionBenchmark_M32_N128_cpu,short,False,8.381938,0.000000
PyTorch,HalfToFloatTensorConversionBenchmark,HalfToFloatTensorConversionBenchmark_M8_N16_cpu,short,False,7.039854,0.000000
PyTorch,HalfToFloatTensorConversionBenchmark,HalfToFloatTensorConversionBenchmark_M8_N64_cpu,short,False,7.399855,0.000000
PyTorch,HalfToFloatTensorConversionBenchmark,HalfToFloatTensorConversionBenchmark_M8_N128_cpu,short,False,7.715193,0.000000
PyTorch,HalfToFloatTensorConversionBenchmark,HalfToFloatTensorConversionBenchmark_M16_N16_cpu,short,False,7.255140,0.000000
PyTorch,HalfToFloatTensorConversionBenchmark,HalfToFloatTensorConversionBenchmark_M16_N64_cpu,short,False,7.753522,0.000000
PyTorch,HalfToFloatTensorConversionBenchmark,HalfToFloatTensorConversionBenchmark_M16_N128_cpu,short,False,8.364281,0.000000
PyTorch,HalfToFloatTensorConversionBenchmark,HalfToFloatTensorConversionBenchmark_M32_N16_cpu,short,False,7.476377,0.000000
PyTorch,HalfToFloatTensorConversionBenchmark,HalfToFloatTensorConversionBenchmark_M32_N64_cpu,short,False,8.458564,0.000000
PyTorch,HalfToFloatTensorConversionBenchmark,HalfToFloatTensorConversionBenchmark_M32_N128_cpu,short,False,9.391939,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.bool_dtype_twotorch.bool,short,False,0.927,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.bool_dtype_twotorch.uint8,short,False,6.261,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.bool_dtype_twotorch.int8,short,False,6.351,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.bool_dtype_twotorch.int16,short,False,6.177,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.bool_dtype_twotorch.int32,short,False,6.333,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.bool_dtype_twotorch.int64,short,False,6.588,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.bool_dtype_twotorch.float16,short,False,8.117,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.bool_dtype_twotorch.bfloat16,short,False,9.358,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.bool_dtype_twotorch.float32,short,False,7.844,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.bool_dtype_twotorch.float64,short,False,8.097,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.uint8_dtype_twotorch.bool,short,False,6.159,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.uint8_dtype_twotorch.uint8,short,False,0.926,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.uint8_dtype_twotorch.int8,short,False,6.192,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.uint8_dtype_twotorch.int16,short,False,6.276,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.uint8_dtype_twotorch.int32,short,False,6.461,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.uint8_dtype_twotorch.int64,short,False,6.524,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.uint8_dtype_twotorch.float16,short,False,8.136,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.uint8_dtype_twotorch.bfloat16,short,False,6.854,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.uint8_dtype_twotorch.float32,short,False,6.446,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.uint8_dtype_twotorch.float64,short,False,6.829,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.int8_dtype_twotorch.bool,short,False,6.088,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.int8_dtype_twotorch.uint8,short,False,6.059,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.int8_dtype_twotorch.int8,short,False,0.922,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.int8_dtype_twotorch.int16,short,False,6.263,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.int8_dtype_twotorch.int32,short,False,6.330,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.int8_dtype_twotorch.int64,short,False,6.688,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.int8_dtype_twotorch.float16,short,False,8.176,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.int8_dtype_twotorch.bfloat16,short,False,6.959,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.int8_dtype_twotorch.float32,short,False,6.430,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.int8_dtype_twotorch.float64,short,False,6.818,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.int16_dtype_twotorch.bool,short,False,6.350,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.int16_dtype_twotorch.uint8,short,False,6.221,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.int16_dtype_twotorch.int8,short,False,6.193,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.int16_dtype_twotorch.int16,short,False,0.922,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.int16_dtype_twotorch.int32,short,False,6.263,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.int16_dtype_twotorch.int64,short,False,6.525,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.int16_dtype_twotorch.float16,short,False,7.960,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.int16_dtype_twotorch.bfloat16,short,False,6.801,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.int16_dtype_twotorch.float32,short,False,6.594,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.int16_dtype_twotorch.float64,short,False,7.089,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.int32_dtype_twotorch.bool,short,False,6.498,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.int32_dtype_twotorch.uint8,short,False,6.358,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.int32_dtype_twotorch.int8,short,False,6.390,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.int32_dtype_twotorch.int16,short,False,6.415,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.int32_dtype_twotorch.int32,short,False,0.925,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.int32_dtype_twotorch.int64,short,False,6.657,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.int32_dtype_twotorch.float16,short,False,7.954,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.int32_dtype_twotorch.bfloat16,short,False,6.930,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.int32_dtype_twotorch.float32,short,False,6.737,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.int32_dtype_twotorch.float64,short,False,6.948,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.int64_dtype_twotorch.bool,short,False,6.757,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.int64_dtype_twotorch.uint8,short,False,6.402,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.int64_dtype_twotorch.int8,short,False,6.550,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.int64_dtype_twotorch.int16,short,False,6.518,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.int64_dtype_twotorch.int32,short,False,6.766,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.int64_dtype_twotorch.int64,short,False,0.929,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.int64_dtype_twotorch.float16,short,False,8.557,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.int64_dtype_twotorch.bfloat16,short,False,9.045,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.int64_dtype_twotorch.float32,short,False,7.672,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.int64_dtype_twotorch.float64,short,False,7.276,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.float16_dtype_twotorch.bool,short,False,6.414,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.float16_dtype_twotorch.uint8,short,False,7.736,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.float16_dtype_twotorch.int8,short,False,7.889,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.float16_dtype_twotorch.int16,short,False,8.170,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.float16_dtype_twotorch.int32,short,False,7.783,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.float16_dtype_twotorch.int64,short,False,7.743,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.float16_dtype_twotorch.float16,short,False,0.927,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.float16_dtype_twotorch.bfloat16,short,False,7.018,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.float16_dtype_twotorch.float32,short,False,8.428,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.float16_dtype_twotorch.float64,short,False,6.767,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.bfloat16_dtype_twotorch.bool,short,False,6.479,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.bfloat16_dtype_twotorch.uint8,short,False,7.827,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.bfloat16_dtype_twotorch.int8,short,False,6.450,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.bfloat16_dtype_twotorch.int16,short,False,6.320,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.bfloat16_dtype_twotorch.int32,short,False,6.385,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.bfloat16_dtype_twotorch.int64,short,False,8.119,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.bfloat16_dtype_twotorch.float16,short,False,8.063,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.bfloat16_dtype_twotorch.bfloat16,short,False,0.925,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.bfloat16_dtype_twotorch.float32,short,False,8.629,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.bfloat16_dtype_twotorch.float64,short,False,6.638,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.float32_dtype_twotorch.bool,short,False,6.425,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.float32_dtype_twotorch.uint8,short,False,7.803,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.float32_dtype_twotorch.int8,short,False,6.502,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.float32_dtype_twotorch.int16,short,False,6.429,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.float32_dtype_twotorch.int32,short,False,6.549,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.float32_dtype_twotorch.int64,short,False,7.749,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.float32_dtype_twotorch.float16,short,False,7.301,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.float32_dtype_twotorch.bfloat16,short,False,7.682,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.float32_dtype_twotorch.float32,short,False,0.930,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.float32_dtype_twotorch.float64,short,False,6.738,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.float64_dtype_twotorch.bool,short,False,6.798,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.float64_dtype_twotorch.uint8,short,False,6.506,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.float64_dtype_twotorch.int8,short,False,6.494,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.float64_dtype_twotorch.int16,short,False,6.668,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.float64_dtype_twotorch.int32,short,False,6.696,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.float64_dtype_twotorch.int64,short,False,7.115,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.float64_dtype_twotorch.float16,short,False,7.910,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.float64_dtype_twotorch.bfloat16,short,False,7.410,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.float64_dtype_twotorch.float32,short,False,6.868,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.float64_dtype_twotorch.float64,short,False,0.924,0.000000
PyTorch,addcmul,addcmul_M1_N2_cpu_dtypetorch.float32,short,False,4.461410,0.000000
PyTorch,addcmul,addcmul_M1_N2_cpu_dtypetorch.bfloat16,short,False,4.560082,0.000000
PyTorch,addcmul,addcmul_M32_N64_cpu_dtypetorch.float32,short,False,5.141248,0.000000


View File

@ -4,74 +4,84 @@ import torch
tensor_conversion_short_configs = op_bench.cross_product_configs(
M=(
8,
16,
32,
),
N=(
16,
64,
128,
),
M=[32],
N=[128],
device=["cpu", "cuda"],
dtype_one=[
torch.bool,
torch.uint8,
torch.int8,
torch.int16,
torch.int32,
torch.int64,
torch.half,
torch.bfloat16,
torch.float,
torch.double,
],
dtype_two=[
torch.bool,
torch.uint8,
torch.int8,
torch.int16,
torch.int32,
torch.int64,
torch.half,
torch.bfloat16,
torch.float,
torch.double,
],
tags=["short"],
)
tensor_conversion_long_configs = op_bench.cross_product_configs(
M=(
64,
128,
256,
512,
),
N=(
256,
512,
1024,
2048,
),
M=[1024],
N=[1024],
device=["cpu", "cuda"],
dtype_one=[
torch.bool,
torch.uint8,
torch.int8,
torch.int16,
torch.int32,
torch.int64,
torch.half,
torch.bfloat16,
torch.float,
torch.double,
],
dtype_two=[
torch.bool,
torch.uint8,
torch.int8,
torch.int16,
torch.int32,
torch.int64,
torch.half,
torch.bfloat16,
torch.float,
torch.double,
],
tags=["long"],
)
class FloatToHalfTensorConversionBenchmark(op_bench.TorchBenchmarkBase):
def init(self, M, N, device):
class TensorConversionBenchmark(op_bench.TorchBenchmarkBase):
def init(self, M, N, dtype_one, dtype_two, device):
self.inputs = {
"input": torch.rand(
M, N, device=device, requires_grad=False, dtype=torch.float
)
).to(dtype=dtype_one)
}
self.dtype_one = dtype_one
self.dtype_two = dtype_two
def forward(self, input):
return input.to(torch.half)
return input.to(dtype=self.dtype_two)
class HalfToFloatTensorConversionBenchmark(op_bench.TorchBenchmarkBase):
def init(self, M, N, device):
self.inputs = {
"input": torch.rand(
M, N, device=device, requires_grad=False, dtype=torch.half
)
}
def forward(self, input):
return input.to(torch.float)
op_bench.generate_pt_test(
tensor_conversion_short_configs, FloatToHalfTensorConversionBenchmark
)
op_bench.generate_pt_test(
tensor_conversion_long_configs, FloatToHalfTensorConversionBenchmark
)
op_bench.generate_pt_test(
tensor_conversion_short_configs, HalfToFloatTensorConversionBenchmark
)
op_bench.generate_pt_test(
tensor_conversion_long_configs, HalfToFloatTensorConversionBenchmark
)
op_bench.generate_pt_test(tensor_conversion_short_configs, TensorConversionBenchmark)
op_bench.generate_pt_test(tensor_conversion_long_configs, TensorConversionBenchmark)
if __name__ == "__main__":
op_bench.benchmark_runner.main()
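Each generated case now converts a single 32x128 (short) or 1024x1024 (long) tensor between one pair of dtypes. Stripped of the op_bench harness, a short case is roughly equivalent to the hand-rolled measurement below (a sketch for orientation, not the harness itself):

import torch
from torch.utils.benchmark import Timer

# Roughly what the bool -> float16 short case measures: the input is created
# in dtype_one, and forward() is a single input.to(dtype=dtype_two).
x = torch.rand(32, 128, device="cpu").to(dtype=torch.bool)
timer = Timer(stmt="x.to(dtype=torch.float16)", globals={"x": x, "torch": torch})
print(timer.blocked_autorange())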

View File

@ -349,24 +349,106 @@ PyTorch,sum,sum_R256_V512_dim0_contiguousTrue_cpu,short,FALSE,12.5841
PyTorch,sum,sum_R256_V512_dim0_contiguousFALSE_cpu,short,FALSE,20.8765
PyTorch,sum,sum_R256_V512_dim1_contiguousTrue_cpu,short,FALSE,15.4414
PyTorch,sum,sum_R256_V512_dim1_contiguousFALSE_cpu,short,FALSE,15.3287
PyTorch,FloatToHalfTensorConversionBenchmark,FloatToHalfTensorConversionBenchmark_M8_N16_cpu,short,FALSE,5.0499
PyTorch,FloatToHalfTensorConversionBenchmark,FloatToHalfTensorConversionBenchmark_M8_N64_cpu,short,FALSE,5.3229
PyTorch,FloatToHalfTensorConversionBenchmark,FloatToHalfTensorConversionBenchmark_M8_N128_cpu,short,FALSE,5.4418
PyTorch,FloatToHalfTensorConversionBenchmark,FloatToHalfTensorConversionBenchmark_M16_N16_cpu,short,FALSE,5.0868
PyTorch,FloatToHalfTensorConversionBenchmark,FloatToHalfTensorConversionBenchmark_M16_N64_cpu,short,FALSE,5.4495
PyTorch,FloatToHalfTensorConversionBenchmark,FloatToHalfTensorConversionBenchmark_M16_N128_cpu,short,FALSE,5.5578
PyTorch,FloatToHalfTensorConversionBenchmark,FloatToHalfTensorConversionBenchmark_M32_N16_cpu,short,FALSE,5.2631
PyTorch,FloatToHalfTensorConversionBenchmark,FloatToHalfTensorConversionBenchmark_M32_N64_cpu,short,FALSE,5.5646
PyTorch,FloatToHalfTensorConversionBenchmark,FloatToHalfTensorConversionBenchmark_M32_N128_cpu,short,FALSE,5.7898
PyTorch,HalfToFloatTensorConversionBenchmark,HalfToFloatTensorConversionBenchmark_M8_N16_cpu,short,FALSE,5.0228
PyTorch,HalfToFloatTensorConversionBenchmark,HalfToFloatTensorConversionBenchmark_M8_N64_cpu,short,FALSE,5.3692
PyTorch,HalfToFloatTensorConversionBenchmark,HalfToFloatTensorConversionBenchmark_M8_N128_cpu,short,FALSE,5.4006
PyTorch,HalfToFloatTensorConversionBenchmark,HalfToFloatTensorConversionBenchmark_M16_N16_cpu,short,FALSE,5.1107
PyTorch,HalfToFloatTensorConversionBenchmark,HalfToFloatTensorConversionBenchmark_M16_N64_cpu,short,FALSE,5.4119
PyTorch,HalfToFloatTensorConversionBenchmark,HalfToFloatTensorConversionBenchmark_M16_N128_cpu,short,FALSE,5.5583
PyTorch,HalfToFloatTensorConversionBenchmark,HalfToFloatTensorConversionBenchmark_M32_N16_cpu,short,FALSE,5.3818
PyTorch,HalfToFloatTensorConversionBenchmark,HalfToFloatTensorConversionBenchmark_M32_N64_cpu,short,FALSE,5.5742
PyTorch,HalfToFloatTensorConversionBenchmark,HalfToFloatTensorConversionBenchmark_M32_N128_cpu,short,FALSE,6.8414
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.bool_dtype_twotorch.bool,short,False,0.797
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.bool_dtype_twotorch.uint8,short,False,6.071
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.bool_dtype_twotorch.int8,short,False,6.031
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.bool_dtype_twotorch.int16,short,False,6.243
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.bool_dtype_twotorch.int32,short,False,7.231
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.bool_dtype_twotorch.int64,short,False,7.791
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.bool_dtype_twotorch.float16,short,False,12.661
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.bool_dtype_twotorch.bfloat16,short,False,11.225
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.bool_dtype_twotorch.float32,short,False,9.772
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.bool_dtype_twotorch.float64,short,False,9.872
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.uint8_dtype_twotorch.bool,short,False,6.033
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.uint8_dtype_twotorch.uint8,short,False,0.781
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.uint8_dtype_twotorch.int8,short,False,6.060
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.uint8_dtype_twotorch.int16,short,False,6.180
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.uint8_dtype_twotorch.int32,short,False,7.258
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.uint8_dtype_twotorch.int64,short,False,7.758
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.uint8_dtype_twotorch.float16,short,False,10.504
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.uint8_dtype_twotorch.bfloat16,short,False,6.749
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.uint8_dtype_twotorch.float32,short,False,7.679
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.uint8_dtype_twotorch.float64,short,False,7.797
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.int8_dtype_twotorch.bool,short,False,6.019
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.int8_dtype_twotorch.uint8,short,False,6.079
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.int8_dtype_twotorch.int8,short,False,0.785
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.int8_dtype_twotorch.int16,short,False,6.188
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.int8_dtype_twotorch.int32,short,False,7.288
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.int8_dtype_twotorch.int64,short,False,7.770
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.int8_dtype_twotorch.float16,short,False,10.466
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.int8_dtype_twotorch.bfloat16,short,False,6.676
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.int8_dtype_twotorch.float32,short,False,7.736
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.int8_dtype_twotorch.float64,short,False,7.780
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.int16_dtype_twotorch.bool,short,False,6.130
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.int16_dtype_twotorch.uint8,short,False,6.221
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.int16_dtype_twotorch.int8,short,False,6.101
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.int16_dtype_twotorch.int16,short,False,0.791
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.int16_dtype_twotorch.int32,short,False,6.254
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.int16_dtype_twotorch.int64,short,False,7.733
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.int16_dtype_twotorch.float16,short,False,10.562
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.int16_dtype_twotorch.bfloat16,short,False,6.704
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.int16_dtype_twotorch.float32,short,False,7.819
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.int16_dtype_twotorch.float64,short,False,8.276
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.int32_dtype_twotorch.bool,short,False,6.361
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.int32_dtype_twotorch.uint8,short,False,6.364
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.int32_dtype_twotorch.int8,short,False,6.309
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.int32_dtype_twotorch.int16,short,False,6.362
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.int32_dtype_twotorch.int32,short,False,0.791
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.int32_dtype_twotorch.int64,short,False,7.746
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.int32_dtype_twotorch.float16,short,False,9.462
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.int32_dtype_twotorch.bfloat16,short,False,6.678
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.int32_dtype_twotorch.float32,short,False,7.827
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.int32_dtype_twotorch.float64,short,False,8.200
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.int64_dtype_twotorch.bool,short,False,6.925
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.int64_dtype_twotorch.uint8,short,False,6.947
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.int64_dtype_twotorch.int8,short,False,6.962
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.int64_dtype_twotorch.int16,short,False,6.906
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.int64_dtype_twotorch.int32,short,False,7.664
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.int64_dtype_twotorch.int64,short,False,0.782
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.int64_dtype_twotorch.float16,short,False,10.528
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.int64_dtype_twotorch.bfloat16,short,False,10.123
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.int64_dtype_twotorch.float32,short,False,9.234
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.int64_dtype_twotorch.float64,short,False,8.694
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.float16_dtype_twotorch.bool,short,False,12.653
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.float16_dtype_twotorch.uint8,short,False,9.348
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.float16_dtype_twotorch.int8,short,False,8.774
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.float16_dtype_twotorch.int16,short,False,9.063
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.float16_dtype_twotorch.int32,short,False,10.012
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.float16_dtype_twotorch.int64,short,False,13.641
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.float16_dtype_twotorch.float16,short,False,0.788
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.float16_dtype_twotorch.bfloat16,short,False,13.757
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.float16_dtype_twotorch.float32,short,False,7.170
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.float16_dtype_twotorch.float64,short,False,12.511
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.bfloat16_dtype_twotorch.bool,short,False,6.516
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.bfloat16_dtype_twotorch.uint8,short,False,8.539
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.bfloat16_dtype_twotorch.int8,short,False,6.483
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.bfloat16_dtype_twotorch.int16,short,False,6.468
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.bfloat16_dtype_twotorch.int32,short,False,7.752
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.bfloat16_dtype_twotorch.int64,short,False,9.868
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.bfloat16_dtype_twotorch.float16,short,False,10.556
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.bfloat16_dtype_twotorch.bfloat16,short,False,0.792
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.bfloat16_dtype_twotorch.float32,short,False,7.577
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.bfloat16_dtype_twotorch.float64,short,False,8.267
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.float32_dtype_twotorch.bool,short,False,6.819
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.float32_dtype_twotorch.uint8,short,False,7.715
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.float32_dtype_twotorch.int8,short,False,6.754
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.float32_dtype_twotorch.int16,short,False,6.825
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.float32_dtype_twotorch.int32,short,False,7.790
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.float32_dtype_twotorch.int64,short,False,9.219
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.float32_dtype_twotorch.float16,short,False,5.977
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.float32_dtype_twotorch.bfloat16,short,False,7.069
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.float32_dtype_twotorch.float32,short,False,0.794
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.float32_dtype_twotorch.float64,short,False,8.301
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.float64_dtype_twotorch.bool,short,False,7.401
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.float64_dtype_twotorch.uint8,short,False,7.843
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.float64_dtype_twotorch.int8,short,False,7.117
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.float64_dtype_twotorch.int16,short,False,7.170
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.float64_dtype_twotorch.int32,short,False,8.000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.float64_dtype_twotorch.int64,short,False,9.284
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.float64_dtype_twotorch.float16,short,False,7.179
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.float64_dtype_twotorch.bfloat16,short,False,7.645
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.float64_dtype_twotorch.float32,short,False,7.988
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.float64_dtype_twotorch.float64,short,False,0.792
PyTorch,relu,"relu_dims(3,4,5)_contigFALSE_inplaceFALSE_dtypetorch.quint8",short,FALSE,9.4657
PyTorch,relu,"relu_dims(3,4,5)_contigFALSE_inplaceFALSE_dtypetorch.qint8",short,FALSE,9.4625
PyTorch,relu,"relu_dims(3,4,5)_contigFALSE_inplaceFALSE_dtypetorch.qint32",short,FALSE,9.4165

1 Benchmarking Framework Benchmarking Module Name Case Name tag run_backward Execution Time
349 PyTorch sum sum_R256_V512_dim0_contiguousFALSE_cpu short FALSE 20.8765
350 PyTorch sum sum_R256_V512_dim1_contiguousTrue_cpu short FALSE 15.4414
351 PyTorch sum sum_R256_V512_dim1_contiguousFALSE_cpu short FALSE 15.3287
352 PyTorch FloatToHalfTensorConversionBenchmark FloatToHalfTensorConversionBenchmark_M8_N16_cpu short FALSE 5.0499
352 PyTorch TensorConversionBenchmark TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.bool_dtype_twotorch.bool short False 0.797
353 PyTorch FloatToHalfTensorConversionBenchmark FloatToHalfTensorConversionBenchmark_M8_N64_cpu short FALSE 5.3229
353 PyTorch TensorConversionBenchmark TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.bool_dtype_twotorch.uint8 short False 6.071
354 PyTorch FloatToHalfTensorConversionBenchmark FloatToHalfTensorConversionBenchmark_M8_N128_cpu short FALSE 5.4418
354 PyTorch TensorConversionBenchmark TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.bool_dtype_twotorch.int8 short False 6.031
355 PyTorch FloatToHalfTensorConversionBenchmark FloatToHalfTensorConversionBenchmark_M16_N16_cpu short FALSE 5.0868
355 PyTorch TensorConversionBenchmark TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.bool_dtype_twotorch.int16 short False 6.243
356 PyTorch FloatToHalfTensorConversionBenchmark FloatToHalfTensorConversionBenchmark_M16_N64_cpu short FALSE 5.4495
356 PyTorch TensorConversionBenchmark TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.bool_dtype_twotorch.int32 short False 7.231
357 PyTorch FloatToHalfTensorConversionBenchmark FloatToHalfTensorConversionBenchmark_M16_N128_cpu short FALSE 5.5578
357 PyTorch TensorConversionBenchmark TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.bool_dtype_twotorch.int64 short False 7.791
358 PyTorch FloatToHalfTensorConversionBenchmark FloatToHalfTensorConversionBenchmark_M32_N16_cpu short FALSE 5.2631
358 PyTorch TensorConversionBenchmark TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.bool_dtype_twotorch.float16 short False 12.661
359 PyTorch FloatToHalfTensorConversionBenchmark FloatToHalfTensorConversionBenchmark_M32_N64_cpu short FALSE 5.5646
359 PyTorch TensorConversionBenchmark TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.bool_dtype_twotorch.bfloat16 short False 11.225
360 PyTorch FloatToHalfTensorConversionBenchmark FloatToHalfTensorConversionBenchmark_M32_N128_cpu short FALSE 5.7898
360 PyTorch TensorConversionBenchmark TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.bool_dtype_twotorch.float32 short False 9.772
361 PyTorch HalfToFloatTensorConversionBenchmark HalfToFloatTensorConversionBenchmark_M8_N16_cpu short FALSE 5.0228
361 PyTorch TensorConversionBenchmark TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.bool_dtype_twotorch.float64 short False 9.872
362 PyTorch HalfToFloatTensorConversionBenchmark HalfToFloatTensorConversionBenchmark_M8_N64_cpu short FALSE 5.3692
362 PyTorch TensorConversionBenchmark TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.uint8_dtype_twotorch.bool short False 6.033
363 PyTorch HalfToFloatTensorConversionBenchmark HalfToFloatTensorConversionBenchmark_M8_N128_cpu short FALSE 5.4006
363 PyTorch TensorConversionBenchmark TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.uint8_dtype_twotorch.uint8 short False 0.781
364 PyTorch HalfToFloatTensorConversionBenchmark HalfToFloatTensorConversionBenchmark_M16_N16_cpu short FALSE 5.1107
364 PyTorch TensorConversionBenchmark TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.uint8_dtype_twotorch.int8 short False 6.060
365 PyTorch HalfToFloatTensorConversionBenchmark HalfToFloatTensorConversionBenchmark_M16_N64_cpu short FALSE 5.4119
365 PyTorch TensorConversionBenchmark TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.uint8_dtype_twotorch.int16 short False 6.180
366 PyTorch HalfToFloatTensorConversionBenchmark HalfToFloatTensorConversionBenchmark_M16_N128_cpu short FALSE 5.5583
366 PyTorch TensorConversionBenchmark TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.uint8_dtype_twotorch.int32 short False 7.258
367 PyTorch HalfToFloatTensorConversionBenchmark HalfToFloatTensorConversionBenchmark_M32_N16_cpu short FALSE 5.3818
367 PyTorch TensorConversionBenchmark TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.uint8_dtype_twotorch.int64 short False 7.758
368 PyTorch HalfToFloatTensorConversionBenchmark HalfToFloatTensorConversionBenchmark_M32_N64_cpu short FALSE 5.5742
368 PyTorch TensorConversionBenchmark TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.uint8_dtype_twotorch.float16 short False 10.504
369 PyTorch HalfToFloatTensorConversionBenchmark HalfToFloatTensorConversionBenchmark_M32_N128_cpu short FALSE 6.8414
369 PyTorch TensorConversionBenchmark TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.uint8_dtype_twotorch.bfloat16 short False 6.749

View File

@ -83,10 +83,13 @@ if __name__ == "__main__":
    if args.outfile == "stdout":
        outfile = sys.stdout
        need_close = False
    elif args.outfile == "stderr":
        outfile = sys.stderr
        need_close = False
    else:
        outfile = open(args.outfile, "a")
        need_close = True
    test_count = args.test_count
    m = args.m
@ -147,3 +150,5 @@ if __name__ == "__main__":
            time,
            file=outfile,
        )
    if need_close:
        outfile.close()
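The hunk above (and the two that follow it) applies the same pattern to different benchmark scripts: route output to stdout, stderr, or an append-mode file, and close the stream only when the script opened it itself. A minimal C++ rendering of that pattern, written purely as an illustration (none of these names come from the PR):

```cpp
// Illustration only: choose stdout, stderr, or an append-mode file, and close
// the stream only if this code opened it (the need_close flag).
#include <fstream>
#include <iostream>
#include <ostream>
#include <string>

int main(int argc, char** argv) {
  const std::string outname = argc > 1 ? argv[1] : "stdout";
  std::ofstream file;
  std::ostream* out = nullptr;
  bool need_close = false;
  if (outname == "stdout") {
    out = &std::cout;
  } else if (outname == "stderr") {
    out = &std::cerr;
  } else {
    file.open(outname, std::ios::app);  // like Python's open(path, "a")
    out = &file;
    need_close = true;
  }
  *out << "benchmark result line\n";
  if (need_close) {
    file.close();
  }
  return 0;
}
```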

View File

@ -82,10 +82,13 @@ if __name__ == "__main__":
    if args.outfile == "stdout":
        outfile = sys.stdout
        need_close = False
    elif args.outfile == "stderr":
        outfile = sys.stderr
        need_close = False
    else:
        outfile = open(args.outfile, "a")
        need_close = True
    test_count = args.test_count
    m = args.m
@ -132,3 +135,5 @@ if __name__ == "__main__":
            time_csr,
            file=outfile,
        )
    if need_close:
        outfile.close()

View File

@ -179,10 +179,13 @@ if __name__ == "__main__":
    if args.outfile == "stdout":
        outfile = sys.stdout
        need_close = False
    elif args.outfile == "stderr":
        outfile = sys.stderr
        need_close = False
    else:
        outfile = open(args.outfile, "a")
        need_close = True
    ops = args.ops.split(",")
@ -434,3 +437,5 @@ if __name__ == "__main__":
            if op not in {"bsr_scatter_mm6", "bsr_dense_mm_with_meta"}:
                # Break on operations that do not consume parameters
                break
    if need_close:
        outfile.close()

View File

@ -125,6 +125,17 @@ AttentionType = Literal[
]
DtypeString = Literal["bfloat16", "float16", "float32"]
SpeedupType = Literal["fwd", "bwd"]
# Operator Name mapping
backend_to_operator_name = {
"math": "math attention kernel",
"efficient": "efficient attention kernel",
"cudnn": "cudnn attention kernel",
"fav2": "flash attention 2 kernel",
"fav3": "flash attention 3 kernel",
"fakv": "flash attention kv cache kernel",
"og-eager": "eager attention kernel",
"flex": "flex attention kernel",
}
def benchmark_torch_function_in_microseconds(func: Callable, *args, **kwargs) -> float:
@ -1265,12 +1276,14 @@ def _output_json_for_dashboard(
model: ModelInfo
metric: MetricInfo
operator_name = backend_to_operator_name.get(backend, backend)
# Benchmark extra info
benchmark_extra_info = {
"input_config": input_config,
"device": device,
"arch": device_arch,
"operator_name": backend,
"operator_name": operator_name,
"attn_type": config.attn_type,
"shape": str(config.shape),
"max_autotune": config.max_autotune,
@ -1288,7 +1301,7 @@ def _output_json_for_dashboard(
type="attention-benchmark",
origins=["pytorch"],
extra_info={
"operator_name": backend,
"operator_name": operator_name,
"attn_type": config.attn_type,
},
),
@ -1315,7 +1328,7 @@ def _output_json_for_dashboard(
type="attention-benchmark",
origins=["pytorch"],
extra_info={
"operator_name": backend,
"operator_name": operator_name,
},
),
metric=MetricInfo(
@ -1341,7 +1354,7 @@ def _output_json_for_dashboard(
type="attention-benchmark",
origins=["pytorch"],
extra_info={
"operator_name": backend,
"operator_name": operator_name,
},
),
metric=MetricInfo(
@ -1371,7 +1384,7 @@ def _output_json_for_dashboard(
type="attention-benchmark",
origins=["pytorch"],
extra_info={
"operator_name": backend,
"operator_name": operator_name,
},
),
metric=MetricInfo(

View File

@ -19,6 +19,17 @@
namespace c10 {
using CaptureId_t = unsigned long long;
// first is set if the instance is created by CUDAGraph::capture_begin.
// second is set if the instance is created by at::cuda::graph_pool_handle.
using MempoolId_t = std::pair<CaptureId_t, CaptureId_t>;
struct MempoolIdHash {
std::size_t operator()(const MempoolId_t& mempool_id) const noexcept {
return mempool_id.first != 0 ? mempool_id.first : mempool_id.second;
}
};
// A DataPtr is a unique pointer (with an attached deleter and some
// context for the deleter) to some memory, which also records what
// device is for its data.
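The MempoolId_t/MempoolIdHash definitions added above make pool ids usable as keys in standard hash containers. A self-contained sketch of that usage; the use_counts map and its payload are hypothetical, only the pair/hash shape comes from the diff:

```cpp
// Self-contained sketch: the same pair/hash shape as above, used as the key
// of a std::unordered_map. The use_counts table is hypothetical.
#include <cstddef>
#include <unordered_map>
#include <utility>

using CaptureId_t = unsigned long long;
using MempoolId_t = std::pair<CaptureId_t, CaptureId_t>;

struct MempoolIdHash {
  std::size_t operator()(const MempoolId_t& mempool_id) const noexcept {
    // Hash on whichever half of the id is non-zero (uid_ vs. uuid_).
    return mempool_id.first != 0 ? mempool_id.first : mempool_id.second;
  }
};

int main() {
  std::unordered_map<MempoolId_t, int, MempoolIdHash> use_counts;
  use_counts[{0, 1}] = 1;  // user-created pool id: {0, uid}
  use_counts[{1, 0}] = 2;  // graph-created pool id: {uuid, 0}
  return use_counts.size() == 2 ? 0 : 1;
}
```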

View File

@ -99,7 +99,10 @@ struct C10_API DeviceAllocator : public c10::Allocator {
// Return the free memory size and total memory size in bytes for the
// specified device.
virtual std::pair<size_t, size_t> getMemoryInfo(c10::DeviceIndex device) = 0;
virtual std::pair<size_t, size_t> getMemoryInfo(c10::DeviceIndex device) {
TORCH_CHECK_NOT_IMPLEMENTED(
false, "getMemoryInfo is not implemented for this allocator yet.");
}
};
// This function is used to get the DeviceAllocator for a specific device type
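The hunk above relaxes getMemoryInfo from a pure-virtual method to one with a throwing default, so allocators that never report memory stats still compile. A simplified standalone sketch of the same pattern (toy class names, plain std::runtime_error standing in for TORCH_CHECK_NOT_IMPLEMENTED):

```cpp
// Toy analogue of the change above: a virtual query with a throwing default,
// overridden only by backends that can actually report memory stats.
#include <cstddef>
#include <stdexcept>
#include <utility>

struct DeviceAllocatorLike {
  virtual ~DeviceAllocatorLike() = default;
  // Default: fail loudly at call time instead of forcing every subclass to
  // implement the query (stand-in for TORCH_CHECK_NOT_IMPLEMENTED).
  virtual std::pair<std::size_t, std::size_t> getMemoryInfo(int /*device*/) {
    throw std::runtime_error(
        "getMemoryInfo is not implemented for this allocator yet.");
  }
};

struct MyAllocator : DeviceAllocatorLike {
  std::pair<std::size_t, std::size_t> getMemoryInfo(int /*device*/) override {
    return {512, 1024};  // dummy {free, total} bytes
  }
};

int main() {
  MyAllocator a;
  return a.getMemoryInfo(0).first == 512 ? 0 : 1;
}
```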

View File

@ -27,6 +27,7 @@
#include <torch/headeronly/core/ScalarType.h>
C10_DIAGNOSTIC_PUSH_AND_IGNORED_IF_DEFINED("-Wswitch-enum")
C10_DIAGNOSTIC_PUSH_AND_IGNORED_IF_DEFINED("-Wswitch-default")
namespace c10 {
@ -205,6 +206,12 @@ inline bool isSignedType(ScalarType t) {
break;
// Do not add default here, but rather define behavior of every new entry
// here. `-Wswitch-enum` would raise a warning in those cases.
// TODO: get PyTorch to adopt exhaustive switches by default with a way to
// opt specific switches to being non-exhaustive.
// Exhaustive:
// `-Wswitch-enum`, `-Wswitch-default`, `-Wno-covered-switch-default`
// Non-Exhaustive:
// `-Wno-switch-enum`, `-Wswitch-default`, `-Wcovered-switch-default`
}
TORCH_CHECK(false, "Unknown ScalarType ", t);
#undef CASE_ISSIGNED
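The comment added above documents the exhaustive-switch convention: no default case, so -Wswitch-enum flags any ScalarType entry a switch forgets to handle. A toy illustration of that convention (the enum and function are made up for this sketch):

```cpp
// Toy enum and switch illustrating the convention described above: every
// enumerator handled explicitly, no default, so -Wswitch-enum (or -Wswitch)
// warns here if a new enumerator is added later.
enum class ToyScalarType { Float, Int, Bool };

const char* name(ToyScalarType t) {
  switch (t) {
    case ToyScalarType::Float:
      return "Float";
    case ToyScalarType::Int:
      return "Int";
    case ToyScalarType::Bool:
      return "Bool";
    // Intentionally no default case.
  }
  return "Unknown";  // reached only for out-of-range values
}

int main() {
  return name(ToyScalarType::Bool)[0] == 'B' ? 0 : 1;
}
```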

View File

@ -57,6 +57,8 @@ C10_DECLARE_bool(caffe2_keep_on_shrink);
// respect caffe2_keep_on_shrink.
C10_DECLARE_int64(caffe2_max_keep_on_shrink_memory);
C10_DIAGNOSTIC_PUSH_AND_IGNORED_IF_DEFINED("-Wswitch-default")
namespace at {
class Tensor;
class TensorBase;
@ -3303,3 +3305,5 @@ static_assert(
#undef C10_GCC_VERSION_MINOR
} // namespace c10
C10_DIAGNOSTIC_POP()

View File

@ -1012,12 +1012,6 @@ PrivatePoolState::PrivatePoolState(
}
}
struct MempoolIdHash {
std::size_t operator()(const MempoolId_t& mempool_id) const noexcept {
return mempool_id.first != 0 ? mempool_id.first : mempool_id.second;
}
};
cudaError_t allocPrimitive(void** ptr, size_t size, AllocParams& p) {
if (p.pool->owner_PrivatePool && p.pool->owner_PrivatePool->allocator()) {
*ptr = p.pool->owner_PrivatePool->allocator()->raw_alloc(size);
@ -4510,66 +4504,3 @@ std::atomic<CUDAAllocator*> allocator;
static BackendStaticInitializer backend_static_initializer;
} // namespace cuda::CUDACachingAllocator
} // namespace c10
namespace c10::cuda {
// uid_ is incremented when a user creates a MemPool,
// for example: using graph_pool_handle() or c10::cuda::MemPool().
//
// uuid_ is incremented when CUDAGraph creates a MemPool
// as a result of a user not providing a pool.
//
// MempoolId_t of {0, 0} is used to denote when no MemPool has been
// passed to a function, either by user or CUDAGraphs. For example,
// default value of MempoolId_t for capture_begin function is {0, 0}.
// That's why uid_ and uuid_ start at 1.
std::atomic<CaptureId_t> MemPool::uid_{1};
std::atomic<CaptureId_t> MemPool::uuid_{1};
MemPool::MemPool(
CUDACachingAllocator::CUDAAllocator* allocator,
bool is_user_created,
bool use_on_oom)
: allocator_(allocator), is_user_created_(is_user_created) {
if (is_user_created_) {
id_ = {0, uid_++};
} else {
id_ = {uuid_++, 0};
}
device_ = c10::cuda::current_device();
CUDACachingAllocator::createOrIncrefPool(device_, id_, allocator);
if (use_on_oom) {
CUDACachingAllocator::setUseOnOOM(device_, id_);
}
}
MemPool::~MemPool() {
TORCH_INTERNAL_ASSERT(use_count() == 1);
CUDACachingAllocator::releasePool(device_, id_);
c10::cuda::CUDACachingAllocator::emptyCache(id_);
}
MempoolId_t MemPool::id() {
return id_;
}
CUDACachingAllocator::CUDAAllocator* MemPool::allocator() {
return allocator_;
}
int MemPool::use_count() {
return CUDACachingAllocator::getPoolUseCount(device_, id_);
}
c10::DeviceIndex MemPool::device() {
return device_;
}
MempoolId_t MemPool::graph_pool_handle(bool is_user_created) {
if (is_user_created) {
return {0, uid_++};
}
return {uuid_++, 0};
}
} // namespace c10::cuda

View File

@ -562,41 +562,7 @@ inline std::string getUserMetadata() {
} // namespace c10::cuda::CUDACachingAllocator
namespace c10::cuda {
// Keep BC only
using c10::CaptureId_t;
using c10::MempoolId_t;
// MemPool represents a pool of memory in a caching allocator. Currently,
// it's just the ID of the pool object maintained in the CUDACachingAllocator.
//
// An allocator pointer can be passed to the MemPool to define how the
// allocations should be done in the pool. For example: using a different
// system allocator such as ncclMemAlloc.
struct C10_CUDA_API MemPool {
MemPool(
CUDACachingAllocator::CUDAAllocator* allocator = nullptr,
bool is_user_created = true,
bool use_on_oom = false);
MemPool(const MemPool&) = delete;
MemPool(MemPool&&) = default;
MemPool& operator=(const MemPool&) = delete;
MemPool& operator=(MemPool&&) = default;
~MemPool();
MempoolId_t id();
CUDACachingAllocator::CUDAAllocator* allocator();
int use_count();
c10::DeviceIndex device();
static MempoolId_t graph_pool_handle(bool is_user_created = true);
private:
static std::atomic<CaptureId_t> uid_;
static std::atomic<CaptureId_t> uuid_;
CUDACachingAllocator::CUDAAllocator* allocator_;
bool is_user_created_;
MempoolId_t id_;
c10::DeviceIndex device_;
};
} // namespace c10::cuda

View File

@ -295,11 +295,19 @@ DeviceAssertionsData* CUDAKernelLaunchRegistry::
C10_CUDA_CHECK_WO_DSA(
cudaMallocManaged(&uvm_assertions_ptr, sizeof(DeviceAssertionsData)));
#if CUDART_VERSION >= 13000
cudaMemLocation cpuDevice;
cpuDevice.type = cudaMemLocationTypeDevice;
cpuDevice.id = cudaCpuDeviceId;
#else
const auto cpuDevice = cudaCpuDeviceId;
#endif
C10_CUDA_CHECK_WO_DSA(cudaMemAdvise(
uvm_assertions_ptr,
sizeof(DeviceAssertionsData),
cudaMemAdviseSetPreferredLocation,
cudaCpuDeviceId));
cpuDevice));
// GPU will establish direct mapping of data in CPU memory, no page faults
// will be generated
@ -307,7 +315,7 @@ DeviceAssertionsData* CUDAKernelLaunchRegistry::
uvm_assertions_ptr,
sizeof(DeviceAssertionsData),
cudaMemAdviseSetAccessedBy,
cudaCpuDeviceId));
cpuDevice));
// Initialize the memory from the CPU; otherwise, pages may have to be created
// on demand. We think that UVM documentation indicates that first access may

111
c10/metal/error.h Normal file
View File

@ -0,0 +1,111 @@
#pragma once
#include <c10/metal/common.h>
namespace c10 {
namespace metal {
C10_METAL_CONSTEXPR unsigned error_message_count = 30;
struct ErrorMessage {
char file[128];
char func[128];
char message[250];
unsigned int line;
};
struct ErrorMessages {
#ifdef __METAL__
::metal::atomic<unsigned int> count;
#else
unsigned int count;
#endif
ErrorMessage msg[error_message_count];
};
#ifdef __METAL__
namespace detail {
static uint strncpy(device char* dst, constant const char* src, unsigned len) {
uint i = 0;
while (src[i] != 0 && i < len - 1) {
dst[i] = src[i];
i++;
}
dst[i] = 0;
return i;
}
inline uint print_arg(
device char* ptr,
unsigned len,
constant const char* arg) {
return strncpy(ptr, arg, len);
}
// Returns number length as string in base10
static inline uint base10_length(long num) {
uint rc = 1;
if (num < 0) {
num = -num;
rc += 1;
}
while (num > 9) {
num /= 10;
rc++;
}
return rc;
}
// Converts signed integer to string
inline uint print_arg(device char* ptr, unsigned len, long arg) {
const auto arg_len = base10_length(arg);
if (arg_len >= len)
return 0;
if (arg < 0) {
ptr[0] = '-';
arg = -arg;
}
uint idx = 1;
do {
ptr[arg_len - idx] = '0' + (arg % 10);
arg /= 10;
idx++;
} while (arg > 0);
ptr[arg_len] = 0;
return arg_len;
}
template <typename T>
inline void print_args(device char* ptr, unsigned len, T arg) {
print_arg(ptr, len, arg);
}
template <typename T, typename... Args>
inline void print_args(device char* ptr, unsigned len, T arg, Args... args) {
const auto rc = print_arg(ptr, len, arg);
print_args(ptr + rc, len - rc, args...);
}
} // namespace detail
template <typename... Args>
static void report_error(
device ErrorMessages* msgs,
constant const char* file,
int line,
constant const char* func,
Args... args) {
const auto idx =
atomic_fetch_add_explicit(&msgs->count, 1, ::metal::memory_order_relaxed);
if (idx >= error_message_count) {
return;
}
device auto* msg = &msgs->msg[idx];
detail::strncpy(msg->file, file, 128);
detail::strncpy(msg->func, func, 128);
detail::print_args(msg->message, 250, args...);
msg->line = line;
}
#define TORCH_REPORT_ERROR(buf, ...) \
::c10::metal::report_error(buf, __FILE__, __LINE__, __func__, __VA_ARGS__)
#endif
} // namespace metal
} // namespace c10
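The header above only defines the shared buffer layout and the device-side report_error; how the host consumes the buffer is not part of this file. A hypothetical host-side sketch that walks an ErrorMessages buffer after a kernel has run, mirroring the non-__METAL__ branch of the struct:

```cpp
// Host-side sketch (an assumption, not part of the header above): after a
// kernel has filled an ErrorMessages buffer via TORCH_REPORT_ERROR, the CPU
// can walk the recorded entries. The layout mirrors the non-__METAL__ branch.
#include <algorithm>
#include <cstdio>

constexpr unsigned error_message_count = 30;

struct ErrorMessage {
  char file[128];
  char func[128];
  char message[250];
  unsigned int line;
};

struct ErrorMessages {
  unsigned int count;  // ::metal::atomic<unsigned int> on the device side
  ErrorMessage msg[error_message_count];
};

void dump_errors(const ErrorMessages& buf) {
  // count may exceed capacity if more errors were reported than stored.
  const unsigned n = std::min(buf.count, error_message_count);
  for (unsigned i = 0; i < n; ++i) {
    const auto& m = buf.msg[i];
    std::fprintf(stderr, "%s:%u: %s: %s\n", m.file, m.line, m.func, m.message);
  }
}
```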

View File

@ -1 +0,0 @@
#include <c10/util/Metaprogramming.h>

View File

@ -1,224 +1 @@
#pragma once
#include <c10/util/TypeList.h>
#include <type_traits>
namespace c10::guts {
/**
* Access information about result type or arguments from a function type.
* Example:
* using A = function_traits<int (float, double)>::return_type // A == int
* using A = function_traits<int (float, double)>::parameter_types::tuple_type
* // A == tuple<float, double>
*/
template <class Func>
struct function_traits {
static_assert(
!std::is_same_v<Func, Func>,
"In function_traits<Func>, Func must be a plain function type.");
};
template <class Result, class... Args>
struct function_traits<Result(Args...)> {
using func_type = Result(Args...);
using return_type = Result;
using parameter_types = typelist::typelist<Args...>;
static constexpr auto number_of_parameters = sizeof...(Args);
};
/**
* infer_function_traits: creates a `function_traits` type for a simple
* function (pointer) or functor (lambda/struct). Currently does not support
* class methods.
*/
template <typename Functor>
struct infer_function_traits {
using type = function_traits<
c10::guts::detail::strip_class_t<decltype(&Functor::operator())>>;
};
template <typename Result, typename... Args>
struct infer_function_traits<Result (*)(Args...)> {
using type = function_traits<Result(Args...)>;
};
template <typename Result, typename... Args>
struct infer_function_traits<Result(Args...)> {
using type = function_traits<Result(Args...)>;
};
template <typename T>
using infer_function_traits_t = typename infer_function_traits<T>::type;
/**
* make_function_traits: creates a `function_traits` type given a Return type
* and a typelist of Argument types
*
* Example:
* bool f(int, int);
*
* infer_function_traits_t<f> == make_function_traits_t<bool,
* typelist::typelist<int, int>>
*/
template <typename Result, typename ArgList>
struct make_function_traits {
static_assert(
false_t<ArgList>::value,
"In guts::make_function_traits<Result, TypeList>, the ArgList argument must be typelist<...>.");
};
template <typename Result, typename... Args>
struct make_function_traits<Result, typelist::typelist<Args...>> {
using type = function_traits<Result(Args...)>;
};
template <typename Result, typename ArgList>
using make_function_traits_t =
typename make_function_traits<Result, ArgList>::type;
/**
* make_offset_index_sequence<Start, N>
* Like make_index_sequence<N>, but starting from Start instead of 0.
*
* Example:
* make_offset_index_sequence<10, 3> == std::index_sequence<10, 11, 12>
*/
template <size_t Start, size_t N, size_t... Is>
struct make_offset_index_sequence_impl
: make_offset_index_sequence_impl<Start, N - 1, Start + N - 1, Is...> {
static_assert(
static_cast<int>(Start) >= 0,
"make_offset_index_sequence: Start < 0");
static_assert(static_cast<int>(N) >= 0, "make_offset_index_sequence: N < 0");
};
template <size_t Start, size_t... Is>
struct make_offset_index_sequence_impl<Start, 0, Is...> {
typedef std::index_sequence<Is...> type;
};
template <size_t Start, size_t N>
using make_offset_index_sequence =
typename make_offset_index_sequence_impl<Start, N>::type;
/**
* Use tuple_elements to extract a position-indexed subset of elements
* from the argument tuple into a result tuple.
*
* Example:
* std::tuple<int, const char*, double> t = std::make_tuple(0, "HEY", 2.0);
* std::tuple<int, double> result = tuple_elements(t, std::index_sequence<0,
* 2>());
*/
template <class Tuple, size_t... Is>
constexpr auto tuple_elements(Tuple t, std::index_sequence<Is...> /*unused*/) {
return std::tuple<std::tuple_element_t<Is, Tuple>...>(std::get<Is>(t)...);
}
/**
* Use tuple_take to extract the first or last n elements from the argument
* tuple into a result tuple.
*
* Example:
* std::tuple<int, const char*, double> t = std::make_tuple(0, "HEY", 2.0);
* std::tuple<int, const char*> first_two = tuple_take<decltype(t), 2>(t);
* std::tuple<const char*, double> last_two = tuple_take<decltype(t), -2>(t);
*/
template <class Tuple, int N, class Enable = void>
struct TupleTake {};
template <class Tuple, int N>
struct TupleTake<Tuple, N, std::enable_if_t<N >= 0, void>> {
static auto call(Tuple t) {
constexpr size_t size = std::tuple_size<Tuple>();
static_assert(N <= size, "tuple_take: N > size");
return tuple_elements(t, std::make_index_sequence<N>{});
}
};
template <class Tuple, int N>
struct TupleTake < Tuple,
N, std::enable_if_t<N<0, void>> {
static auto call(Tuple t) {
constexpr size_t size = std::tuple_size<Tuple>();
static_assert(-N <= size, "tuple_take: -N > size");
return tuple_elements(t, make_offset_index_sequence<size + N, -N>{});
}
};
template <class Tuple, int N>
auto tuple_take(Tuple t) {
return TupleTake<Tuple, N>::call(t);
}
/**
* Use tuple_slice to extract a contiguous subtuple from the argument.
*
* Example:
* std::tuple<int, const char*, double, bool> t = std::make_tuple(0,
* "HEY", 2.0, false); std::tuple<int, const char*> middle_two =
* tuple_slice<decltype(t), 1, 2>(t);
*/
template <class Tuple, size_t Start, size_t N>
constexpr auto tuple_slice(Tuple t) {
constexpr size_t size = std::tuple_size<Tuple>();
static_assert(Start + N <= size, "tuple_slice: Start + N > size");
return tuple_elements(t, make_offset_index_sequence<Start, N>{});
}
/**
* Use tuple_map to run a mapping function over a tuple to get a new tuple.
*
* Example 1:
* auto result = tuple_map(std::tuple<int32_t, int32_t, int32_t>(3, 4, 5), []
* (int32_t a) -> int16_t {return a+1;});
* // result == std::tuple<int16_t, int16_t, int16_t>(4, 5, 6)
*
* Example 2:
* struct Mapper {
* std::string operator()(int32_t a) const {
* return std::to_string(a);
* }
* int64_t operator()(const std::string& a) const {
* return atoi(a.c_str());
* }
* };
* auto result = tuple_map(std::tuple<int32_t, std::string>(3, "4"),
* Mapper());
* // result == std::tuple<std::string, int64_t>("3", 4)
*
* Example 3:
* struct A final {
* int32_t func() {
* return 5;
* }
* };
* struct B final {
* std::string func() {
* return "5";
* }
* };
* auto result = tuple_map(std::make_tuple(A(), B()), [] (auto a) { return
* a.func(); });
* // result == std::tuple<int32_t, std::string>(5, "5");
*/
namespace detail {
template <class Mapper, class... Args, size_t... Indices>
auto tuple_map(
// NOLINTNEXTLINE(cppcoreguidelines-rvalue-reference-param-not-moved)
std::tuple<Args...>&& tuple,
const Mapper& mapper,
std::index_sequence<Indices...> /*unused*/) {
return std::tuple<decltype(mapper(std::forward<Args>(std::get<Indices>(
tuple))))...>(mapper(std::forward<Args>(std::get<Indices>(tuple)))...);
}
} // namespace detail
template <class Mapper, class... Args>
auto tuple_map(std::tuple<Args...>&& tuple, const Mapper& mapper) {
return detail::tuple_map(
std::move(tuple), mapper, std::index_sequence_for<Args...>());
}
} // namespace c10::guts
#include <torch/headeronly/util/Metaprogramming.h>
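The traits and tuple helpers deleted above now come in through the torch/headeronly forwarding include. A small usage sketch based on the doc-comment examples in the removed block, assuming the forwarded header keeps exposing the same c10::guts names:

```cpp
// Usage sketch; assumes the forwarding header still provides c10::guts.
#include <c10/util/Metaprogramming.h>

#include <string>
#include <tuple>
#include <type_traits>

bool f(int, int);

// Matches the doc-comment example: infer_function_traits on a free function.
static_assert(std::is_same_v<
              c10::guts::infer_function_traits_t<decltype(f)>::return_type,
              bool>);

int main() {
  auto t = std::make_tuple(0, std::string("HEY"), 2.0);
  // Keep the first two elements: std::tuple<int, std::string>.
  auto first_two = c10::guts::tuple_take<decltype(t), 2>(t);
  return std::get<0>(first_two);  // 0
}
```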

View File

@ -1,515 +1 @@
#pragma once
#include <c10/util/TypeTraits.h>
#include <algorithm>
#include <cstddef>
#include <tuple>
#include <type_traits>
#include <utility>
namespace c10::guts {
template <class... T>
struct false_t : std::false_type {};
template <template <class> class... T>
struct false_higher_t : std::false_type {};
namespace typelist {
/**
* Type holding a list of types for compile time type computations
*/
template <class... Items>
struct typelist final {
public:
typelist() = delete; // not for instantiation
};
/**
* Returns the number of types in a typelist
* Example:
* 3 == size<typelist<int, int, double>>::value
*/
template <class TypeList>
struct size final {
static_assert(
false_t<TypeList>::value,
"In typelist::size<T>, T must be typelist<...>.");
};
template <class... Types>
struct size<typelist<Types...>> final {
static constexpr size_t value = sizeof...(Types);
};
/**
* Transforms a list of types into a tuple holding these types.
* Example:
* std::tuple<int, string> == to_tuple_t<typelist<int, string>>
*/
template <class TypeList>
struct to_tuple final {
static_assert(
false_t<TypeList>::value,
"In typelist::to_tuple<T>, T must be typelist<...>.");
};
template <class... Types>
struct to_tuple<typelist<Types...>> final {
using type = std::tuple<Types...>;
};
template <class TypeList>
using to_tuple_t = typename to_tuple<TypeList>::type;
/**
* Creates a typelist containing the types of a given tuple.
* Example:
* typelist<int, string> == from_tuple_t<std::tuple<int, string>>
*/
template <class Tuple>
struct from_tuple final {
static_assert(
false_t<Tuple>::value,
"In typelist::from_tuple<T>, T must be std::tuple<...>.");
};
template <class... Types>
struct from_tuple<std::tuple<Types...>> final {
using type = typelist<Types...>;
};
template <class Tuple>
using from_tuple_t = typename from_tuple<Tuple>::type;
/**
* Concatenates multiple type lists.
* Example:
* typelist<int, string, int> == concat_t<typelist<int, string>,
* typelist<int>>
*/
template <class... TypeLists>
struct concat final {
static_assert(
false_t<TypeLists...>::value,
"In typelist::concat<T1, ...>, the T arguments each must be typelist<...>.");
};
template <class... Head1Types, class... Head2Types, class... TailLists>
struct concat<typelist<Head1Types...>, typelist<Head2Types...>, TailLists...>
final {
using type =
typename concat<typelist<Head1Types..., Head2Types...>, TailLists...>::
type;
};
template <class... HeadTypes>
struct concat<typelist<HeadTypes...>> final {
using type = typelist<HeadTypes...>;
};
template <>
struct concat<> final {
using type = typelist<>;
};
template <class... TypeLists>
using concat_t = typename concat<TypeLists...>::type;
/**
* Filters the types in a type list by a type trait.
* Examples:
* typelist<int&, const string&&> == filter_t<std::is_reference,
* typelist<void, string, int&, bool, const string&&, int>>
*/
template <template <class> class Condition, class TypeList>
struct filter final {
static_assert(
false_t<TypeList>::value,
"In typelist::filter<Condition, TypeList>, the TypeList argument must be typelist<...>.");
};
template <template <class> class Condition, class Head, class... Tail>
struct filter<Condition, typelist<Head, Tail...>> final {
static_assert(
is_type_condition<Condition>::value,
"In typelist::filter<Condition, TypeList>, the Condition argument must be a condition type trait, i.e. have a static constexpr bool ::value member.");
using type = std::conditional_t<
Condition<Head>::value,
concat_t<
typelist<Head>,
typename filter<Condition, typelist<Tail...>>::type>,
typename filter<Condition, typelist<Tail...>>::type>;
};
template <template <class> class Condition>
struct filter<Condition, typelist<>> final {
static_assert(
is_type_condition<Condition>::value,
"In typelist::filter<Condition, TypeList>, the Condition argument must be a condition type trait, i.e. have a static constexpr bool ::value member.");
using type = typelist<>;
};
template <template <class> class Condition, class TypeList>
using filter_t = typename filter<Condition, TypeList>::type;
/**
* Counts how many types in the list fulfill a type trait
* Examples:
* 2 == count_if<std::is_reference, typelist<void, string, int&, bool, const
* string&&, int>>
*/
template <template <class> class Condition, class TypeList>
struct count_if final {
static_assert(
is_type_condition<Condition>::value,
"In typelist::count_if<Condition, TypeList>, the Condition argument must be a condition type trait, i.e. have a static constexpr bool ::value member.");
static_assert(
is_instantiation_of<typelist, TypeList>::value,
"In typelist::count_if<Condition, TypeList>, the TypeList argument must be typelist<...>.");
// TODO Direct implementation might be faster
static constexpr size_t value = size<filter_t<Condition, TypeList>>::value;
};
/**
* Checks if a typelist contains a certain type.
* Examples:
* contains<typelist<int, string>, string> == true_type
* contains<typelist<int, string>, double> == false_type
*/
namespace detail {
template <class TypeList, class Type, class Enable = void>
struct contains {};
template <class Type>
struct contains<typelist<>, Type, void> : std::false_type {};
template <class Type, class Head, class... Tail>
struct contains<
typelist<Head, Tail...>,
Type,
std::enable_if_t<std::is_same_v<Head, Type>>> : std::true_type {};
template <class Type, class Head, class... Tail>
struct contains<
typelist<Head, Tail...>,
Type,
std::enable_if_t<!std::is_same_v<Head, Type>>>
: contains<typelist<Tail...>, Type> {};
} // namespace detail
template <class TypeList, class Type>
using contains = typename detail::contains<TypeList, Type>::type;
/**
* Returns true iff the type trait is true for all types in the type list
* Examples:
* true == all<std::is_reference, typelist<int&, const float&&, const
* MyClass&>>::value false == all<std::is_reference, typelist<int&, const
* float&&, MyClass>>::value
*/
template <template <class> class Condition, class TypeList>
struct all {
static_assert(
false_t<TypeList>::value,
"In typelist::all<Condition, TypeList>, the TypeList argument must be typelist<...>.");
};
template <template <class> class Condition, class... Types>
struct all<Condition, typelist<Types...>>
: std::conjunction<Condition<Types>...> {
static_assert(
is_type_condition<Condition>::value,
"In typelist::all<Condition, TypeList>, the Condition argument must be a condition type trait, i.e. have a static constexpr bool ::value member.");
};
/**
* Returns true iff the type trait is true for any type in the type list
* Examples:
* true == true_for_any_type<std::is_reference, typelist<int, const
* float&&, const MyClass>>::value false ==
* true_for_any_type<std::is_reference, typelist<int, const float,
* MyClass>>::value
*/
template <template <class> class Condition, class TypeList>
struct true_for_any_type final {
static_assert(
false_t<TypeList>::value,
"In typelist::true_for_any_type<Condition, TypeList>, the TypeList argument must be typelist<...>.");
};
template <template <class> class Condition, class... Types>
struct true_for_any_type<Condition, typelist<Types...>> final
: std::disjunction<Condition<Types>...> {
static_assert(
is_type_condition<Condition>::value,
"In typelist::true_for_any_type<Condition, TypeList>, the Condition argument must be a condition type trait, i.e. have a static constexpr bool ::value member.");
};
/**
* Maps types of a type list using a type trait
* Example:
* typelist<int&, double&, string&> == map_t<std::add_lvalue_reference_t,
* typelist<int, double, string>>
*/
template <template <class> class Mapper, class TypeList>
struct map final {
static_assert(
false_t<TypeList>::value,
"In typelist::map<Mapper, TypeList>, the TypeList argument must be typelist<...>.");
};
template <template <class> class Mapper, class... Types>
struct map<Mapper, typelist<Types...>> final {
using type = typelist<Mapper<Types>...>;
};
template <template <class> class Mapper, class TypeList>
using map_t = typename map<Mapper, TypeList>::type;
/**
* Returns the first element of a type list.
* Example:
* int == head_t<typelist<int, string>>
*/
template <class TypeList>
struct head final {
static_assert(
false_t<TypeList>::value,
"In typelist::head<T>, the T argument must be typelist<...>.");
};
template <class Head, class... Tail>
struct head<typelist<Head, Tail...>> final {
using type = Head;
};
template <class TypeList>
using head_t = typename head<TypeList>::type;
/**
* Returns the first element of a type list, or the specified default if the
* type list is empty. Example: int == head_t<bool, typelist<int, string>>
* bool == head_t<bool, typelist<>>
*/
template <class Default, class TypeList>
struct head_with_default final {
using type = Default;
};
template <class Default, class Head, class... Tail>
struct head_with_default<Default, typelist<Head, Tail...>> final {
using type = Head;
};
template <class Default, class TypeList>
using head_with_default_t = typename head_with_default<Default, TypeList>::type;
/**
* Returns the N-th element of a type list.
* Example:
* int == element_t<1, typelist<float, int, char>>
*/
/// Base template.
template <size_t Index, class TypeList>
struct element final {
static_assert(
false_t<TypeList>::value,
"In typelist::element<T>, the T argument must be typelist<...>.");
};
/// Successful case, we have reached the zero index and can "return" the head
/// type.
template <class Head, class... Tail>
struct element<0, typelist<Head, Tail...>> {
using type = Head;
};
/// Error case, we have an index but ran out of types! It will only be selected
/// if `Ts...` is actually empty!
template <size_t Index, class... Ts>
struct element<Index, typelist<Ts...>> {
static_assert(
Index < sizeof...(Ts),
"Index is out of bounds in typelist::element");
};
/// Shave off types until we hit the <0, Head, Tail...> or <Index> case.
template <size_t Index, class Head, class... Tail>
struct element<Index, typelist<Head, Tail...>>
: element<Index - 1, typelist<Tail...>> {};
/// Convenience alias.
template <size_t Index, class TypeList>
using element_t = typename element<Index, TypeList>::type;
/**
* Returns the last element of a type list.
* Example:
* int == last_t<typelist<int, string>>
*/
template <class TypeList>
struct last final {
static_assert(
false_t<TypeList>::value,
"In typelist::last<T>, the T argument must be typelist<...>.");
};
template <class Head, class... Tail>
struct last<typelist<Head, Tail...>> final {
using type = typename last<typelist<Tail...>>::type;
};
template <class Head>
struct last<typelist<Head>> final {
using type = Head;
};
template <class TypeList>
using last_t = typename last<TypeList>::type;
static_assert(std::is_same_v<int, last_t<typelist<double, float, int>>>);
/**
* Take/drop a number of arguments from a typelist.
* Example:
* typelist<int, string> == take_t<typelist<int, string, bool>, 2>
* typelist<bool> == drop_t<typelist<int, string, bool>, 2>
*/
namespace detail {
template <class TypeList, size_t offset, class IndexSequence>
struct take_elements final {};
template <class TypeList, size_t offset, size_t... Indices>
struct take_elements<TypeList, offset, std::index_sequence<Indices...>> final {
using type = typelist<typename element<offset + Indices, TypeList>::type...>;
};
} // namespace detail
template <class TypeList, size_t num>
struct take final {
static_assert(
is_instantiation_of<typelist, TypeList>::value,
"In typelist::take<T, num>, the T argument must be typelist<...>.");
static_assert(
num <= size<TypeList>::value,
"Tried to typelist::take more elements than there are in the list");
using type = typename detail::
take_elements<TypeList, 0, std::make_index_sequence<num>>::type;
};
template <class TypeList, size_t num>
using take_t = typename take<TypeList, num>::type;
template <class TypeList, size_t num>
struct drop final {
static_assert(
is_instantiation_of<typelist, TypeList>::value,
"In typelist::drop<T, num>, the T argument must be typelist<...>.");
static_assert(
num <= size<TypeList>::value,
"Tried to typelist::drop more elements than there are in the list");
using type = typename detail::take_elements<
TypeList,
num,
std::make_index_sequence<size<TypeList>::value - num>>::type;
};
template <class TypeList, size_t num>
using drop_t = typename drop<TypeList, num>::type;
/**
* Like drop, but returns an empty list rather than an assertion error if `num`
* is larger than the size of the TypeList.
* Example:
* typelist<> == drop_if_nonempty_t<typelist<string, bool>, 2>
* typelist<> == drop_if_nonempty_t<typelist<int, string, bool>, 3>
*/
template <class TypeList, size_t num>
struct drop_if_nonempty final {
static_assert(
is_instantiation_of<typelist, TypeList>::value,
"In typelist::drop<T, num>, the T argument must be typelist<...>.");
using type = typename detail::take_elements<
TypeList,
std::min(num, size<TypeList>::value),
std::make_index_sequence<
size<TypeList>::value - std::min(num, size<TypeList>::value)>>::type;
};
template <class TypeList, size_t num>
using drop_if_nonempty_t = typename drop_if_nonempty<TypeList, num>::type;
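// Illustrative checks (not part of the original header), mirroring the
// examples in the comments above but using fundamental types only.
static_assert(std::is_same_v<
    typelist<int, double>,
    take_t<typelist<int, double, bool>, 2>>);
static_assert(std::is_same_v<
    typelist<bool>,
    drop_t<typelist<int, double, bool>, 2>>);
static_assert(std::is_same_v<
    typelist<>,
    drop_if_nonempty_t<typelist<int, double>, 3>>);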
/**
* Reverses a typelist.
* Example:
* typelist<int, string> == reverse_t<typelist<string, int>>
*/
template <class TypeList>
struct reverse final {
static_assert(
false_t<TypeList>::value,
"In typelist::reverse<T>, the T argument must be typelist<...>.");
};
template <class Head, class... Tail>
struct reverse<typelist<Head, Tail...>> final {
using type =
concat_t<typename reverse<typelist<Tail...>>::type, typelist<Head>>;
};
template <>
struct reverse<typelist<>> final {
using type = typelist<>;
};
template <class TypeList>
using reverse_t = typename reverse<TypeList>::type;
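// Illustrative check (not part of the original header).
static_assert(std::is_same_v<
    typelist<int, double, bool>,
    reverse_t<typelist<bool, double, int>>>);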
/**
* Find the index of the first type in a typelist fulfilling a type trait
* condition. Example:
*
* 2 == find_if<typelist<char, int, char&, int&>, std::is_reference>::value
*/
template <class TypeList, template <class> class Condition, class Enable = void>
struct find_if final {
static_assert(
false_t<TypeList>::value,
"In typelist::find_if<TypeList, Condition>, the TypeList argument must be typelist<...>.");
};
template <template <class> class Condition>
struct find_if<typelist<>, Condition, void> final {
static_assert(
false_higher_t<Condition>::value,
"In typelist::find_if<Type/List, Condition>, didn't find any type fulfilling the Condition.");
};
template <class Head, class... Tail, template <class> class Condition>
struct find_if<
typelist<Head, Tail...>,
Condition,
std::enable_if_t<Condition<Head>::value>>
final {
static constexpr size_t value = 0;
};
template <class Head, class... Tail, template <class> class Condition>
struct find_if<
typelist<Head, Tail...>,
Condition,
std::enable_if_t<!Condition<Head>::value>>
final {
static constexpr size_t value =
1 + find_if<typelist<Tail...>, Condition>::value;
};
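// Illustrative check (not part of the original header): the first reference
// type in the list appears at index 2, as in the example above.
static_assert(
    2 == find_if<typelist<char, int, char&, int&>, std::is_reference>::value);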
/**
* Maps a list of types into a list of values.
* Examples:
* // Example 1
* auto sizes =
* map_types_to_values<typelist<int64_t, bool, uint32_t>>(
* [] (auto t) { return sizeof(decltype(t)::type); }
* );
* // sizes == std::tuple<size_t, size_t, size_t>{8, 1, 4}
*
* // Example 2
* auto shared_ptrs =
* map_types_to_values<typelist<int, double>>(
* [] (auto t) { return make_shared<typename decltype(t)::type>(); }
* );
* // shared_ptrs == std::tuple<shared_ptr<int>, shared_ptr<double>>()
*/
namespace detail {
template <class T>
struct type_ final {
using type = T;
};
template <class TypeList>
struct map_types_to_values final {
static_assert(
false_t<TypeList>::value,
"In typelist::map_types_to_values<T>, the T argument must be typelist<...>.");
};
template <class... Types>
struct map_types_to_values<typelist<Types...>> final {
template <class Func>
static auto call(Func&& func) {
return std::tuple{std::forward<Func>(func)(type_<Types>())...};
}
};
} // namespace detail
template <class TypeList, class Func>
auto map_types_to_values(Func&& func) {
return detail::map_types_to_values<TypeList>::call(std::forward<Func>(func));
}
} // namespace typelist
} // namespace c10::guts
#include <torch/headeronly/util/TypeList.h>

View File

@ -1,151 +1 @@
#pragma once
#include <functional>
#include <type_traits>
namespace c10::guts {
/**
* is_equality_comparable<T> is true_type iff the equality operator is defined
* for T.
*/
template <class T, class Enable = void>
struct is_equality_comparable : std::false_type {};
template <class T>
struct is_equality_comparable<
T,
std::void_t<decltype(std::declval<T&>() == std::declval<T&>())>>
: std::true_type {};
template <class T>
using is_equality_comparable_t = typename is_equality_comparable<T>::type;
/**
* is_hashable<T> is true_type iff std::hash is defined for T
*/
template <class T, class Enable = void>
struct is_hashable : std::false_type {};
template <class T>
struct is_hashable<T, std::void_t<decltype(std::hash<T>()(std::declval<T&>()))>>
: std::true_type {};
template <class T>
using is_hashable_t = typename is_hashable<T>::type;
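// Illustrative checks (not part of the original header): int is both
// equality comparable and hashable.
static_assert(is_equality_comparable<int>::value);
static_assert(is_hashable<int>::value);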
/**
* is_function_type<T> is true_type iff T is a plain function type (i.e.
* "Result(Args...)")
*/
template <class T>
struct is_function_type : std::false_type {};
template <class Result, class... Args>
struct is_function_type<Result(Args...)> : std::true_type {};
template <class T>
using is_function_type_t = typename is_function_type<T>::type;
/**
* is_instantiation_of<T, I> is true_type iff I is a template instantiation of T
* (e.g. vector<int> is an instantiation of vector) Example:
* is_instantiation_of_t<vector, vector<int>> // true
* is_instantiation_of_t<pair, pair<int, string>> // true
* is_instantiation_of_t<vector, pair<int, string>> // false
*/
template <template <class...> class Template, class T>
struct is_instantiation_of : std::false_type {};
template <template <class...> class Template, class... Args>
struct is_instantiation_of<Template, Template<Args...>> : std::true_type {};
template <template <class...> class Template, class T>
using is_instantiation_of_t = typename is_instantiation_of<Template, T>::type;
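// Illustrative checks (not part of the original header); std::hash is used
// here because <functional> is already included above.
static_assert(is_instantiation_of<std::hash, std::hash<int>>::value);
static_assert(!is_instantiation_of<std::hash, int>::value);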
namespace detail {
/**
* strip_class: helper to remove the class type from pointers to `operator()`.
*/
template <typename T>
struct strip_class {};
template <typename Class, typename Result, typename... Args>
struct strip_class<Result (Class::*)(Args...)> {
using type = Result(Args...);
};
template <typename Class, typename Result, typename... Args>
struct strip_class<Result (Class::*)(Args...) const> {
using type = Result(Args...);
};
template <typename T>
using strip_class_t = typename strip_class<T>::type;
} // namespace detail
/**
* Evaluates to true_type, iff the given class is a Functor
* (i.e. has a call operator with some set of arguments)
*/
template <class Functor, class Enable = void>
struct is_functor : std::false_type {};
template <class Functor>
struct is_functor<
Functor,
std::enable_if_t<is_function_type<
detail::strip_class_t<decltype(&Functor::operator())>>::value>>
: std::true_type {};
/**
* is_stateless_lambda<T> is true iff the lambda type T is stateless
* (i.e. does not have a closure).
* Example:
* auto stateless_lambda = [] (int a) {return a;};
* is_stateless_lambda<decltype(stateless_lambda)> // true
* auto stateful_lambda = [&] (int a) {return a;};
* is_stateless_lambda<decltype(stateful_lambda)> // false
*/
namespace detail {
template <class LambdaType, class FuncType>
struct is_stateless_lambda__ final {
static_assert(
!std::is_same_v<LambdaType, LambdaType>,
"Base case shouldn't be hit");
};
// implementation idea: According to the C++ standard, stateless lambdas are
// convertible to function pointers
template <class LambdaType, class C, class Result, class... Args>
struct is_stateless_lambda__<LambdaType, Result (C::*)(Args...) const>
: std::is_convertible<LambdaType, Result (*)(Args...)> {};
template <class LambdaType, class C, class Result, class... Args>
struct is_stateless_lambda__<LambdaType, Result (C::*)(Args...)>
: std::is_convertible<LambdaType, Result (*)(Args...)> {};
// case where LambdaType is not even a functor
template <class LambdaType, class Enable = void>
struct is_stateless_lambda_ final : std::false_type {};
// case where LambdaType is a functor
template <class LambdaType>
struct is_stateless_lambda_<
LambdaType,
std::enable_if_t<is_functor<LambdaType>::value>>
: is_stateless_lambda__<LambdaType, decltype(&LambdaType::operator())> {};
} // namespace detail
template <class T>
using is_stateless_lambda = detail::is_stateless_lambda_<std::decay_t<T>>;
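// Illustrative check (not part of the original header): a capture-free
// lambda is detected as stateless.
inline constexpr auto example_stateless_lambda = [](int a) { return a + 1; };
static_assert(is_stateless_lambda<decltype(example_stateless_lambda)>::value);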
/**
* is_type_condition<C> is true_type iff C<...> is a type trait representing a
* condition (i.e. has a constexpr static bool ::value member) Example:
* is_type_condition<std::is_reference> // true
*/
template <template <class> class C, class Enable = void>
struct is_type_condition : std::false_type {};
template <template <class> class C>
struct is_type_condition<
C,
std::enable_if_t<
std::is_same_v<bool, std::remove_cv_t<decltype(C<int>::value)>>>>
: std::true_type {};
/**
* is_fundamental<T> is true_type iff T is a fundamental type (that is, an
* arithmetic type, void, or nullptr_t).
* Example:
*   is_fundamental<int> // true
* We define it here to work around an MSVC bug. See
* https://github.com/pytorch/pytorch/issues/30932 for details.
*/
template <class T>
struct is_fundamental : std::is_fundamental<T> {};
} // namespace c10::guts
#include <torch/headeronly/util/TypeTraits.h>

View File

@ -24,6 +24,7 @@ set(C10_XPU_HEADERS
XPUCachingAllocator.h
XPUDeviceProp.h
XPUException.h
XPUEvent.h
XPUFunctions.h
XPUMacros.h
XPUStream.h

178
c10/xpu/XPUEvent.h Normal file
View File

@ -0,0 +1,178 @@
#pragma once
#include <c10/xpu/XPUStream.h>
namespace c10::xpu {
/*
* XPUEvent is a movable but not copyable wrapper around a SYCL event. The
* underlying event is constructed lazily when the event is first recorded.
* Its device is acquired from the first recording stream, and any later
* stream that records the event must be on the same device.
*
* Currently, XPUEvent does NOT support exporting an event to another process
* via inter-process communication (IPC), i.e. event handles cannot be shared
* between processes. This may affect applications that rely on cross-process
* synchronization and communication.
*/
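// Illustrative usage sketch (not part of the original header):
//
//   XPUEvent start{/* enable_timing = */ true};
//   XPUEvent end{/* enable_timing = */ true};
//   start.record();          // record on the current XPU stream
//   /* ... enqueue work on the stream ... */
//   end.record();
//   end.synchronize();       // block the host until `end` completes
//   double ms = start.elapsed_time(end);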
struct XPUEvent {
// Constructors
XPUEvent(bool enable_timing = false) noexcept
: enable_timing_{enable_timing} {}
~XPUEvent() {
if (isCreated()) {
const c10::impl::PyInterpreter* interp = c10::impl::GPUTrace::get_trace();
if (C10_UNLIKELY(interp)) {
(*interp)->trace_gpu_event_deletion(
c10::kXPU, reinterpret_cast<uintptr_t>(event_.get()));
}
}
}
C10_DISABLE_COPY_AND_ASSIGN(XPUEvent);
XPUEvent(XPUEvent&& other) = default;
XPUEvent& operator=(XPUEvent&& other) = default;
operator sycl::event&() const {
return event();
}
std::optional<c10::Device> device() const {
if (isCreated()) {
return c10::Device(c10::kXPU, device_index_);
} else {
return std::nullopt;
}
}
inline bool isCreated() const {
return (event_.get() != nullptr);
}
DeviceIndex device_index() const {
return device_index_;
}
sycl::event& event() const {
return *event_;
}
bool query() const {
using namespace sycl::info;
if (!isCreated()) {
return true;
}
return event().get_info<event::command_execution_status>() ==
event_command_status::complete;
}
void record() {
record(getCurrentXPUStream());
}
void recordOnce(const XPUStream& stream) {
if (!isCreated()) {
record(stream);
}
}
void record(const XPUStream& stream) {
if (!isCreated()) {
device_index_ = stream.device_index();
assignEvent(stream.queue());
const c10::impl::PyInterpreter* interp = c10::impl::GPUTrace::get_trace();
if (C10_UNLIKELY(interp)) {
(*interp)->trace_gpu_event_creation(
c10::kXPU, reinterpret_cast<uintptr_t>(event_.get()));
}
} else {
TORCH_CHECK(
device_index_ == stream.device_index(),
"Event device ",
device_index_,
" does not match recording stream's device ",
stream.device_index(),
".");
reassignEvent(stream.queue());
}
const c10::impl::PyInterpreter* interp = c10::impl::GPUTrace::get_trace();
if (C10_UNLIKELY(interp)) {
(*interp)->trace_gpu_event_record(
c10::kXPU,
reinterpret_cast<uintptr_t>(event_.get()),
reinterpret_cast<uintptr_t>(&stream.queue()));
}
}
void block(const XPUStream& stream) {
if (isCreated()) {
std::vector<sycl::event> event_list{event()};
// Make this stream wait until event_ is completed.
stream.queue().ext_oneapi_submit_barrier(event_list);
const c10::impl::PyInterpreter* interp = c10::impl::GPUTrace::get_trace();
if (C10_UNLIKELY(interp)) {
(*interp)->trace_gpu_event_wait(
c10::kXPU,
reinterpret_cast<uintptr_t>(event_.get()),
reinterpret_cast<uintptr_t>(&stream.queue()));
}
}
}
double elapsed_time(const XPUEvent& other) const {
TORCH_CHECK(
isCreated() && other.isCreated(),
"Both events must be recorded before calculating elapsed time.");
TORCH_CHECK(
query() && other.query(),
"Both events must be completed before calculating elapsed time.");
TORCH_CHECK(
enable_timing_ && other.enable_timing_,
"Both events must be created with argument 'enable_timing=True'.");
using namespace sycl::info::event_profiling;
// Block until both of the recorded events are completed.
uint64_t end_time_ns = other.event().get_profiling_info<command_end>();
uint64_t start_time_ns = event().get_profiling_info<command_end>();
// Return the elapsed time in milliseconds.
return 1e-6 *
(static_cast<double>(end_time_ns) - static_cast<double>(start_time_ns));
}
void synchronize() const {
if (isCreated()) {
const c10::impl::PyInterpreter* interp = c10::impl::GPUTrace::get_trace();
if (C10_UNLIKELY(interp)) {
(*interp)->trace_gpu_event_synchronization(
c10::kXPU, reinterpret_cast<uintptr_t>(event_.get()));
}
event().wait_and_throw();
}
}
private:
void assignEvent(sycl::queue& queue) {
if (enable_timing_) {
event_ = std::make_unique<sycl::event>(
sycl::ext::oneapi::experimental::submit_profiling_tag(queue));
} else {
event_ = std::make_unique<sycl::event>(queue.ext_oneapi_submit_barrier());
}
}
void reassignEvent(sycl::queue& queue) {
event_.reset();
assignEvent(queue);
}
bool enable_timing_ = false;
c10::DeviceIndex device_index_ = -1;
// Only need to track the last event, as events in an in-order queue are
// executed sequentially.
std::unique_ptr<sycl::event> event_;
};
} // namespace c10::xpu

View File

@ -1,7 +1,7 @@
# This will define the following variables:
# SYCL_FOUND : True if the system has the SYCL library.
# SYCL_INCLUDE_DIR : Include directories needed to use SYCL.
# SYCL_LIBRARY_DIR The path to the SYCL library.
# SYCL_LIBRARY_DIR : The path to the SYCL library.
# SYCL_LIBRARY : SYCL library fullname.
# SYCL_COMPILER_VERSION : SYCL compiler version.

View File

@ -478,6 +478,7 @@ function(torch_update_find_cuda_flags)
endfunction()
include(CheckCXXCompilerFlag)
include(CheckCCompilerFlag)
include(CheckLinkerFlag)
##############################################################################
@ -501,6 +502,24 @@ function(append_cxx_flag_if_supported flag outputvar)
endif()
endfunction()
function(append_c_flag_if_supported flag outputvar)
string(TOUPPER "HAS${flag}" _FLAG_NAME)
string(REGEX REPLACE "[=-]" "_" _FLAG_NAME "${_FLAG_NAME}")
# GCC silences unknown -Wno-XXX flags, so test the corresponding -WXXX.
if(CMAKE_C_COMPILER_ID STREQUAL "GNU")
string(REGEX REPLACE "^Wno-" "W" new_flag "${flag}")
else()
set(new_flag "${flag}")
endif()
check_c_compiler_flag("${new_flag}" ${_FLAG_NAME})
if(${_FLAG_NAME})
string(APPEND ${outputvar} " ${flag}")
set(${outputvar} "${${outputvar}}" PARENT_SCOPE)
endif()
endfunction()
function(target_compile_options_if_supported target flag)
set(_compile_options "")
append_cxx_flag_if_supported("${flag}" _compile_options)

View File

@ -0,0 +1,164 @@
# Accelerator Hooks
## Background
OpenReg hooks provide a mechanism for integrating custom accelerator devices into PyTorch's runtime system. OpenReg (Open Registration) is PyTorch's extensibility framework that allows accelerator vendors to register custom device backends without modifying PyTorch core code.
## Design
The following tables list all hooks that accelerator vendors need to implement when integrating a new device backend. These hooks are categorized into two priority levels:
- **High Priority Hooks**: Core APIs that the PyTorch runtime directly depends on. Accelerator vendors should implement all high-priority hooks to ensure full PyTorch compatibility and basic device functionality.
- **Low Priority Hooks**: Device-management and utility APIs that PyTorch does not directly depend on. These hooks improve the user experience and multi-device support but are *optional*; vendors can implement them based on their specific requirements and use cases.
### High Priority Hooks
| Hook Method | Description | Application Scenario |
| ---------------------------------- | --------------------------------------------------------- | -------------------------------------------------------------------------------- |
| `init()` | Initializes the accelerator runtime and device contexts | Set up necessary state when PyTorch first accesses the device |
| `hasPrimaryContext(DeviceIndex)` | Checks if a primary context exists for the device | Determine whether device initialization has occurred |
| `getDefaultGenerator(DeviceIndex)` | Returns the default random number generator for a device | Access the device's primary RNG for reproducible random operations |
| `getNewGenerator(DeviceIndex)` | Creates a new independent random number generator | Create isolated RNG instances for parallel operations |
| `getDeviceFromPtr(void*)` | Determines which device a memory pointer belongs to | Identify the accelerator device associated with a memory allocation |
| `getPinnedMemoryAllocator()` | Returns an allocator for pinned (page-locked) host memory | Allocate host memory that can be efficiently transferred to/from the accelerator |
| `isPinnedPtr(void*)` | Checks if a pointer points to pinned memory | Validate memory types before performing operations |
### Low Priority Hooks
| Hook Method | Description | Application Scenario |
| ---------------------------------- | ---------------------------------------------------------------------------- | -------------------------------------------------------------------- |
| `isBuilt()` | Returns whether the accelerator backend is built/compiled into the extension | Check whether the accelerator library is available at compile time |
| `isAvailable()` | Returns whether the accelerator hardware is available at runtime | Verify whether accelerator devices can be detected and initialized |
| `deviceCount()` | Returns the number of available accelerator devices | Enumerate all available accelerator devices for device selection |
| `setCurrentDevice(DeviceIndex)` | Sets the active device for the current thread | Switch the current thread's context to a specific accelerator device |
| `getCurrentDevice()` | Returns the currently active device index | Query which accelerator device is active in the current thread |
| `exchangeDevice(DeviceIndex)` | Atomically exchanges the current device and returns the previous one | Temporarily switch devices and restore the previous device afterward |
| `maybeExchangeDevice(DeviceIndex)` | Conditionally exchanges device only if the index is valid | Safely attempt device switching with validation |
## Implementation
Let's take `getDefaultGenerator` as an implementation example:
```{eval-rst}
.. literalinclude:: ../../../test/cpp_extensions/open_registration_extension/torch_openreg/csrc/runtime/OpenRegHooks.h
:language: c++
:start-after: LITERALINCLUDE START: OPENREG HOOK EXAMPLES
:end-before: LITERALINCLUDE END: OPENREG HOOK EXAMPLES
:linenos:
```
In this implementation:
1. **Override the base interface**: The `getDefaultGenerator` method overrides the virtual method from `at::PrivateUse1HooksInterface`.
2. **Delegate to device-specific implementation**: It calls `getDefaultOpenRegGenerator(device_index)`, which manages a per-device generator instance.
3. **Return device-specific generator**: The returned `at::Generator` wraps an `OpenRegGeneratorImpl` that implements device-specific random number generation.
This pattern applies to all hooks: override the interface method, validate inputs, delegate to your device-specific API, and return results in PyTorch's expected format.
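As a rough sketch of that pattern (not taken from the OpenReg sources; `MyDeviceHooks` and the `my_runtime::` helpers are placeholder names, and the exact base-class signatures may differ from the real interface):

```cpp
// Hypothetical vendor-side hooks following the override/validate/delegate pattern.
// MyDeviceHooks and the my_runtime:: helpers are placeholders, not real APIs.
struct MyDeviceHooks : public at::PrivateUse1HooksInterface {
  at::DeviceIndex deviceCount() const override {
    return static_cast<at::DeviceIndex>(my_runtime::device_count());  // delegate
  }

  void setCurrentDevice(at::DeviceIndex device) const override {
    TORCH_CHECK(
        device >= 0 && device < deviceCount(), "Invalid device index: ", device);
    my_runtime::set_device(device);  // delegate to the vendor runtime
  }
};
```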
## Integration Example
The following sections demonstrate how PyTorch integrates with accelerator hooks when accessing the default random number generator. The example traces the complete flow from user-facing Python code down to the device-specific implementation.
### Layer 1: User Code
User code initiates the operation by calling `manual_seed` to set the random seed for reproducible results:
```python
import torch
torch.openreg.manual_seed(42)
```
### Layer 2: Extension Python API
The Python API layer handles device management and calls into the C++ extension (defined in [`torch_openreg/openreg/random.py`][random.py]):
```{eval-rst}
.. literalinclude:: ../../../test/cpp_extensions/open_registration_extension/torch_openreg/torch_openreg/openreg/random.py
:language: python
:start-after: LITERALINCLUDE START: OPENREG MANUAL SEED
:end-before: LITERALINCLUDE END: OPENREG MANUAL SEED
:linenos:
```
The `manual_seed` function gets the current device index and calls `torch_openreg._C._get_default_generator(idx)` to obtain the device-specific generator, then sets the seed on it.
### Layer 3: Python/C++ Bridge
The C++ extension exposes `_getDefaultGenerator` to Python, which bridges to PyTorch's core runtime:
```{eval-rst}
.. literalinclude:: ../../../test/cpp_extensions/open_registration_extension/torch_openreg/torch_openreg/csrc/Module.cpp
:language: c++
:start-after: LITERALINCLUDE START: OPENREG GET DEFAULT GENERATOR
:end-before: LITERALINCLUDE END: OPENREG GET DEFAULT GENERATOR
:linenos:
:emphasize-lines: 10-11
```
```{eval-rst}
.. literalinclude:: ../../../test/cpp_extensions/open_registration_extension/torch_openreg/torch_openreg/csrc/Module.cpp
:language: c++
:start-after: LITERALINCLUDE START: OPENREG MODULE METHODS
:end-before: LITERALINCLUDE END: OPENREG MODULE METHODS
:linenos:
:emphasize-lines: 3
```
This function unpacks the device index from Python, creates a `PrivateUse1` device object, and calls `at::globalContext().defaultGenerator()`. PyTorch's context then dispatches to the registered hooks.
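Conceptually, and omitting the Python argument parsing and error handling shown in the listing above, the core of the bridge is just (a simplified sketch; the function name is illustrative):

```cpp
// Simplified sketch: map a device index to a PrivateUse1 device and ask the
// global context for that device's default generator.
at::Generator get_default_generator_for_index(c10::DeviceIndex idx) {
  c10::Device device(c10::DeviceType::PrivateUse1, idx);
  return at::globalContext().defaultGenerator(device);
}
```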
### Layer 4: PyTorch Core Context
PyTorch's Context class dispatches to the appropriate accelerator hooks ([`aten/src/ATen/Context.h`][Context.h]):
```{eval-rst}
.. literalinclude:: ../../../aten/src/ATen/Context.h
:language: c++
:lines: 60-103
:linenos:
:emphasize-lines: 8-9, 24-25
```
This layered architecture enables PyTorch to remain device-agnostic while delegating hardware-specific operations to accelerator implementations. The hooks are registered once at module load time:
```{eval-rst}
.. literalinclude:: ../../../test/cpp_extensions/open_registration_extension/torch_openreg/csrc/runtime/OpenRegHooks.cpp
:language: c++
:start-after: LITERALINCLUDE START: OPENREG HOOK REGISTER
:end-before: LITERALINCLUDE END: OPENREG HOOK REGISTER
:linenos:
:emphasize-lines: 4
```
### Layer 5: Accelerator Hooks
The hooks interface provides the abstraction that PyTorch uses to delegate to device-specific implementations:
```{eval-rst}
.. literalinclude:: ../../../test/cpp_extensions/open_registration_extension/torch_openreg/csrc/runtime/OpenRegHooks.h
:language: c++
:start-after: LITERALINCLUDE START: OPENREG HOOK EXAMPLES
:end-before: LITERALINCLUDE END: OPENREG HOOK EXAMPLES
:linenos:
```
The `getDefaultGenerator` hook method overrides the base interface and delegates to `getDefaultOpenRegGenerator`, which manages the actual generator instances.
### Layer 6: Device-Specific Implementation
The device-specific implementation manages per-device generator instances:
```{eval-rst}
.. literalinclude:: ../../../test/cpp_extensions/open_registration_extension/torch_openreg/csrc/runtime/OpenRegGenerator.cpp
:language: c++
:start-after: LITERALINCLUDE START: OPENREG GET DEFAULT GENERATOR IMPL
:end-before: LITERALINCLUDE END: OPENREG GET DEFAULT GENERATOR IMPL
:linenos:
```
This function maintains a static vector of generators (one per device), initializes them on first access, validates the device index, and returns the appropriate generator instance.
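A minimal sketch of that registry pattern, assuming hypothetical `device_count()` and `make_openreg_generator()` helpers standing in for the extension's real ones:

```cpp
// Sketch of a lazily initialized, per-device default-generator registry.
// device_count() and make_openreg_generator() are placeholder helpers.
const at::Generator& get_default_generator_sketch(c10::DeviceIndex device_index) {
  static std::vector<at::Generator> default_generators;
  static std::once_flag init_flag;
  std::call_once(init_flag, [] {
    default_generators.resize(device_count());
    for (size_t i = 0; i < default_generators.size(); ++i) {
      default_generators[i] =
          make_openreg_generator(static_cast<c10::DeviceIndex>(i));
    }
  });
  TORCH_CHECK(
      device_index >= 0 &&
          static_cast<size_t>(device_index) < default_generators.size(),
      "Invalid device index: ", device_index);
  return default_generators[device_index];
}
```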
[random.py]: https://github.com/pytorch/pytorch/tree/main/test/cpp_extensions/open_registration_extension/torch_openreg/torch_openreg/openreg/random.py#L48-L53 "random.py"
[Context.h]: https://github.com/pytorch/pytorch/tree/main/aten/src/ATen/Context.h#L61-L102 "Context.h"

View File

@ -42,6 +42,7 @@ Next, we will delve into each chapter of this guide. Each chapter focuses on a k
:glob:
:maxdepth: 1
hooks
autoload
operators
amp

View File

@ -0,0 +1,21 @@
# torch.mtia.mtia_graph
The MTIA backend is implemented out of tree; only the interfaces are defined here.
```{eval-rst}
.. automodule:: torch.mtia.mtia_graph
```
```{eval-rst}
.. currentmodule:: torch.mtia.mtia_graph
```
```{eval-rst}
.. autoclass:: MTIAGraph
:members:
```
```{eval-rst}
.. autoclass:: graph
:members:
```

View File

@ -14,6 +14,10 @@ Utils
sdpa_kernel
SDPBackend
register_flash_attention_impl
activate_flash_attention_impl
list_flash_attention_impls
current_flash_attention_impl
Submodules
----------

View File

@ -29,6 +29,7 @@ mps
xpu
mtia
mtia.memory
mtia.mtia_graph
meta
torch.backends <backends>
torch.export <export>

View File

@ -10,7 +10,7 @@ tp2_dir="$top_dir/third_party"
pip install ninja
# Install onnx
pip install --no-use-pep517 -e "$tp2_dir/onnx"
pip install -e "$tp2_dir/onnx"
# Install caffe2 and pytorch
pip install -r "$top_dir/caffe2/requirements.txt"

View File

@ -1358,6 +1358,45 @@ class concat_license_files:
# Need to create the proper LICENSE.txt for the wheel
class bdist_wheel(setuptools.command.bdist_wheel.bdist_wheel):
def _wrap_headers_with_macro(self, bdist_dir: Path) -> None:
"""Wrap all header files with #if !defined(TORCH_STABLE_ONLY) && !defined(TORCH_TARGET_VERSION).
Excludes:
- torch/include/torch/headeronly/*
- torch/include/torch/csrc/stable/*
- torch/include/torch/csrc/inductor/aoti_torch/c/ (only shim headers)
- torch/include/torch/csrc/inductor/aoti_torch/generated/
"""
header_extensions = (".h", ".hpp", ".cuh")
header_files = [
f for ext in header_extensions for f in bdist_dir.rglob(f"*{ext}")
]
# Paths to exclude from wrapping
exclude_dir_patterns = [
"torch/include/torch/headeronly/",
"torch/include/torch/csrc/stable/",
"torch/include/torch/csrc/inductor/aoti_torch/c/",
"torch/include/torch/csrc/inductor/aoti_torch/generated/",
]
for header_file in header_files:
rel_path = header_file.relative_to(bdist_dir).as_posix()
if any(rel_path.startswith(pattern) for pattern in exclude_dir_patterns):
report(f"Skipping header: {rel_path}")
continue
original_content = header_file.read_text(encoding="utf-8")
wrapped_content = (
"#if !defined(TORCH_STABLE_ONLY) && !defined(TORCH_TARGET_VERSION)\n"
f"{original_content}"
"\n#endif // !defined(TORCH_STABLE_ONLY) && !defined(TORCH_TARGET_VERSION)\n"
)
header_file.write_text(wrapped_content, encoding="utf-8")
report(f"Wrapped header: {rel_path}")
def run(self) -> None:
with concat_license_files(include_files=True):
super().run()
@ -1380,6 +1419,14 @@ class bdist_wheel(setuptools.command.bdist_wheel.bdist_wheel):
# need an __init__.py file otherwise we wouldn't have a package
(bdist_dir / "torch" / "__init__.py").touch()
# Wrap all header files with TORCH_STABLE_ONLY macro
assert self.bdist_dir is not None, "bdist_dir should be set during wheel build"
bdist_dir = Path(self.bdist_dir)
report(
"-- Wrapping header files with if !defined(TORCH_STABLE_ONLY) && !defined(TORCH_TARGET_VERSION)"
)
self._wrap_headers_with_macro(bdist_dir)
class clean(Command):
user_options: ClassVar[list[tuple[str, str | None, str]]] = []

View File

@ -308,12 +308,16 @@ class StepcurrentPlugin:
self.report_status = ""
assert config.cache is not None
self.cache: pytest.Cache = config.cache
self.directory = f"{STEPCURRENT_CACHE_DIR}/{config.getoption('stepcurrent')}"
self.lastrun: Optional[str] = self.cache.get(self.directory, None)
directory = f"{STEPCURRENT_CACHE_DIR}/{config.getoption('stepcurrent')}"
self.lastrun_location = f"{directory}/lastrun"
self.lastrun: Optional[str] = self.cache.get(self.lastrun_location, None)
self.initial_val = self.lastrun
self.skip: bool = config.getoption("stepcurrent_skip")
self.run_single: bool = config.getoption("run_single")
self.made_failing_xml_location = f"{directory}/made_failing_xml"
self.cache.set(self.made_failing_xml_location, False)
def pytest_collection_modifyitems(self, config: Config, items: list[Any]) -> None:
if not self.lastrun:
self.report_status = "Cannot find last run test, not skipping"
@ -349,8 +353,10 @@ class StepcurrentPlugin:
def pytest_runtest_protocol(self, item, nextitem) -> None:
self.lastrun = item.nodeid
self.cache.set(self.directory, self.lastrun)
self.cache.set(self.lastrun_location, self.lastrun)
def pytest_sessionfinish(self, session, exitstatus):
if exitstatus == 0:
self.cache.set(self.directory, self.initial_val)
self.cache.set(self.lastrun_location, self.initial_val)
if exitstatus != 0:
self.cache.set(self.made_failing_xml_location, True)

View File

@ -17,8 +17,11 @@ set(AOTI_ABI_CHECK_TEST_SRCS
${AOTI_ABI_CHECK_TEST_ROOT}/test_headeronlyarrayref.cpp
${AOTI_ABI_CHECK_TEST_ROOT}/test_macros.cpp
${AOTI_ABI_CHECK_TEST_ROOT}/test_math.cpp
${AOTI_ABI_CHECK_TEST_ROOT}/test_metaprogramming.cpp
${AOTI_ABI_CHECK_TEST_ROOT}/test_rand.cpp
${AOTI_ABI_CHECK_TEST_ROOT}/test_scalartype.cpp
${AOTI_ABI_CHECK_TEST_ROOT}/test_typelist.cpp
${AOTI_ABI_CHECK_TEST_ROOT}/test_typetraits.cpp
${AOTI_ABI_CHECK_TEST_ROOT}/test_vec.cpp
${AOTI_ABI_CHECK_TEST_ROOT}/test_vec_half.cpp
)

View File

@ -1,9 +1,8 @@
#include <c10/test/util/Macros.h>
#include <c10/util/Metaprogramming.h>
#include <gtest/gtest.h>
#include <torch/headeronly/util/Metaprogramming.h>
#include <cstdlib>
using namespace c10::guts;
using namespace torch::headeronly::guts;
// NOLINTBEGIN(modernize*, cppcoreguidelines-special-member-functions)
namespace {
@ -65,6 +64,15 @@ static_assert(
typename make_function_traits_t<void, typelist::typelist<int, float>>::
func_type>::value,
"");
struct Functor final {
std::string operator()(int64_t a, float b) const;
};
static_assert(
std::is_same<
std::string(int64_t, float),
typename infer_function_traits_t<Functor>::func_type>::value,
"");
} // namespace test_function_traits
struct MovableOnly {

View File

@ -1,8 +1,8 @@
#include <c10/util/TypeList.h>
#include <gtest/gtest.h>
#include <torch/headeronly/util/TypeList.h>
#include <memory>
using namespace c10::guts::typelist;
using namespace torch::headeronly::guts::typelist;
// NOLINTBEGIN(modernize-unary-static-assert)
namespace test_size {
class MyClass {};

Some files were not shown because too many files have changed in this diff.