Compare commits

268 Commits

Author SHA1 Message Date
06c1f778c4 Update base for fix mac tests at bottom of stack, second try on "DTensor fast path: port return_and_correct_aliasing and inplace/out checks"
This seems to generate a several-microsecond performance improvement in the detach benchmark I've been using.

cc H-Huang awgu wanchaol fegin fduwjj wz337 wconstab d4l3k pragupta msaroufim dcci

[ghstack-poisoned]
2025-11-12 16:44:13 -08:00
ef2847e26e Update base for fix mac tests at bottom of stack with USE_DISTRIBUTED gating in is_dtensor on "DTensor fast path: port return_and_correct_aliasing and inplace/out checks"
This seems to generate a several-microsecond performance improvement in the detach benchmark I've been using.

cc H-Huang awgu wanchaol fegin fduwjj wz337 wconstab d4l3k pragupta msaroufim dcci

[ghstack-poisoned]
2025-11-12 14:41:21 -08:00
c13054f301 Update base for rebase to viable/strict, fix some tail build issues and add a couple comments on "DTensor fast path: port return_and_correct_aliasing and inplace/out checks"
This seems to generate a several-microsecond performance improvement in the detach benchmark I've been using.

cc H-Huang awgu wanchaol fegin fduwjj wz337 wconstab d4l3k pragupta msaroufim dcci

[ghstack-poisoned]
2025-11-12 10:05:39 -08:00
3feeca3ae2 Update base for rebase to viable/strict, fix some tail build issues and add a couple comments on "DTensor fast path: port return_and_correct_aliasing and inplace/out checks"
This seems to generate a several-microsecond performance improvement in the detach benchmark I've been using.

cc H-Huang awgu wanchaol fegin fduwjj wz337 wconstab d4l3k pragupta msaroufim dcci

[ghstack-poisoned]
2025-11-12 10:03:32 -08:00
780e32524c Move XPUEvent to c10 (#158336)
# Motivation
Move `XPUEvent` to `c10/xpu` to keep it consistent with `XPUStream`, which is already in `c10/xpu`. Most importantly, we will leverage `XPUEvent` in our caching allocator instead of a raw SYCL event.
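
For context, a hedged Python-level sketch of the stream/event pairing these classes back, assuming `torch.xpu` mirrors the `torch.cuda` stream/event API (the C++ move itself is not visible from Python):

```python
import torch

# Hypothetical illustration; assumes torch.xpu.Stream/Event mirror torch.cuda.
if torch.xpu.is_available():
    stream = torch.xpu.Stream()
    event = torch.xpu.Event()
    with torch.xpu.stream(stream):
        x = torch.ones(1024, device="xpu") * 2
        event.record()       # record on the side stream
    event.synchronize()      # block the host until the recorded work completes
    print(x.sum().item())
```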

Pull Request resolved: https://github.com/pytorch/pytorch/pull/158336
Approved by: https://github.com/EikanWang, https://github.com/albanD
2025-11-12 11:29:42 +00:00
6bf51de533 harden backed_size_oblivious and broadcast_shapes (#167232)
We probably need something similar for `expand`.
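
For reference, a minimal sketch (not from this PR) of what `torch.broadcast_shapes` computes on concrete shapes:

```python
import torch

# Standard broadcasting rules applied to shape tuples.
print(torch.broadcast_shapes((2, 1, 4), (3, 1)))  # torch.Size([2, 3, 4])
print(torch.broadcast_shapes((5, 1), (1, 6)))     # torch.Size([5, 6])
```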

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167232
Approved by: https://github.com/ColinPeppler
2025-11-12 09:30:24 +00:00
d33d125c94 [inductor] Remove output copy_ for pallas backend in some cases (#167516)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/167516
Approved by: https://github.com/oulgen
2025-11-12 06:18:35 +00:00
dc8bb52f77 Inductor Lite Mode (#167115)
This PR introduces inductor lite mode for opt-in optimizations and numeric correctness guarantees.

Different from the default mode, which applies all possible fusions, lite mode gives control back to the user and provides a guarantee on numeric correctness. Specifically, this mode:

- **Fallback by Default**: Fall back for ALL nodes by default, unless users explicitly mark a node for inductor fusion.
- **Selective Decomposition**: Skip decomposition for all nodes except user-marked nodes.
- **Regional inductor compile**
- Skip dead code elimination
- Skip buffer reuse
- Skip reorder passes, such as reorder for peak memory, reorder for compute/comm overlap, and reorder_for_reducing_graph_partitions.
- Skip all pre-grad, joint-graph, and post-grad passes.

## Example: Flex Attention

```python
import torch
import torch.fx.traceback as fx_traceback
from torch.nn.attention.flex_attention import create_block_mask, flex_attention

def _squared(score, b, h, m, n):
    return score * score

def mask_mod(b, h, q, k):
    return q >= 0

a, b = 12, 64
block_mask = create_block_mask(mask_mod, None, None, a * b, a * b, device="cuda")

def fn(x):
    x = torch.sin(x)
    with fx_traceback.annotate({"compile_with_inductor": 0}):
        x = flex_attention(x, x, x, block_mask=block_mask, score_mod=_squared)
    return torch.cos(x)

x = torch.randn(1, 1, a * b, b, dtype=torch.bfloat16, device="cuda", requires_grad=True)

opt_fn = torch.compile(fn, mode="lite", fullgraph=True,)
opt_fn(x)
```

[code diff](https://www.internalfb.com/intern/diffing/?paste_number=2027441476)

[default mode tlp](https://manifold.edge.x2p.facebook.net/v0/read/tree/logs/.tmpYAzDxX/index.html?bucketName=tlparse_reports&apiKey=tlparse_reports-key&withPayload=1&timeoutMsec=10000) vs [lite mode tlp](https://manifold.edge.x2p.facebook.net/v0/read/tree/logs/.tmpnnuh1W/index.html?bucketName=tlparse_reports&apiKey=tlparse_reports-key&withPayload=1&timeoutMsec=10000)

## Numerics

Inductor lite mode provides bitwise equivalence with the `aot_eager` backend on torchtitan Llama3-8B and DeepSeek V3. https://github.com/pytorch/torchtitan/pull/2005
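
A minimal sketch (not from the PR) of what such a bitwise check looks like on a toy function, assuming the `mode="lite"` option introduced here is available:

```python
import torch

def f(x):
    return torch.cos(torch.sin(x))

x = torch.randn(8, 8)
ref = torch.compile(f, backend="aot_eager")(x)
out = torch.compile(f, mode="lite")(x)  # lite mode from this PR
# torch.equal rather than allclose: lite mode aims for bitwise equivalence.
assert torch.equal(ref, out)
```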

close: #167012

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167115
Approved by: https://github.com/ezyang
2025-11-12 05:36:26 +00:00
9997e853e9 [DebugMode] record triton kernels, run-to-run determinism checks (#167028)
Following up on https://github.com/pytorch/pytorch/pull/166348, extends DebugMode to capture inductor triton kernels at runtime, and adds an API for checking run-to-run determinism based on tensor hashes.

The workflow looks something like...
```python
# do 1st run with hashes, get logs
with DebugMode() as debug_mode, DebugMode.log_tensor_hashes():
    compiled_model(*inputs)
logs1 = debug_mode.logs

# do 2nd run
with DebugMode() as debug_mode, DebugMode.log_tensor_hashes():
    compiled_model(*inputs)
logs2 = debug_mode.logs

# returns list of calls w/ mismatched outputs
mismatches = DebugMode.check_hash_mismatches(logs1, logs2)
```

Example dump off a smaller version of @drisspg's FlexAttention fwd+bwd determinism tests [script](https://gist.github.com/pianpwk/f65cc63811d12853709dcc77d7eb69f1) (without forced reduction order):
```
cfg: TestConfig(name='Standard', B=2, Hq=32, Hkv=32, Q=2048, KV=2048, Dqk=128, Dv=128)
DETERMINISM: fwd: True, bwd_q: False, bwd_k: False, bwd_v: True

$$$ DEBUG MODE DUMP $$$  (this is what the logs look like)

    [triton] triton_tem_fused_0(arg_Q=t: bf16[2, 32, 2048, 128], arg_K=t: bf16[2, 32, 2048, 128], arg_V=t: bf16[2, 32, 2048, 128], arg_LSE=t: f32[2, 32, 2048], arg_MAX=t: f32[2, 32, 2048], arg_KV_NUM_BLKS=t: i32[2, 32, 16], arg_KV_IDX=t: i32[2, 32, 16, 16], arg_FULL_KV_NUM_BLKS=t: i32[2, 32, 16], arg_FULL_KV_IDX=t: i32[2, 32, 16, 16], out_ptr0=t: bf16[2, 32, 2048, 128])
    # post-kernel hashes: {arg_Q: 13385916.068706088, arg_K: 13389356.409105342, arg_V: 13384993.48412523, arg_LSE: 1347168.9026973695, arg_MAX: 81775.3811062593, arg_KV_NUM_BLKS: 1024.0, arg_KV_IDX: 122880.0, arg_FULL_KV_NUM_BLKS: 7680.0, arg_FULL_KV_IDX: 122880.0, out_ptr0: 924917.7918248245}

    [triton] triton_per_fused_zeros_0(in_ptr0=t: bf16[2, 32, 2048, 128], in_ptr1=t: bf16[2, 32, 2048, 128], out_ptr1=t: f32[2, 32, 2048], xnumel=131072, r0_numel=128)
    # post-kernel hashes: {in_ptr0: 924917.7918248245, in_ptr1: 13389213.797377996, out_ptr1: 81775.38106592931}

    [triton] triton_tem_fused_zeros_1(arg_Q=t: bf16[2, 32, 2048, 128], arg_K=t: bf16[2, 32, 2048, 128], arg_V=t: bf16[2, 32, 2048, 128], arg_LSE=t: f32[2, 32, 2048], arg_DELTA=t: f32[2, 32, 2048], arg_DO=t: bf16[2, 32, 2048, 128], arg_DQ=t: bf16[2, 32, 2048, 128], arg_DV=t: bf16[2, 32, 2048, 128], arg_KV_NUM_BLKS=t: i32[2, 32, 16], arg_KV_IDX=t: i32[2, 32, 16, 16], arg_Q_NUM_BLKS=t: i32[2, 32, 16], arg_Q_IDX=t: i32[2, 32, 16, 16], arg_FULL_KV_NUM_BLKS=t: i32[2, 32, 16], arg_FULL_KV_IDX=t: i32[2, 32, 16, 16], arg_FULL_Q_NUM_BLKS=t: i32[2, 32, 16], arg_FULL_Q_IDX=t: i32[2, 32, 16, 16], out_ptr0=t: bf16[2, 32, 2048, 128])
    # post-kernel hashes: {arg_Q: 13385916.068706088, arg_K: 13389356.409105342, arg_V: 13384993.48412523, arg_LSE: 1347168.9026973695, arg_DELTA: 81775.38106592931, arg_DO: 13389213.797377996, arg_DQ: 874474.8084187683, arg_DV: 727742.3138379117, arg_KV_NUM_BLKS: 1024.0, arg_KV_IDX: 122880.0, arg_Q_NUM_BLKS: 1024.0, arg_Q_IDX: 122880.0, arg_FULL_KV_NUM_BLKS: 7680.0, arg_FULL_KV_IDX: 122880.0, arg_FULL_Q_NUM_BLKS: 7680.0, arg_FULL_Q_IDX: 122880.0, out_ptr0: 700542.3431890717}

$$$ MISMATCHES $$$
mismatch: {'call_type': 'triton kernel', 'call': 'triton_tem_fused_0', 'arg_name': 'arg_MAX', 'pytree_path': None, 'hash1': 0.0, 'hash2': 81775.3811062593, 'rel_diff': 1.0, 'is_input_hash': False}  # I guess this one is misleading? not sure if I'm doing something wrong with waiting for kernel results
mismatch: {'call_type': 'triton kernel', 'call': 'triton_per_fused_zeros_0', 'arg_name': 'out_ptr1', 'pytree_path': None, 'hash1': 81775.3811062593, 'hash2': 81775.38106592931, 'rel_diff': 4.931801261646669e-10, 'is_input_hash': False}
mismatch: {'call_type': 'triton kernel', 'call': 'triton_tem_fused_zeros_1', 'arg_name': 'arg_DELTA', 'pytree_path': None, 'hash1': 81775.3811062593, 'hash2': 81775.38106592931, 'rel_diff': 4.931801261646669e-10, 'is_input_hash': False}
mismatch: {'call_type': 'triton kernel', 'call': 'triton_tem_fused_zeros_1', 'arg_name': 'arg_DQ', 'pytree_path': None, 'hash1': 874474.8097136207, 'hash2': 874474.8084187683, 'rel_diff': 1.480720012120795e-09, 'is_input_hash': False}
mismatch: {'call_type': 'triton kernel', 'call': 'triton_tem_fused_zeros_1', 'arg_name': 'out_ptr0', 'pytree_path': None, 'hash1': 700542.3488049245, 'hash2': 700542.3431890717, 'rel_diff': 8.016435812581196e-09, 'is_input_hash': False}
```

note: current hash implementation is basically tensor norm, so tensor closeness -> hash closeness. This is likely to change soon, e.g. maybe to `torch.hash_tensor` (https://github.com/pytorch/pytorch/pull/154149) by default

Sample paste diff between log dumps from 2 runs:
<img width="1665" height="445" alt="Screenshot 2025-11-05 at 11 27 24 PM" src="https://github.com/user-attachments/assets/41402e37-f50b-4a9e-a17c-bb98b5917076" />

Another case: running this for FSDP2 on Llama3-8B helped narrow down a divergence between aot_eager and inductor to inductor's FWD RMSNorm kernels: P2027003180

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167028
Approved by: https://github.com/v0i0
2025-11-12 05:21:07 +00:00
2a09f6e02e [4/N] Use Python 3.10 typing (#167458)
This PR applies new Union and Optional typing syntax to some files.
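
For reference, a toy example (not from the PR) of the syntax change being applied:

```python
from typing import Optional, Union

# Old style, using typing module generics:
def old(x: Optional[int], y: Union[str, int]) -> Optional[str]: ...

# New Python 3.10+ union syntax applied by this PR:
def new(x: int | None, y: str | int) -> str | None: ...
```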

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167458
Approved by: https://github.com/albanD
2025-11-12 05:15:40 +00:00
bf380fbd4c [vision hash update] update the pinned vision hash (#167491)
This PR is auto-generated nightly by [this action](https://github.com/pytorch/pytorch/blob/main/.github/workflows/nightly.yml).
Update the pinned vision hash.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/167491
Approved by: https://github.com/pytorchbot
2025-11-12 04:40:02 +00:00
148fd9a522 [audio hash update] update the pinned audio hash (#167490)
This PR is auto-generated nightly by [this action](https://github.com/pytorch/pytorch/blob/main/.github/workflows/nightly.yml).
Update the pinned audio hash.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/167490
Approved by: https://github.com/pytorchbot
2025-11-12 04:31:31 +00:00
7bb8d8c200 [ROCm][CI] Add trunk-rocm-mi300.yml to test new MI3xx CI capacity (#167587)
This adds a workflow to run full set of UTs on default and distributed configs on ROCm MI3xx CI runners, to _eventually_ assess if the CI capacity can handle the PR-based workload for trunk.yml. The plan was to keep this workflow in unstable as we test out this new CI capacity, so it wouldn't impact PR merges. However, since upstream maintainers have indicated that, as of today, even unstable workflows will block PR merges, we are going with branch push-based triggers to at least pipeclean this workflow on the new CI capacity.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167587
Approved by: https://github.com/jeffdaily
2025-11-12 03:44:15 +00:00
5ce4a8b49f Revert "fix wrong accuracy_status when exception. (#165731)"
This reverts commit bfcdbd0a970e5ce08cecd0aa33dd389819f0ec4f.

Reverted https://github.com/pytorch/pytorch/pull/165731 on behalf of https://github.com/zou3519 due to broke inductor periodic ([comment](https://github.com/pytorch/pytorch/pull/165731#issuecomment-3519743601))
2025-11-12 03:36:27 +00:00
7dd56474f2 [annotation] Skip copying custom meta for gradient accumulation nodes; tag with is_gradient_acc=True (#167572)
The seq_nr doesn't always increment for gradient accumulation nodes, and they might end up copying annotations from forward nodes.

I'm just going to skip copying the custom meta for any gradient accumulation nodes and give them a special tag, e.g. node.meta["is_gradient_acc"] = True.

Example repro for deepseek torchtitan (without using DTensor): https://gist.github.com/yushangdi/aae13ea382732f31d0fdfb3ffeda12c8

(Side note: if you want some more hints on these gradient accumulation nodes: 1) they use the torch.ops.aten.add.Tensor op, not add.default; 2) they have the highest seq_nr(s).)
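
A minimal sketch, assuming a traced `torch.fx.GraphModule` named `gm`, of how a downstream pass could pick out the newly tagged nodes (hypothetical consumer code, not part of this PR):

```python
import torch.fx

def find_grad_acc_nodes(gm: torch.fx.GraphModule):
    # Gradient-accumulation nodes carry the tag instead of copied forward annotations.
    return [n for n in gm.graph.nodes if n.meta.get("is_gradient_acc", False)]
```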

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167572
Approved by: https://github.com/mlazos
2025-11-12 03:35:57 +00:00
3260bf3b19 [export] stop gap strict export v2 enable and testing. (#167236)
Summary:
Added a new flag called "use_legacy_dynamo_graph_capture", which defaults to True and is only False with the updated test_strict_export_v2.py.

In addition to this flag, we also use the legacy tracer when the following features are used:
1. dynamic shapes
2. preserve module call signature
3. retracing
4. draft mode

Test Plan:
test_strict_export_v2.py

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167236
Approved by: https://github.com/tugsbayasgalan
2025-11-12 03:33:40 +00:00
05c6a06b2b Add FA4 to sdpa (#167348)
# Summary
See title ;)

## Design

Currently, once you install there is no going back in the same Python process; this need not be the case (cc @mikaylagawarecki's work on being able to grab the original impl). I'll leave that for a follow-up.

Okay, I added an open registration, but I really want the backends to be discoverable, so there is some weird typing, but we get:
<img width="523" height="197" alt="Screenshot 2025-11-07 at 3 30 32 PM" src="https://github.com/user-attachments/assets/586de943-bbed-40cf-abd1-131f747a4cf1" />

## Overheads:
<img width="799" height="735" alt="Screenshot 2025-11-07 at 2 35 04 PM" src="https://github.com/user-attachments/assets/f9217f31-3e42-4816-8fb3-29ea8b49d735" />
First call to forward -> the majority of the time is spent in JIT compilation for FA.

First call to backward takes ~3 sec. Interestingly, it doesn't appear that with_stack gets events in the backward loop. @albanD is this expected?
<img width="948" height="385" alt="Screenshot 2025-11-07 at 2 35 50 PM" src="https://github.com/user-attachments/assets/a40bacd0-3fb0-4bd8-b33e-bec8fb3f36c0" />

Getting from the PyTorch op to the impl is about 43 us, which is dwarfed by other CPU overheads.
<img width="1227" height="649" alt="Screenshot 2025-11-07 at 2 37 41 PM" src="https://github.com/user-attachments/assets/51da0615-facd-41e1-a6e2-fb7778079ab6" />

Just invoking the JIT object from the CuTe DSL is 100s of us.
<img width="545" height="414" alt="Screenshot 2025-11-07 at 2 38 19 PM" src="https://github.com/user-attachments/assets/d20345a0-6c47-4dcb-892f-9ef9894a1cf5" />

### Example usage
```Py
#!/usr/bin/env python3

"""Minimal FA4 smoke test for scaled dot product attention."""

from __future__ import annotations

import sys
from jsonargparse import CLI

import torch
import torch.nn.functional as F
from torch.nn.attention import (
    install_flash_attention_impl,
    sdpa_kernel,
    SDPBackend,
)

def _map_dtype(kind: str) -> torch.dtype:
    return torch.bfloat16 if kind == "bf16" else torch.float16

# To infinity and beyond
install_flash_attention_impl("FA4")

@sdpa_kernel([SDPBackend.FLASH_ATTENTION])
def main(
    module_path: str = "flash_attn.cute.interface",
    batch: int = 4,
    seq: int = 81292,
    heads: int = 16,
    head_dim: int = 128,
    device: int = 0,
    dtype: str = "bf16"
    ) -> None:
    if not torch.cuda.is_available():
        sys.exit("CUDA is required for FA4 smoke testing")
    torch.cuda.set_device(device)
    dtype = _map_dtype(dtype)
    generator = torch.Generator(device="cuda").manual_seed(0)
    q = torch.randn(
        batch,
        heads,
        seq,
        head_dim,
        device="cuda",
        dtype=dtype,
        requires_grad=True,
        generator=generator,
    )
    k = torch.randn(
        batch,
        heads,
        seq,
        head_dim,
        device="cuda",
        dtype=dtype,
        requires_grad=True,
        generator=generator,
    )
    v = torch.randn(
        batch,
        heads,
        seq,
        head_dim,
        device="cuda",
        dtype=dtype,
        requires_grad=True,
        generator=generator,
    )
    from transformer_nuggets.utils.benchmark import profiler
    with profiler("sdpa_FA4", with_stack=False):
        for _ in range(3):
            out = F.scaled_dot_product_attention(q, k, v, attn_mask=None, dropout_p=0.0, is_causal=False)
            loss = out.real.sum()
            loss.backward()
    print("Scaled dot product attention output norm:", out.norm().item())
    print("dq norm:", q.grad.norm().item())

if __name__ == "__main__":
    CLI(main)
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167348
Approved by: https://github.com/albanD, https://github.com/malfet
2025-11-12 03:29:07 +00:00
25e9d8124c Revert "Use c7i.2xlarge for B200 build (#167078)"
This reverts commit bb3748346484d49ace45dcc92b72c12b2ba30d98.

Reverted https://github.com/pytorch/pytorch/pull/167078 on behalf of https://github.com/zxiiro due to This seems to be breaking build when compile is not using sscache. Needs more investigation. ([comment](https://github.com/pytorch/pytorch/pull/167078#issuecomment-3519717750))
2025-11-12 03:22:48 +00:00
3fc193121a Update base for rebase to main to pick up #167580 on "DTensor fast path: port return_and_correct_aliasing and inplace/out checks"
This seems to generate a several-microsecond performance improvement in the detach benchmark I've been using.

cc H-Huang awgu wanchaol fegin fduwjj wz337 wconstab d4l3k pragupta msaroufim dcci

[ghstack-poisoned]
2025-11-11 19:21:57 -08:00
bc882f8284 Support Python 3.14 Lazy Function Annotations on FX graph (#167573)
This was interesting to debug:
(1) Python 3.14 ships with a lazy way of retrieving annotations. The annotations field can be a callable that lazily evaluates them.
(2) On the dynamo side, `SET_FUNCTION_ATTRIBUTE` needs to handle an extra flag value (0x10).
(3) The decorator `functools.wraps`, used extensively in the codebase (e.g. `make_fx`, `functionalize`), doesn't copy the `__annotations__` attribute by default. To correctly retrieve an annotation, we need to walk the chain of `__wrapped__` objects and retrieve the attribute from the first function. Fortunately, there are stdlib functions to do this (see the sketch below).
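
A minimal sketch of the stdlib route described in (3), assuming `inspect.unwrap` and `inspect.get_annotations` are the helpers in question (the PR may use different ones):

```python
import functools
import inspect

def base(x: int) -> str:
    return str(x)

@functools.wraps(base)
def wrapper(*args, **kwargs):
    return base(*args, **kwargs)

# Walk the __wrapped__ chain back to the original function, then read its
# annotations; inspect.get_annotations also handles lazily evaluated annotations.
original = inspect.unwrap(wrapper)
print(inspect.get_annotations(original))  # {'x': <class 'int'>, 'return': <class 'str'>}
```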

Fixes:
```
'test/functorch/test_eager_transforms.py::TestFunctionalizeCPU::test_functionalize_fx_out_op_cpu', 'test/functorch/test_eager_transforms.py::TestFunctionalizeCPU::test_functionalize_fx_transpose_simple_cpu', 'test/functorch/test_eager_transforms.py::TestFunctionalizeCPU::test_functionalize_optional_tensorlist2_cpu', 'test/functorch/test_eager_transforms.py::TestFunctionalizeCPU::test_functionalize_fx_multi_out_op_cpu', 'test/functorch/test_eager_transforms.py::TestFunctionalizeCPU::test_functionalize_fx_reapply_views_simple_cpu', 'test/functorch/test_eager_transforms.py::TestFunctionalizeCPU::test_functionalize_fx_simple_cpu', 'test/functorch/test_eager_transforms.py::TestFunctionalizeCPU::test_functionalize_nonfunctional_output_cpu', 'test/functorch/test_eager_transforms.py::TestFunctionalizeCPU::test_functionalize_opt_tensor_list_cpu', 'test/functorch/test_eager_transforms.py::TestFunctionalizeCPU::test_functionalize_optional_tensorlist1_cpu'
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167573
Approved by: https://github.com/williamwen42
2025-11-12 02:55:57 +00:00
edd365ed4a [MemTracker] Fix: Remove monkey patching DTensor dispatch (#167580)
Fixes `MemTracker` for #167051

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167580
Approved by: https://github.com/anshul-si
2025-11-12 02:51:40 +00:00
1366a2fa55 [ROCm][CI] Enable uploading of artifacts from docker-builds.yml (#167379)
Needed for https://github.com/pytorch/pytorch/pull/167554 so that we can enable docker caching for ROCm MI3xx runners

Replaces https://github.com/pytorch/pytorch/pull/167378 (by filing from pytorch/pytorch branch so OIDC login doesn't fail due to forked repo)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167379
Approved by: https://github.com/jeffdaily
2025-11-12 02:24:05 +00:00
91f0c5a9da [simplefsdp] add manual bucketing pass (#165487)
As titled, this PR adds a manual bucketing pass to SimpleFSDP. Users will need to pass the FQNs they want to bucket together via `module_bucket_plans`. Then, `_manual_bucket_collectives` gets the nodes of the subgraphs corresponding to each `bucket_module` and buckets the bucketable (FSDP-style) AG/RS together. `_manual_reorder_graph` reorders them for overlapping.
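
A purely hypothetical illustration of what an FQN bucket plan might look like; the actual structure and entry point for `module_bucket_plans` are defined by this PR and the torchtitan integration, not here:

```python
# Hypothetical plan: bucket the collectives of these module groups together,
# identified by their fully qualified names (FQNs).
module_bucket_plans = [
    ["layers.0", "layers.1"],
    ["layers.2", "layers.3"],
    ["norm", "output"],
]
```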

For detailed performance, see this torchtitan PR: https://github.com/pytorch/torchtitan/pull/1881.

There are a few TODO items listed in the torchtitan PR. Let's start with this PR, which implements FSDP+TP+llama3 manual bucketing. I will fix/add the rest in follow-up PRs.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/165487
Approved by: https://github.com/ezyang
2025-11-12 02:18:34 +00:00
67390692c5 [ROCm][CI] Restrict docker-cache-rocm.yml to main/release branches (#167593)
Follow-up to https://github.com/pytorch/pytorch/pull/167554

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167593
Approved by: https://github.com/jeffdaily
2025-11-12 02:08:23 +00:00
1debfd44fd Revert "Add FA4 to sdpa (#167348)"
This reverts commit cdf0a9c21f7c27298a5bc71620206353125c5494.

Reverted https://github.com/pytorch/pytorch/pull/167348 on behalf of https://github.com/malfet due to Looks like it broke lint? ([comment](https://github.com/pytorch/pytorch/pull/167348#issuecomment-3519549113))
2025-11-12 02:05:30 +00:00
cdf0a9c21f Add FA4 to sdpa (#167348)
# Summary
See title ;)

## Design

Currently, once you install there is no going back in the same Python process; this need not be the case (cc @mikaylagawarecki's work on being able to grab the original impl). I'll leave that for a follow-up.

Okay, I added an open registration, but I really want the backends to be discoverable, so there is some weird typing, but we get:
<img width="523" height="197" alt="Screenshot 2025-11-07 at 3 30 32 PM" src="https://github.com/user-attachments/assets/586de943-bbed-40cf-abd1-131f747a4cf1" />

## Overheads:
<img width="799" height="735" alt="Screenshot 2025-11-07 at 2 35 04 PM" src="https://github.com/user-attachments/assets/f9217f31-3e42-4816-8fb3-29ea8b49d735" />
First call to forward -> the majority of the time is spent in JIT compilation for FA.

First call to backward takes ~3 sec. Interestingly, it doesn't appear that with_stack gets events in the backward loop. @albanD is this expected?
<img width="948" height="385" alt="Screenshot 2025-11-07 at 2 35 50 PM" src="https://github.com/user-attachments/assets/a40bacd0-3fb0-4bd8-b33e-bec8fb3f36c0" />

Getting from the PyTorch op to the impl is about 43 us, which is dwarfed by other CPU overheads.
<img width="1227" height="649" alt="Screenshot 2025-11-07 at 2 37 41 PM" src="https://github.com/user-attachments/assets/51da0615-facd-41e1-a6e2-fb7778079ab6" />

Just invoking the JIT object from the CuTe DSL is 100s of us.
<img width="545" height="414" alt="Screenshot 2025-11-07 at 2 38 19 PM" src="https://github.com/user-attachments/assets/d20345a0-6c47-4dcb-892f-9ef9894a1cf5" />

### Example usage
```Py
#!/usr/bin/env python3

"""Minimal FA4 smoke test for scaled dot product attention."""

from __future__ import annotations

import sys
from jsonargparse import CLI

import torch
import torch.nn.functional as F
from torch.nn.attention import (
    install_flash_attention_impl,
    sdpa_kernel,
    SDPBackend,
)

def _map_dtype(kind: str) -> torch.dtype:
    return torch.bfloat16 if kind == "bf16" else torch.float16

# To infinity and beyond
install_flash_attention_impl("FA4")

@sdpa_kernel([SDPBackend.FLASH_ATTENTION])
def main(
    module_path: str = "flash_attn.cute.interface",
    batch: int = 4,
    seq: int = 81292,
    heads: int = 16,
    head_dim: int = 128,
    device: int = 0,
    dtype: str = "bf16"
    ) -> None:
    if not torch.cuda.is_available():
        sys.exit("CUDA is required for FA4 smoke testing")
    torch.cuda.set_device(device)
    dtype = _map_dtype(dtype)
    generator = torch.Generator(device="cuda").manual_seed(0)
    q = torch.randn(
        batch,
        heads,
        seq,
        head_dim,
        device="cuda",
        dtype=dtype,
        requires_grad=True,
        generator=generator,
    )
    k = torch.randn(
        batch,
        heads,
        seq,
        head_dim,
        device="cuda",
        dtype=dtype,
        requires_grad=True,
        generator=generator,
    )
    v = torch.randn(
        batch,
        heads,
        seq,
        head_dim,
        device="cuda",
        dtype=dtype,
        requires_grad=True,
        generator=generator,
    )
    from transformer_nuggets.utils.benchmark import profiler
    with profiler("sdpa_FA4", with_stack=False):
        for _ in range(3):
            out = F.scaled_dot_product_attention(q, k, v, attn_mask=None, dropout_p=0.0, is_causal=False)
            loss = out.real.sum()
            loss.backward()
    print("Scaled dot product attention output norm:", out.norm().item())
    print("dq norm:", q.grad.norm().item())

if __name__ == "__main__":
    CLI(main)
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167348
Approved by: https://github.com/albanD
2025-11-12 01:07:59 +00:00
115016f1a2 [Device Mesh][ez] Clean up unused parameters and duplicate codes (#167581)
While refactoring the code, I found that we re-init `_flatten_mapping` and still keep `_flatten_mesh_list` in the code, which is not needed anymore. Let's remove it.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167581
Approved by: https://github.com/fegin
2025-11-12 00:59:32 +00:00
971e6ca434 fix sym_size_, sym_stride lowering (#167565)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/167565
Approved by: https://github.com/bobrenjc93, https://github.com/Microve, https://github.com/Skylion007
ghstack dependencies: #167345
2025-11-12 00:53:36 +00:00
e8d411e7f7 FSDPMemTracker fix with multihander hooks. (#165662)
Fixes #164663

## Issue
A torch model with multiple layers that is wrapped with FSDP2 registers pre- and post-forward hooks as a group using `_MultiHandler`. This becomes an issue in the tracker's context manager, where the hooks are reset and replaced. The hooks all use the same FSDP state pointer, so one reset resets them all. As a result, when the output layer was modified with a new pre- and post-forward hook, it deleted the previous layer's initialization, causing a `KeyError` for the Norm layer since it no longer exists.

## The Fix
Check whether there are multiple `_MultiHandler` and `RemoveHandler` objects, and only execute the remove hook once.
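
A minimal sketch of the de-duplication idea (hypothetical helper, not the actual patch): because layers can share the same underlying handle via `_MultiHandler`, each handle should be removed at most once.

```python
def remove_handles_once(handles):
    # Removing a shared handle twice would clobber state still needed by
    # the other layers that registered through the same _MultiHandler.
    seen = set()
    for handle in handles:
        if id(handle) in seen:
            continue
        seen.add(id(handle))
        handle.remove()
```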

Pull Request resolved: https://github.com/pytorch/pytorch/pull/165662
Approved by: https://github.com/sanketpurandare
2025-11-11 23:49:36 +00:00
2e5233d7bd Revert "Support AC in default partitioner when functionalization is enabled (#166610)"
This reverts commit de773364be041ca7fd2dcaf35ca15c093fc9370b.

Reverted https://github.com/pytorch/pytorch/pull/166610 on behalf of https://github.com/soulitzer due to breaking internal tests ([comment](https://github.com/pytorch/pytorch/pull/166610#issuecomment-3519047226))
2025-11-11 23:01:09 +00:00
514dd96376 Remove --no-use-pep517 flag (#167096)
In pip 25.3 and newer, the --no-use-pep517 flag has been removed (https://pip.pypa.io/en/stable/news/). In builds with pip 25.2, a warning message notes:

> DEPRECATION: Building 'torchvision' using the legacy setup.py bdist_wheel mechanism, which will be removed in a future version. pip 25.3 will enforce this behaviour change. A possible replacement is to use the standardized build interface by setting the `--use-pep517` option, (possibly combined with `--no-build-isolation`), or adding a `pyproject.toml` file to the source tree of 'torchvision'. Discussion can be found at https://github.com/pypa/pip/issues/6334

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167096
Approved by: https://github.com/atalman
2025-11-11 23:00:35 +00:00
9ae62fcc18 [ROCm][CI] dynamo benchmarks update ci expected accuracy (#167574)
repvgg_a2 IMPROVED: accuracy=pass, expected=fail_accuracy

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167574
Approved by: https://github.com/jeffdaily

Co-authored-by: Jeff Daily <jeff.daily@amd.com>
2025-11-11 22:54:55 +00:00
ae71b0e163 Fix typo in torch._refs (#167310)
This appears to be a typo, but it doesn't raise an error because the inner function splits it into `a` and `,`, and the `,` case check is skipped.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167310
Approved by: https://github.com/eellison
2025-11-11 22:31:09 +00:00
5b6ff8148d Revert "[ARM] Improve LLM performance & mem usage using int4-bf16 KleidiAI kernels (#158250)"
This reverts commit 402c46503002f98ccfc023a733081fb0719223a1.

Reverted https://github.com/pytorch/pytorch/pull/158250 on behalf of https://github.com/izaitsevfb due to Broke some torch.compile jobs ([comment](https://github.com/pytorch/pytorch/pull/158250#issuecomment-3518944863))
2025-11-11 22:27:51 +00:00
1f7e4343e7 [ROCm][CI] Add docker-cache-rocm.yml to test MI3xx CI docker caching (#167554)
* Trigger this workflow on every completed run of `docker-builds.yml`
* Uses `ubuntu-latest` for downloading artifacts from `docker-build` workflow run
* Uses `linux.rocm.gfx942.docker-cache` to cache docker images as tarballs for MI3xx CI

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167554
Approved by: https://github.com/jeffdaily
2025-11-11 21:32:22 +00:00
b21856f5fc Revert "[DebugMode] record triton kernels, run-to-run determinism checks (#167028)"
This reverts commit 259ba0ecabd809edd35d12b4f992777cb5923b68.

Reverted https://github.com/pytorch/pytorch/pull/167028 on behalf of https://github.com/pytorch-auto-revert due to Reverted automatically by pytorch's autorevert, to avoid this behaviour add the tag autorevert: disable ([comment](https://github.com/pytorch/pytorch/pull/167028#issuecomment-3518811298))
2025-11-11 21:31:12 +00:00
259ba0ecab [DebugMode] record triton kernels, run-to-run determinism checks (#167028)
Following up on https://github.com/pytorch/pytorch/pull/166348, extends DebugMode to capture inductor triton kernels at runtime, and adds an API for checking run-to-run determinism based on tensor hashes.

The workflow looks something like...
```python
# do 1st run with hashes, get logs
with DebugMode() as debug_mode, DebugMode.log_tensor_hashes():
    compiled_model(*inputs)
logs1 = debug_mode.logs

# do 2nd run
with DebugMode() as debug_mode, DebugMode.log_tensor_hashes():
    compiled_model(*inputs)
logs2 = debug_mode.logs

# returns list of calls w/ mismatched outputs
mismatches = DebugMode.check_hash_mismatches(logs1, logs2)
```

Example dump off a smaller version of @drisspg's FlexAttention fwd+bwd determinism tests [script](https://gist.github.com/pianpwk/f65cc63811d12853709dcc77d7eb69f1) (without forced reduction order):
```
cfg: TestConfig(name='Standard', B=2, Hq=32, Hkv=32, Q=2048, KV=2048, Dqk=128, Dv=128)
DETERMINISM: fwd: True, bwd_q: False, bwd_k: False, bwd_v: True

$$$ DEBUG MODE DUMP $$$  (this is what the logs look like)

    [triton] triton_tem_fused_0(arg_Q=t: bf16[2, 32, 2048, 128], arg_K=t: bf16[2, 32, 2048, 128], arg_V=t: bf16[2, 32, 2048, 128], arg_LSE=t: f32[2, 32, 2048], arg_MAX=t: f32[2, 32, 2048], arg_KV_NUM_BLKS=t: i32[2, 32, 16], arg_KV_IDX=t: i32[2, 32, 16, 16], arg_FULL_KV_NUM_BLKS=t: i32[2, 32, 16], arg_FULL_KV_IDX=t: i32[2, 32, 16, 16], out_ptr0=t: bf16[2, 32, 2048, 128])
    # post-kernel hashes: {arg_Q: 13385916.068706088, arg_K: 13389356.409105342, arg_V: 13384993.48412523, arg_LSE: 1347168.9026973695, arg_MAX: 81775.3811062593, arg_KV_NUM_BLKS: 1024.0, arg_KV_IDX: 122880.0, arg_FULL_KV_NUM_BLKS: 7680.0, arg_FULL_KV_IDX: 122880.0, out_ptr0: 924917.7918248245}

    [triton] triton_per_fused_zeros_0(in_ptr0=t: bf16[2, 32, 2048, 128], in_ptr1=t: bf16[2, 32, 2048, 128], out_ptr1=t: f32[2, 32, 2048], xnumel=131072, r0_numel=128)
    # post-kernel hashes: {in_ptr0: 924917.7918248245, in_ptr1: 13389213.797377996, out_ptr1: 81775.38106592931}

    [triton] triton_tem_fused_zeros_1(arg_Q=t: bf16[2, 32, 2048, 128], arg_K=t: bf16[2, 32, 2048, 128], arg_V=t: bf16[2, 32, 2048, 128], arg_LSE=t: f32[2, 32, 2048], arg_DELTA=t: f32[2, 32, 2048], arg_DO=t: bf16[2, 32, 2048, 128], arg_DQ=t: bf16[2, 32, 2048, 128], arg_DV=t: bf16[2, 32, 2048, 128], arg_KV_NUM_BLKS=t: i32[2, 32, 16], arg_KV_IDX=t: i32[2, 32, 16, 16], arg_Q_NUM_BLKS=t: i32[2, 32, 16], arg_Q_IDX=t: i32[2, 32, 16, 16], arg_FULL_KV_NUM_BLKS=t: i32[2, 32, 16], arg_FULL_KV_IDX=t: i32[2, 32, 16, 16], arg_FULL_Q_NUM_BLKS=t: i32[2, 32, 16], arg_FULL_Q_IDX=t: i32[2, 32, 16, 16], out_ptr0=t: bf16[2, 32, 2048, 128])
    # post-kernel hashes: {arg_Q: 13385916.068706088, arg_K: 13389356.409105342, arg_V: 13384993.48412523, arg_LSE: 1347168.9026973695, arg_DELTA: 81775.38106592931, arg_DO: 13389213.797377996, arg_DQ: 874474.8084187683, arg_DV: 727742.3138379117, arg_KV_NUM_BLKS: 1024.0, arg_KV_IDX: 122880.0, arg_Q_NUM_BLKS: 1024.0, arg_Q_IDX: 122880.0, arg_FULL_KV_NUM_BLKS: 7680.0, arg_FULL_KV_IDX: 122880.0, arg_FULL_Q_NUM_BLKS: 7680.0, arg_FULL_Q_IDX: 122880.0, out_ptr0: 700542.3431890717}

$$$ MISMATCHES $$$
mismatch: {'call_type': 'triton kernel', 'call': 'triton_tem_fused_0', 'arg_name': 'arg_MAX', 'pytree_path': None, 'hash1': 0.0, 'hash2': 81775.3811062593, 'rel_diff': 1.0, 'is_input_hash': False}  # I guess this one is misleading? not sure if I'm doing something wrong with waiting for kernel results
mismatch: {'call_type': 'triton kernel', 'call': 'triton_per_fused_zeros_0', 'arg_name': 'out_ptr1', 'pytree_path': None, 'hash1': 81775.3811062593, 'hash2': 81775.38106592931, 'rel_diff': 4.931801261646669e-10, 'is_input_hash': False}
mismatch: {'call_type': 'triton kernel', 'call': 'triton_tem_fused_zeros_1', 'arg_name': 'arg_DELTA', 'pytree_path': None, 'hash1': 81775.3811062593, 'hash2': 81775.38106592931, 'rel_diff': 4.931801261646669e-10, 'is_input_hash': False}
mismatch: {'call_type': 'triton kernel', 'call': 'triton_tem_fused_zeros_1', 'arg_name': 'arg_DQ', 'pytree_path': None, 'hash1': 874474.8097136207, 'hash2': 874474.8084187683, 'rel_diff': 1.480720012120795e-09, 'is_input_hash': False}
mismatch: {'call_type': 'triton kernel', 'call': 'triton_tem_fused_zeros_1', 'arg_name': 'out_ptr0', 'pytree_path': None, 'hash1': 700542.3488049245, 'hash2': 700542.3431890717, 'rel_diff': 8.016435812581196e-09, 'is_input_hash': False}
```

note: current hash implementation is basically tensor norm, so tensor closeness -> hash closeness. This is likely to change soon, e.g. maybe to `torch.hash_tensor` (https://github.com/pytorch/pytorch/pull/154149) by default

Sample paste diff between log dumps from 2 runs:
<img width="1665" height="445" alt="Screenshot 2025-11-05 at 11 27 24 PM" src="https://github.com/user-attachments/assets/41402e37-f50b-4a9e-a17c-bb98b5917076" />

Another case: running this for FSDP2 on Llama3-8B helped narrow down a divergence between aot_eager and inductor to inductor's FWD RMSNorm kernels: P2027003180

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167028
Approved by: https://github.com/v0i0
2025-11-11 20:37:53 +00:00
051f1fe8e3 Revert "[ROCm][CI] Update docker-cache-mi300.yml to test MI300 CI docker caching (#167554)"
This reverts commit ee387c43feada1cc2049b42a970ec4e2f12f210e.

Reverted https://github.com/pytorch/pytorch/pull/167554 on behalf of https://github.com/jithunnair-amd due to workflow had failure 'Unexpected input(s) 'run_id'' ([comment](https://github.com/pytorch/pytorch/pull/167554#issuecomment-3518642191))
2025-11-11 20:34:44 +00:00
ee387c43fe [ROCm][CI] Update docker-cache-mi300.yml to test MI300 CI docker caching (#167554)
Trigger this workflow on every completed run of `docker-builds.yml` and run on `ubuntu-latest` so it doesn't queue infinitely for `rocm-docker` label

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167554
Approved by: https://github.com/jeffdaily
2025-11-11 19:49:00 +00:00
3a944661d6 Cpython test_math.FMATests (#167217)
Resolves issues running the dynamo cpython math.fma tests.

Though math.fma is enabled to perform a multiply-add in dynamo, torch.addcmul is currently used, which doesn't guarantee the fma the user requested. It was decided not to use the inductor fma prim, as it would break the contract of using ATen/core IR in dynamo output; otherwise export=True may have issues.
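
A small sketch of the relationship described above: `torch.addcmul(c, a, b)` computes `a * b + c`, matching `math.fma` numerically but without a fusion guarantee (example values are illustrative):

```python
import torch

a, b, c = 1.25, 2.0, 0.5
# addcmul(input, t1, t2) = input + t1 * t2, i.e. a multiply-add without
# any guarantee that the hardware fuses it into a single fma.
out = torch.addcmul(torch.tensor(c), torch.tensor(a), torch.tensor(b))
assert out.item() == a * b + c
```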

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167217
Approved by: https://github.com/guilhermeleobas
2025-11-11 19:26:18 +00:00
56034074ca Revert "[Inductor] Naive foreach autotune support (#162053)"
This reverts commit 6c5db82584bf71f5b1db3b598bbd00f44140c28d.

Reverted https://github.com/pytorch/pytorch/pull/162053 on behalf of https://github.com/mlazos due to Sorry, there's an internal slowdown due to the extra triton configs you added ([comment](https://github.com/pytorch/pytorch/pull/162053#issuecomment-3518423369))
2025-11-11 19:23:40 +00:00
88efb10a0c Update base for fix checking for default overloads on "WIP: DTensor fast path: port return_and_correct_aliasing and inplace/out checks"
This seems to generate a several-microsecond performance improvement in the detach benchmark I've been using.

cc H-Huang awgu wanchaol fegin fduwjj wz337 wconstab d4l3k pragupta msaroufim dcci

[ghstack-poisoned]
2025-11-11 11:20:07 -08:00
8def619bbe [user-streams] wait_stream op (#167512)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/167512
Approved by: https://github.com/williamwen42
ghstack dependencies: #167510, #167511
2025-11-11 19:18:03 +00:00
61883a5787 [user-streams] Allow new streams to be created and registered during compilation (#167511)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/167511
Approved by: https://github.com/williamwen42
ghstack dependencies: #167510
2025-11-11 19:18:03 +00:00
d8ada1ee76 [user-streams] Allow new events to be created and registered during compilation (#167510)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/167510
Approved by: https://github.com/williamwen42
2025-11-11 19:18:03 +00:00
fe841a1db4 [DeviceMesh] Log DeviceMesh.__init__ usage (#167375)
Adds (meta-internal-only) API usage logging for DeviceMesh creation.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167375
Approved by: https://github.com/fduwjj
ghstack dependencies: #167374
2025-11-11 19:15:47 +00:00
b65829b84f [DTensor] Log API usage metrics for DTensor and DeviceMesh (#167374)
Logging propagate_op_sharding_non_cached is a compromise between
 - logging in DTensor.__init__ to catch ALL DTensor usage,
 - sparing the overhead in a latency-sensitive region like DTensor.__init__,
 - and the fact that 'real' DTensor usage should incur at least one call to sharding propagation.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167374
Approved by: https://github.com/zpcore
2025-11-11 19:15:47 +00:00
b0e0ae97ba include thrust/distance.h explicitly in cuda sparse softmax (#167436)
`thrust::distance` is defined there
Pull Request resolved: https://github.com/pytorch/pytorch/pull/167436
Approved by: https://github.com/Skylion007
2025-11-11 19:10:55 +00:00
f44a1ddcb2 Revert "[ROCm][CI] Update docker-cache-mi300.yml to test MI300 CI docker caching (#167554)"
This reverts commit 184e2cbc89570e1bf466b15d70fc36ed71be0eb9.

Reverted https://github.com/pytorch/pytorch/pull/167554 on behalf of https://github.com/jithunnair-amd due to Need to fix lint ([comment](https://github.com/pytorch/pytorch/pull/167554#issuecomment-3518382341))
2025-11-11 19:09:45 +00:00
184e2cbc89 [ROCm][CI] Update docker-cache-mi300.yml to test MI300 CI docker caching (#167554)
Trigger this workflow on every completed run of `docker-builds.yml` and run on `ubuntu-latest` so it doesn't queue infinitely for `rocm-docker` label

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167554
Approved by: https://github.com/jeffdaily
2025-11-11 19:07:19 +00:00
416421c7c4 fix failure of exporting compiled model with nested dynamic shapes (#166358)
## Problem
When exporting a compiled model with nested inputs like the example below:
```python
import torch
from torch.export import export, Dim

def test_export_compiled_model_with_nested_dynamic_shapes():
   """Test exporting a compiled model with nested dict inputs and dynamic shapes."""
   print("Running test_export_compiled_model_with_nested_dynamic_shapes...")

   class M(torch.nn.Module):
       def forward(self, data_batch):
           return data_batch["a1"] + data_batch["a2"]

   m = M()
   compiled_m = torch.compile(m)
   example_args = ({
       "a1": torch.ones(3, 3),
       "a2": torch.ones(3, 3),
   },)
   dynamic_shapes = ({
       "a1": {0: Dim.DYNAMIC},
       "a2": {0: Dim.DYNAMIC},
   },)

   try:
       ep = export(compiled_m, example_args, dynamic_shapes=dynamic_shapes, strict=True)
       gm = ep.module()
       result_exported = gm(*example_args)
       result_compiled = compiled_m(*example_args)

       assert torch.allclose(result_exported, result_compiled), "Results don't match!"
       print("✓ test_export_compiled_model_with_nested_dynamic_shapes PASSED")
       return True
   except Exception as e:
       print(f"✗ test_export_compiled_model_with_nested_dynamic_shapes FAILED")
       print(f"Error: {e}")
       import traceback
       traceback.print_exc()
       return False

def test_export_compiled_model_with_kwargs_dynamic_shapes():
   """Test exporting a compiled model with kwargs and dynamic shapes."""
   print("\nRunning test_export_compiled_model_with_kwargs_dynamic_shapes...")

   class M(torch.nn.Module):
       def forward(self, a1, a2):
           return a1 + a2

   m = M()
   compiled_m = torch.compile(m)
   example_args = ()
   example_kwargs = {
       "a1": torch.ones(3, 3),
       "a2": torch.ones(3, 3),
   }
   dynamic_shapes = {
       "a1": {0: Dim.DYNAMIC},
       "a2": {0: Dim.DYNAMIC},
   }

   try:
       ep = export(compiled_m, example_args, kwargs=example_kwargs, dynamic_shapes=dynamic_shapes, strict=True)
       gm = ep.module()
       result_exported = gm(**example_kwargs)
       result_compiled = compiled_m(**example_kwargs)

       assert torch.allclose(result_exported, result_compiled), "Results don't match!"
       print("✓ test_export_compiled_model_with_kwargs_dynamic_shapes PASSED")
       return True
   except Exception as e:
       print(f"✗ test_export_compiled_model_with_kwargs_dynamic_shapes FAILED")
       print(f"Error: {e}")
       import traceback
       traceback.print_exc()
       return False

if __name__ == "__main__":
   print("Testing export of compiled models with dynamic shapes\n")
   print("=" * 70)

   results = []
   results.append(test_export_compiled_model_with_nested_dynamic_shapes())
   results.append(test_export_compiled_model_with_kwargs_dynamic_shapes())

   print("\n" + "=" * 70)
   print(f"\nResults: {sum(results)}/{len(results)} tests passed")

   if all(results):
       print("✓ All tests passed!")
   else:
       print("✗ Some tests failed")
       exit(1)
```

It will report:
```
======================================================================
Running test_export_compiled_model_with_nested_dynamic_shapes...
✗ test_export_compiled_model_with_nested_dynamic_shapes FAILED
Error: Detected mismatch between the structure of `inputs` and `dynamic_shapes`: `inputs[0]` is a <class 'tuple'>, but `dynamic_shapes[0]` is a <class 'dict'>
For more information about this error, see: https://pytorch.org/docs/main/generated/exportdb/index.html#dynamic-shapes-validation

The error above occurred when calling torch.export.export. If you would like to view some more information about this error, and get a list of all other errors that may occur in your export call, you can replace your `export()` call with `draft_export()`.
Traceback (most recent call last):
  File "/home/chzhu/infinitrain/build/infinitrain/environments/development-venv/lib/python3.10/site-packages/torch/export/dynamic_shapes.py", line 614, in _tree_map_with_path
    return tree_map_with_path(f, tree, *dynamic_shapes, is_leaf=is_leaf)
  File "/home/chzhu/infinitrain/build/infinitrain/environments/development-venv/lib/python3.10/site-packages/torch/utils/_pytree.py", line 2055, in tree_map_with_path
    all_keypath_leaves = keypath_leaves + [treespec.flatten_up_to(r) for r in rests]
  File "/home/chzhu/infinitrain/build/infinitrain/environments/development-venv/lib/python3.10/site-packages/torch/utils/_pytree.py", line 2055, in <listcomp>
    all_keypath_leaves = keypath_leaves + [treespec.flatten_up_to(r) for r in rests]
  File "/home/chzhu/infinitrain/build/infinitrain/environments/development-venv/lib/python3.10/site-packages/torch/utils/_pytree.py", line 1188, in flatten_up_to
    helper(self, tree, subtrees)
  File "/home/chzhu/infinitrain/build/infinitrain/environments/development-venv/lib/python3.10/site-packages/torch/utils/_pytree.py", line 1185, in helper
    helper(subspec, subtree, subtrees)
  File "/home/chzhu/infinitrain/build/infinitrain/environments/development-venv/lib/python3.10/site-packages/torch/utils/_pytree.py", line 1141, in helper
    raise ValueError(
ValueError: Node type mismatch; expected <class 'tuple'>, but got <class 'dict'>.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/chzhu/infinitrain/test_exprot.py", line 25, in test_export_compiled_model_with_nested_dynamic_shapes
    ep = export(compiled_m, example_args, dynamic_shapes=dynamic_shapes, strict=True)
  File "/home/chzhu/infinitrain/build/infinitrain/environments/development-venv/lib/python3.10/site-packages/torch/export/__init__.py", line 311, in export
    raise e
  File "/home/chzhu/infinitrain/build/infinitrain/environments/development-venv/lib/python3.10/site-packages/torch/export/__init__.py", line 277, in export
    return _export(
  File "/home/chzhu/infinitrain/build/infinitrain/environments/development-venv/lib/python3.10/site-packages/torch/export/_trace.py", line 1163, in wrapper
    raise e
  File "/home/chzhu/infinitrain/build/infinitrain/environments/development-venv/lib/python3.10/site-packages/torch/export/_trace.py", line 1129, in wrapper
    ep = fn(*args, **kwargs)
  File "/home/chzhu/infinitrain/build/infinitrain/environments/development-venv/lib/python3.10/site-packages/torch/export/exported_program.py", line 124, in wrapper
    return fn(*args, **kwargs)
  File "/home/chzhu/infinitrain/build/infinitrain/environments/development-venv/lib/python3.10/site-packages/torch/export/_trace.py", line 2255, in _export
    ep = _export_for_training(
  File "/home/chzhu/infinitrain/build/infinitrain/environments/development-venv/lib/python3.10/site-packages/torch/export/_trace.py", line 1163, in wrapper
    raise e
  File "/home/chzhu/infinitrain/build/infinitrain/environments/development-venv/lib/python3.10/site-packages/torch/export/_trace.py", line 1129, in wrapper
    ep = fn(*args, **kwargs)
  File "/home/chzhu/infinitrain/build/infinitrain/environments/development-venv/lib/python3.10/site-packages/torch/export/exported_program.py", line 124, in wrapper
    return fn(*args, **kwargs)
  File "/home/chzhu/infinitrain/build/infinitrain/environments/development-venv/lib/python3.10/site-packages/torch/export/_trace.py", line 2071, in _export_for_training
    export_artifact = export_func(
  File "/home/chzhu/infinitrain/build/infinitrain/environments/development-venv/lib/python3.10/site-packages/torch/export/_trace.py", line 1415, in _strict_export
    gm_torch_level = _export_to_torch_ir(
  File "/home/chzhu/infinitrain/build/infinitrain/environments/development-venv/lib/python3.10/site-packages/torch/export/_trace.py", line 785, in _export_to_torch_ir
    _check_dynamic_shapes(combined_args, dynamic_shapes)
  File "/home/chzhu/infinitrain/build/infinitrain/environments/development-venv/lib/python3.10/site-packages/torch/export/dynamic_shapes.py", line 1031, in _check_dynamic_shapes
    _tree_map_with_path(check_shape, combined_args, dynamic_shapes, tree_name="inputs")
  File "/home/chzhu/infinitrain/build/infinitrain/environments/development-venv/lib/python3.10/site-packages/torch/export/dynamic_shapes.py", line 686, in _tree_map_with_path
    _compare(tree_spec, other_tree_spec, [])
  File "/home/chzhu/infinitrain/build/infinitrain/environments/development-venv/lib/python3.10/site-packages/torch/export/dynamic_shapes.py", line 677, in _compare
    _compare(
  File "/home/chzhu/infinitrain/build/infinitrain/environments/development-venv/lib/python3.10/site-packages/torch/export/dynamic_shapes.py", line 652, in _compare
    raise_mismatch_error(
  File "/home/chzhu/infinitrain/build/infinitrain/environments/development-venv/lib/python3.10/site-packages/torch/export/dynamic_shapes.py", line 634, in raise_mismatch_error
    raise UserError(
torch._dynamo.exc.UserError: Detected mismatch between the structure of `inputs` and `dynamic_shapes`: `inputs[0]` is a <class 'tuple'>, but `dynamic_shapes[0]` is a <class 'dict'>
For more information about this error, see: https://pytorch.org/docs/main/generated/exportdb/index.html#dynamic-shapes-validation

The error above occurred when calling torch.export.export. If you would like to view some more information about this error, and get a list of all other errors that may occur in your export call, you can replace your `export()` call with `draft_export()`.

Running test_export_compiled_model_with_kwargs_dynamic_shapes...
✗ test_export_compiled_model_with_kwargs_dynamic_shapes FAILED
Error: When `dynamic_shapes` is specified as a dict, its top-level keys must be the arg names ['kwargs'] of `inputs`, but here they are ['a1', 'a2']. Since here `inputs` is a list/tuple enclosing a single dict, maybe you just forgot to enclose `dynamic_shapes` in a list/tuple?
For more information about this error, see: https://pytorch.org/docs/main/generated/exportdb/index.html#dynamic-shapes-validation

The error above occurred when calling torch.export.export. If you would like to view some more information about this error, and get a list of all other errors that may occur in your export call, you can replace your `export()` call with `draft_export()`.
Traceback (most recent call last):
  File "/home/chzhu/infinitrain/test_exprot.py", line 62, in test_export_compiled_model_with_kwargs_dynamic_shapes
    ep = export(compiled_m, example_args, kwargs=example_kwargs, dynamic_shapes=dynamic_shapes, strict=True)
  File "/home/chzhu/infinitrain/build/infinitrain/environments/development-venv/lib/python3.10/site-packages/torch/export/__init__.py", line 311, in export
    raise e
  File "/home/chzhu/infinitrain/build/infinitrain/environments/development-venv/lib/python3.10/site-packages/torch/export/__init__.py", line 277, in export
    return _export(
  File "/home/chzhu/infinitrain/build/infinitrain/environments/development-venv/lib/python3.10/site-packages/torch/export/_trace.py", line 1163, in wrapper
    raise e
  File "/home/chzhu/infinitrain/build/infinitrain/environments/development-venv/lib/python3.10/site-packages/torch/export/_trace.py", line 1129, in wrapper
    ep = fn(*args, **kwargs)
  File "/home/chzhu/infinitrain/build/infinitrain/environments/development-venv/lib/python3.10/site-packages/torch/export/exported_program.py", line 124, in wrapper
    return fn(*args, **kwargs)
  File "/home/chzhu/infinitrain/build/infinitrain/environments/development-venv/lib/python3.10/site-packages/torch/export/_trace.py", line 2255, in _export
    ep = _export_for_training(
  File "/home/chzhu/infinitrain/build/infinitrain/environments/development-venv/lib/python3.10/site-packages/torch/export/_trace.py", line 1163, in wrapper
    raise e
  File "/home/chzhu/infinitrain/build/infinitrain/environments/development-venv/lib/python3.10/site-packages/torch/export/_trace.py", line 1129, in wrapper
    ep = fn(*args, **kwargs)
  File "/home/chzhu/infinitrain/build/infinitrain/environments/development-venv/lib/python3.10/site-packages/torch/export/exported_program.py", line 124, in wrapper
    return fn(*args, **kwargs)
  File "/home/chzhu/infinitrain/build/infinitrain/environments/development-venv/lib/python3.10/site-packages/torch/export/_trace.py", line 2071, in _export_for_training
    export_artifact = export_func(
  File "/home/chzhu/infinitrain/build/infinitrain/environments/development-venv/lib/python3.10/site-packages/torch/export/_trace.py", line 1415, in _strict_export
    gm_torch_level = _export_to_torch_ir(
  File "/home/chzhu/infinitrain/build/infinitrain/environments/development-venv/lib/python3.10/site-packages/torch/export/_trace.py", line 785, in _export_to_torch_ir
    _check_dynamic_shapes(combined_args, dynamic_shapes)
  File "/home/chzhu/infinitrain/build/infinitrain/environments/development-venv/lib/python3.10/site-packages/torch/export/dynamic_shapes.py", line 1007, in _check_dynamic_shapes
    raise UserError(
torch._dynamo.exc.UserError: When `dynamic_shapes` is specified as a dict, its top-level keys must be the arg names ['kwargs'] of `inputs`, but here they are ['a1', 'a2']. Since here `inputs` is a list/tuple enclosing a single dict, maybe you just forgot to enclose `dynamic_shapes` in a list/tuple?
For more information about this error, see: https://pytorch.org/docs/main/generated/exportdb/index.html#dynamic-shapes-validation

The error above occurred when calling torch.export.export. If you would like to view some more information about this error, and get a list of all other errors that may occur in your export call, you can replace your `export()` call with `draft_export()`.

======================================================================
```
## Torch Version
(reproducible on the nightly version)

## Other Behavior
The model exports normally when tested without compiling the model:
```python
import torch
from torch.export import export, Dim

def test_export_compiled_model_with_nested_dynamic_shapes():
   """Test exporting a compiled model with nested dict inputs and dynamic shapes."""
   print("Running test_export_compiled_model_with_nested_dynamic_shapes...")

   class M(torch.nn.Module):
       def forward(self, data_batch):
           return data_batch["a1"] + data_batch["a2"]

   m = M()
   example_args = ({
       "a1": torch.ones(3, 3),
       "a2": torch.ones(3, 3),
   },)
   dynamic_shapes = ({
       "a1": {0: Dim.DYNAMIC},
       "a2": {0: Dim.DYNAMIC},
   },)

   try:
       ep = export(m, example_args, dynamic_shapes=dynamic_shapes, strict=True)
       gm = ep.module()
       result_exported = gm(*example_args)
       result_compiled = m(*example_args)

       assert torch.allclose(result_exported, result_compiled), "Results don't match!"
       print("✓ test_export_compiled_model_with_nested_dynamic_shapes PASSED")
       return True
   except Exception as e:
       print(f"✗ test_export_compiled_model_with_nested_dynamic_shapes FAILED")
       print(f"Error: {e}")
       import traceback
       traceback.print_exc()
       return False

def test_export_compiled_model_with_kwargs_dynamic_shapes():
   """Test exporting a compiled model with kwargs and dynamic shapes."""
   print("\nRunning test_export_compiled_model_with_kwargs_dynamic_shapes...")

   class M(torch.nn.Module):
       def forward(self, a1, a2):
           return a1 + a2

   m = M()
   example_args = ()
   example_kwargs = {
       "a1": torch.ones(3, 3),
       "a2": torch.ones(3, 3),
   }
   dynamic_shapes = {
       "a1": {0: Dim.DYNAMIC},
       "a2": {0: Dim.DYNAMIC},
   }

   try:
       ep = export(m, example_args, kwargs=example_kwargs, dynamic_shapes=dynamic_shapes, strict=True)
       gm = ep.module()
       result_exported = gm(**example_kwargs)
       result_compiled = m(**example_kwargs)

       assert torch.allclose(result_exported, result_compiled), "Results don't match!"
       print("✓ test_export_compiled_model_with_kwargs_dynamic_shapes PASSED")
       return True
   except Exception as e:
       print(f"✗ test_export_compiled_model_with_kwargs_dynamic_shapes FAILED")
       print(f"Error: {e}")
       import traceback
       traceback.print_exc()
       return False

if __name__ == "__main__":
   print("Testing export of compiled models with dynamic shapes\n")
   print("=" * 70)

   results = []
   results.append(test_export_compiled_model_with_nested_dynamic_shapes())
   results.append(test_export_compiled_model_with_kwargs_dynamic_shapes())

   print("\n" + "=" * 70)
   print(f"\nResults: {sum(results)}/{len(results)} tests passed")

   if all(results):
       print("✓ All tests passed!")
   else:
       print("✗ Some tests failed")
       exit(1)

```
## Root Cause

This is because of a side effect of torch.compile(model). When the model is being compiled, the input signature will become (*args, **kwargs) automatically. In the above example, the `data_batch` will be added into `args` in combined_args [here](dc011d3203/torch/export/dynamic_shapes.py (L720)), and it will look like
```
{'args': ({'a1': tensor([[1., 1., 1.]... 1., 1.]]), 'a2': tensor([[1., 1., 1.]... 1., 1.]])},)}
```
Without compiling, the combined args will look like:
```
{'data_batch': {'a1': tensor([[1., 1., 1.],
        [1., 1., 1.],
        [1., 1., 1.]]), 'a2': tensor([[1., 1., 1.],
        [1., 1., 1.],
        [1., 1., 1.]])}}

```
This causes a mismatch when we use tree_map to match the dynamic shapes with the input args.

The error is also reproducible when we set up kwargs as example args (see the 2nd test above).
## Fix
Proposed fix: In [_combine_args](dc011d3203/torch/export/dynamic_shapes.py (L720)) we explicitly flatten out the kwargs and args into combined args.
## Side Effects
There are 2 existing tests that assume this behavior and
1. add `args` explicitly to dynamic shapes
2. wrap args into a nested format in dynamic_shapes

I have modified those tests so that args and dynamic_shapes are in a consistent format.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/166358
Approved by: https://github.com/angelayi
2025-11-11 19:04:58 +00:00
bd99ae3315 [Docs] Add warning that torch.export.load uses pickle (#167557)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/167557
Approved by: https://github.com/zhxchen17, https://github.com/angelayi
2025-11-11 18:47:14 +00:00
ce8672c24f Fix use of TORCH_CHECK in torch/csrc/stable (#167495)
Tested by above PR

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167495
Approved by: https://github.com/janeyx99
ghstack dependencies: #166579, #166694, #166695, #167362
2025-11-11 17:58:30 +00:00
402c465030 [ARM] Improve LLM performance & mem usage using int4-bf16 KleidiAI kernels (#158250)
Co-authored-by: Nikhil Gupta [nikhil.gupta2@arm.com](mailto:nikhil.gupta2@arm.com)

This PR enables the use of KleidiAI INT4 kernels that directly produce BF16 outputs within PyTorch to boost LLM prefill & decode performance

**This change improves decode throughput by ~15% & reduces the memory required for model inference by 50%**

### Benchmark Setup
```
Model: meta-llama/Llama-3.1-8B
Test Platform: Neoverse V2
```
### Detailed Results

| Metric                           | With `--compile`         | Without `--compile`      |
|----------------------------------|---------------------------|---------------------------|
| Quantization Scheme              | INT4 symmetric channelwise | INT4 symmetric channelwise |
| Input Precision                  | BF16                      | BF16                      |
| Number of Layers Quantized       | 32                        | 32                        |
| Average Compression Ratio        | 87.49%                    | 87.49%                    |
| Total Quantization Time (s)      | 9.62                      | 10.32                     |
| Compile Time (First) (s)         | 134.48                    | 1.69                      |
| Compile Time (Second) (s)        | 80.44                     | 1.60                      |
| Compile Time (Subsequent) (s)    | 0.19                      | 0.22                      |
| Prefill Tokens                   | 54                        | 54                        |
| Decoded Tokens                   | 33                        | 33                        |
| Prefill Time (s)                 | 0.19                      | 0.22                      |
| Decode Time (s)                  | 0.76                      | 1.38                      |
| E2E Generation Time (s)          | 0.95                      | 1.60                      |
| Prefill Throughput (tokens/s)    | 288.13                    | 249.91                    |
| Decode Throughput (tokens/s)     | 43.42                     | 23.83                     |
Pull Request resolved: https://github.com/pytorch/pytorch/pull/158250
Approved by: https://github.com/malfet, https://github.com/aditew01, https://github.com/fadara01

Co-authored-by: Nikhil Gupta <nikhil.gupta2@arm.com>
Co-authored-by: Nikita Shulga <2453524+malfet@users.noreply.github.com>
2025-11-11 17:50:22 +00:00
f4f185fa1a Update base for fix obvious brokenness/segfault on "WIP: DTensor fast path: port return_and_correct_aliasing and inplace/out checks"
This seems to generate a several-microsecond performance improvement in the detach benchmark I've been using.

cc H-Huang awgu wanchaol fegin fduwjj wz337 wconstab d4l3k pragupta msaroufim dcci

[ghstack-poisoned]
2025-11-11 09:38:09 -08:00
573a79fffa [OpenReg] Initialize device stream states for all devices in initOpenRegStreamsOnce (#167528)
Fixes #167527

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167528
Approved by: https://github.com/fffrog
2025-11-11 16:53:22 +00:00
4945180468 Add empty tensor check for _pad_packed_sequence (#167521)
This prevents a null pointer dereference

Fixes https://github.com/pytorch/pytorch/issues/149622
Pull Request resolved: https://github.com/pytorch/pytorch/pull/167521
Approved by: https://github.com/albanD
2025-11-11 16:46:13 +00:00
1df723e6f5 [inductor] Fix constant creation (#167398)
We ran into this issue when debugging inductor-lite. Calling `torch.tensor` within a fake mode (which is the case inside of inductor) will create a FakeTensor, which causes this FakeTensor to be used as a constant within inductor.
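
For illustration, a minimal repro of the quoted behavior (plain PyTorch API, not inductor internals):

```python
import torch
from torch._subclasses.fake_tensor import FakeTensorMode

# Inside a fake mode, torch.tensor produces a FakeTensor with no real storage,
# so it must not be captured as a real constant by the compiler.
with FakeTensorMode():
    t = torch.tensor([1.0, 2.0])

print(type(t))  # <class 'torch._subclasses.fake_tensor.FakeTensor'>
```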

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167398
Approved by: https://github.com/eellison, https://github.com/BoyuanFeng
2025-11-11 16:30:46 +00:00
f9b81e23e4 [ROCm] Disable group gemm CK path when composable kernel (CK) is not enabled (#167403)
For ROCm builds without CK support, ensure use_fast_path is false so that the CK path is not triggered, since CK is currently not available in this configuration.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167403
Approved by: https://github.com/Skylion007, https://github.com/ScottTodd, https://github.com/jeffdaily
2025-11-11 16:15:51 +00:00
ffe6cc39c7 [inductor] Optimize cold compile time when cudagraphs-partition is enabled (#167132)
Summary: When cudagraphs-partition is enabled, we have seen an increase in cold compile time in the vllm benchmark (see https://github.com/vllm-project/vllm/issues/27080). After some profiling, we found Triton compilation time increased the most. Further investigation revealed it was caused by duplicated Triton kernels not being shared among different partitions. This PR fixes the issue by reusing the Triton kernel source code cache at the top-level PythonWrapperCodegen.

In theory we could further reduce the compilation time by skipping compilation of duplicated partitions entirely. That can come as a future improvement.

Some vllm benchmarking data,

```
VLLM_USE_STANDALONE_COMPILE=0 VLLM_DISABLE_COMPILE_CACHE=1 vllm bench latency -O.cudagraph_mode=PIECEWISE -O.use_inductor_graph_partition=True --model meta-llama/Meta-Llama-3.1-8
```
Before:
```
torch.compile takes 69.18 s in total
```
After:
```
torch.compile takes 26.81 s in total
```

As a reference, this is the compile time when inductor graph partition is turned off. It looks like we still have some gap to close.
```
VLLM_USE_STANDALONE_COMPILE=0 VLLM_DISABLE_COMPILE_CACHE=1 vllm bench latency -O.cudagraph_mode=PIECEWISE -O.use_inductor_graph_partition=False --model meta-llama/Meta-Llama-3.1-8B

torch.compile takes 19.41 s in total
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167132
Approved by: https://github.com/eellison
ghstack dependencies: #167131
2025-11-11 15:54:31 +00:00
db1f3f6901 [inductor] Only generate compile-time auto-tuning block in the main graph (#167131)
Summary: When cudagraphs partition and autotune_at_compile_time are enabled, currently each subgraph generates its own auto-tuning code block and runs them one by one. This PR improves this by generating a single auto-tuning code block at the main graph level and executing it once to auto-tune all the kernels.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167131
Approved by: https://github.com/eellison
2025-11-11 15:54:31 +00:00
43041f0a43 Remove superfluous/misplaced TestFailure specs (#165989)
The tests are in the class `TestInductorDynamic`, which isn't affected by the `test_failures` dict; that dict is only used as an argument to `copy_tests` for the `CommonTemplate` defined in another file.

So those have no effect.

Idea: Enhance `copy_tests` to detect unused keys

Pull Request resolved: https://github.com/pytorch/pytorch/pull/165989
Approved by: https://github.com/benjaminglass1, https://github.com/ezyang
2025-11-11 15:36:43 +00:00
dc00842b81 [ROCm][CI] trigger magma build with gfx950 for ROCm7.1 (#167390)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/167390
Approved by: https://github.com/jeffdaily
2025-11-11 15:17:37 +00:00
f1a129a6d0 Clarify that crashes/OOB accesses are not security threats (#167519)
Added a note on crashes and out-of-bounds accesses in PyTorch.

Addresses https://github.com/pytorch/pytorch/issues/166881#issuecomment-3513245388

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167519
Approved by: https://github.com/albanD
2025-11-11 15:14:51 +00:00
fad48ffa62 [ROCm][CI] Match workflow names with workflow file names (#167483)
Fixes issue with uploading artifacts, which was inadvertently disabled for some renamed workflows via https://github.com/pytorch/pytorch/pull/167225

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167483
Approved by: https://github.com/jeffdaily
2025-11-11 14:45:44 +00:00
3e7a66fae1 [BugFix][Refactor] fix several instances which use f = open(...) without a corresponding f.close() (#167423)
This pattern can lead to file descriptor leaks, which can cause resource exhaustion or other unpredictable issues.
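
For reference, the safe pattern being moved to is the standard context-manager form (a generic sketch, not the exact call sites touched by this PR):

```python
# Leak-prone: if read() raises, the descriptor is never closed.
#   f = open("some_file.txt")
#   data = f.read()
#   f.close()

# Safe: the context manager closes the file even when an exception is raised.
with open("some_file.txt") as f:
    data = f.read()
```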

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167423
Approved by: https://github.com/cyyever, https://github.com/Skylion007
2025-11-11 11:27:59 +00:00
5f0a563dc8 [pallas backend] implement complex indexing (#167493)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/167493
Approved by: https://github.com/jansel
ghstack dependencies: #167426
2025-11-11 10:32:37 +00:00
678915d5f1 Update Arm copyright dates in LICENSE file (#167529)
Arm has made contributions to PyTorch this year; however, the top-level `LICENSE` file has not yet been updated to reflect this, which this PR addresses.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/167529
Approved by: https://github.com/mlazos
2025-11-11 10:25:08 +00:00
daed97afff [Inductor] fix CppTile2DKernel for fp8 datatype (#167451)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/167451
Approved by: https://github.com/Xia-Weiwen, https://github.com/jansel
2025-11-11 09:25:14 +00:00
53947adb1f [Inductor] optimize the heuristics of sum reduction (#163144)
Fix https://github.com/pytorch/pytorch/issues/151400.
**Summary:**
Optimize the heuristics of sum reduction: reduce the chunk size of cascade sum to improve numerical stability.
I ran the Inductor benchmark with this PR on CPU, and no performance regression was seen.
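
As a rough illustration of why a smaller chunk helps (plain NumPy, independent of the generated kernels below; the 4096 chunk mirrors the CascadeSumHelper in the generated code but is otherwise arbitrary):

```python
import numpy as np

def naive_sum(x):
    # Single running fp32 accumulator: error grows with the number of elements.
    acc = np.float32(0.0)
    for v in x:
        acc += v
    return acc

def cascade_sum(x, chunk=4096):
    # Sum fixed-size chunks first, then sum the partial results.
    partials = [naive_sum(x[i:i + chunk]) for i in range(0, len(x), chunk)]
    return naive_sum(np.asarray(partials, dtype=np.float32))

x = np.random.rand(98304).astype(np.float32)
exact = x.astype(np.float64).sum()
print(abs(float(naive_sum(x)) - exact))    # larger error
print(abs(float(cascade_sum(x)) - exact))  # noticeably smaller error
```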

**Example:**
Take https://github.com/pytorch/pytorch/issues/151400 as an example:
```
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch._inductor import config

config.fallback_random = True
torch.set_grad_enabled(False)
torch.manual_seed(0)

class Model(torch.nn.Module):

    def __init__(self):
        super().__init__()

    def forward(self, x):
        vec = x.flatten()
        vec_one = torch.ones_like(vec)
        x = torch.outer(vec, vec_one)
        return torch.mean(x, dim=1)

model = Model()

x = torch.randn(3, 8, 64, 64)  # error will be amplified as the input tensor gets larger

inputs = [x]

def run_test(model, inputs, backend):
    if backend != "eager":
        model = torch.compile(model, backend=backend)
    torch.manual_seed(0)
    output = model(*inputs)
    return output

output = run_test(model, inputs, 'eager')
c_output = run_test(model, inputs, 'inductor')
fp64 = run_test(model.to(dtype=torch.float64), [inputs[0].to(dtype=torch.float64)], 'eager')

print(torch.allclose(output, c_output, rtol=1e-3, atol=1e-3))
print(torch.max(torch.abs(c_output - output)))
print(torch._dynamo.utils.same(output, c_output, fp64))

```

**logs:**
- Before
```
False
tensor(0.0052)
False
```

- After
```
True
tensor(0.0004)
True
```

**Generated code:**
- Before
```
cpp_fused_mean_mul_ones_like_view_0 = async_compile.cpp_pybinding(['float*', 'const float*'], '''
#include <torch/csrc/inductor/cpp_prefix.h>
extern "C"  void  kernel(float* in_out_ptr0,
                       const float* in_ptr0)
{
    auto out_ptr0 = in_out_ptr0;
    #pragma omp parallel num_threads(240)
    {
        int tid = omp_get_thread_num();
        {
            #pragma omp for
            for(int64_t x0=static_cast<int64_t>(0L); x0<static_cast<int64_t>(98304L); x0+=static_cast<int64_t>(16L))
            {
                {
                    float tmp_acc0 = 0;
                    at::vec::Vectorized<float> tmp_acc0_vec = at::vec::Vectorized<float>(0);
                    for(int64_t x1=static_cast<int64_t>(0L); x1<static_cast<int64_t>(98304L); x1+=static_cast<int64_t>(1L))
                    {
                        {
                            if(C10_LIKELY(x0 >= static_cast<int64_t>(0) && x0 < static_cast<int64_t>(98304L)))
                            {
                                auto tmp0 = at::vec::Vectorized<float>::loadu(in_ptr0 + static_cast<int64_t>(x0), static_cast<int64_t>(16));
                                auto tmp1 = static_cast<float>(1.0);
                                auto tmp2 = at::vec::Vectorized<float>(tmp1);
                                auto tmp3 = tmp0 * tmp2;
                                tmp_acc0_vec = tmp_acc0_vec + tmp3;
                            }
                        }
                    }
                    if(C10_LIKELY(x0 >= static_cast<int64_t>(0) && x0 < static_cast<int64_t>(98304L)))
                    {
                        tmp_acc0_vec.store(out_ptr0 + static_cast<int64_t>(x0));
                    }
                }
                {
                    if(C10_LIKELY(x0 >= static_cast<int64_t>(0) && x0 < static_cast<int64_t>(98304L)))
                    {
                        auto tmp0 = at::vec::Vectorized<float>::loadu(out_ptr0 + static_cast<int64_t>(x0), static_cast<int64_t>(16));
                        auto tmp1 = static_cast<float>(98304.0);
                        auto tmp2 = at::vec::Vectorized<float>(tmp1);
                        auto tmp3 = tmp0 / tmp2;
                        tmp3.store(in_out_ptr0 + static_cast<int64_t>(x0));
                    }
                }
            }
        }
    }
}
''')

async_compile.wait(globals())
del async_compile

class Runner:
    def __init__(self, partitions):
        self.partitions = partitions

    def recursively_apply_fns(self, fns):
        new_callables = []
        for fn, c in zip(fns, self.partitions):
            new_callables.append(fn(c))
        self.partitions = new_callables

    def call(self, args):
        arg0_1, = args
        args.clear()
        assert_size_stride(arg0_1, (3, 8, 64, 64), (32768, 4096, 64, 1))
        buf0 = empty_strided_cpu((98304, ), (1, ), torch.float32)
        buf1 = buf0; del buf0  # reuse
        # [Provenance debug handles] cpp_fused_mean_mul_ones_like_view_0:1
        cpp_fused_mean_mul_ones_like_view_0(buf1, arg0_1)
        del arg0_1
        return (buf1, )
```

- After
```
cpp_fused_mean_mul_ones_like_view_0 = async_compile.cpp_pybinding(['float*', 'const float*'], '''
#include <torch/csrc/inductor/cpp_prefix.h>
extern "C"  void  kernel(float* in_out_ptr0,
                       const float* in_ptr0)
{
    auto out_ptr0 = in_out_ptr0;
    #pragma omp parallel num_threads(240)
    {
        int tid = omp_get_thread_num();
        {
            #pragma omp for
            for(int64_t x0=static_cast<int64_t>(0L); x0<static_cast<int64_t>(98304L); x0+=static_cast<int64_t>(16L))
            {
                {
                    float tmp_acc0 = 0;
                    at::vec::Vectorized<float> tmp_acc0_vec = at::vec::Vectorized<float>(0);
                    at::vec::Vectorized<float> masked_tmp_acc0_vec = at::vec::Vectorized<float>(0);
                    CascadeSumHelper<float, 4096> scalar_cascade_helper0(static_cast<int64_t>(98304L));
                    CascadeSumHelper<at::vec::Vectorized<float>, 4096> cascade_helper0(static_cast<int64_t>(98304L));
                    CascadeSumHelper<at::vec::Vectorized<float>, 4096> masked_cascade_helper0(static_cast<int64_t>(0L));
                    for(int64_t x1=static_cast<int64_t>(0L); x1<static_cast<int64_t>(98304L); x1+=static_cast<int64_t>(1L))
                    {
                        {
                            if(C10_LIKELY(x0 >= static_cast<int64_t>(0) && x0 < static_cast<int64_t>(98304L)))
                            {
                                auto tmp0 = at::vec::Vectorized<float>::loadu(in_ptr0 + static_cast<int64_t>(x0), static_cast<int64_t>(16));
                                auto tmp1 = static_cast<float>(1.0);
                                auto tmp2 = at::vec::Vectorized<float>(tmp1);
                                auto tmp3 = tmp0 * tmp2;
                                tmp_acc0_vec = cascade_sum_combine(tmp3, &cascade_helper0);
                            }
                        }
                    }
                    tmp_acc0 = cascade_sum_final(&scalar_cascade_helper0);
                    tmp_acc0_vec = cascade_sum_final(&cascade_helper0);
                    masked_tmp_acc0_vec = cascade_sum_final(&masked_cascade_helper0);
                    if(C10_LIKELY(x0 >= static_cast<int64_t>(0) && x0 < static_cast<int64_t>(98304L)))
                    {
                        tmp_acc0_vec = tmp_acc0_vec + masked_tmp_acc0_vec;
                        tmp_acc0_vec.store(out_ptr0 + static_cast<int64_t>(x0));
                    }
                }
                {
                    if(C10_LIKELY(x0 >= static_cast<int64_t>(0) && x0 < static_cast<int64_t>(98304L)))
                    {
                        auto tmp0 = at::vec::Vectorized<float>::loadu(out_ptr0 + static_cast<int64_t>(x0), static_cast<int64_t>(16));
                        auto tmp1 = static_cast<float>(98304.0);
                        auto tmp2 = at::vec::Vectorized<float>(tmp1);
                        auto tmp3 = tmp0 / tmp2;
                        tmp3.store(in_out_ptr0 + static_cast<int64_t>(x0));
                    }
                }
            }
        }
    }
}
''')

async_compile.wait(globals())
del async_compile

class Runner:
    def __init__(self, partitions):
        self.partitions = partitions

    def recursively_apply_fns(self, fns):
        new_callables = []
        for fn, c in zip(fns, self.partitions):
            new_callables.append(fn(c))
        self.partitions = new_callables

    def call(self, args):
        arg0_1, = args
        args.clear()
        assert_size_stride(arg0_1, (3, 8, 64, 64), (32768, 4096, 64, 1))
        buf0 = empty_strided_cpu((98304, ), (1, ), torch.float32)
        buf1 = buf0; del buf0  # reuse
        # [Provenance debug handles] cpp_fused_mean_mul_ones_like_view_0:1
        cpp_fused_mean_mul_ones_like_view_0(buf1, arg0_1)
        del arg0_1
        return (buf1, )
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/163144
Approved by: https://github.com/CaoE, https://github.com/mingfeima, https://github.com/jansel
2025-11-11 09:25:00 +00:00
c297b02f12 [DTensor] statically_known_true for slice strategy (#166990)
Avoids data-dependent errors for out-of-bounds & redundant slice checks.

The sharding logic that immediately depends on this only checks for redundant slices, and is saying: "it's safe to reuse the input placements if a) the slicing dimension isn't sharded, or b) the slice is redundant, so just pretend this op didn't happen".

This has a slight effect on output placements when a slice is performed on a sharded dim and dynamic shapes are involved (size/start/end/step). Now, if the slice isn't obviously redundant, we won't immediately consider the input placements valid (even if they could be for very particular runtime shapes), and will select strategies valid for the general case; in this case presumably unsharding the slicing dim.

For backed symbols, we could choose to recompile when the redundant case is hit, by switching to `guard_or_false`, but it's not obvious how desirable this is.
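
A minimal sketch of the intent, assuming the `statically_known_true` helper from `torch.fx.experimental.symbolic_shapes` (not the actual slice-strategy code):

```python
from torch.fx.experimental.symbolic_shapes import statically_known_true

def slice_is_obviously_redundant(dim_size, start, end, step) -> bool:
    # Returns True only when the slice provably covers the whole dim. For
    # data-dependent sizes it returns False instead of raising, so the
    # strategy falls back to the general (unsharded-dim) case.
    return (
        statically_known_true(start == 0)
        and statically_known_true(end >= dim_size)
        and statically_known_true(step == 1)
    )
```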

Pull Request resolved: https://github.com/pytorch/pytorch/pull/166990
Approved by: https://github.com/laithsakka
2025-11-11 08:04:09 +00:00
bd24774f50 [XPU][Test] Enable XPU tests in inductor/test_analysis.py (#166840)
This PR enables XPU devices in test_analysis.py.

For performance reasons, it skips some slow tests; the full scope can be enabled by using:

```
export PYTORCH_TEST_WITH_SLOW=1
```

**PR Stack:**

- https://github.com/pytorch/pytorch/pull/166840 : This PR enables the tests and ignores the ones that failed
- https://github.com/pytorch/pytorch/pull/166839 : This fixes the bug and enables the full tests for XPU

**Some skipped test time:**

```
test_augment_trace_against_flop_counter_maxat0_xpu_float16 [49.0863s]
test_augment_trace_against_flop_counter_maxat0_xpu_float32 [18.2268s]
test_augment_trace_against_flop_counter_maxat1_xpu_float16 [85.6549s]
test_augment_trace_against_flop_counter_maxat1_xpu_float32 [329.0832s]
test_augment_trace_against_flop_counter_maxat2_xpu_float16 [24.4825s]
test_augment_trace_against_flop_counter_maxat2_xpu_float32 [19.0688s]
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/166840
Approved by: https://github.com/guangyey, https://github.com/jansel
2025-11-11 07:49:07 +00:00
525eb9fab9 Fix command injection vulnerability in PCH compilation (#167502)
Fixed a command injection vulnerability in PreCompiled Header (PCH) compilation where extra_cflags were passed to subprocess with shell=True, allowing arbitrary command execution through malicious compiler flags.

Changed subprocess.check_output(pch_cmd, shell=True) to use shlex.split() to safely parse the command without shell interpretation. This prevents shell metacharacters (;, |, &, etc.) in extra_cflags from being executed as shell commands.
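
A small sketch of the before/after parsing (the payload and flags here are made up for illustration):

```python
import shlex

extra_cflags = "-O2; touch /tmp/pwned"
pch_cmd = f"g++ {extra_cflags} -x c++-header prefix.h"

# Unsafe: shell=True would run "touch /tmp/pwned" as a separate shell command.
#   subprocess.check_output(pch_cmd, shell=True)

# Safe: shlex.split() produces argv without any shell interpretation, so the
# payload is just an (invalid) compiler argument rather than an executed command.
print(shlex.split(pch_cmd))
# ['g++', '-O2;', 'touch', '/tmp/pwned', '-x', 'c++-header', 'prefix.h']
```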

Added test case test_pch_command_injection that verifies:
1. PCH compilation attempts with malicious payloads in extra_cflags
2. Shell commands embedded in flags are not executed
3. Exploit file is not created, proving no shell execution occurred

Note: On RHEL/Fedora and other systems with versioned GCC compilers, the test depends on https://github.com/pytorch/pytorch/pull/167501 being merged first, otherwise the test will be skipped due to GCC detection issues.

Fixes #167480

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167502
Approved by: https://github.com/malfet
2025-11-11 07:27:41 +00:00
7886070fc5 Use stable topological sort in fuse_by_partitions (#167397)
legalize_graph() performs a topo sort that shuffles the nodes in a global way, making the result unpredictable.
We should avoid this in graph passes in general.
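
To illustrate what "stable" buys here, a generic sketch (plain Python, not torch.fx): among all ready nodes, always emit the one that appeared earliest in the original order, so unrelated nodes keep their relative positions.

```python
import heapq
from collections import defaultdict

def stable_topo_sort(nodes, edges):
    """nodes: list in original order; edges: (src, dst) dependency pairs."""
    index = {n: i for i, n in enumerate(nodes)}
    indegree = {n: 0 for n in nodes}
    successors = defaultdict(list)
    for src, dst in edges:
        successors[src].append(dst)
        indegree[dst] += 1
    ready = [index[n] for n in nodes if indegree[n] == 0]
    heapq.heapify(ready)
    order = []
    while ready:
        n = nodes[heapq.heappop(ready)]
        order.append(n)
        for m in successors[n]:
            indegree[m] -= 1
            if indegree[m] == 0:
                heapq.heappush(ready, index[m])
    return order

print(stable_topo_sort(["a", "b", "c", "d"], [("c", "b")]))  # ['a', 'c', 'b', 'd']
```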

This problem was discovered when testing regional_inductor, where a single fused region triggered the global reordering.

Before
https://www.internalfb.com/intern/diffing/?before_paste_number=2029217728&after_paste_number=2029218006&regex_remove_pattern=&enable_regex_remove=0&strip_empty_lines=0&line_wrap=0&selected_tab=plain_diff

After
https://www.internalfb.com/intern/diffing/?paste_number=2029162294&regex_remove_pattern=&enable_regex_remove=0&strip_empty_lines=0&line_wrap=0&selected_tab=plain_diff

Left is gm before regional_inductor, right is after.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167397
Approved by: https://github.com/ezyang
2025-11-11 07:14:02 +00:00
285f29395a Update base for fix a py::handle that should've been py::object on "WIP: DTensor fast path: port return_and_correct_aliasing and inplace/out checks"
This seems to generate a several-microsecond performance improvement in the detach benchmark I've been using.

cc H-Huang awgu wanchaol fegin fduwjj wz337 wconstab d4l3k pragupta msaroufim dcci

[ghstack-poisoned]
2025-11-10 22:32:46 -08:00
87d17e9dee [pallas backend] Implementing Strided/Scatter Access (#167426)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/167426
Approved by: https://github.com/yarongmu-google, https://github.com/jansel
2025-11-11 06:32:25 +00:00
53422e6bc8 [MPS] Add mechanism for reporting asserts from kernels (#166615)
Allocate an ErrorMessages buffer associated with MPSStream and introduce the `c10::metal::report_error` method (and `TORCH_REPORT_ERROR` macro), which can be used to preserve up to `c10::metal::error_message_count` messages

Add test that detects those

As a result, an attempt to run something like
```python
import torch
x=torch.rand(10, 1, 10, device='mps')
y=x[:, [1]]
torch.mps.synchronize()
```
will raise `torch.AcceleratorError: index 1 is out of bounds for dimension 0 with size 1`

Fixes https://github.com/pytorch/pytorch/issues/111669
Pull Request resolved: https://github.com/pytorch/pytorch/pull/166615
Approved by: https://github.com/manuelcandales, https://github.com/dcci
ghstack dependencies: #167444, #167445
2025-11-11 06:28:14 +00:00
c34b743eac [Dynamo] Support for xor (#166065)
Add missing support for xor (and maybe some other binary ops later on)

Fixes #146688
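
A hypothetical usage sketch of the kind of Python-level `^` dynamo needs to handle (the exact operator coverage is whatever the PR adds):

```python
import torch

@torch.compile(fullgraph=True)
def pick(x: torch.Tensor, a: bool, b: bool) -> torch.Tensor:
    # Python-level xor between plain bools inside the traced function.
    return x + 1 if a ^ b else x - 1

print(pick(torch.zeros(3), True, False))
```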

Pull Request resolved: https://github.com/pytorch/pytorch/pull/166065
Approved by: https://github.com/ezyang
2025-11-11 05:44:08 +00:00
db250fa895 Revert "Expose THPVariable_Wrap() with a type argument (#167488)"
This reverts commit 52a6b5a4cc9f938b9cda102fb506fd0e4b32ecad.

Reverted https://github.com/pytorch/pytorch/pull/167488 on behalf of https://github.com/pytorch-auto-revert due to Reverted automatically by pytorch's autorevert, to avoid this behaviour add the tag autorevert: disable ([comment](https://github.com/pytorch/pytorch/pull/167488#issuecomment-3515070469))
2025-11-11 05:39:40 +00:00
52231a7974 show current env before running lint (#166860)
There seems to be some discrepancy between CI and local runs for Pyrefly, so logging these to be able to check for different dependency versions
Pull Request resolved: https://github.com/pytorch/pytorch/pull/166860
Approved by: https://github.com/janeyx99
2025-11-11 05:28:31 +00:00
cf71c53eae Fix check_compiler_is_gcc to detect versioned GCC compilers (#167501)
The function was only returning True for compilers named 'c++', but failed to detect g++, gcc, g++-13, g++-14, etc. This fixes the detection to work for any GCC variant by checking for both COLLECT_GCC and 'gcc version' in the compiler output.

The previous implementation used os.path.basename() on the resolved compiler path and only checked if it exactly matched 'c++'. This caused false negatives for versioned GCC installations and direct g++ usage.

The fix simplifies the logic: if both COLLECT_GCC is present in the output (indicating GCC toolchain) and 'gcc version' appears in the version string, it's GCC.
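
A rough sketch of the described detection (hypothetical helper, not the exact code in `cpp_extension.py`):

```python
import subprocess

def looks_like_gcc(compiler: str) -> bool:
    # Any GCC variant (c++, g++, gcc, g++-13, ...) prints COLLECT_GCC and a
    # "gcc version" line when invoked with -v.
    try:
        proc = subprocess.run([compiler, "-v"], capture_output=True, text=True)
    except OSError:
        return False
    output = proc.stdout + proc.stderr
    return "COLLECT_GCC" in output and "gcc version" in output
```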

Fixes #167499

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167501
Approved by: https://github.com/ezyang, https://github.com/malfet
2025-11-11 05:14:05 +00:00
f9caae42ed [MPS] Move dispatch_sync_with_rethrow to MPSStream (#167445)
And wrap dispatches for copy sync with the rethrow-wrapped method

Needed if an exception could be raised during the sync, for example when surfacing async errors
Pull Request resolved: https://github.com/pytorch/pytorch/pull/167445
Approved by: https://github.com/Skylion007, https://github.com/manuelcandales, https://github.com/dcci
ghstack dependencies: #167444
2025-11-11 05:03:50 +00:00
52a6b5a4cc Expose THPVariable_Wrap() with a type argument (#167488)
For torchdistx, which is only *mostly* dead

Differential Revision: D86712979

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167488
Approved by: https://github.com/soulitzer
2025-11-11 04:56:50 +00:00
94f6f79e27 [3/N] Use Python 3.10 typing (#167431)
This PR applies new Union and Optional typing syntax to some files.
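
For reference, the mechanical change looks like this (PEP 604 union syntax, available at runtime from Python 3.10):

```python
from typing import Optional, Union

# Before: typing-module spellings.
def before(x: Optional[int], y: Union[int, str]) -> Optional[str]: ...

# After: PEP 604 union syntax.
def after(x: int | None, y: int | str) -> str | None: ...
```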

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167431
Approved by: https://github.com/ezyang
2025-11-11 04:40:05 +00:00
715323ecaa Update base for fix bugs in #166808: failure to replace DTensors in OptionalTensorList arguments for local dispatch, and failure to return a list (was pushing multiple retvals onto stack) for list returning ops on "WIP: DTensor fast path: port return_and_correct_aliasing and inplace/out checks"
This seems to generate a several-microsecond performance improvement in the detach benchmark I've been using.

cc H-Huang awgu wanchaol fegin fduwjj wz337 wconstab d4l3k pragupta msaroufim dcci

[ghstack-poisoned]
2025-11-10 20:38:45 -08:00
5676de1157 [PT2] Supply index to fake tensors on mtia device (#167457)
Summary:
When PT2 sees an operation like `torch.empty_strided(8, device="cuda")` it returns a fake tensor on a specific device index, such as `"cuda:0"`. This is implemented via a list of multi-index devices in the fake tensor constructor. If a device supports indices but is not in this list, we can hit fake tensor propagation errors as `"cuda"` is not considered to be the same device as `"cuda:0"`.

This PR adds the `"mtia"` device to this list, to resolve some fake tensor propagation errors we're seeing with the full Inductor backend. (Internal task: T243176905)

Test Plan: Tests are stacked on the internal diff D86605248.

Reviewed By: StellarrZ

Differential Revision: D86605248

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167457
Approved by: https://github.com/eellison, https://github.com/mlazos
2025-11-11 04:31:28 +00:00
2ca0b3f70a [simplefsdp] fix autobucketing pass that takes comm op as input (#167484)
Fix for issue: https://github.com/pytorch/torchtitan/issues/2004

The root cause is that we were scheduling comm ops that are used as inputs to the bwd graph.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167484
Approved by: https://github.com/eellison
2025-11-11 04:09:02 +00:00
b06453c7cf Make PT2 compile backprop through custom op without autograd key a hard error (#166367)
Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/166367
Approved by: https://github.com/bdhirsh
2025-11-11 03:16:30 +00:00
f0fa39a7e4 Revert "[inductor, 3.14] fix itertools.product pickle error in test_cpu_repro (#167382)"
This reverts commit 5320ca3725c4ccf2811c211b48af1ddebb2b471f.

Reverted https://github.com/pytorch/pytorch/pull/167382 on behalf of https://github.com/jeanschmidt due to merged stack seems to have introduced regressions on windows: test/test_dataloader.py::TestDataLoaderPersistentWorkers::test_worker_init_fn_forkserver [GH job link](https://github.com/pytorch/pytorch/actions/runs/19245763893/job/55023427916) [HUD commit link](fe0bb7cf60) ([comment](https://github.com/pytorch/pytorch/pull/167382#issuecomment-3514782788))
2025-11-11 02:57:09 +00:00
b5142f74f9 Revert "[inductor, 3.14] catch pickle.PicklingError exceptions (#167383)"
This reverts commit ad7db3617ec5cc3aa384bd4408fcfbc2acac1a98.

Reverted https://github.com/pytorch/pytorch/pull/167383 on behalf of https://github.com/jeanschmidt due to merged stack seems to have introduced regressions on windows: test/test_dataloader.py::TestDataLoaderPersistentWorkers::test_worker_init_fn_forkserver [GH job link](https://github.com/pytorch/pytorch/actions/runs/19245763893/job/55023427916) [HUD commit link](fe0bb7cf60) ([comment](https://github.com/pytorch/pytorch/pull/167382#issuecomment-3514782788))
2025-11-11 02:57:09 +00:00
a14452bfce Revert "[dynamo, 3.14] enable dynamo in 3.14 (#167384)"
This reverts commit 17e70ae459c45d85ef77afa4d19efe5f8b44f573.

Reverted https://github.com/pytorch/pytorch/pull/167384 on behalf of https://github.com/jeanschmidt due to merged stack seems to have introduced regressions on windows: test/test_dataloader.py::TestDataLoaderPersistentWorkers::test_worker_init_fn_forkserver [GH job link](https://github.com/pytorch/pytorch/actions/runs/19245763893/job/55023427916) [HUD commit link](fe0bb7cf60) ([comment](https://github.com/pytorch/pytorch/pull/167382#issuecomment-3514782788))
2025-11-11 02:57:09 +00:00
619f329a4b Revert "[3.14, dataloader] handle forkserver default mp start method in 3.14 (#167387)"
This reverts commit cf63b212e330836c2be92bef903f5a5d0dc2c7e9.

Reverted https://github.com/pytorch/pytorch/pull/167387 on behalf of https://github.com/jeanschmidt due to merged stack seems to have introduced regressions on windows: test/test_dataloader.py::TestDataLoaderPersistentWorkers::test_worker_init_fn_forkserver [GH job link](https://github.com/pytorch/pytorch/actions/runs/19245763893/job/55023427916) [HUD commit link](fe0bb7cf60) ([comment](https://github.com/pytorch/pytorch/pull/167382#issuecomment-3514782788))
2025-11-11 02:57:09 +00:00
7a48db0809 Revert "[export, 3.14] handle patching methods with functools.partial correctly in non-strict export (#167396)"
This reverts commit fe0bb7cf6001532b14bba14d686baa1ff0b98de0.

Reverted https://github.com/pytorch/pytorch/pull/167396 on behalf of https://github.com/jeanschmidt due to merged stack seems to have introduced regressions on windows: test/test_dataloader.py::TestDataLoaderPersistentWorkers::test_worker_init_fn_forkserver [GH job link](https://github.com/pytorch/pytorch/actions/runs/19245763893/job/55023427916) [HUD commit link](fe0bb7cf60) ([comment](https://github.com/pytorch/pytorch/pull/167382#issuecomment-3514782788))
2025-11-11 02:57:08 +00:00
406f2943d2 Revert "Rework PyObject preservation (#166342)"
This reverts commit 6ca8cc6edf30b5ca882d4871af617e674b6cdd47.

Reverted https://github.com/pytorch/pytorch/pull/166342 on behalf of https://github.com/jeanschmidt due to seems to have introduced test/test_reductions.py::TestReductionsCPU::test_dim_reduction_fns_fn_name_var_cpu_int8 [GH job link](https://github.com/pytorch/pytorch/actions/runs/19247187935/job/55027440149) [HUD commit link](6ca8cc6edf) ([comment](https://github.com/pytorch/pytorch/pull/166342#issuecomment-3514771276))
2025-11-11 02:54:00 +00:00
c3bc56c8b4 [xpu][fix] Format XPU c10 and aten code (#167298)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/167298
Approved by: https://github.com/cyyever, https://github.com/albanD, https://github.com/Skylion007
2025-11-11 02:07:37 +00:00
b2be4d24c0 [DTensor] Make ExplicitRedistributeContext strict/non-strict mode (#167370)
Also support nesting, enable/disable, and make the class use a
thread-local for storage so independent threads do not confuse each
other.
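
A generic sketch of the nesting/thread-local pattern (hypothetical class, not the real `ExplicitRedistributeContext`):

```python
import threading

class _PerThreadStack(threading.local):
    def __init__(self):
        self.stack = []

_state = _PerThreadStack()

class NestableMode:
    # Contexts entered on one thread are invisible to other threads, and
    # nesting is handled with a simple per-thread stack.
    def __init__(self, strict: bool = False, enabled: bool = True):
        self.strict = strict
        self.enabled = enabled

    def __enter__(self):
        _state.stack.append(self)
        return self

    def __exit__(self, *exc):
        _state.stack.pop()

    @staticmethod
    def current():
        return _state.stack[-1] if _state.stack else None
```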

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167370
Approved by: https://github.com/ezyang
ghstack dependencies: #166593
2025-11-11 01:19:16 +00:00
8d5cceeb6a [torchbench][optimus] Add backend optimus (#167357)
Summary: `--optimus [all | vertical_opt | horizontal_opt]` will kick off inductor compile with different fusion strategies.

Test Plan:
TorchBench Runner:

```
$ buck2 run mode/opt //pytorch/benchmark:run -- customized_optimus_illustrative -t train -d cuda
GPU Time per batch:   56.254 milliseconds
CPU Wall Time per batch:  56.326 milliseconds
CPU Wall Time:        56.326 milliseconds
Time to first batch:          420.0777 ms
GPU 0 Peak Memory:              0.0695 GB
CPU Peak Memory:              359.6362 GB
```

PT2 Benchmark Runner (comparing with eager):

```
buck2 run mode/opt //pytorch/benchmark:pt2 -- --only customized_optimus_illustrative  --performance --training --inductor

running benchmark: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 30/30 [00:02<00:00, 14.37it/s]
4.509x
```

eager latency: ~56 ms
inductor latency: ~11 ms

Optimus backend:

```
$ buck2 run mode/opt //pytorch/benchmark:pt2 -- --only customized_optimus_illustrative --performance --training --optimus all
11.02923508733511 ms, 13.884015614166856 ms, 0.794x
```

```
$ buck2 run mode/opt //pytorch/benchmark:pt2 -- --only customized_optimus_illustrative --performance --training --optimus vertical_opt
12.47156853787601 ms, 10.699485195800662 ms, 1.166x
```

```
$ buck2 run mode/opt //pytorch/benchmark:pt2 -- --only customized_optimus_illustrative --performance --training --optimus horizontal_opt
11.078484123572707 ms, 10.797873372212052 ms, 1.026x
```

optimus latency ~10 ms

Differential Revision: D86524903

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167357
Approved by: https://github.com/mengluy0125
2025-11-11 00:35:30 +00:00
f6331192b4 Enable Doc builds for: Minor Releases RCs. Minor and Patch Releases final RC (#167478)
Enable Doc builds for
1. Minor release RCs
2. Minor and patch releases final RC

This is done to prevent publishing docs for patch releases when building RCs.
See:
https://github.com/pytorch/docs/pull/57

Followup after: https://github.com/pytorch/pytorch/pull/153973
Pull Request resolved: https://github.com/pytorch/pytorch/pull/167478
Approved by: https://github.com/svekars, https://github.com/seemethere
2025-11-11 00:34:08 +00:00
f8d408d24a [CP] Correctly compile create_cp_block_mask (#167153)
Currently we re-compile create_block_mask every time, which is not very efficient, and the global compilation also causes some issues. This PR lazily compiles create_block_mask and does it only once.

Fixes https://github.com/pytorch/pytorch/issues/167064

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167153
Approved by: https://github.com/drisspg, https://github.com/XilunWu
2025-11-11 00:03:06 +00:00
5a85b6eaf8 Migrate TypeTraits, TypeList, Metaprogramming to torch:: headeronly (#167386)
Taking over #163634; adding tests/headeronly APIs

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167386
Approved by: https://github.com/albanD, https://github.com/mikaylagawarecki
2025-11-11 00:02:20 +00:00
e3d6896d08 [MTIAGraph][Pytorch][3/n] Implement mtia_graph python wrapper in pytorch (#166964)
- Add a Python module `mtia_graph.py`, which is a wrapper on top of the C++ logic implemented in previous PRs/diffs
- Add Python-level integration tests

[Doc](https://docs.google.com/document/d/1Q3xdZAIqhBvuy2HxGDfJyXVmxYXUEeYSZSwsp7bcJF8/edit?tab=t.osb46a42t6wb)

Differential Revision: [D84673488](https://our.internmc.facebook.com/intern/diff/D84673488/)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/166964
Approved by: https://github.com/malfet
2025-11-10 23:49:20 +00:00
9d9e7c7b1c [Pytorch] Extend OSS conversion benchmarks (#167099)
Summary: We are extending the OSS conversion benchmarks to include all combinations between types

Test Plan: CI

Differential Revision: D86315975

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167099
Approved by: https://github.com/mcfi
2025-11-10 23:36:57 +00:00
4c3721fe70 allow sym_stride and sym_size lowering in inductor to return ints (#167345)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/167345
Approved by: https://github.com/eellison
2025-11-10 23:29:23 +00:00
8ef4099313 Revert "Add min/max support for barebones uint types (#166813)"
This reverts commit 9ffc480c5a928eaccb4ac0e1755a1c596674d884.

Reverted https://github.com/pytorch/pytorch/pull/166813 on behalf of https://github.com/jeanschmidt due to It was reverted internally 6 days ago, but not reverted on OSS, this is causing conflicts ([comment](https://github.com/pytorch/pytorch/pull/166813#issuecomment-3514328895))
2025-11-10 23:25:22 +00:00
de773364be Support AC in default partitioner when functionalization is enabled (#166610)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/166610
Approved by: https://github.com/SherlockNoMad
ghstack dependencies: #166536
2025-11-10 23:09:01 +00:00
47da714b8b [inductor][determinism] type errors + use odc to dump imc on exit (#167136)
Summary: fix some type errors + instead of manually creating a filelock when dumping dcache's imc to file we simply use an odc (since this is the intended behavior of odc, anyways)

Test Plan:
```
buck test fbcode//mode/opt caffe2/test/inductor:caching
```

Differential Revision: D86345594

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167136
Approved by: https://github.com/aorenste
2025-11-10 22:51:03 +00:00
69ab1f93e4 Add shim for at::get_num_threads (#167362)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/167362
Approved by: https://github.com/janeyx99
ghstack dependencies: #166579, #166694, #166695
2025-11-10 22:21:14 +00:00
232baa33b3 Redo add parallel_for to torch/csrc/stable (#166695)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/166695
Approved by: https://github.com/malfet
ghstack dependencies: #166579, #166694
2025-11-10 22:21:14 +00:00
6f0182495f Add stable::Tensor.device() (#166694)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/166694
Approved by: https://github.com/janeyx99
ghstack dependencies: #166579
2025-11-10 22:21:14 +00:00
7da82b84e2 Add torch::stable::Device (#166579)
Prior to this PR, the IValue <-> StableIValue conversion for `DeviceObjType` (aka c10::Device) was to pack it into the leading bits of the StableIValue (which is a uint64_t)

After this PR, the IValue <-> StableIValue conversion for `DeviceObjType` expects DeviceType to be packed into the upper 32 bits of StableIValue and DeviceIndex to be packed into the lower 32 bits

Pull Request resolved: https://github.com/pytorch/pytorch/pull/166579
Approved by: https://github.com/janeyx99
2025-11-10 22:21:14 +00:00
cda7604434 [ez] Remove spammy deprecation log (#167470)
"
/packages/pytorch_latest_sixlib_conda/conda/lib/python3.12/site-packages/torch/_dynamo/variables/user_defined.py:1815: FutureWarning: `isinstance(treespec, LeafSpec)` is deprecated, use `isinstance(treespec, TreeSpec) and treespec.is_leaf()` instead.
  return ctor(*args, **kwargs)"

is too spammy

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167470
Approved by: https://github.com/tugsbayasgalan
2025-11-10 21:49:23 +00:00
6ca8cc6edf Rework PyObject preservation (#166342)
Make the PyObject preservation scheme thread-safe with free threaded (nogil) Python. The general idea is:

* Python Tensor and Storage objects always hold a strong reference to their underlying c10 object
* c10 objects hold a strong reference to their Python objects if there's at least one other reference to the c10 object

This is implemented in `intrusive_ptr`:

* The topmost bit (`kHasPyObject`) of the weakref count is now used to indicate whether the `intrusive_ptr_target` has an associated PyObject. So `kHasPyObject` is one bit, the weakref count is now 31 bits, and the strong refcount remains 32 bits (a toy bit-layout sketch follows this list).
* When the reference count increases from one to two and `kHasPyObject` is set, we incref the associated Python object to ensure that it's kept alive.
* When the reference count decreases from two to one (i.e., there are no C++ references to the `intrusive_ptr_target` other than from the Python object), we decref the associated Python object to break the cycle.
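
As a toy illustration of the bit budget (pure arithmetic, assuming a 32-bit weakref-count word whose top bit is repurposed; not the actual c10 layout):

```python
K_HAS_PYOBJECT = 1 << 31              # flag stolen from the top of the weakref count
WEAK_COUNT_MASK = K_HAS_PYOBJECT - 1  # remaining 31 bits hold the weakref count

word = K_HAS_PYOBJECT | 5             # has a PyObject, weakref count of 5
print(bool(word & K_HAS_PYOBJECT))    # True
print(word & WEAK_COUNT_MASK)         # 5
```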

Other benefits:

* We can delete a lot of the copypasta from Python internal `subtype_dealloc`
* This fixes the weakref and GC bugs we had in the previous scheme. Python weakrefs on Tensors and Storages should just work as expected now.

Risks:

* Extra branch for reference count operations on `intrusive_ptr<TensorImpl>`, `intrusive_ptr<StorageImpl>`, and the generic `intrusive_ptr<intrusive_ptr_target>` even when we're not using Python.
* It's a big change
Pull Request resolved: https://github.com/pytorch/pytorch/pull/166342
Approved by: https://github.com/albanD
2025-11-10 21:47:53 +00:00
bb37483464 Use c7i.2xlarge for B200 build (#167078)
The build system is oversized for what is necessary. Reduce the size to optimize costs. The default workflow runner is `linux.c7i.2xlarge` so we are just removing the runner definition in the workflow so that it uses the default.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167078
Approved by: https://github.com/nWEIdia, https://github.com/seemethere
2025-11-10 21:45:45 +00:00
e23ceeae86 Update base for fix two test failures on "WIP: DTensor fast path: port return_and_correct_aliasing and inplace/out checks"
This seems to generate a several-microsecond performance improvement in the detach benchmark I've been using.

cc H-Huang awgu wanchaol fegin fduwjj wz337 wconstab d4l3k pragupta msaroufim dcci

[ghstack-poisoned]
2025-11-10 13:25:40 -08:00
2751b1d3c3 Support repr on user defined objects (#167372)
Fixes: https://github.com/pytorch/pytorch/issues/167369

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167372
Approved by: https://github.com/anijain2305
2025-11-10 21:06:37 +00:00
fe0bb7cf60 [export, 3.14] handle patching methods with functools.partial correctly in non-strict export (#167396)
Note: dynamo is not affected by this since patching class methods is not supported right now.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167396
Approved by: https://github.com/angelayi
ghstack dependencies: #167382, #167383, #167384, #167387
2025-11-10 20:52:05 +00:00
cf63b212e3 [3.14, dataloader] handle forkserver default mp start method in 3.14 (#167387)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/167387
Approved by: https://github.com/malfet
ghstack dependencies: #167382, #167383, #167384
2025-11-10 20:52:05 +00:00
17e70ae459 [dynamo, 3.14] enable dynamo in 3.14 (#167384)
dynamo tests are passing in the CI PR above - so we could probably just enable dynamo right now.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167384
Approved by: https://github.com/Skylion007, https://github.com/mlazos
ghstack dependencies: #167382, #167383
2025-11-10 20:52:05 +00:00
ad7db3617e [inductor, 3.14] catch pickle.PicklingError exceptions (#167383)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/167383
Approved by: https://github.com/aorenste
ghstack dependencies: #167382
2025-11-10 20:52:04 +00:00
5320ca3725 [inductor, 3.14] fix itertools.product pickle error in test_cpu_repro (#167382)
`inductor/test_cpu_cpp_wrapper` was failing since it was attempting to pickle `itertools.product`, which is no longer picklable in 3.14. We work around this by eagerly generating a list.
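
A minimal illustration of the failure mode and the workaround (not the test code itself):

```python
import itertools
import pickle

lazy = itertools.product(range(2), "ab")
# pickle.dumps(lazy)  # raises on Python 3.14: itertools iterators are no longer picklable

eager = list(itertools.product(range(2), "ab"))  # materialize up front
pickle.dumps(eager)  # fine on every version
```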

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167382
Approved by: https://github.com/atalman, https://github.com/malfet
2025-11-10 20:52:04 +00:00
3e4faca130 [torch.export] Refactor placeholder_naming_pass to reduce CCN (#166600)
Summary: Reduced the CCN of the placeholder_naming_pass method from 37 to 28

Test Plan: Existing tests

Differential Revision: D85820388

Pull Request resolved: https://github.com/pytorch/pytorch/pull/166600
Approved by: https://github.com/angelayi
2025-11-10 20:44:18 +00:00
0c2f206ded Typo fix - baddbmm_strategy (#166963)
This is called via registration with a decorator, so the function is not called directly. For clarity, add the "b" for "batch" to the function name.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/166963
Approved by: https://github.com/janeyx99
2025-11-10 20:35:42 +00:00
6cf21fa331 Fix -ffunction-sections, -fdata-sections not being added on aarch64. (#166407)
Preferred solution to #166380

Changes:

- Moved summary print to bottom of CMakeLists.txt
- Fixed the problem that 'add_compile_options' should be called before targets are defined, so opted for `append_cxx_flag_if_supported` and `append_c_flag_if_supported` (new).
- Added extra verbosity so it can be seen when the linker script is added.

(Unfortunately the linker script has to be added per-target rather than globally due to ninja/cmake dependency tracking.)

Also moved the summary print to the bottom of CMakeLists.txt and improved logging
Pull Request resolved: https://github.com/pytorch/pytorch/pull/166407
Approved by: https://github.com/Aidyn-A, https://github.com/atalman
2025-11-10 20:32:08 +00:00
cdc8460f2c Use c7i.2xlarge for H100 build (#167466)
The build system may be oversized for what is necessary. Reduce the size to optimize costs. The default workflow runner is linux.c7i.2xlarge so we are just removing the runner definition in the workflow so that it uses the default.

Relates to pytorch/test-infra#7175.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167466
Approved by: https://github.com/seemethere
2025-11-10 20:20:54 +00:00
86130aa2ca Fix flaky memory profiler test [2] (#167268)
Fixes #167037

Move the module definition outside of the unit test so when we run the unit test multiple times, the module is not re-compiled.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/167268
Approved by: https://github.com/angelayi
2025-11-10 19:51:38 +00:00
9491830c79 move subgraph_has_impure_ops from node.is_impure into const_fold to unblock production (#167443)
Summary:
https://github.com/pytorch/pytorch/pull/166609 updates `node.is_impure` to consider a submodule as impure if the submodule contains an impure node. This in turn changes the behavior of `graph.eliminate_dead_code()`, which does not eliminate nodes with side effects, see [pytorch documentation](https://docs.pytorch.org/docs/stable/fx.html#torch.fx.Graph.eliminate_dead_code)
> Remove all dead code from the graph, based on each node’s number of users, and whether the nodes have any side effects.

While it is correct that a submodule containing side-effectful ops is side-effectful and should not be dead-code eliminated, some customers rely on dead code elimination to eliminate submodules that contain impure ops, which was the behavior before the #166609 fix.

Due to production environment constraints, we have to revert https://github.com/pytorch/pytorch/pull/166609 and move the side-effectful submodule check logic to `const_fold.py`, which will correctly **not** const-fold a submodule that contains impure ops.

NOTE other call sites that use `node.is_impure()` to make decisions are still incorrectly eliminating side-effectful submodules, but we can't safely change that today.

## This pr
- move `_subgraph_has_impure_op` into `fx/experimental/const_fold.py`, check and prevent const-folding an impure submodule
- added a note in `node.is_impure` to highlight the incorrect behavior and context in case people go looking in the future.

Test Plan: run test_fx_const_fold and all tests pass

Differential Revision: D86641994

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167443
Approved by: https://github.com/jfix71
2025-11-10 19:29:54 +00:00
58a35542d5 Update base for fix apparent memory safety issue in #166808 on "WIP: DTensor fast path: port return_and_correct_aliasing and inplace/out checks"
This seems to generate a several-microsecond performance improvement in the detach benchmark I've been using.

cc H-Huang awgu wanchaol fegin fduwjj wz337 wconstab d4l3k pragupta msaroufim dcci

[ghstack-poisoned]
2025-11-10 11:23:51 -08:00
04a85b4c21 [compile-on-one-rank] Step 1: DeviceId (#166680)
Add a "--virtual-local-rank" mode to torchrun. When used instead of passing the
local rank in LOCAL_RANK it uses a LOCAL_RANK of "0" and adjusts
CUDA_VISIBLE_DEVICES to reflect the desired GPU index.
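
A hypothetical sketch of the environment rewrite the mode performs (illustrative only, not the torchrun implementation):

```python
def env_for_virtual_local_rank(base_env: dict, desired_gpu: int) -> dict:
    # Every worker sees LOCAL_RANK=0; the real GPU index is expressed through
    # CUDA_VISIBLE_DEVICES, so compiled graphs stop differing by device index.
    env = dict(base_env)
    env["LOCAL_RANK"] = "0"
    env["CUDA_VISIBLE_DEVICES"] = str(desired_gpu)
    return env

print(env_for_virtual_local_rank({}, 3))  # {'LOCAL_RANK': '0', 'CUDA_VISIBLE_DEVICES': '3'}
```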

Testing:
(tweaked run_train.sh to use `--log-dir`)
```
export NGPU=8
export CONFIG_FILE="./torchtitan/models/llama3/train_configs/debug_model.toml"
with-proxy ./run_train.sh --model.name compiler_toolkit.llama3 --compile.enable --parallelism.data_parallel_shard_degree=2 --parallelism.tensor_parallel_degree=4
```

And then comparing ranks:

Without --virtual-local-rank there are a lot of differences like:
```
 [rank#]:        mul_1: "f32[8, 512, 256]" = torch.ops.aten.mul.Tensor(mul, view_9);  mul = None
-[rank#]:        _to_copy_3: "bf16[8, 512, 256]" = torch.ops.aten._to_copy.default(mul_1, dtype = torch.bfloat16, layout = torch.strided, device = device(type='cuda', index=0));  mul_1 = None
+[rank#]:        _to_copy_3: "bf16[8, 512, 256]" = torch.ops.aten._to_copy.default(mul_1, dtype = torch.bfloat16, layout = torch.strided, device = device(type='cuda', index=1));  mul_1 = None
 [rank#]:        detach: "f32[8, 512, 1]" = torch.ops.aten.detach.default(rsqrt);  rsqrt = None
```

With --virtual-local-rank those differences go away.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/166680
Approved by: https://github.com/ezyang
2025-11-10 18:47:31 +00:00
a4437d76f0 Add some labeler rules that used to be in the autolabel bot (#167330)
See https://github.com/pytorch/test-infra/pull/7446 for the paths

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167330
Approved by: https://github.com/huydhn
2025-11-10 18:38:42 +00:00
198e8f8f7d Update on "WIP: extend C++ fast path to local operator dispatch"
cc H-Huang awgu wanchaol fegin fduwjj wz337 wconstab d4l3k pragupta msaroufim dcci

[ghstack-poisoned]
2025-11-10 10:29:30 -08:00
ba6a438b33 Update base for Update on "WIP: extend C++ fast path to local operator dispatch"
cc H-Huang awgu wanchaol fegin fduwjj wz337 wconstab d4l3k pragupta msaroufim dcci

[ghstack-poisoned]
2025-11-10 10:29:30 -08:00
3ea829a337 Fix torch.cond HOP device in inductor (#167354)
Fixes #166918

The output may not be on the same device as the predicate.

```
python test/inductor/test_control_flow.py -k test_output_on_different_device
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167354
Approved by: https://github.com/ydwu4, https://github.com/zou3519
2025-11-10 18:19:38 +00:00
3966b5ad05 [BE] Fix out-of-bounds index_put in test_mps.py (#167444)
Discovered while enabling assertions on out-of-bounds accesses. Otherwise test fails with
```
ERROR: test_sdpa_mask_fp16_L6_S17_NH23_HS121 (__main__.TestSDPA.test_sdpa_mask_fp16_L6_S17_NH23_HS121)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/Users/malfet/git/pytorch/pytorch/torch/testing/_internal/common_utils.py", line 3334, in wrapper
    method(*args, **kwargs)
    ~~~~~~^^^^^^^^^^^^^^^^^
  File "/Users/malfet/git/pytorch/pytorch/build/../test/test_mps.py", line 9494, in test_sdpa_mask_fp16_L6_S17_NH23_HS121
    self._test_sdpa_mask(torch.float16, 7, 17, 23, 121)
    ~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/malfet/git/pytorch/pytorch/build/../test/test_mps.py", line 9478, in _test_sdpa_mask
    y_ref = F.scaled_dot_product_attention(q.cpu(), k.cpu(), v.cpu(), attn_mask=mask.cpu(), dropout_p=0.0, is_causal=False)
                                           ~~~~~^^
torch.AcceleratorError: index out of range

```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/167444
Approved by: https://github.com/Skylion007, https://github.com/manuelcandales
2025-11-10 18:19:28 +00:00
f6a79b2a4a [inductor] Wrap pallas_call in jax.jit (#167441)
My understanding is this is needed for performance.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167441
Approved by: https://github.com/oulgen
2025-11-10 17:29:56 +00:00
2fcf41dd8e Add the ruff rule and skip everything for now (#167360)
Part of https://github.com/pytorch/pytorch/issues/164878
We can start narrowing the skips and remove them as PRs keep landing.

This PR just sets up the scaffolding; the fix will be in a follow-up.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/167360
Approved by: https://github.com/janeyx99
2025-11-10 17:10:15 +00:00
31ccd8f13e [AOTI] Fix a mixed-device bug for scatter_add (#167341)
Summary: Fix https://github.com/pytorch/pytorch/issues/166841. AOTI incorrectly generates a call to aoti_torch_cuda_scatter_reduce_two_out while the op should actually run on CPU. Fix by using the correct device when calling _generate_scatter_fallback in the wrapper codegen.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167341
Approved by: https://github.com/yushangdi
2025-11-10 16:59:44 +00:00
90cd892d06 fix handling of tensorlist arguments on "WIP: extend C++ fast path to local operator dispatch"
cc H-Huang awgu wanchaol fegin fduwjj wz337 wconstab d4l3k pragupta msaroufim dcci

[ghstack-poisoned]
2025-11-07 13:06:49 -08:00
2416313e06 Update base for fix handling of tensorlist arguments on "WIP: extend C++ fast path to local operator dispatch"
cc H-Huang awgu wanchaol fegin fduwjj wz337 wconstab d4l3k pragupta msaroufim dcci

[ghstack-poisoned]
2025-11-07 13:06:49 -08:00
3eb7bac74c restack atop #167051 on "WIP: extend C++ fast path to local operator dispatch"
cc H-Huang awgu wanchaol fegin fduwjj wz337 wconstab d4l3k pragupta msaroufim dcci

[ghstack-poisoned]
2025-11-06 13:35:27 -08:00
b3dd792111 Update base for restack atop #167051 on "WIP: extend C++ fast path to local operator dispatch"
cc H-Huang awgu wanchaol fegin fduwjj wz337 wconstab d4l3k pragupta msaroufim dcci

[ghstack-poisoned]
2025-11-06 13:35:27 -08:00
8775b7b061 fix bugs in #166371 on "WIP: extend C++ fast path to local operator dispatch"
cc H-Huang awgu wanchaol fegin fduwjj wz337 wconstab d4l3k pragupta msaroufim dcci

[ghstack-poisoned]
2025-11-04 10:16:11 -08:00
9bbb02ecf8 Update base for fix bugs in #166371 on "WIP: extend C++ fast path to local operator dispatch"
cc H-Huang awgu wanchaol fegin fduwjj wz337 wconstab d4l3k pragupta msaroufim dcci

[ghstack-poisoned]
2025-11-04 10:16:11 -08:00
9e6036a4e6 fix several bugs in #166371 that tests found on "WIP: extend C++ fast path to local operator dispatch"
cc H-Huang awgu wanchaol fegin fduwjj wz337 wconstab d4l3k pragupta msaroufim dcci

[ghstack-poisoned]
2025-11-03 14:57:30 -08:00
cc186e2957 Update base for fix several bugs in #166371 that tests found on "WIP: extend C++ fast path to local operator dispatch"
cc H-Huang awgu wanchaol fegin fduwjj wz337 wconstab d4l3k pragupta msaroufim dcci

[ghstack-poisoned]
2025-11-03 14:57:30 -08:00
6f7badea6f WIP: extend C++ fast path to local operator dispatch
[ghstack-poisoned]
2025-11-01 15:49:14 -07:00
d282299076 Update on "Avoid creating Python OpSchema in the DTensor dispatch fast path"
All we need to do is move a few checks around.

cc H-Huang awgu wanchaol fegin fduwjj wz337 wconstab d4l3k pragupta msaroufim dcci

[ghstack-poisoned]
2025-11-01 15:49:14 -07:00
88651e6af0 Update base for Update on "Avoid creating Python OpSchema in the DTensor dispatch fast path"
All we need to do is move a few checks around.

cc H-Huang awgu wanchaol fegin fduwjj wz337 wconstab d4l3k pragupta msaroufim dcci

[ghstack-poisoned]
2025-11-01 15:49:14 -07:00
2111d19fed fix dependency on "Avoid creating Python OpSchema in the DTensor dispatch fast path"
All we need to do is move a few checks around.

cc H-Huang awgu wanchaol fegin fduwjj wz337 wconstab d4l3k pragupta msaroufim dcci

[ghstack-poisoned]
2025-11-01 15:35:19 -07:00
f8ef995c40 Update base for fix dependency on "Avoid creating Python OpSchema in the DTensor dispatch fast path"
All we need to do is move a few checks around.

cc H-Huang awgu wanchaol fegin fduwjj wz337 wconstab d4l3k pragupta msaroufim dcci

[ghstack-poisoned]
2025-11-01 15:35:18 -07:00
264fe678a5 fixed dependency on "Avoid creating Python OpSchema in the DTensor dispatch fast path"
All we need to do is move a few checks around.

cc H-Huang awgu wanchaol fegin fduwjj wz337 wconstab d4l3k pragupta msaroufim dcci

[ghstack-poisoned]
2025-10-31 14:36:12 -07:00
81b43285b9 Update base for fixed dependency on "Avoid creating Python OpSchema in the DTensor dispatch fast path"
All we need to do is move a few checks around.

cc H-Huang awgu wanchaol fegin fduwjj wz337 wconstab d4l3k pragupta msaroufim dcci

[ghstack-poisoned]
2025-10-31 14:36:12 -07:00
1ac4d5a4a6 Update on "Avoid creating Python OpSchema in the DTensor dispatch fast path"
All we need to do is move a few checks around.

cc H-Huang awgu wanchaol fegin fduwjj wz337 wconstab d4l3k pragupta msaroufim dcci

[ghstack-poisoned]
2025-10-30 22:15:00 -07:00
b4ccfc39b1 Update base for Update on "Avoid creating Python OpSchema in the DTensor dispatch fast path"
All we need to do is move a few checks around.

cc H-Huang awgu wanchaol fegin fduwjj wz337 wconstab d4l3k pragupta msaroufim dcci

[ghstack-poisoned]
2025-10-30 22:15:00 -07:00
e41a65c5f4 fix dtensor dispatch key PRs on "Avoid creating Python OpSchema in the DTensor dispatch fast path"
All we need to do is move a few checks around.

cc H-Huang awgu wanchaol fegin fduwjj wz337 wconstab d4l3k pragupta msaroufim dcci

[ghstack-poisoned]
2025-10-30 21:40:13 -07:00
25f8922f20 Update base for fix dtensor dispatch key PRs on "Avoid creating Python OpSchema in the DTensor dispatch fast path"
All we need to do is move a few checks around.

cc H-Huang awgu wanchaol fegin fduwjj wz337 wconstab d4l3k pragupta msaroufim dcci

[ghstack-poisoned]
2025-10-30 21:40:13 -07:00
4197251fcd rebase, significant fixes to #166370 and following PR on "Avoid creating Python OpSchema in the DTensor dispatch fast path"
All we need to do is move a few checks around.

cc H-Huang awgu wanchaol fegin fduwjj wz337 wconstab d4l3k pragupta msaroufim dcci

[ghstack-poisoned]
2025-10-30 17:08:34 -07:00
f0826ff88f Update base for rebase, significant fixes to #166370 and following PR on "Avoid creating Python OpSchema in the DTensor dispatch fast path"
All we need to do is move a few checks around.

cc H-Huang awgu wanchaol fegin fduwjj wz337 wconstab d4l3k pragupta msaroufim dcci

[ghstack-poisoned]
2025-10-30 17:08:34 -07:00
e5f494eb9b fix bottom-of-stack PR on "Avoid creating Python OpSchema in the DTensor dispatch fast path"
All we need to do is move a few checks around.

cc H-Huang awgu wanchaol fegin fduwjj wz337 wconstab d4l3k pragupta msaroufim dcci

[ghstack-poisoned]
2025-10-28 10:16:50 -07:00
fcd1207841 Update base for fix bottom-of-stack PR on "Avoid creating Python OpSchema in the DTensor dispatch fast path"
All we need to do is move a few checks around.

cc H-Huang awgu wanchaol fegin fduwjj wz337 wconstab d4l3k pragupta msaroufim dcci

[ghstack-poisoned]
2025-10-28 10:16:50 -07:00
df4eb5104c Avoid creating Python OpSchema in the DTensor dispatch fast path
All we need to do is move a few checks around.

[ghstack-poisoned]
2025-10-27 21:14:59 -07:00
236e76560a DTensor dispatch fast path: C++ unwrap_to_op_info, OpSchema creation, & cached sharding prop
Incremental addition of C++ fast path, taking advantage of the DTensor
dispatch key to let us work with IValues without Python conversion for
the fast path. I tried to just port unwrap_to_op_info to C++, but that
didn't get much of a win; the nice thing here seems to be the fusion
of unwrap_to_op_info with recompute_comparison_key.

This + the following PR appear to reduce DTensor dispatch time for
detach() from 43-46 usec (possibly 40-43 usec; I don't have a firm number
written down due to noise) to something like 33-36 usec (using a
benchmark similar to the one on #160580).

[ghstack-poisoned]
2025-10-27 21:14:54 -07:00
9161c3bb30 Do our own call into Python for DTensor dispatch
Another incremental step: don't deal with custom ops on the critical
path, let the dispatcher do that. Take control of calling into Python
on the critical path as well.

[ghstack-poisoned]
2025-10-27 21:14:50 -07:00
e8c3e60649 Add DTensor dispatch key (but don't do anything special with it yet)
Just a baby step to make sure we and CI are happy with this before moving on to a combined fast path.

[ghstack-poisoned]
2025-10-27 21:14:46 -07:00
1cf593ab09 Refactor: extract OperatorArgsKwargsView from parseIValuesToPyArgsKwargs
Intended to make it easier to reuse this logic for processing operator arguments as IValues in following PR(s).

Testing: python test/test_python_dispatch.py (broke during development, seems to work now)

[ghstack-poisoned]
2025-10-27 21:14:41 -07:00
8b194d35d0 Update on "Remove torch.distributed.tensor.OpSchema.has_symints"
It appears to be unused based on `cd torch; rg has_symints`.

cc H-Huang awgu wanchaol fegin fduwjj wz337 wconstab d4l3k pragupta ezyang msaroufim dcci

[ghstack-poisoned]
2025-10-27 21:14:41 -07:00
fbf258bb46 Update base for Update on "Remove torch.distributed.tensor.OpSchema.has_symints"
It appears to be unused based on `cd torch; rg has_symints`.

cc H-Huang awgu wanchaol fegin fduwjj wz337 wconstab d4l3k pragupta ezyang msaroufim dcci

[ghstack-poisoned]
2025-10-27 21:14:41 -07:00
2c9473c84c Update on "Remove torch.distributed.tensor.OpSchema.has_symints"
It appears to be unused based on `cd torch; rg has_symints`.

cc H-Huang awgu wanchaol fegin fduwjj wz337 wconstab d4l3k pragupta ezyang msaroufim dcci

[ghstack-poisoned]
2025-10-07 11:39:53 -07:00
0e9b283b5e Update base for Update on "Remove torch.distributed.tensor.OpSchema.has_symints"
It appears to be unused based on `cd torch; rg has_symints`.

cc H-Huang awgu wanchaol fegin fduwjj wz337 wconstab d4l3k pragupta ezyang msaroufim dcci

[ghstack-poisoned]
2025-10-07 11:39:52 -07:00
6b03cfa431 rebase to pick up reapply of dependency on "Remove torch.distributed.tensor.OpSchema.has_symints"
It appears to be unused based on `cd torch; rg has_symints`.

cc H-Huang awgu wanchaol fegin fduwjj wz337 wconstab d4l3k pragupta ezyang msaroufim dcci

[ghstack-poisoned]
2025-10-07 09:26:12 -07:00
f427647328 Update base for rebase to pick up reapply of dependency on "Remove torch.distributed.tensor.OpSchema.has_symints"
It appears to be unused based on `cd torch; rg has_symints`.

cc H-Huang awgu wanchaol fegin fduwjj wz337 wconstab d4l3k pragupta ezyang msaroufim dcci

[ghstack-poisoned]
2025-10-07 09:26:12 -07:00
f682b0d74d Update on "Remove torch.distributed.tensor.OpSchema.has_symints"
It appears to be unused based on `cd torch; rg has_symints`.

cc H-Huang awgu wanchaol fegin fduwjj wz337 wconstab d4l3k pragupta ezyang msaroufim dcci

[ghstack-poisoned]
2025-10-01 15:32:23 -07:00
d1beb0f0f1 Update base for Update on "Remove torch.distributed.tensor.OpSchema.has_symints"
It appears to be unused based on `cd torch; rg has_symints`.

cc H-Huang awgu wanchaol fegin fduwjj wz337 wconstab d4l3k pragupta ezyang msaroufim dcci

[ghstack-poisoned]
2025-10-01 15:32:23 -07:00
580ef872c5 rebase on "Remove torch.distributed.tensor.OpSchema.has_symints"
It appears to be unused based on `cd torch; rg has_symints`.

cc H-Huang awgu wanchaol fegin fduwjj wz337 wconstab d4l3k pragupta ezyang msaroufim dcci

[ghstack-poisoned]
2025-09-26 21:05:34 -07:00
cc400925ef Update base for rebase on "Remove torch.distributed.tensor.OpSchema.has_symints"
It appears to be unused based on `cd torch; rg has_symints`.

cc H-Huang awgu wanchaol fegin fduwjj wz337 wconstab d4l3k pragupta ezyang msaroufim dcci

[ghstack-poisoned]
2025-09-26 21:05:34 -07:00
9001155ffe Update on "Remove torch.distributed.tensor.OpSchema.has_symints"
It appears to be unused based on `cd torch; rg has_symints`.

cc H-Huang awgu wanchaol fegin fduwjj wz337 wconstab d4l3k pragupta ezyang msaroufim dcci

[ghstack-poisoned]
2025-09-25 15:10:31 -07:00
4f7fabc043 Update base for Update on "Remove torch.distributed.tensor.OpSchema.has_symints"
It appears to be unused based on `cd torch; rg has_symints`.

cc H-Huang awgu wanchaol fegin fduwjj wz337 wconstab d4l3k pragupta ezyang msaroufim dcci

[ghstack-poisoned]
2025-09-25 15:10:31 -07:00
869dd37eca fix missing torch.distributed.fsdp import in test that used torch.ops.fsdp on "Remove torch.distributed.tensor.OpSchema.has_symints"
It appears to be unused based on `cd torch; rg has_symints`.

cc H-Huang awgu wanchaol fegin fduwjj wz337 wconstab d4l3k pragupta ezyang msaroufim dcci

[ghstack-poisoned]
2025-09-23 15:11:07 -07:00
b61cc19dae Update base for fix missing torch.distributed.fsdp import in test that used torch.ops.fsdp on "Remove torch.distributed.tensor.OpSchema.has_symints"
It appears to be unused based on `cd torch; rg has_symints`.

cc H-Huang awgu wanchaol fegin fduwjj wz337 wconstab d4l3k pragupta ezyang msaroufim dcci

[ghstack-poisoned]
2025-09-23 15:11:07 -07:00
99fbad06d7 Remove torch.distributed.tensor.OpSchema.has_symints
It appears to be unused based on `cd torch; rg has_symints`.

[ghstack-poisoned]
2025-09-23 10:46:34 -07:00
98ff76ed73 fix lintrunner mypy on 162990 on "DTensor: C++ compute_global_tensor_info"
compute_global_tensor_info is on the hot path for DTensor.{from,to}_local. More incremental progress toward C++.

cc H-Huang awgu wanchaol fegin fduwjj wz337 wconstab d4l3k pragupta ezyang msaroufim dcci voznesenskym penguinwu EikanWang jgong5 Guobing-Chen XiaobingSuper zhuhaozhe blzheng wenzhe-nrv jiayisunx chenyang78 kadeng chauhang amjames Lucaskabela

[ghstack-poisoned]
2025-09-22 22:53:30 -07:00
a2aeb27bee Update base for fix lintrunner mypy on 162990 on "DTensor: C++ compute_global_tensor_info"
compute_global_tensor_info is on the hot path for DTensor.{from,to}_local. More incremental progress toward C++.

cc H-Huang awgu wanchaol fegin fduwjj wz337 wconstab d4l3k pragupta ezyang msaroufim dcci voznesenskym penguinwu EikanWang jgong5 Guobing-Chen XiaobingSuper zhuhaozhe blzheng wenzhe-nrv jiayisunx chenyang78 kadeng chauhang amjames Lucaskabela

[ghstack-poisoned]
2025-09-22 22:53:30 -07:00
e5c65308a6 Update on "DTensor: C++ compute_global_tensor_info"
compute_global_tensor_info is on the hot path for DTensor.{from,to}_local. More incremental progress toward C++.

cc H-Huang awgu wanchaol fegin fduwjj wz337 wconstab d4l3k pragupta ezyang msaroufim dcci voznesenskym penguinwu EikanWang jgong5 Guobing-Chen XiaobingSuper zhuhaozhe blzheng wenzhe-nrv jiayisunx chenyang78 kadeng chauhang amjames Lucaskabela

[ghstack-poisoned]
2025-09-22 16:35:38 -07:00
09998faa18 Update base for Update on "DTensor: C++ compute_global_tensor_info"
compute_global_tensor_info is on the hot path for DTensor.{from,to}_local. More incremental progress toward C++.

cc H-Huang awgu wanchaol fegin fduwjj wz337 wconstab d4l3k pragupta ezyang msaroufim dcci voznesenskym penguinwu EikanWang jgong5 Guobing-Chen XiaobingSuper zhuhaozhe blzheng wenzhe-nrv jiayisunx chenyang78 kadeng chauhang amjames Lucaskabela

[ghstack-poisoned]
2025-09-22 16:35:38 -07:00
283b29e2bf Update on "DTensor: C++ compute_global_tensor_info"
compute_global_tensor_info is on the hot path for DTensor.{from,to}_local. More incremental progress toward C++.

cc H-Huang awgu wanchaol fegin fduwjj wz337 wconstab d4l3k pragupta ezyang msaroufim dcci

[ghstack-poisoned]
2025-09-22 15:51:12 -07:00
91b8fad8b9 Update base for Update on "DTensor: C++ compute_global_tensor_info"
compute_global_tensor_info is on the hot path for DTensor.{from,to}_local. More incremental progress toward C++.

cc H-Huang awgu wanchaol fegin fduwjj wz337 wconstab d4l3k pragupta ezyang msaroufim dcci

[ghstack-poisoned]
2025-09-22 15:51:12 -07:00
f5edbfb3c1 reorder #162990 after #163030 given current custom op uncertainty on "DTensor: C++ compute_global_tensor_info"
compute_global_tensor_info is on the hot path for DTensor.{from,to}_local. More incremental progress toward C++.

cc H-Huang awgu wanchaol fegin fduwjj wz337 wconstab d4l3k pragupta ezyang msaroufim dcci

[ghstack-poisoned]
2025-09-18 21:23:05 -07:00
f434235a8f Update base for reorder #162990 after #163030 given current custom op uncertainty on "DTensor: C++ compute_global_tensor_info"
compute_global_tensor_info is on the hot path for DTensor.{from,to}_local. More incremental progress toward C++.

cc H-Huang awgu wanchaol fegin fduwjj wz337 wconstab d4l3k pragupta ezyang msaroufim dcci

[ghstack-poisoned]
2025-09-18 21:23:05 -07:00
094cd55b7d parenthesize python_compat macros on "DTensor: C++ compute_global_tensor_info"
compute_global_tensor_info is on the hot path for DTensor.{from,to}_local. More incremental progress toward C++.

cc H-Huang awgu wanchaol fegin fduwjj wz337 wconstab d4l3k pragupta ezyang msaroufim dcci

[ghstack-poisoned]
2025-09-18 15:44:56 -07:00
f5bab128af Update base for parenthesize python_compat macros on "DTensor: C++ compute_global_tensor_info"
compute_global_tensor_info is on the hot path for DTensor.{from,to}_local. More incremental progress toward C++.

cc H-Huang awgu wanchaol fegin fduwjj wz337 wconstab d4l3k pragupta ezyang msaroufim dcci

[ghstack-poisoned]
2025-09-18 15:44:56 -07:00
2b8100381d Update on "DTensor: C++ compute_global_tensor_info"
compute_global_tensor_info is on the hot path for DTensor.{from,to}_local. More incremental progress toward C++.

cc H-Huang awgu wanchaol fegin fduwjj wz337 wconstab d4l3k pragupta ezyang msaroufim dcci

[ghstack-poisoned]
2025-09-18 15:23:46 -07:00
6e5dba5e1c Update base for Update on "DTensor: C++ compute_global_tensor_info"
compute_global_tensor_info is on the hot path for DTensor.{from,to}_local. More incremental progress toward C++.

cc H-Huang awgu wanchaol fegin fduwjj wz337 wconstab d4l3k pragupta ezyang msaroufim dcci

[ghstack-poisoned]
2025-09-18 15:23:46 -07:00
f3b29b9d81 Python 3.13 renamed _PyCFunctionFast to remove the underscore on "DTensor: C++ compute_global_tensor_info"
compute_global_tensor_info is on the hot path for DTensor.{from,to}_local. More incremental progress toward C++.

cc H-Huang awgu wanchaol fegin fduwjj wz337 wconstab d4l3k pragupta ezyang msaroufim dcci

[ghstack-poisoned]
2025-09-18 13:19:21 -07:00
ebfcfe912e Update base for Python 3.13 renamed _PyCFunctionFast to remove the underscore on "DTensor: C++ compute_global_tensor_info"
compute_global_tensor_info is on the hot path for DTensor.{from,to}_local. More incremental progress toward C++.

cc H-Huang awgu wanchaol fegin fduwjj wz337 wconstab d4l3k pragupta ezyang msaroufim dcci

[ghstack-poisoned]
2025-09-18 13:19:21 -07:00
6861fe634e Update on "DTensor: C++ compute_global_tensor_info"
compute_global_tensor_info is on the hot path for DTensor.{from,to}_local. More incremental progress toward C++.

cc H-Huang awgu wanchaol fegin fduwjj wz337 wconstab d4l3k pragupta ezyang msaroufim dcci

[ghstack-poisoned]
2025-09-17 17:55:22 -07:00
144a863515 Update base for Update on "DTensor: C++ compute_global_tensor_info"
compute_global_tensor_info is on the hot path for DTensor.{from,to}_local. More incremental progress toward C++.

cc H-Huang awgu wanchaol fegin fduwjj wz337 wconstab d4l3k pragupta ezyang msaroufim dcci

[ghstack-poisoned]
2025-09-17 17:55:22 -07:00
eaa298cbe9 Update on "DTensor: C++ compute_global_tensor_info"
compute_global_tensor_info is on the hot path for DTensor.{from,to}_local. More incremental progress toward C++.

cc H-Huang awgu wanchaol fegin fduwjj wz337 wconstab d4l3k pragupta ezyang msaroufim dcci

[ghstack-poisoned]
2025-09-17 15:27:46 -07:00
7ace17a639 Update base for Update on "DTensor: C++ compute_global_tensor_info"
compute_global_tensor_info is on the hot path for DTensor.{from,to}_local. More incremental progress toward C++.

cc H-Huang awgu wanchaol fegin fduwjj wz337 wconstab d4l3k pragupta ezyang msaroufim dcci

[ghstack-poisoned]
2025-09-17 15:27:46 -07:00
efce4e8c20 Update on "DTensor: C++ compute_global_tensor_info"
compute_global_tensor_info is on the hot path for DTensor.{from,to}_local. More incremental progress toward C++.

cc H-Huang awgu wanchaol fegin fduwjj wz337 wconstab d4l3k pragupta ezyang msaroufim dcci

[ghstack-poisoned]
2025-09-17 11:41:39 -07:00
500fbbeb77 Update base for Update on "DTensor: C++ compute_global_tensor_info"
compute_global_tensor_info is on the hot path for DTensor.{from,to}_local. More incremental progress toward C++.

cc H-Huang awgu wanchaol fegin fduwjj wz337 wconstab d4l3k pragupta ezyang msaroufim dcci

[ghstack-poisoned]
2025-09-17 11:41:39 -07:00
573f2101b2 Update on "DTensor: C++ compute_global_tensor_info"
compute_global_tensor_info is on the hot path for DTensor.{from,to}_local. More incremental progress toward C++.

cc H-Huang awgu wanchaol fegin fduwjj wz337 wconstab d4l3k pragupta ezyang msaroufim dcci

[ghstack-poisoned]
2025-09-16 23:55:29 -07:00
ff302143ea Update base for Update on "DTensor: C++ compute_global_tensor_info"
compute_global_tensor_info is on the hot path for DTensor.{from,to}_local. More incremental progress toward C++.

cc H-Huang awgu wanchaol fegin fduwjj wz337 wconstab d4l3k pragupta ezyang msaroufim dcci

[ghstack-poisoned]
2025-09-16 23:55:29 -07:00
f2d143f719 Update on "DTensor: C++ compute_global_tensor_info"
compute_global_tensor_info is on the hot path for DTensor.{from,to}_local. More incremental progress toward C++.

cc H-Huang awgu wanchaol fegin fduwjj wz337 wconstab d4l3k pragupta ezyang msaroufim dcci

[ghstack-poisoned]
2025-09-15 18:10:54 -07:00
16e588f41d Update base for Update on "DTensor: C++ compute_global_tensor_info"
compute_global_tensor_info is on the hot path for DTensor.{from,to}_local. More incremental progress toward C++.

cc H-Huang awgu wanchaol fegin fduwjj wz337 wconstab d4l3k pragupta ezyang msaroufim dcci

[ghstack-poisoned]
2025-09-15 18:10:54 -07:00
651bacb2eb DTensor: C++ compute_global_tensor_info
compute_global_tensor_info is on the hot path for DTensor.{from,to}_local. More incremental progress toward C++.

[ghstack-poisoned]
2025-09-15 12:45:40 -07:00
7d9e5ba5d0 Update on "Add basic tests for torch.distributed.tensor._utils.compute_global_tensor_info"
Next PR writes a C++ implementation. Seems good to have tests first.

cc H-Huang awgu wanchaol fegin fduwjj wz337 wconstab d4l3k pragupta ezyang msaroufim dcci

[ghstack-poisoned]
2025-09-15 12:45:40 -07:00
4c90efdf38 Update base for Update on "Add basic tests for torch.distributed.tensor._utils.compute_global_tensor_info"
Next PR writes a C++ implementation. Seems good to have tests first.

cc H-Huang awgu wanchaol fegin fduwjj wz337 wconstab d4l3k pragupta ezyang msaroufim dcci

[ghstack-poisoned]
2025-09-15 12:45:40 -07:00
d592a952fc Add basic tests for torch.distributed.tensor._utils.compute_global_tensor_info
Next PR writes a C++ implementation. Seems good to have tests first.

[ghstack-poisoned]
2025-09-15 09:03:02 -07:00
5be3d2dda5 Update on "Fully native DTensor.__new__"
Move the entirety of `__new__` into C++, saving a layer of disable_dynamo and making progress toward all-C++.

cc H-Huang awgu wanchaol fegin fduwjj wz337 wconstab d4l3k pragupta ezyang msaroufim dcci

[ghstack-poisoned]
2025-09-15 09:03:02 -07:00
d2299b7838 Update base for Update on "Fully native DTensor.__new__"
Move the entirety of `__new__` into C++, saving a layer of disable_dynamo and making progress toward all-C++.

cc H-Huang awgu wanchaol fegin fduwjj wz337 wconstab d4l3k pragupta ezyang msaroufim dcci

[ghstack-poisoned]
2025-09-15 09:03:02 -07:00
46fa23c043 rebase on "Fully native DTensor.__new__"
Move the entirety of `__new__` into C++, saving a layer of disable_dynamo and making progress toward all-C++.

cc H-Huang awgu wanchaol fegin fduwjj wz337 wconstab d4l3k pragupta ezyang msaroufim dcci

[ghstack-poisoned]
2025-09-11 16:51:53 -07:00
60e3695480 Update base for rebase on "Fully native DTensor.__new__"
Move the entirety of `__new__` into C++, saving a layer of disable_dynamo and making progress toward all-C++.

cc H-Huang awgu wanchaol fegin fduwjj wz337 wconstab d4l3k pragupta ezyang msaroufim dcci

[ghstack-poisoned]
2025-09-11 16:51:52 -07:00
ab0c1521d1 Update on "Fully native DTensor.__new__"
Move the entirety of `__new__` into C++, saving a layer of disable_dynamo and making progress toward all-C++.

cc H-Huang awgu wanchaol fegin fduwjj wz337 wconstab d4l3k pragupta ezyang msaroufim dcci

[ghstack-poisoned]
2025-09-09 11:45:04 -07:00
8bb1c9a8d8 Update base for Update on "Fully native DTensor.__new__"
Move the entirety of `__new__` into C++, saving a layer of disable_dynamo and making progress toward all-C++.

cc H-Huang awgu wanchaol fegin fduwjj wz337 wconstab d4l3k pragupta ezyang msaroufim dcci

[ghstack-poisoned]
2025-09-09 11:45:04 -07:00
af9b4f0734 Fully native DTensor.__new__
Move the entirety of `__new__` into C++, saving a layer of disable_dynamo and making progress toward all-C++.

[ghstack-poisoned]
2025-09-09 10:44:37 -07:00
535912c5fe Update on "Proof of concept: naive partial port of OpSchema.__post_init__ to C++ does make it faster"
I initially didn't see good results porting this, but it was apparently because of pybind11 function calling overhead. (pybind11's object-handling primitives seem fine enough.) I'm interested in setting up nanobind, but this demonstrates it's not blocking.

Differential Revision: [D81530102](https://our.internmc.facebook.com/intern/diff/D81530102)

cc H-Huang awgu wanchaol fegin fduwjj wz337 wconstab d4l3k pragupta

[ghstack-poisoned]
2025-09-09 10:44:37 -07:00
da876b507e Update base for Update on "Proof of concept: naive partial port of OpSchema.__post_init__ to C++ does make it faster"
I initially didn't see good results porting this, but it was apparently because of pybind11 function calling overhead. (pybind11's object-handling primitives seem fine enough.) I'm interested in setting up nanobind, but this demonstrates it's not blocking.

Differential Revision: [D81530102](https://our.internmc.facebook.com/intern/diff/D81530102)

cc H-Huang awgu wanchaol fegin fduwjj wz337 wconstab d4l3k pragupta

[ghstack-poisoned]
2025-09-09 10:44:37 -07:00
c3665e33cf Update on "Proof of concept: naive partial port of OpSchema.__post_init__ to C++ does make it faster"
I initially didn't see good results porting this, but it was apparently because of pybind11 function calling overhead. (pybind11's object-handling primitives seem fine enough.) I'm interested in setting up nanobind, but this demonstrates it's not blocking.

Differential Revision: [D81530102](https://our.internmc.facebook.com/intern/diff/D81530102)

cc H-Huang awgu wanchaol fegin fduwjj wz337 wconstab d4l3k pragupta

[ghstack-poisoned]
2025-09-06 10:36:52 -07:00
7664c7b8c2 Update base for Update on "Proof of concept: naive partial port of OpSchema.__post_init__ to C++ does make it faster"
I initially didn't see good results porting this, but it was apparently because of pybind11 function calling overhead. (pybind11's object-handling primitives seem fine enough.) I'm interested in setting up nanobind, but this demonstrates it's not blocking.

Differential Revision: [D81530102](https://our.internmc.facebook.com/intern/diff/D81530102)

cc H-Huang awgu wanchaol fegin fduwjj wz337 wconstab d4l3k pragupta

[ghstack-poisoned]
2025-09-06 10:36:52 -07:00
12db8395b2 rebase on "Proof of concept: naive partial port of OpSchema.__post_init__ to C++ does make it faster"
I initially didn't see good results porting this, but it was apparently because of pybind11 function calling overhead. (pybind11's object-handling primitives seem fine enough.) I'm interested in setting up nanobind, but this demonstrates it's not blocking.

[ghstack-poisoned]
2025-09-02 14:41:21 -07:00
2111a9cf7a Update base for rebase on "Proof of concept: naive partial port of OpSchema.__post_init__ to C++ does make it faster"
I initially didn't see good results porting this, but it was apparently because of pybind11 function calling overhead. (pybind11's object-handling primitives seem fine enough.) I'm interested in setting up nanobind, but this demonstrates it's not blocking.

[ghstack-poisoned]
2025-09-02 14:41:21 -07:00
b84eac19cb rebase on "Proof of concept: naive partial port of OpSchema.__post_init__ to C++ does make it faster"
I initially didn't see good results porting this, but it was apparently because of pybind11 function calling overhead. (pybind11's object-handling primitives seem fine enough.) I'm interested in setting up nanobind, but this demonstrates it's not blocking.

[ghstack-poisoned]
2025-08-29 14:32:36 -07:00
b991793cb1 Update base for rebase on "Proof of concept: naive partial port of OpSchema.__post_init__ to C++ does make it faster"
I initially didn't see good results porting this, but it was apparently because of pybind11 function calling overhead. (pybind11's object-handling primitives seem fine enough.) I'm interested in setting up nanobind, but this demonstrates it's not blocking.

[ghstack-poisoned]
2025-08-29 14:32:36 -07:00
853e6cac6c Proof of concept: naive partial port of OpSchema.__post_init__ to C++ does make it faster
I initially didn't see good results porting this, but it was apparently because of pybind11 function calling overhead. (pybind11's object-handling primitives seem fine enough.) I'm interested in setting up nanobind, but this demonstrates it's not blocking.

[ghstack-poisoned]
2025-08-27 23:59:31 -07:00
889a253216 Remove unnecessary asserts in _correct_storage_aliasing
These don't seem necessary to get reasonable errors:
- func was already passed to get_alias_info earlier in the calling function
- all we require for args and outs is the ability to index into them, which will cleanly throw if it fails anyway

[ghstack-poisoned]
2025-08-27 23:59:27 -07:00
4c52ac8494 Add mark_subclass_constructor_exportable_experimental-specific cache for is_traceable_wrapper_subclass_type
See new code comment for explanation. In short, a global cache seems wrong because of monkey patching, but this use case is checking for a programming error, IIUC.

[ghstack-poisoned]
2025-08-27 23:59:23 -07:00
8fd9a260ee Call checkLong in is_int_or_symint, completing TODO
Calling this first minimizes overhead for plain old ints, making cheap things cheap.

[ghstack-poisoned]
2025-08-27 23:59:20 -07:00
4a15f67bd1 Update on "PythonArgParser::symintlist{,Optional} should use SymDimVector"
They're usually/always used for sizes & strides, which is what SymDimVector is for (saves heap allocation of the list itself). Had to patch OptionalArray because there's a bunch of generated code that wants to convert these to SymIntArrayRef and the generator isn't easy to patch to manually wrap. Clear but small improvement in perf on "detach DTensor in a loop" benchmark; we aren't heap-allocating symdimlists anymore, though there's still some cost with destroying these because SymInt needs cleanup.

[ghstack-poisoned]
2025-08-27 23:59:20 -07:00
39ac72e35d Update base for Update on "PythonArgParser::symintlist{,Optional} should use SymDimVector"
They're usually/always used for sizes & strides, which is what SymDimVector is for (saves heap allocation of the list itself). Had to patch OptionalArray because there's a bunch of generated code that wants to convert these to SymIntArrayRef and the generator isn't easy to patch to manually wrap. Clear but small improvement in perf on "detach DTensor in a loop" benchmark; we aren't heap-allocating symdimlists anymore, though there's still some cost with destroying these because SymInt needs cleanup.

[ghstack-poisoned]
2025-08-27 23:59:20 -07:00
4b10d9a042 PythonArgParser::symintlist{,Optional} should use SymDimVector
They're usually/always used for sizes & strides, which is what SymDimVector is for (saves heap allocation of the list itself). Had to patch OptionalArray because there's a bunch of generated code that wants to convert these to SymIntArrayRef and the generator isn't easy to patch to manually wrap.

[ghstack-poisoned]
2025-08-27 10:56:05 -07:00
81079d584d fastpath type Tensor in THPVariable_NewWithVar
It is cheap to do an exact check against Tensor and much faster when it works (PyType_IsSubtype does not have this fastpath, I checked [source](9ee0214b5d/Objects/typeobject.c (L2889))). Spot-checked in perf on detach-DTensor-in-a-loop benchmark; small win but clear.

[ghstack-poisoned]
2025-08-27 10:56:01 -07:00
ddb70b719b Avoid redundant PyTuple_GetSize call in _maybe_handle_torch_function
py::args::size() calls PyTuple_GetSize. The compiler can't know the two calls will always return the same result, so we have to consolidate them ourselves.

[ghstack-poisoned]
2025-08-27 10:55:58 -07:00
8850e0ec91 Use specific Tuple/List APIs instead of PySequence in is_int_or_symint_list
These APIs are faster and, since we can use the container-specific unchecked indexing APIs, don't repeat bounds checks.

[ghstack-poisoned]
2025-08-27 09:12:00 -07:00
13d101b384 Remove logger.debug statements in DTensor dispatch
These seem to have been costing us 5-10 usec per detach (out of ~95 usec total). If they need to ship, let's talk about requirements and how we can make this more efficient, given that we would prefer that an entire DTensor op finish in 10 usec.

[ghstack-poisoned]
2025-08-26 22:51:48 -07:00
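If these statements do eventually need to ship, one common way to keep the hot path cheap (a sketch of an alternative, not what the commit above does — it simply removes the statements) is to guard the call behind isEnabledFor; the function and message below are illustrative only:

```python
import logging

logger = logging.getLogger(__name__)

def dispatch(op_call, args):
    # The format arguments are only built and pushed through the logging
    # machinery when DEBUG is actually enabled, so the hot path stays cheap.
    if logger.isEnabledFor(logging.DEBUG):
        logger.debug("dispatching %s with %d args", op_call, len(args))
    return op_call(*args)
```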
b662dd6a47 RFC: Add C++ function for torch.distributed.tensor._op_schema.is_view_op
This seems to have been an especially slow one because of the repeated pybind access (schema is a pybind, as is arguments, and then we hit each argument). It's still ~1% of total benchmark runtime because of the repeated single pybind function call, but that's a lot better.

[ghstack-poisoned]
2025-08-26 22:51:44 -07:00
b1ec7ceb47 can't make memory-safety-critical checks debug-only on "Don't call check_has_torch_dispatch in THPVariable_NewWithVar if we already know"
We already know when we're called from make_wrapper_subclass or make_dtensor. The check isn't particularly cheap.

[ghstack-poisoned]
2025-08-26 22:02:02 -07:00
048b4383ea Update base for can't make memory-safety-critical checks debug-only on "Don't call check_has_torch_dispatch in THPVariable_NewWithVar if we already know"
We already know when we're called from make_wrapper_subclass or make_dtensor. The check isn't particularly cheap.

[ghstack-poisoned]
2025-08-26 22:02:02 -07:00
3f85bbd18d Don't call check_has_torch_dispatch in THPVariable_NewWithVar if we already know
We already know when we're called from make_wrapper_subclass or make_dtensor. The check isn't particularly cheap.

[ghstack-poisoned]
2025-08-26 21:17:27 -07:00
76bd87e46e Add torch.Tensor._make_dtensor to accelerate DTensor.__new__ further
This seems to be a (very, very roughly) ~8% improvement on a DTensor benchmark very similar to the benchmark from #160580 (120ish ms -> 110ish ms).

[ghstack-poisoned]
2025-08-26 21:17:23 -07:00
be827bbde5 Add C++ function to accelerate DTensor.__new__
As the new comment explains, pybinding DispatchKeySet is expensive; one native function reduces the number of DispatchKeySets we incur pybind overhead for to 1.

Reviewers please advise on a better location for the new torch._C function; I just threw it in Module.cpp because I didn't know where to put it.

[ghstack-poisoned]
2025-08-26 18:48:17 -07:00
46ab2e6503 Add inline fast paths for SymInt operators
If SymInt::maybe_as_int() returns non-empty, then we get an inline
fast path. The philosophy here (as with the previous PR) is to
preserve performance in the "plain old ints" case.

Observed time spent in SymInt functions in computeStorageNBytes to
drop (and not cost shift elsewhere in the function) after this change,
profiling detach() using code similar to the benchmark from #160580
and Linux perf.

[ghstack-poisoned]
2025-08-26 18:24:22 -07:00
10b1549b3e Outline SymInt::maybe_as_int_slow_path
Keeps SymInt::maybe_as_int small enough to inline.

[ghstack-poisoned]
2025-08-25 19:50:05 -07:00
c648c13e7f PythonArgs::toBool: order cheap mutually exclusive checks first
SymBools are not identical to Py_True or Py_False, so we can do those cheap identity checks first and at least get plain old bools to go fast.

[ghstack-poisoned]
2025-08-25 17:33:50 -07:00
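The change itself is in C++ (PythonArgs::toBool); the following is only a rough Python rendering of the ordering it describes, with a hypothetical slow-path helper:

```python
def handle_symbool(obj):
    # Hypothetical slow path for symbolic bools; the real code evaluates/guards them.
    return bool(obj)

def to_bool(obj):
    # Identity checks against the two bool singletons are the cheapest tests and
    # cover the overwhelmingly common case of plain Python bools.
    if obj is True:
        return True
    if obj is False:
        return False
    return handle_symbool(obj)
```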
275b500db9 Remove early torch::is_symint call in is_int_or_symint
If I am reading [THPUtils_checkIndex](cf94cadbee/torch/csrc/utils.cpp (L28)) correctly, it already checks torch::is_symint before hitting __index__ or __int__.

[ghstack-poisoned]
2025-08-25 14:11:22 -07:00
08842bbbe6 Check for Index before Tensors in is_int_or_symint
The cheaper (and cheap in an absolute sense) check should come first.

[ghstack-poisoned]
2025-08-25 13:02:39 -07:00
751ec21dae Stop trying to intern arguments in PyObject_FastGetAttrString
If we want them interned, we should intern at callsites. (The numpy reference has bit rotted; see b222eb66c7 (diff-6bdb6105198083838f51c57b55b3a49472ed23043bb40018f1ea41138e687163))

Profiling a simple torchdispatch benchmark with perf before/after seems to show that time spent copying std::strings and interning Python strings is gone, though there is some noise and the improvement is very small.

[ghstack-poisoned]
2025-08-25 13:02:33 -07:00
b98214f1d9 Update on "Fix accidental copy in pushPyOutToStack"
`auto` forces a copy. Confirmed this did something noticeable with perf.

[ghstack-poisoned]
2025-08-25 13:02:33 -07:00
a00ca04fea Update base for Update on "Fix accidental copy in pushPyOutToStack"
`auto` forces a copy. Confirmed this did something noticeable with perf.

[ghstack-poisoned]
2025-08-25 13:02:32 -07:00
4392f7dc26 Fix accidental copy in pushPyOutToStack
`auto` forces a copy. Confirmed this did something noticeable with perf.

[ghstack-poisoned]
2025-08-22 17:08:19 -07:00
375940d2c8 Avoid double hash lookup in torch._library.simple_registry
Not a huge cost, but free win is free.

[ghstack-poisoned]
2025-08-22 17:01:15 -07:00
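The usual shape of this kind of fix, sketched with hypothetical names (the actual registry code may differ), is to replace a membership test plus an index with a single dict.get:

```python
# Before: two hash lookups on the hot path.
def find_before(registry, key):
    if key in registry:      # lookup 1
        return registry[key] # lookup 2
    return None

# After: one lookup via dict.get.
def find_after(registry, key):
    return registry.get(key)

assert find_before({"a": 1}, "a") == find_after({"a": 1}, "a")
```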
110f8ce653 Update on "Improve assert perf in _python_dispatch._correct_storage_aliasing"
This assertion was expensive because of is_traceable_wrapper_subclass. Finding a cheap check to run first that's likely to let us skip the rest seems to improve things significantly.

[ghstack-poisoned]
2025-08-22 16:05:22 -07:00
09a7f8d900 Improve assert perf in _python_dispatch._correct_storage_aliasing
This assertion was expensive because of is_traceable_wrapper_subclass. Finding a cheap check to run first that's likely to let us skip the rest seems to improve things significantly.

[ghstack-poisoned]
2025-08-22 15:23:41 -07:00
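A minimal sketch of the short-circuit pattern described in the commit above; the helper name, predicate, and message are assumptions, not the real _correct_storage_aliasing code:

```python
def expensive_subclass_check(x):
    # Stand-in for the costly is_traceable_wrapper_subclass()-style predicate.
    return hasattr(type(x), "__tensor_flatten__")

def check_aliasing(ret, arg):
    # Because `or` short-circuits, the expensive predicate only runs when the
    # cheap exact-type comparison (the common case) fails.
    assert type(ret) is type(arg) or expensive_subclass_check(ret), (
        f"unexpected return type {type(ret)}"
    )

check_aliasing(1, 2)  # cheap check passes; expensive predicate never runs
```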
60ee217e25 Fix pybind enum efficiency issue in return_and_correct_aliasing
Scanning a list of pybind enums with `in` is slow. See NOTE in code for full explanation.

This is a significant optimization; will be updating the torchdispatch/return_and_correct_aliasing portion of this stack with benchmark and results soonish.

[ghstack-poisoned]
2025-08-22 15:02:34 -07:00
749c9d2faf Update on "Optimize _python_dispatch.return_and_correct_aliasing.get_write_alias"
- Empty containers are Falsey
- Hoist cheap checks first
- Microbenchmarked single-element set access method

Benchmark code:
```
import timeit

to_test = [
    ('list(x)', 'x = set([3])'),
    ('x[0]', 'x = [3]'),
    ('list(x)[0]', 'x = set([3])'),
    ('next(iter(x))', 'x = set([3])'),
]

for (stmt, setup) in to_test:
    res = timeit.timeit(stmt=stmt, setup=setup)
    print(f"Time for `{stmt}`: {res}")
```

Result with Python 3.13 on Mac (with excess digits manually trimmed; directionally matches result on Linux)
```
Time for `list(x)`: 0.03418
Time for `x[0]`: 0.00852
Time for `list(x)[0]`: 0.03561
Time for `next(iter(x))`: 0.02278
```

FWIW, I was surprised by this result, so I guess I'm glad I wrote the benchmark!

[ghstack-poisoned]
2025-08-22 15:02:34 -07:00
f746ca4628 Update base for Update on "Optimize _python_dispatch.return_and_correct_aliasing.get_write_alias"
- Empty containers are Falsey
- Hoist cheap checks first
- Microbenchmarked single-element set access method

Benchmark code:
```
import timeit

to_test = [
    ('list(x)', 'x = set([3])'),
    ('x[0]', 'x = [3]'),
    ('list(x)[0]', 'x = set([3])'),
    ('next(iter(x))', 'x = set([3])'),
]

for (stmt, setup) in to_test:
    res = timeit.timeit(stmt=stmt, setup=setup)
    print(f"Time for `{stmt}`: {res}")
```

Result with Python 3.13 on Mac (with excess digits manually trimmed; directionally matches result on Linux)
```
Time for `list(x)`: 0.03418
Time for `x[0]`: 0.00852
Time for `list(x)[0]`: 0.03561
Time for `next(iter(x))`: 0.02278
```

FWIW, I was surprised by this result, so I guess I'm glad I wrote the benchmark!

[ghstack-poisoned]
2025-08-22 15:02:34 -07:00
e156940989 Optimize _python_dispatch.return_and_correct_aliasing.get_write_alias
- Empty containers are Falsey
- Hoist cheap checks first
- Microbenchmarked single-element set access method

Benchmark code:
```
import timeit

to_test = [
    ('list(x)', 'x = set([3])'),
    ('x[0]', 'x = [3]'),
    ('list(x)[0]', 'x = set([3])'),
    ('next(iter(x))', 'x = set([3])'),
]

for (stmt, setup) in to_test:
    res = timeit.timeit(stmt=stmt, setup=setup)
    print(f"Time for `{stmt}`: {res}")
```

Result with Python 3.13 on Mac (with excess digits manually trimmed; directionally matches result on Linux)
```
Time for `list(x)`: 0.03418
Time for `x[0]`: 0.00852
Time for `list(x)[0]`: 0.03561
Time for `next(iter(x))`: 0.02278
```

FWIW, I was surprised by this result, so I guess I'm glad I wrote the benchmark!

[ghstack-poisoned]
2025-08-22 13:37:36 -07:00
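A minimal sketch of the combined effect of those bullets; the function shape and container type are assumptions rather than the real get_write_alias signature:

```python
def get_write_alias(write_aliases):
    # Empty containers are falsey, so no len() call is needed, and indexing a
    # single-element list (x[0]) benchmarked faster than list(s)[0] or
    # next(iter(s)) on an equivalent set.
    if not write_aliases:
        return None
    return write_aliases[0]

assert get_write_alias([]) is None
assert get_write_alias(["w"]) == "w"
```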
a7f61aaeea Use is, not ==, to check exact type matches in _python_dispatch
`is` checks object identity and is more efficient. Google seems to confirm it is the correct way to do an exact type check.

[ghstack-poisoned]
2025-08-22 13:17:09 -07:00
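For example (illustrative only, not from the patch):

```python
class Foo:
    pass

x = Foo()

# `==` on types goes through rich comparison and can be overridden;
# `is` is a single pointer comparison and expresses "exactly this type".
slow_exact = type(x) == Foo
fast_exact = type(x) is Foo
assert slow_exact == fast_exact
```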
eac61d3e73 Update on "Stop accessing func._schema in _python_dispatch.correct_storage_aliasing"
func._schema is a pybind, accessing the arguments/returns is expensive, and we have no reason to do it anyway.

[ghstack-poisoned]
2025-08-22 12:41:59 -07:00
a008d5e1c0 Update base for Update on "Stop accessing func._schema in _python_dispatch.correct_storage_aliasing"
func._schema is a pybind, accessing the arguments/returns is expensive, and we have no reason to do it anyway.

[ghstack-poisoned]
2025-08-22 12:41:59 -07:00
b30da6f6c9 Stop accessing func._schema in _python_dispatch.correct_storage_aliasing
func._schema is a pybind, accessing the arguments/returns is expensive, and we have no reason to do it anyway.

[ghstack-poisoned]
2025-08-22 11:36:44 -07:00
f466f68388 Update on "Replace manual cache in _python_dispatch.get_alias_info with functools.cache"
In addition to being more code, the manual cache was doing an extra dictionary lookup on each cache hit.

[ghstack-poisoned]
2025-08-22 10:59:54 -07:00
838fcbe179 Update base for Update on "Replace manual cache in _python_dispatch.get_alias_info with functools.cache"
In addition to being more code, the manual cache was doing an extra dictionary lookup on each cache hit.

[ghstack-poisoned]
2025-08-22 10:59:54 -07:00
e7818b6c52 Replace manual cache in _python_dispatch.get_alias_info with functools.cache
In addition to being more code, the manual cache was doing an extra dictionary lookup on each cache hit.

[ghstack-poisoned]
2025-08-22 10:41:14 -07:00
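An illustrative before/after, with hypothetical names standing in for the real get_alias_info and its helper:

```python
import functools

def compute_alias_info(func):
    # Stand-in for the real (expensive) schema inspection.
    return {"name": getattr(func, "__name__", str(func))}

# Before: a hand-rolled cache pays an extra dict lookup on every hit.
_alias_cache = {}

def get_alias_info_manual(func):
    if func not in _alias_cache:       # lookup 1
        _alias_cache[func] = compute_alias_info(func)
    return _alias_cache[func]          # lookup 2, even on a hit

# After: functools.cache memoizes with a single lookup per hit and less code.
@functools.cache
def get_alias_info(func):
    return compute_alias_info(func)

print(get_alias_info(len))  # computed once, then served from the cache
```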
a69b16441e Inline is_read_only_alias_match in _correct_storage_aliasing
Drives down the overhead of return_and_correct_storage_aliasing slightly. Hopefully you'll agree it doesn't compromise readability.

[ghstack-poisoned]
2025-08-22 10:41:10 -07:00
736ab6ff7c Remove unnecessary len() call in _correct_storage_aliasing.is_read_only_alias_match
Containers are truthy iff they're non-empty.

[ghstack-poisoned]
2025-08-22 10:41:06 -07:00
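For example (illustrative, not from the patch):

```python
def has_matches_before(items):
    return len(items) > 0   # extra builtin call

def has_matches_after(items):
    return bool(items)      # containers are truthy iff non-empty

assert has_matches_before([1]) == has_matches_after([1])
assert has_matches_before([]) == has_matches_after([])
```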
7063939a93 fix lint
[ghstack-poisoned]
2025-08-22 09:17:25 -07:00
e8de32034f fix lint (base update)
[ghstack-poisoned]
2025-08-22 09:17:25 -07:00
23f9555cc0 Update
[ghstack-poisoned]
2025-08-21 20:17:38 -07:00
c4cdf38750 Update (base update)
[ghstack-poisoned]
2025-08-21 20:17:38 -07:00
344 changed files with 10342 additions and 3107 deletions

View File

@ -30,7 +30,6 @@ into a tarball, with the following structure:
More specifically, `build_magma.sh` copies over the relevant files from the `package_files` directory depending on the ROCm version.
Outputted binaries should be in the `output` folder.
## Pushing
Packages can be uploaded to an S3 bucket using:

View File

@ -96,7 +96,6 @@ function pip_build_and_install() {
python3 -m pip wheel \
--no-build-isolation \
--no-deps \
--no-use-pep517 \
-w "${wheel_dir}" \
"${build_target}"
fi

View File

@ -63,7 +63,7 @@ self-hosted-runner:
- linux.rocm.gpu.gfx942.1
- linux.rocm.gpu.gfx942.2
- linux.rocm.gpu.gfx942.4
- rocm-docker
- linux.rocm.gfx942.docker-cache
# Org wise AWS `mac2.metal` runners (2020 Mac mini hardware powered by Apple silicon M1 processors)
- macos-m1-stable
- macos-m1-14

View File

@ -1 +1 @@
ad5816f0eee1c873df1b7d371c69f1f811a89387
07b6cbde121417a70e4dc871adb6d27030e0ce3f

View File

@ -1 +1 @@
ccb801b88af136454798b945175c4c87e636ac33
acccf86477759b2d3500f1ae1be065f7b1e409ec

13
.github/labeler.yml vendored
View File

@ -165,3 +165,16 @@
- torch/_inductor/kernel/mm.py
- test/inductor/test_max_autotune.py
- third_party/fbgemm
"ciflow/mps":
- aten/src/ATen/mps/**
- aten/src/ATen/native/mps/**
- torch/_inductor/codegen/mps.py
- test/test_mps.py
- test/inductor/test_mps_basic.py
"ciflow/h100-symm-mem":
- torch/csrc/distributed/c10d/symm_mem/**
- torch/distributed/_symmetric_memory/**
- test/distributed/**/*mem*
- test/distributed/**/*mem*/**

View File

@ -34,6 +34,9 @@ python3 torch/utils/data/datapipes/gen_pyi.py
# Also check generated pyi files
find torch -name '*.pyi' -exec git add --force -- "{}" +
# Print current environment
python3 -m pip freeze
RC=0
# Run lintrunner on all files
if ! lintrunner --force-color --tee-json=lint.json ${ADDITIONAL_LINTRUNNER_ARGS} 2> /dev/null; then

View File

@ -119,6 +119,22 @@ jobs:
with:
docker-image: ${{ steps.build-docker-image.outputs.docker-image }}
- name: Generate output
if: contains(matrix.docker-image-name, 'rocm')
id: generate_output
run: |
docker_image_name="${{ matrix.docker-image-name }}"
docker_image_tag="${{ steps.build-docker-image.outputs.docker-image }}"
echo "${docker_image_name}=${docker_image_tag}" >> docker-builds-output-${docker_image_name}.txt
- name: Upload artifacts
uses: actions/upload-artifact@v4.4.0
if: contains(matrix.docker-image-name, 'rocm')
with:
name: docker-builds-artifacts-${{ matrix.docker-image-name }}
retention-days: 14
path: ./docker-builds-output-${{ matrix.docker-image-name }}.txt
- uses: nick-fields/retry@7152eba30c6575329ac0576536151aca5a72780e # v3.0.0
name: Push to https://ghcr.io/
id: push-to-ghcr-io

View File

@ -1,55 +0,0 @@
name: docker-cache-mi300
on:
# run every 6 hours
schedule:
- cron: 0 0,6,12,18 * * *
workflow_dispatch:
concurrency:
group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.sha }}-${{ github.event_name }}
cancel-in-progress: true
permissions:
id-token: write
contents: read
jobs:
docker-cache:
if: github.repository_owner == 'pytorch'
runs-on: rocm-docker
steps:
- name: Checkout PyTorch
uses: pytorch/pytorch/.github/actions/checkout-pytorch@main
with:
no-sudo: true
- name: configure aws credentials
id: aws_creds
uses: aws-actions/configure-aws-credentials@ececac1a45f3b08a01d2dd070d28d111c5fe6722 # v4.1.0
with:
role-to-assume: arn:aws:iam::308535385114:role/gha_workflow_s3_and_ecr_read_only
aws-region: us-east-1
role-duration-seconds: 18000
- name: Login to Amazon ECR
id: login-ecr
continue-on-error: false
uses: aws-actions/amazon-ecr-login@062b18b96a7aff071d4dc91bc00c4c1a7945b076 # v2.0.1
- name: Calculate docker image
id: calculate-docker-image
uses: pytorch/test-infra/.github/actions/calculate-docker-image@main
with:
docker-image-name: ci-image:pytorch-linux-jammy-rocm-n-py3
push: false
- name: Pull docker image
uses: pytorch/test-infra/.github/actions/pull-docker-image@main
with:
docker-image: ${{ steps.calculate-docker-image.outputs.docker-image }}
- name: Tar and upload to S3 bucket
run: |
sudo docker save -o ~/docker-data/pytorch/pytorch_docker_image.tar ${{ steps.calculate-docker-image.outputs.docker-image }}
sudo rclone copy -P --s3-upload-concurrency 64 --s3-chunk-size 200M --s3-upload-cutoff 300M ~/docker-data/pytorch/pytorch_docker_image.tar oci:pytorchbucket0002/pytorch_docker_image --progress

105
.github/workflows/docker-cache-rocm.yml vendored Normal file
View File

@ -0,0 +1,105 @@
name: docker-cache-rocm
on:
workflow_run:
workflows: [docker-builds]
branches: [main, release]
types:
- completed
workflow_dispatch:
concurrency:
group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.sha }}-${{ github.event_name }}
cancel-in-progress: true
permissions:
id-token: write
contents: read
actions: read
jobs:
download-docker-builds-artifacts:
if: github.repository_owner == 'pytorch'
name: download-docker-builds-artifacts
runs-on: ubuntu-latest
outputs:
pytorch-linux-jammy-rocm-n-py3: ${{ steps.process-artifacts.outputs.pytorch-linux-jammy-rocm-n-py3 }}
pytorch-linux-noble-rocm-n-py3: ${{ steps.process-artifacts.outputs.pytorch-linux-noble-rocm-n-py3 }}
pytorch-linux-jammy-rocm-n-py3-benchmarks: ${{ steps.process-artifacts.outputs.pytorch-linux-jammy-rocm-n-py3-benchmarks }}
steps:
- name: Download artifacts
uses: actions/download-artifact@v4.1.7
with:
run-id: ${{ github.event.workflow_run.id }}
path: ./docker-builds-artifacts
merge-multiple: true
github-token: ${{ secrets.GITHUB_TOKEN }}
- name: Process artifacts
id: process-artifacts
run: |
ls -R ./docker-builds-artifacts
cat ./docker-builds-artifacts/*txt >> "${GITHUB_OUTPUT}"
cat "${GITHUB_OUTPUT}"
docker-cache:
if: github.repository_owner == 'pytorch'
needs: download-docker-builds-artifacts
strategy:
fail-fast: false
matrix:
runner: [linux.rocm.gfx942.docker-cache]
docker-image: [
"${{ needs.download-docker-builds-artifacts.outputs.pytorch-linux-jammy-rocm-n-py3 }}",
"${{ needs.download-docker-builds-artifacts.outputs.pytorch-linux-noble-rocm-n-py3 }}",
"${{ needs.download-docker-builds-artifacts.outputs.pytorch-linux-jammy-rocm-n-py3-benchmarks }}"
]
runs-on: "${{ matrix.runner }}"
steps:
- name: debug
run: |
JSON_STRINGIFIED="${{ toJSON(needs.download-docker-builds-artifacts.outputs) }}"
echo "Outputs of download-docker-builds-artifacts job: ${JSON_STRINGIFIED}"
- name: configure aws credentials
id: aws_creds
uses: aws-actions/configure-aws-credentials@ececac1a45f3b08a01d2dd070d28d111c5fe6722 # v4.1.0
with:
role-to-assume: arn:aws:iam::308535385114:role/gha_workflow_s3_and_ecr_read_only
aws-region: us-east-1
role-duration-seconds: 18000
- name: Login to Amazon ECR
id: login-ecr
continue-on-error: false
uses: aws-actions/amazon-ecr-login@062b18b96a7aff071d4dc91bc00c4c1a7945b076 # v2.0.1
- name: Generate ghrc.io tag
id: ghcr-io-tag
run: |
ecr_image="${{ matrix.docker-image }}"
ghcr_image="ghcr.io/pytorch/ci-image:${ecr_image##*:}"
echo "ghcr_image=${ghcr_image}" >> "$GITHUB_OUTPUT"
- name: Pull docker image
uses: pytorch/test-infra/.github/actions/pull-docker-image@main
with:
docker-image: ${{ steps.ghcr-io-tag.outputs.ghcr_image }}
- name: Save as tarball
run: |
docker_image_tag=${{ matrix.docker-image }}
docker_image_tag="${docker_image_tag#*:}" # Remove everything before and including first ":"
docker_image_tag="${docker_image_tag%-*}" # Remove everything after and including last "-"
ref_name=${{ github.event.workflow_run.head_branch }}
if [[ $ref_name =~ "release/" ]]; then
ref_suffix="release"
elif [[ $ref_name == "main" ]]; then
ref_suffix="main"
else
echo "Unexpected branch in ref_name: ${ref_name}" && exit 1
fi
docker tag ${{ steps.ghcr-io-tag.outputs.ghcr_image }} ${{ matrix.docker-image }}
# mv is atomic operation, so we use intermediate tar.tmp file to prevent read-write contention
docker save -o ~/pytorch-data/docker/${docker_image_tag}.tar.tmp ${{ matrix.docker-image }}
mv ~/pytorch-data/docker/${docker_image_tag}.tar.tmp ~/pytorch-data/docker/${docker_image_tag}_${ref_suffix}.tar

View File

@ -37,7 +37,6 @@ jobs:
needs: get-label-type
with:
runner_prefix: "${{ needs.get-label-type.outputs.label-type }}"
runner: "linux.c7i.12xlarge"
build-environment: linux-jammy-cuda12.8-py3.10-gcc11-sm90-dist
docker-image-name: ci-image:pytorch-linux-jammy-cuda12.8-cudnn9-py3-gcc11
cuda-arch-list: '9.0'

View File

@ -1,4 +1,4 @@
name: inductor-rocm
name: inductor-rocm-mi200
on:
schedule:

View File

@ -5,9 +5,11 @@ on:
- cron: 0 0 * * *
push:
tags:
# NOTE: Doc build pipelines should only get triggered on release candidate builds
# Release candidate tags look like: v1.11.0-rc1
- v[0-9]+.[0-9]+.[0-9]+-rc[0-9]+
# NOTE: Doc build pipelines should only get triggered on:
# Major or minor release candidates builds
- v[0-9]+.[0-9]+.0+-rc[0-9]+
# Final RC for major, minor and patch releases
- v[0-9]+.[0-9]+.[0-9]+
- ciflow/nightly/*
workflow_dispatch:

View File

@ -1,4 +1,4 @@
name: rocm
name: rocm-mi200
on:
push:

View File

@ -41,7 +41,6 @@ jobs:
needs: get-label-type
with:
runner_prefix: "${{ needs.get-label-type.outputs.label-type }}"
runner: linux.12xlarge.memory
build-environment: linux-jammy-cuda12.8-py3.10-gcc11-sm90
docker-image-name: ci-image:pytorch-linux-jammy-cuda12.8-cudnn9-py3-gcc11
cuda-arch-list: '9.0'

83
.github/workflows/trunk-rocm-mi300.yml vendored Normal file
View File

@ -0,0 +1,83 @@
name: trunk-rocm-mi300
on:
push:
branches:
- main
- release/*
workflow_dispatch:
schedule:
- cron: 29 8 * * * # about 1:29am PDT
concurrency:
group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref_name }}-${{ github.ref_type == 'branch' && github.sha }}-${{ github.event_name == 'workflow_dispatch' }}-${{ github.event_name == 'schedule' }}
cancel-in-progress: true
permissions:
id-token: write
contents: read
jobs:
llm-td:
if: github.repository_owner == 'pytorch'
name: before-test
uses: ./.github/workflows/llm_td_retrieval.yml
permissions:
id-token: write
contents: read
target-determination:
name: before-test
uses: ./.github/workflows/target_determination.yml
needs: llm-td
permissions:
id-token: write
contents: read
get-label-type:
name: get-label-type
uses: pytorch/pytorch/.github/workflows/_runner-determinator.yml@main
if: ${{ (github.event_name != 'schedule' || github.repository == 'pytorch/pytorch') && github.repository_owner == 'pytorch' }}
with:
triggering_actor: ${{ github.triggering_actor }}
issue_owner: ${{ github.event.pull_request.user.login || github.event.issue.user.login }}
curr_branch: ${{ github.head_ref || github.ref_name }}
curr_ref_type: ${{ github.ref_type }}
linux-jammy-rocm-py3_10-build:
name: linux-jammy-rocm-py3.10
uses: ./.github/workflows/_linux-build.yml
needs: get-label-type
with:
runner_prefix: "${{ needs.get-label-type.outputs.label-type }}"
build-environment: linux-jammy-rocm-py3.10
docker-image-name: ci-image:pytorch-linux-jammy-rocm-n-py3
sync-tag: rocm-build
test-matrix: |
{ include: [
{ config: "default", shard: 1, num_shards: 6, runner: "linux.rocm.gpu.gfx942.1.b" },
{ config: "default", shard: 2, num_shards: 6, runner: "linux.rocm.gpu.gfx942.1.b" },
{ config: "default", shard: 3, num_shards: 6, runner: "linux.rocm.gpu.gfx942.1.b" },
{ config: "default", shard: 4, num_shards: 6, runner: "linux.rocm.gpu.gfx942.1.b" },
{ config: "default", shard: 5, num_shards: 6, runner: "linux.rocm.gpu.gfx942.1.b" },
{ config: "default", shard: 6, num_shards: 6, runner: "linux.rocm.gpu.gfx942.1.b" },
{ config: "distributed", shard: 1, num_shards: 3, runner: "linux.rocm.gpu.gfx942.4.b" },
{ config: "distributed", shard: 2, num_shards: 3, runner: "linux.rocm.gpu.gfx942.4.b" },
{ config: "distributed", shard: 3, num_shards: 3, runner: "linux.rocm.gpu.gfx942.4.b" },
]}
secrets: inherit
linux-jammy-rocm-py3_10-test:
permissions:
id-token: write
contents: read
name: linux-jammy-rocm-py3.10
uses: ./.github/workflows/_rocm-test.yml
needs:
- linux-jammy-rocm-py3_10-build
- target-determination
with:
build-environment: linux-jammy-rocm-py3.10
docker-image: ${{ needs.linux-jammy-rocm-py3_10-build.outputs.docker-image }}
test-matrix: ${{ needs.linux-jammy-rocm-py3_10-build.outputs.test-matrix }}
secrets: inherit

View File

@ -5,6 +5,7 @@ on:
workflows:
- pull
- trunk
- trunk-rocm-mi300
- periodic
- periodic-rocm-mi200
- periodic-rocm-mi300

View File

@ -186,6 +186,8 @@ include_patterns = [
'aten/src/ATen/native/nested/cuda/*.h',
'aten/src/ATen/native/nested/*.cpp',
'aten/src/ATen/native/nested/*.h',
'aten/src/ATen/xpu/**/*.h',
'aten/src/ATen/xpu/**/*.cpp',
'c10/**/*.cpp',
'c10/**/*.h',
'torch/*.h',

View File

@ -736,6 +736,44 @@ if(NOT DEFINED USE_BLAS)
set(USE_BLAS ON)
endif()
# Prioritized Text Linker Optimization
if(USE_PRIORITIZED_TEXT_FOR_LD)
set(LINKER_SCRIPT_FILE_IN "${CMAKE_SOURCE_DIR}/cmake/prioritized_text.txt")
set(LINKER_SCRIPT_FILE_OUT "${CMAKE_SOURCE_DIR}/cmake/linker_script.ld")
execute_process(
COMMAND ${Python_EXECUTABLE}
${CMAKE_SOURCE_DIR}/tools/setup_helpers/generate_linker_script.py
--filein "${LINKER_SCRIPT_FILE_IN}"
--fout "${LINKER_SCRIPT_FILE_OUT}"
RESULT_VARIABLE _gen_result
OUTPUT_VARIABLE _gen_output
ERROR_VARIABLE _gen_error
)
if(NOT _gen_result EQUAL 0)
message(FATAL_ERROR
"Failed to generate linker script:\n${_gen_output}\n${_gen_error}")
endif()
append_cxx_flag_if_supported("-ffunction-sections" CMAKE_CXX_FLAGS)
append_cxx_flag_if_supported("-fdata-sections" CMAKE_CXX_FLAGS)
append_c_flag_if_supported("-ffunction-sections" CMAKE_C_FLAGS)
append_c_flag_if_supported("-fdata-sections" CMAKE_C_FLAGS)
set(CMAKE_SHARED_LINKER_FLAGS "${CMAKE_SHARED_LINKER_FLAGS} -T${LINKER_SCRIPT_FILE_OUT}")
set(CMAKE_MODULE_LINKER_FLAGS "${CMAKE_MODULE_LINKER_FLAGS} -T${LINKER_SCRIPT_FILE_OUT}")
else()
if(LINUX AND CPU_AARCH64)
message(WARNING [[
It is strongly recommend to enable linker script optimization for all AArch64 Linux builds.
To do so please export USE_PRIORITIZED_TEXT_FOR_LD=1
]])
endif()
endif()
# Build libtorch mobile library, which contains ATen/TH ops and native support
# for TorchScript model, but doesn't contain not-yet-unified caffe2 ops;
if(INTERN_BUILD_MOBILE)
@ -1402,9 +1440,6 @@ if(BUILD_JNI)
add_subdirectory(android/pytorch_android)
endif()
include(cmake/Summary.cmake)
caffe2_print_configuration_summary()
# Parse custom debug info
if(DEFINED USE_CUSTOM_DEBINFO)
string(REPLACE ";" " " SOURCE_FILES "${USE_CUSTOM_DEBINFO}")
@ -1444,56 +1479,5 @@ if(BUILD_BUNDLE_PTXAS AND USE_CUDA)
DESTINATION "${CMAKE_INSTALL_BINDIR}")
endif()
if(USE_PRIORITIZED_TEXT_FOR_LD)
add_compile_options(
$<$<COMPILE_LANGUAGE:C,CXX>:-ffunction-sections>
$<$<COMPILE_LANGUAGE:C,CXX>:-fdata-sections>
)
set(LINKER_SCRIPT_FILE_OUT "${CMAKE_SOURCE_DIR}/cmake/linker_script.ld")
set(LINKER_SCRIPT_FILE_IN "${CMAKE_SOURCE_DIR}/cmake/prioritized_text.txt")
add_custom_command(
OUTPUT "${LINKER_SCRIPT_FILE_OUT}"
COMMAND ${Python_EXECUTABLE} ${CMAKE_SOURCE_DIR}/tools/setup_helpers/generate_linker_script.py --filein "${LINKER_SCRIPT_FILE_IN}" --fout "${LINKER_SCRIPT_FILE_OUT}"
DEPENDS ${CMAKE_SOURCE_DIR}/tools/setup_helpers/generate_linker_script.py "${LINKER_SCRIPT_FILE_IN}"
COMMENT "Generating prioritized text linker files"
VERBATIM
)
add_custom_target(generate_linker_script DEPENDS "${LINKER_SCRIPT_FILE_OUT}")
if(BUILD_PYTHON)
set(LINKER_OPT_TARGETS torch_python)
endif()
if(NOT BUILD_LIBTORCHLESS)
list(APPEND LINKER_OPT_TARGETS torch_cpu c10)
if(USE_CUDA)
list(APPEND LINKER_OPT_TARGETS torch_cuda c10_cuda)
endif()
if(USE_XPU)
list(APPEND LINKER_OPT_TARGETS torch_xpu c10_xpu)
endif()
if(USE_ROCM)
list(APPEND LINKER_OPT_TARGETS torch_hip c10_hip)
endif()
endif()
foreach(tgt IN LISTS LINKER_OPT_TARGETS)
if(TARGET ${tgt})
add_dependencies("${tgt}" generate_linker_script)
target_link_options_if_supported(${tgt} "-T,${LINKER_SCRIPT_FILE_OUT}")
set_property(TARGET ${tgt} APPEND PROPERTY LINK_DEPENDS "${LINKER_SCRIPT_FILE_OUT}")
else()
message(WARNING "Requested target '${tgt}' for linker script optimization was not found.")
endif()
endforeach()
else()
if(LINUX AND CPU_AARCH64)
message(WARNING [[
It is strongly recommended to enable linker script optimization for all AArch64 Linux builds.
To do so, please export USE_PRIORITIZED_TEXT_FOR_LD=1
]])
endif()
endif()
include(cmake/Summary.cmake)
caffe2_print_configuration_summary()

View File

@ -37,7 +37,7 @@ Copyright (c) 2024 Tri Dao.
All rights reserved.
All contributions by Arm:
Copyright (c) 2021, 2023-2024 Arm Limited and/or its affiliates
Copyright (c) 2021, 2023-2025 Arm Limited and/or its affiliates
All contributions from Caffe:
Copyright(c) 2013, 2014, 2015, the respective contributors


@ -18,6 +18,8 @@ Please report security issues using https://github.com/pytorch/pytorch/security/
All reports submitted through the security advisories mechanism would **either be made public or dismissed by the team within 90 days of the submission**. If an advisory has been closed on the grounds that it is not a security issue, please do not hesitate to create a [new issue](https://github.com/pytorch/pytorch/issues/new?template=bug-report.yml) as it is still likely a valid issue within the framework.
**Note on crashes and out of bounds access**: PyTorch is a computational framework that performs operations on behalf of the caller. Like many low-level libraries, PyTorch generally does not validate all inputs to every function—the responsibility for providing valid arguments lies with the calling code. While crashes and out of bounds memory access should be reported as bugs, they are generally not considered security vulnerabilities in PyTorch's threat model.
Please refer to the following page for our responsible disclosure policy, reward guidelines, and those things that should not be reported:
https://www.facebook.com/whitehat


@ -55,14 +55,6 @@ struct numeric_limits<int8_t> {
static inline __host__ __device__ int8_t upper_bound() { return INT8_MAX; }
};
template <>
struct numeric_limits<uint16_t> {
static inline __host__ __device__ uint16_t lowest() { return 0; }
static inline __host__ __device__ uint16_t max() { return UINT16_MAX; }
static inline __host__ __device__ uint16_t lower_bound() { return 0; }
static inline __host__ __device__ uint16_t upper_bound() { return UINT16_MAX; }
};
template <>
struct numeric_limits<int16_t> {
static inline __host__ __device__ int16_t lowest() { return INT16_MIN; }
@ -71,14 +63,6 @@ struct numeric_limits<int16_t> {
static inline __host__ __device__ int16_t upper_bound() { return INT16_MAX; }
};
template <>
struct numeric_limits<uint32_t> {
static inline __host__ __device__ uint32_t lowest() { return 0; }
static inline __host__ __device__ uint32_t max() { return UINT32_MAX; }
static inline __host__ __device__ uint32_t lower_bound() { return 0; }
static inline __host__ __device__ uint32_t upper_bound() { return UINT32_MAX; }
};
template <>
struct numeric_limits<int32_t> {
static inline __host__ __device__ int32_t lowest() { return INT32_MIN; }
@ -87,21 +71,6 @@ struct numeric_limits<int32_t> {
static inline __host__ __device__ int32_t upper_bound() { return INT32_MAX; }
};
template <>
struct numeric_limits<uint64_t> {
#ifdef _MSC_VER
static inline __host__ __device__ uint64_t lowest() { return 0; }
static inline __host__ __device__ uint64_t max() { return _UI64_MAX; }
static inline __host__ __device__ uint64_t lower_bound() { return 0; }
static inline __host__ __device__ uint64_t upper_bound() { return _UI64_MAX; }
#else
static inline __host__ __device__ uint64_t lowest() { return 0; }
static inline __host__ __device__ uint64_t max() { return UINT64_MAX; }
static inline __host__ __device__ uint64_t lower_bound() { return 0; }
static inline __host__ __device__ uint64_t upper_bound() { return UINT64_MAX; }
#endif
};
template <>
struct numeric_limits<int64_t> {
#ifdef _MSC_VER


@ -440,7 +440,7 @@ bool MPSHeapAllocatorImpl::release_cached_buffers() {
// we need to release the lock temporarily as synchronizing may cause deadlock with completion handlers.
m_mutex.unlock();
auto stream = getDefaultMPSStream();
dispatch_sync(stream->queue(), ^() {
dispatch_sync_with_rethrow(stream->queue(), ^() {
stream->synchronize(SyncType::COMMIT_AND_WAIT);
});
m_mutex.lock();


@ -110,6 +110,9 @@ class TORCH_API MPSStream {
return _stream;
}
MTLBuffer_t getErrorBuffer();
void checkLastError();
private:
Stream _stream;
MTLCommandQueue_t _commandQueue = nil;
@ -121,6 +124,8 @@ class TORCH_API MPSStream {
dispatch_queue_t _serialQueue = nullptr;
// CommitAndContinue is enabled by default
bool _enableCommitAndContinue = true;
// Buffer that contains last raised error
MTLBuffer_t _errorBuffer = nil;
// use synchronize() to access any of these commit functions outside MPSStream
void commit();
@ -155,4 +160,7 @@ class TORCH_API MPSStreamImpl {
MPSStreamImpl();
};
#ifdef __OBJC__
void dispatch_sync_with_rethrow(dispatch_queue_t queue, void (^block)());
#endif
} // namespace at::mps


@ -3,13 +3,13 @@
#include <ATen/mps/MPSAllocatorInterface.h>
#include <ATen/mps/MPSProfiler.h>
#include <ATen/mps/MPSStream.h>
#include <c10/metal/error.h>
@interface MPSGraphExecutionDescriptor ()
@property(readwrite, atomic) BOOL enableCommitAndContinue;
@end
namespace at::mps {
//-----------------------------------------------------------------
// MPSStream
//-----------------------------------------------------------------
@ -30,6 +30,10 @@ MPSStream::MPSStream(Stream stream) : _stream(stream) {
// Choose level which optimizes for GPU
_compilationDescriptor.optimizationLevel = MPSGraphOptimizationLevel0;
_executionDescriptor.compilationDescriptor = _compilationDescriptor;
_errorBuffer = [MPSDevice::getInstance()->device() newBufferWithLength:sizeof(c10::metal::ErrorMessages)
options:MTLResourceStorageModeShared];
std::memset([_errorBuffer contents], 0, 1024);
}
MPSStream::~MPSStream() {
@ -38,6 +42,8 @@ MPSStream::~MPSStream() {
[_executionDescriptor release];
[_compilationDescriptor release];
_executionDescriptor = nil;
[_errorBuffer release];
_errorBuffer = nil;
_compilationDescriptor = nil;
assert(_commandBuffer == nil);
@ -104,6 +110,7 @@ void MPSStream::commitAndWait() {
[_prevCommandBuffer waitUntilCompleted];
[_prevCommandBuffer release];
_prevCommandBuffer = nil;
checkLastError();
}
if (_commandBuffer) {
@ -111,6 +118,7 @@ void MPSStream::commitAndWait() {
[_commandBuffer waitUntilCompleted];
[_commandBuffer release];
_commandBuffer = nil;
checkLastError();
}
}
@ -153,7 +161,7 @@ void MPSStream::fill(id<MTLBuffer> buffer, uint8_t value, size_t length, size_t
if (length == 0) {
return;
}
dispatch_sync(_serialQueue, ^() {
dispatch_sync_with_rethrow(_serialQueue, ^() {
@autoreleasepool {
endKernelCoalescing();
id<MTLBlitCommandEncoder> blitEncoder = [commandBuffer() blitCommandEncoder];
@ -183,7 +191,7 @@ void MPSStream::copy(id<MTLBuffer> srcBuffer,
size_t dstOffset,
uint64_t profileId,
SyncType syncType) {
dispatch_sync(_serialQueue, ^() {
dispatch_sync_with_rethrow(_serialQueue, ^() {
@autoreleasepool {
endKernelCoalescing();
id<MTLBlitCommandEncoder> blitEncoder = [commandBuffer() blitCommandEncoder];
@ -236,7 +244,7 @@ void MPSStream::executeMPSGraph(MPSGraph* mpsGraph, NSDictionary* feeds, NSDicti
auto& profiler = getMPSProfiler();
const bool isGraphProfilingEnabled = profiler.isOperationProfilingEnabled();
dispatch_sync(_serialQueue, ^() {
dispatch_sync_with_rethrow(_serialQueue, ^() {
endKernelCoalescing();
if (isGraphProfilingEnabled) {
// this function call is only relevant for interval-based Signposts
@ -266,6 +274,24 @@ void MPSStream::executeMPSGraph(MPSGraph* mpsGraph, NSDictionary* feeds, NSDicti
});
}
id<MTLBuffer> MPSStream::getErrorBuffer() {
return _errorBuffer;
}
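// If any Metal kernel reported an error into the shared error buffer since the
// last check, reset the pending count and rethrow the first recorded message
// on the host as a c10::AcceleratorError.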
void MPSStream::checkLastError() {
auto msgs = reinterpret_cast<c10::metal::ErrorMessages*>([_errorBuffer contents]);
if (!msgs) {
return;
}
const auto& msg = msgs->msg[0];
unsigned int count = 0;
std::swap(count, msgs->count);
if (!count) {
return;
}
throw c10::AcceleratorError({msg.func, msg.file, msg.line}, 1, msg.message);
}
//-----------------------------------------------------------------
// MPSStreamImpl
//-----------------------------------------------------------------
@ -289,4 +315,19 @@ MPSStream* getDefaultMPSStream() {
return MPSStreamImpl::getInstance();
}
// Helper methods
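// Runs the block synchronously on `queue`; any C++ exception thrown inside the
// block is captured and rethrown on the calling thread once dispatch_sync returns,
// instead of escaping through the Objective-C dispatch machinery.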
void dispatch_sync_with_rethrow(dispatch_queue_t queue, void (^block)()) {
__block std::optional<std::exception_ptr> block_exception;
dispatch_sync(queue, ^() {
try {
block();
} catch (...) {
block_exception = std::current_exception();
}
});
if (block_exception) {
std::rethrow_exception(*block_exception);
}
}
} // namespace at::mps


@ -142,6 +142,7 @@ Tensor _pack_padded_sequence_backward_symint(const Tensor& grad, c10::SymIntArra
std::tuple<Tensor, Tensor> _pad_packed_sequence(const Tensor& data, const Tensor& _batch_sizes, bool batch_first, const Scalar& padding_value, int64_t total_length) {
auto batch_sizes_t = _batch_sizes.contiguous();
checkLongTensor(batch_sizes_t);
TORCH_CHECK(batch_sizes_t.numel() > 0, "batch_sizes can not be empty");
int64_t * batch_sizes = batch_sizes_t.data_ptr<int64_t>();
int64_t max_batch_size = batch_sizes[0];
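A hedged sketch of what the new check guards against, assuming the internal `torch._pad_packed_sequence` binding is called directly with an empty `batch_sizes` tensor (the public `pad_packed_sequence` API normally supplies a non-empty one):

```python
import torch

data = torch.randn(3, 2, 4)                        # packed data; contents are irrelevant here
batch_sizes = torch.tensor([], dtype=torch.int64)  # previously read batch_sizes[0] out of bounds

try:
    torch._pad_packed_sequence(data, batch_sizes, False, 0.0, -1)
except RuntimeError as e:
    print(e)  # expected to contain "batch_sizes can not be empty"
```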


@ -23,6 +23,7 @@
#include <ATen/ops/_aminmax_native.h>
#include <ATen/ops/_assert_async_native.h>
#include <ATen/ops/_assert_scalar_native.h>
#include <ATen/ops/_async_error_native.h>
#include <ATen/ops/_functional_assert_async_native.h>
#include <ATen/ops/_functional_assert_scalar_native.h>
#include <ATen/ops/_make_per_tensor_quantized_tensor.h>
@ -479,6 +480,14 @@ Tensor isfinite(const Tensor& self) {
});
}
void _async_error(std::string_view msg) {
TORCH_CHECK(0, msg);
}
void _async_error_meta(std::string_view msg) {
// Do NOT error, it's an async error!
}
void _assert_async_cpu(const Tensor& self) {
TORCH_CHECK(
native::is_nonzero(self),


@ -5,7 +5,6 @@
#include <ATen/native/ReduceOpsUtils.h>
#include <ATen/Dispatch.h>
#include <ATen/Dispatch_v2.h>
#include <ATen/Parallel.h>
#include <ATen/TensorIterator.h>
#include <ATen/OpMathType.h>
@ -79,12 +78,12 @@ void min_all_kernel_impl(Tensor& result, const Tensor& input) {
reduce_all_impl<int64_t>(result, input, upper_bound<int64_t>(),
[=](int64_t a, int64_t b) -> int64_t { return min_impl(a, b); });
} else {
AT_DISPATCH_V2(input.scalar_type(), "min_all", AT_WRAP([&] {
AT_DISPATCH_ALL_TYPES_AND2(kHalf, kBFloat16, input.scalar_type(), "min_all", [&] {
using Vec = Vectorized<opmath_type<scalar_t>>;
reduce_all_impl_vec<scalar_t>(result, input, upper_bound<scalar_t>(),
[=] (scalar_t a , scalar_t b) -> scalar_t { return min_impl(a, b); },
[=](Vec a, Vec b) -> Vec { return minimum(a, b); });
}), AT_EXPAND(AT_ALL_TYPES), AT_EXPAND(AT_BAREBONES_UNSIGNED_TYPES), kHalf, kBFloat16);
});
}
}
@ -104,12 +103,12 @@ void max_all_kernel_impl(Tensor& result, const Tensor& input) {
reduce_all_impl<int64_t>(result, input, lower_bound<int64_t>(),
[=](int64_t a, int64_t b) -> int64_t { return max_impl(a, b); });
} else {
AT_DISPATCH_V2(input.scalar_type(), "max_all", AT_WRAP([&] {
AT_DISPATCH_ALL_TYPES_AND2(kHalf, kBFloat16, input.scalar_type(), "max_all", [&] {
using Vec = Vectorized<opmath_type<scalar_t>>;
reduce_all_impl_vec<scalar_t>(result, input, lower_bound<scalar_t>(),
[=] (scalar_t a , scalar_t b) -> scalar_t { return max_impl(a, b); },
[=](Vec a, Vec b) -> Vec { return maximum(a, b); });
}), AT_EXPAND(AT_ALL_TYPES), AT_EXPAND(AT_BAREBONES_UNSIGNED_TYPES), kHalf, kBFloat16);
});
}
}
@ -200,7 +199,7 @@ void aminmax_allreduce_kernel(
}
);
} else {
AT_DISPATCH_V2(input.scalar_type(), "aminmax_cpu", AT_WRAP([&] {
AT_DISPATCH_ALL_TYPES_AND2(kBFloat16, kHalf, input.scalar_type(), "aminmax_cpu", [&] {
using Vec = Vectorized<opmath_type<scalar_t>>;
using scalar_t_pair = std::pair<scalar_t, scalar_t>;
reduce_all_impl_vec_two_outputs<scalar_t>(
@ -215,7 +214,7 @@ void aminmax_allreduce_kernel(
[=](Vec a, Vec b) -> Vec { return minimum(a, b); },
[=](Vec a, Vec b) -> Vec { return maximum(a, b); }
);
}), AT_EXPAND(AT_ALL_TYPES), AT_EXPAND(AT_BAREBONES_UNSIGNED_TYPES), kBFloat16, kHalf);
});
}
}


@ -3,7 +3,6 @@
#include <ATen/core/Tensor.h>
#include <ATen/Dispatch.h>
#include <ATen/Dispatch_v2.h>
#include <ATen/OpMathType.h>
#include <ATen/cpu/vec/vec.h>
#include <ATen/cpu/vec/functional.h>
@ -348,35 +347,34 @@ struct MinValuesOps: public at::native::MinOps<scalar_t> {
};
void min_values_kernel_impl(TensorIterator& iter) {
// This case is special because of Vectorized<int64_t> does not
// handle upper_bound<int64_t>().
// See: https://github.com/pytorch/pytorch/issues/43254
if (iter.dtype() == kLong || iter.dtype() == kUInt64) {
AT_DISPATCH_V2(iter.dtype(), "min_values_cpu", AT_WRAP([&iter] {
binary_kernel_reduce(
iter,
MinValuesOps<scalar_t>{},
std::pair<scalar_t, int64_t>(upper_bound<scalar_t>(), -1));
}), kLong, kUInt64);
if (iter.dtype() == kLong) {
// This case is special because of Vectorized<int64_t> does not
// handle upper_bound<int64_t>().
// See: https://github.com/pytorch/pytorch/issues/43254
using scalar_t = int64_t;
binary_kernel_reduce(
iter,
MinValuesOps<scalar_t>{},
std::pair<scalar_t, int64_t>(upper_bound<scalar_t>(), -1));
return;
}
AT_DISPATCH_V2(iter.dtype(), "min_values_cpu", AT_WRAP([&iter] {
AT_DISPATCH_ALL_TYPES_AND3(kBFloat16, kHalf, kBool, iter.dtype(), "min_values_cpu", [&iter] {
binary_kernel_reduce_vec(
iter,
[](scalar_t a, scalar_t b) -> scalar_t { return min_impl(a, b); },
[](Vectorized<scalar_t> a, Vectorized<scalar_t> b) { return minimum(a, b); },
static_cast<double>(upper_bound<scalar_t>()));
}), AT_EXPAND(AT_ALL_TYPES), AT_EXPAND(AT_BAREBONES_UNSIGNED_TYPES), kBFloat16, kHalf, kBool);
});
}
void max_values_kernel_impl(TensorIterator& iter) {
AT_DISPATCH_V2(iter.dtype(), "max_values_cpu", AT_WRAP([&iter] {
AT_DISPATCH_ALL_TYPES_AND3(kBFloat16, kHalf, kBool, iter.dtype(), "max_values_cpu", [&iter] {
binary_kernel_reduce_vec(
iter,
[](scalar_t a, scalar_t b) -> scalar_t { return max_impl(a, b); },
[](Vectorized<scalar_t> a, Vectorized<scalar_t> b) { return maximum(a, b); },
lower_bound<scalar_t>());
}), AT_EXPAND(AT_ALL_TYPES), AT_EXPAND(AT_BAREBONES_UNSIGNED_TYPES), kBFloat16, kHalf, kBool);
});
}
void argmax_kernel_impl(TensorIterator &iter) {


@ -11,7 +11,6 @@
#include <vector>
#include <ATen/Dispatch.h>
#include <ATen/Dispatch_v2.h>
#include <ATen/Parallel.h>
#include <ATen/NumericUtils.h>
#include <ATen/TensorIterator.h>
@ -107,7 +106,7 @@ void min_kernel_impl(
bool keepdim) {
int64_t self_dim_size = ensure_nonempty_size(self, dim);
AT_DISPATCH_V2(self.scalar_type(), "min_cpu", AT_WRAP([&] {
AT_DISPATCH_ALL_TYPES_AND3(ScalarType::Half, ScalarType::BFloat16, ScalarType::Bool, self.scalar_type(), "min_cpu", [&] {
compare_base_kernel<scalar_t>(result, indice, self, dim, keepdim, [&] (
scalar_t* result_data, int64_t* indice_data,
const scalar_t* self_data, auto self_dim_stride) {
@ -129,7 +128,7 @@ void min_kernel_impl(
*indice_data = index;
}
);
}), AT_EXPAND(AT_ALL_TYPES), AT_EXPAND(AT_BAREBONES_UNSIGNED_TYPES), ScalarType::Half, ScalarType::BFloat16, ScalarType::Bool);
});
}
void max_kernel_impl(
@ -140,7 +139,7 @@ void max_kernel_impl(
bool keepdim) {
int64_t self_dim_size = ensure_nonempty_size(self, dim);
AT_DISPATCH_V2(self.scalar_type(), "max_cpu", AT_WRAP([&] {
AT_DISPATCH_ALL_TYPES_AND3(ScalarType::Half, ScalarType::BFloat16, ScalarType::Bool, self.scalar_type(), "max_cpu", [&] {
compare_base_kernel<scalar_t>(result, indice, self, dim, keepdim, [&] (
scalar_t* result_data, int64_t* indice_data,
const scalar_t* self_data, auto self_dim_stride) {
@ -162,7 +161,7 @@ void max_kernel_impl(
*indice_data = index;
}
);
}), AT_EXPAND(AT_ALL_TYPES), AT_EXPAND(AT_BAREBONES_UNSIGNED_TYPES), ScalarType::Half, ScalarType::BFloat16, ScalarType::Bool);
});
}
void aminmax_kernel(
@ -187,7 +186,7 @@ void aminmax_kernel(
return;
}
AT_DISPATCH_V2(self.scalar_type(), "aminmax_cpu", AT_WRAP([&] {
AT_DISPATCH_ALL_TYPES_AND3(ScalarType::Bool, ScalarType::BFloat16, ScalarType::Half, self.scalar_type(), "aminmax_cpu", [&] {
compare_base_kernel<scalar_t, scalar_t>(min_result, max_result, self, wrap_dim, keepdim, [&] (
scalar_t* min_result_data, scalar_t* max_result_data,
const scalar_t* self_data, auto self_dim_stride) {
@ -210,7 +209,7 @@ void aminmax_kernel(
*max_result_data = max_number;
}
);
}), AT_EXPAND(AT_ALL_TYPES), AT_EXPAND(AT_BAREBONES_UNSIGNED_TYPES), ScalarType::Bool, ScalarType::BFloat16, ScalarType::Half);
});
}
void where_kernel_impl(TensorIterator &iter) {


@ -669,9 +669,12 @@ std::optional<c10::ScalarType> out_dtype) {
// _scaled_mm_allowed_device is used here within _grouped_mm_cuda which seems incorrect since scale is not used.
// the _grouped_mm_fallback should be safe for any ROCm GPU since it's just calling typical mm/bmm
bool use_fast_path = false;
// On non-CK systems (with ROCm), make sure use_fast_path stays false
#if defined(USE_ROCM_CK_GEMM)
if (at::detail::getCUDAHooks().isGPUArch({"gfx942", "gfx950"})) {
use_fast_path = true;
}
#endif //USE_ROCM_CK_GEMM
#endif
const auto out_dtype_ = _resolve_grouped_mm_out_dtype(mat_a, mat_b, out_dtype);
Tensor out = create_grouped_gemm_output_tensor(mat_a, mat_b, offs, out_dtype_);
@ -680,7 +683,11 @@ std::optional<c10::ScalarType> out_dtype) {
#ifndef USE_ROCM
at::cuda::detail::bf16bf16_grouped_mm(mat_a, mat_b, offs, bias, out);
#else
#if defined(USE_ROCM_CK_GEMM)
at::hip::detail::group_gemm_ck(mat_a, mat_b, offs, bias, out);
#else
TORCH_WARN("ROCm: Group Gemm through CK not selected.");
#endif //USE_ROCM_CK_GEMM
#endif
} else {
_grouped_mm_fallback(mat_a, mat_b, offs, bias, out_dtype, out);


@ -1,6 +1,5 @@
#define TORCH_ASSERT_NO_OPERATORS
#include <ATen/Dispatch.h>
#include <ATen/Dispatch_v2.h>
#include <ATen/NumericUtils.h>
#include <ATen/native/DispatchStub.h>
#include <ATen/native/ReduceAllOps.h>
@ -29,22 +28,22 @@ void _min_max_values_kernel_cuda_impl(TensorIterator& iter) {
}
void aminmax_allreduce_launch_kernel(TensorIterator& iter) {
AT_DISPATCH_V2(
iter.input_dtype(), "aminmax_all_cuda", AT_WRAP([&] {
AT_DISPATCH_ALL_TYPES_AND3(
kBFloat16, kHalf, kBool, iter.input_dtype(), "aminmax_all_cuda", [&] {
_min_max_values_kernel_cuda_impl<scalar_t>(iter);
}), AT_EXPAND(AT_ALL_TYPES), AT_EXPAND(AT_BAREBONES_UNSIGNED_TYPES), kBFloat16, kHalf, kBool);
});
}
void aminmax_launch_kernel(TensorIterator& iter) {
AT_DISPATCH_V2(
iter.input_dtype(), "aminmax_cuda", AT_WRAP([&]() {
AT_DISPATCH_ALL_TYPES_AND3(
kBFloat16, kHalf, kBool, iter.input_dtype(), "aminmax_cuda", [&]() {
gpu_reduce_kernel<scalar_t, scalar_t>(
iter,
MinMaxOps<scalar_t, scalar_t, int32_t>{},
thrust::pair<scalar_t, scalar_t>(
at::numeric_limits<scalar_t>::upper_bound(),
at::numeric_limits<scalar_t>::lower_bound()));
}), AT_EXPAND(AT_ALL_TYPES), AT_EXPAND(AT_BAREBONES_UNSIGNED_TYPES), kBFloat16, kHalf, kBool);
});
}
} // namespace at::native


@ -1,6 +1,5 @@
#define TORCH_ASSERT_NO_OPERATORS
#include <ATen/Dispatch.h>
#include <ATen/Dispatch_v2.h>
#include <ATen/NumericUtils.h>
#include <ATen/native/DispatchStub.h>
#include <ATen/native/ReduceAllOps.h>
@ -34,27 +33,27 @@ void max_values_kernel_cuda_impl(TensorIterator& iter) {
}
void max_values_kernel_cuda(TensorIterator& iter) {
AT_DISPATCH_V2(
iter.dtype(), "max_values_cuda", AT_WRAP([&]() {
AT_DISPATCH_ALL_TYPES_AND3(
kBFloat16, kHalf, kBool, iter.dtype(), "max_values_cuda", [&]() {
max_values_kernel_cuda_impl<scalar_t>(iter);
}), AT_EXPAND(AT_ALL_TYPES), AT_EXPAND(AT_BAREBONES_UNSIGNED_TYPES), kBFloat16, kHalf, kBool);
});
}
void max_launch_kernel(TensorIterator& iter) {
AT_DISPATCH_V2(
iter.input_dtype(), "max_cuda", AT_WRAP([&]() {
AT_DISPATCH_ALL_TYPES_AND3(
kBFloat16, kHalf, kBool, iter.input_dtype(), "max_cuda", [&]() {
gpu_reduce_kernel<scalar_t, scalar_t>(
iter,
MaxOps<scalar_t>{},
thrust::pair<scalar_t, int64_t>(
at::numeric_limits<scalar_t>::lower_bound(), 0));
}), AT_EXPAND(AT_ALL_TYPES), AT_EXPAND(AT_BAREBONES_UNSIGNED_TYPES), kBFloat16, kHalf, kBool);
});
}
void max_all_launch_kernel(TensorIterator &iter) {
AT_DISPATCH_V2(iter.input_dtype(), "max_all_cuda", AT_WRAP([&] {
AT_DISPATCH_ALL_TYPES_AND3(kBFloat16, kHalf, kBool, iter.input_dtype(), "max_all_cuda", [&] {
max_values_kernel_cuda_impl<scalar_t>(iter);
}), AT_EXPAND(AT_ALL_TYPES), AT_EXPAND(AT_BAREBONES_UNSIGNED_TYPES), kBFloat16, kHalf, kBool);
});
}
REGISTER_DISPATCH(max_values_stub, &max_values_kernel_cuda)


@ -12,7 +12,6 @@
#include <ATen/NumericUtils.h>
#include <ATen/Dispatch.h>
#include <ATen/Dispatch_v2.h>
#include <ATen/NumericUtils.h>
#include <ATen/cuda/NumericLimits.cuh>
@ -34,24 +33,24 @@ void min_values_kernel_cuda_impl(TensorIterator& iter) {
}
void min_values_kernel_cuda(TensorIterator& iter) {
AT_DISPATCH_V2(iter.dtype(), "min_values_cuda", AT_WRAP([&]() {
AT_DISPATCH_ALL_TYPES_AND3(kBFloat16, kHalf, kBool, iter.dtype(), "min_values_cuda", [&]() {
min_values_kernel_cuda_impl<scalar_t>(iter);
}), AT_EXPAND(AT_ALL_TYPES), AT_EXPAND(AT_BAREBONES_UNSIGNED_TYPES), kBFloat16, kHalf, kBool);
});
}
void min_launch_kernel(TensorIterator &iter) {
AT_DISPATCH_V2(iter.input_dtype(), "min_cuda", AT_WRAP([&]() {
AT_DISPATCH_ALL_TYPES_AND3(kBFloat16, kHalf, kBool, iter.input_dtype(), "min_cuda", [&]() {
gpu_reduce_kernel<scalar_t, scalar_t>(
iter,
MinOps<scalar_t>{},
thrust::pair<scalar_t, int64_t>(at::numeric_limits<scalar_t>::upper_bound(), 0));
}), AT_EXPAND(AT_ALL_TYPES), AT_EXPAND(AT_BAREBONES_UNSIGNED_TYPES), kBFloat16, kHalf, kBool);
});
}
void min_all_launch_kernel(TensorIterator &iter) {
AT_DISPATCH_V2(iter.input_dtype(), "min_all_cuda", AT_WRAP([&] {
AT_DISPATCH_ALL_TYPES_AND3(kBFloat16, kHalf, kBool, iter.input_dtype(), "min_all_cuda", [&] {
min_values_kernel_cuda_impl<scalar_t>(iter);
}), AT_EXPAND(AT_ALL_TYPES), AT_EXPAND(AT_BAREBONES_UNSIGNED_TYPES), kBFloat16, kHalf, kBool);
});
}
REGISTER_DISPATCH(min_values_stub, &min_values_kernel_cuda)


@ -40,8 +40,6 @@ using namespace at::mps;
namespace at::native::mps {
void dispatch_sync_with_rethrow(dispatch_queue_t queue, void (^block)());
struct MPSScalar {
id<MTLBuffer> getMTLBuffer() const {
return __builtin_bit_cast(id<MTLBuffer>, buffer.get());


@ -53,21 +53,6 @@
@end
namespace at::native::mps {
void dispatch_sync_with_rethrow(dispatch_queue_t queue, void (^block)()) {
__block std::optional<std::exception_ptr> block_exception;
dispatch_sync(queue, ^() {
try {
block();
} catch (...) {
block_exception = std::current_exception();
}
});
if (block_exception) {
std::rethrow_exception(*block_exception);
}
}
/**
* Computes distance from lowest to highest element offset in given tensor.
*/


@ -1,4 +1,5 @@
#include <c10/metal/atomic.h>
#include <c10/metal/error.h>
#include <c10/metal/indexing.h>
#include <metal_stdlib>
@ -31,10 +32,24 @@ OffsetT index_apply_indices(
constant IndexAB* indices,
constant int64_t* sizes,
constant int64_t* strides,
uint num_indices) {
uint num_indices,
thread bool& error,
device ErrorMessages* error_buf) {
OffsetT rc = offs.x;
for (uint i = 0; i < num_indices; i++) {
auto idx = indices[i].indexArray[offs.y];
if (idx < -sizes[i] || idx >= sizes[i]) {
TORCH_REPORT_ERROR(
error_buf,
"index ",
idx,
" is out of bounds for dimension ",
i,
" with size ",
sizes[i]);
error = true;
break;
}
if (idx < 0) {
idx += sizes[i];
}
@ -55,6 +70,7 @@ kernel void index_select(
constant int64_t* index_sizes,
constant int64_t* index_strides,
constant uint4& ndim_nindices_numel,
device ErrorMessages* error_buffer,
uint thread_index [[thread_position_in_grid]]) {
const auto ndim = ndim_nindices_numel.x;
const auto num_indices = ndim_nindices_numel.y;
@ -65,8 +81,19 @@ kernel void index_select(
indices_strides,
ndim,
thread_index);
bool error = false;
auto input_offs = index_apply_indices<OffsetT>(
offs.yz, indices, index_sizes, index_strides, num_indices);
offs.yz,
indices,
index_sizes,
index_strides,
num_indices,
error,
error_buffer);
if (error) {
output[offs.x / sizeof(T)] = 0;
return;
}
output[offs.x / sizeof(T)] = input[input_offs / sizeof(T)];
}
@ -82,7 +109,9 @@ inline void index_put_impl(
constant int64_t* index_sizes,
constant int64_t* index_strides,
constant uint4& ndim_nindices_numel,
device ErrorMessages* error_buffer,
uint thread_index) {
bool error = false;
const auto ndim = ndim_nindices_numel.x;
const auto num_indices = ndim_nindices_numel.y;
const auto offs = index_get_offsets(
@ -93,7 +122,16 @@ inline void index_put_impl(
ndim,
thread_index);
auto output_offs = index_apply_indices<OffsetT>(
offs.xz, indices, index_sizes, index_strides, num_indices);
offs.xz,
indices,
index_sizes,
index_strides,
num_indices,
error,
error_buffer);
if (error) {
return;
}
output[output_offs / sizeof(T)] = input[offs.y / sizeof(T)];
}
@ -109,6 +147,7 @@ kernel void index_put(
constant int64_t* index_sizes,
constant int64_t* index_strides,
constant uint4& ndim_nindices_numel,
device ErrorMessages* error_buffer,
uint thread_index [[thread_position_in_grid]]) {
index_put_impl(
output,
@ -121,6 +160,7 @@ kernel void index_put(
index_sizes,
index_strides,
ndim_nindices_numel,
error_buffer,
thread_index);
}
@ -136,6 +176,7 @@ kernel void index_put_serial(
constant int64_t* index_sizes,
constant int64_t* index_strides,
constant uint4& ndim_nindices_numel,
device ErrorMessages* error_buffer,
uint thread_index [[thread_position_in_grid]]) {
(void)thread_index; // Suppress unused variable warning
for (uint idx = 0; idx < ndim_nindices_numel.z; ++idx) {
@ -150,6 +191,7 @@ kernel void index_put_serial(
index_sizes,
index_strides,
ndim_nindices_numel,
error_buffer,
idx);
}
}
@ -166,6 +208,7 @@ kernel void index_put_accumulate(
constant int64_t* index_sizes,
constant int64_t* index_strides,
constant uint4& ndim_nindices_numel,
device ErrorMessages* error_buffer,
uint thread_index [[thread_position_in_grid]]) {
const auto ndim = ndim_nindices_numel.x;
const auto num_indices = ndim_nindices_numel.y;
@ -176,8 +219,18 @@ kernel void index_put_accumulate(
indices_strides,
ndim,
thread_index);
bool error = false;
auto output_offs = index_apply_indices<OffsetT>(
offs.xz, indices, index_sizes, index_strides, num_indices);
offs.xz,
indices,
index_sizes,
index_strides,
num_indices,
error,
error_buffer);
if (error) {
return;
}
AtomicType<T>::atomic_add(
reinterpret_cast<device AtomicType_t<T>*>(output),
output_offs / sizeof(T),
@ -197,6 +250,7 @@ kernel void index_put_accumulate(
constant int64_t* index_sizes, \
constant int64_t* index_strides, \
constant uint4& ndim_nindices_numel, \
device ErrorMessages* error_buffer, \
uint thread_index [[thread_position_in_grid]])
#define REGISTER_INDEX_OP_ALL_DTYPES(OP_NAME) \
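To illustrate the user-visible effect of the kernel-side bounds check, a hedged sketch (the exact point at which the error surfaces and its wording are assumptions; the message format follows the `TORCH_REPORT_ERROR` call above):

```python
import torch

if torch.backends.mps.is_available():
    x = torch.arange(4.0, device="mps")
    bad = torch.tensor([7], device="mps")  # 7 is out of bounds for a dimension of size 4
    try:
        y = x[bad]
        torch.mps.synchronize()  # the error may only surface once the stream synchronizes
    except Exception as e:
        print(e)  # expected along the lines of "index 7 is out of bounds for dimension 0 with size 4"
```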


@ -220,7 +220,7 @@ Tensor _embedding_bag_dense_backward_mps(const Tensor& output_grad,
auto num_threads = (params.mode == EmbeddingBagMode::MAX) ? output_grad.numel() : num_indices * params.feature_size;
MPSStream* stream = getCurrentMPSStream();
mps::dispatch_sync_with_rethrow(stream->queue(), ^() {
dispatch_sync_with_rethrow(stream->queue(), ^() {
@autoreleasepool {
id<MTLComputeCommandEncoder> computeEncoder = stream->commandEncoder();
auto pipeline_state = lib.getPipelineStateForFunc(fmt::format("embedding_bag_backward_{}_{}",
@ -273,7 +273,7 @@ Tensor _embedding_bag_per_sample_weights_backward_mps(const Tensor& output_grad,
auto num_threads = num_indices * feature_size;
MPSStream* stream = getCurrentMPSStream();
mps::dispatch_sync_with_rethrow(stream->queue(), ^() {
dispatch_sync_with_rethrow(stream->queue(), ^() {
@autoreleasepool {
id<MTLComputeCommandEncoder> computeEncoder = stream->commandEncoder();
auto pipeline_state = lib.getPipelineStateForFunc(fmt::format("embedding_bag_per_sample_weights_backward_{}_{}",


@ -179,7 +179,8 @@ static void dispatch_index_kernel(TensorIteratorBase& iter,
iter.strides(2),
index_size,
index_stride,
ndim_nindiees);
ndim_nindiees,
mpsStream->getErrorBuffer());
mtl_dispatch1DJob(computeEncoder, indexSelectPSO, serial ? 1 : iter.numel());
});
}
@ -299,7 +300,7 @@ static Tensor& nonzero_out_native_mps(const Tensor& self, Tensor& out_) {
MPSStream* stream = getCurrentMPSStream();
using CachedGraph = MPSUnaryCachedGraph;
dispatch_sync(stream->queue(), ^() {
dispatch_sync_with_rethrow(stream->queue(), ^() {
stream->synchronize(SyncType::COMMIT_AND_WAIT);
});
int64_t total_nonzero = at::count_nonzero(self).item<int64_t>();
@ -384,7 +385,7 @@ Tensor& nonzero_out_mps(const Tensor& self, Tensor& out_) {
MPSStream* stream = getCurrentMPSStream();
using CachedGraph = MPSUnaryCachedGraph;
dispatch_sync(stream->queue(), ^() {
dispatch_sync_with_rethrow(stream->queue(), ^() {
stream->synchronize(SyncType::COMMIT_AND_WAIT);
});
int64_t total_nonzero = at::count_nonzero(self).item<int64_t>();


@ -923,7 +923,7 @@ std::tuple<Tensor, Tensor, Tensor> layer_norm_mps(const Tensor& input,
MPSStream* stream = getCurrentMPSStream();
TORCH_CHECK_NOT_IMPLEMENTED(input.scalar_type() != kLong, "Not implemented for long on MPS");
@autoreleasepool {
mps::dispatch_sync_with_rethrow(stream->queue(), ^() {
dispatch_sync_with_rethrow(stream->queue(), ^() {
// which kernel variant to use based on the normalized axis N size
const int N_READS = 4;
auto metalType = mps::scalarToMetalTypeString(input);


@ -192,6 +192,11 @@
CompositeExplicitAutograd: _assert_tensor_metadata
Meta: _assert_tensor_metadata_meta_symint
- func: _async_error(str msg) -> ()
dispatch:
CompositeExplicitAutograd: _async_error
Meta: _async_error_meta
- func: _print(str s) -> ()
dispatch:
CompositeExplicitAutograd: _print
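A hedged sketch of exercising the new operator from Python through the generated binding (the exact error type is an assumption; the CompositeExplicitAutograd kernel simply raises with the given message, while the Meta kernel is a no-op):

```python
import torch

try:
    torch.ops.aten._async_error("simulated asynchronous failure")
except RuntimeError as e:
    print(e)  # expected to contain "simulated asynchronous failure"
```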


@ -47,6 +47,7 @@
#include <c10/macros/Macros.h>
#include <thrust/copy.h>
#include <thrust/device_ptr.h>
#include <thrust/distance.h>
#include <thrust/for_each.h>
#include <thrust/functional.h>
#include <thrust/gather.h>


@ -1,191 +1,3 @@
#pragma once
#include <ATen/xpu/XPUContext.h>
#include <optional>
namespace at::xpu {
/*
* XPUEvent is a movable, non-copyable wrapper around a SYCL event. It is
* constructed lazily when first recorded. It has a device, acquired from the
* first recording stream, and later streams that record the event must match
* that device.
*
* Currently, XPUEvent does NOT support exporting an event to another process
* via inter-process communication (IPC), so event handles cannot be shared
* across processes. This could impact applications that rely on cross-process
* synchronization and communication.
*/
struct TORCH_XPU_API XPUEvent {
// Constructors
XPUEvent(bool enable_timing = false) noexcept
: enable_timing_{enable_timing} {}
~XPUEvent() {
if (isCreated()) {
const c10::impl::PyInterpreter* interp = c10::impl::GPUTrace::get_trace();
if (C10_UNLIKELY(interp)) {
(*interp)->trace_gpu_event_deletion(
at::kXPU, reinterpret_cast<uintptr_t>(event_.get()));
}
}
}
XPUEvent(const XPUEvent&) = delete;
XPUEvent& operator=(const XPUEvent&) = delete;
XPUEvent(XPUEvent&& other) = default;
XPUEvent& operator=(XPUEvent&& other) = default;
operator sycl::event&() const {
return event();
}
std::optional<at::Device> device() const {
if (isCreated()) {
return at::Device(at::kXPU, device_index_);
} else {
return std::nullopt;
}
}
inline bool isCreated() const {
return (event_.get() != nullptr);
}
DeviceIndex device_index() const {
return device_index_;
}
sycl::event& event() const {
return *event_;
}
bool query() const {
using namespace sycl::info;
if (!isCreated()) {
return true;
}
return event().get_info<event::command_execution_status>() ==
event_command_status::complete;
}
void record() {
record(getCurrentXPUStream());
}
void recordOnce(const XPUStream& stream) {
if (!isCreated()) {
record(stream);
}
}
void record(const XPUStream& stream) {
if (!isCreated()) {
device_index_ = stream.device_index();
assignEvent(stream.queue());
const c10::impl::PyInterpreter* interp = c10::impl::GPUTrace::get_trace();
if (C10_UNLIKELY(interp)) {
(*interp)->trace_gpu_event_creation(
at::kXPU, reinterpret_cast<uintptr_t>(event_.get()));
}
} else {
TORCH_CHECK(
device_index_ == stream.device_index(),
"Event device ",
device_index_,
" does not match recording stream's device ",
stream.device_index(),
".");
reassignEvent(stream.queue());
}
const c10::impl::PyInterpreter* interp = c10::impl::GPUTrace::get_trace();
if (C10_UNLIKELY(interp)) {
(*interp)->trace_gpu_event_record(
at::kXPU,
reinterpret_cast<uintptr_t>(event_.get()),
reinterpret_cast<uintptr_t>(&stream.queue()));
}
}
void block(const XPUStream& stream) {
if (isCreated()) {
std::vector<sycl::event> event_list{event()};
// Make this stream wait until event_ is completed.
stream.queue().ext_oneapi_submit_barrier(event_list);
const c10::impl::PyInterpreter* interp = c10::impl::GPUTrace::get_trace();
if (C10_UNLIKELY(interp)) {
(*interp)->trace_gpu_event_wait(
at::kXPU,
reinterpret_cast<uintptr_t>(event_.get()),
reinterpret_cast<uintptr_t>(&stream.queue()));
}
}
}
double elapsed_time(const XPUEvent& other) const {
TORCH_CHECK(
isCreated() && other.isCreated(),
"Both events must be recorded before calculating elapsed time.");
TORCH_CHECK(
query() && other.query(),
"Both events must be completed before calculating elapsed time.");
TORCH_CHECK(
enable_timing_ && other.enable_timing_,
"Both events must be created with argument 'enable_timing=True'.");
#if SYCL_COMPILER_VERSION < 20250000
TORCH_CHECK_NOT_IMPLEMENTED(
false,
"elapsed_time of XPUEvent requires PyTorch to be built with SYCL compiler version 2025.0.0 or newer.");
#endif
using namespace sycl::info::event_profiling;
// Block until both of the recorded events are completed.
uint64_t end_time_ns = other.event().get_profiling_info<command_end>();
uint64_t start_time_ns = event().get_profiling_info<command_end>();
// Return the elapsed time in milliseconds.
return 1e-6 *
(static_cast<double>(end_time_ns) - static_cast<double>(start_time_ns));
}
void synchronize() const {
if (isCreated()) {
const c10::impl::PyInterpreter* interp = c10::impl::GPUTrace::get_trace();
if (C10_UNLIKELY(interp)) {
(*interp)->trace_gpu_event_synchronization(
at::kXPU, reinterpret_cast<uintptr_t>(event_.get()));
}
event().wait_and_throw();
}
}
private:
void assignEvent(sycl::queue& queue) {
#if SYCL_COMPILER_VERSION >= 20250000
if (enable_timing_) {
event_ = std::make_unique<sycl::event>(
sycl::ext::oneapi::experimental::submit_profiling_tag(queue));
} else {
event_ = std::make_unique<sycl::event>(queue.ext_oneapi_submit_barrier());
}
#else
event_ = std::make_unique<sycl::event>(queue.ext_oneapi_submit_barrier());
#endif
}
void reassignEvent(sycl::queue& queue) {
event_.reset();
assignEvent(queue);
}
bool enable_timing_ = false;
DeviceIndex device_index_ = -1;
// Only need to track the last event, as events in an in-order queue are
// executed sequentially.
std::unique_ptr<sycl::event> event_;
};
} // namespace at::xpu
#include <c10/xpu/XPUEvent.h>
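The ATen header now just re-exports the c10 implementation, so call sites are unchanged. As a hedged Python-level usage sketch (assuming an XPU device is available and that `torch.xpu.Event` is backed by this class), the timing constraints documented above look like:

```python
import torch

if torch.xpu.is_available():
    start = torch.xpu.Event(enable_timing=True)  # timing must be enabled on both events
    end = torch.xpu.Event(enable_timing=True)

    a = torch.randn(1024, 1024, device="xpu")
    start.record()
    b = a @ a
    end.record()

    end.synchronize()                      # both events must complete...
    print(start.elapsed_time(end), "ms")   # ...before elapsed_time is valid
```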


@ -50,7 +50,7 @@ nfnet_l0,pass,7
repvgg_a2,fail_accuracy,7
repvgg_a2,pass,7



@ -952,7 +952,7 @@ def latency_experiment_summary(suite_name, args, model, timings, **kwargs):
first_fields.append(kwargs["tag"])
headers = first_headers + ["speedup", "abs_latency"]
row = first_fields + [float(speedup), median[1] * 1000]
msg = f"{speedup:.3f}x"
msg = f"{median[0] * 1000} ms, {median[1] * 1000} ms, {speedup:.3f}x"
if args.baseline:
headers.extend(
[
@ -1010,7 +1010,7 @@ def latency_experiment_summary(suite_name, args, model, timings, **kwargs):
# Hypothetically you can use this from other places, but it's currently
# inaccessible, and when this assert fails you need to update the
# event_name here to account for the other cases you are using this
assert args.quantization is not None
assert any([args.quantization, args.optimus])
output_signpost(
dict(zip(headers, row)),
args,
@ -2288,11 +2288,9 @@ class BenchmarkRunner:
)
):
is_same = False
except Exception as e:
except Exception:
# Sometimes torch.allclose may throw RuntimeError
exception_string = str(e)
accuracy_status = f"fail_exception: {exception_string}"
return record_status(accuracy_status, dynamo_start_stats=start_stats)
is_same = False
if not is_same:
accuracy_status = "eager_two_runs_differ"
@ -2409,11 +2407,9 @@ class BenchmarkRunner:
force_max_multiplier=force_max_multiplier,
):
is_same = False
except Exception as e:
except Exception:
# Sometimes torch.allclose may throw RuntimeError
exception_string = str(e)
accuracy_status = f"fail_exception: {exception_string}"
return record_status(accuracy_status, dynamo_start_stats=start_stats)
is_same = False
if not is_same:
if self.args.skip_accuracy_check:
@ -2587,6 +2583,9 @@ class BenchmarkRunner:
**experiment_kwargs,
)
# reset dynamo
torch._dynamo.reset()
if self.args.export_aot_inductor:
optimized_model_iter_fn = optimize_ctx
else:
@ -2950,7 +2949,7 @@ class BenchmarkRunner:
status = self.check_tolerance(name, model, example_inputs, optimize_ctx)
print(status)
elif self.args.performance:
if self.args.backend == "torchao":
if self.args.backend in ["torchao", "optimus"]:
status = self.run_performance_test_non_alternate(
name, model, example_inputs, optimize_ctx, experiment, tag
)
@ -3526,6 +3525,12 @@ def parse_args(args=None):
action="store_true",
help="Measure speedup with TorchInductor",
)
group.add_argument(
"--optimus",
choices=["vertical_opt", "horizontal_opt", "all"],
default=None,
help="Measure speedup of Optimus with TorchInductor baseline",
)
group.add_argument(
"--quantization",
choices=[
@ -3783,6 +3788,9 @@ def run(runner, args, original_dir=None):
if args.inductor:
assert args.backend is None
args.backend = "inductor"
if args.optimus:
assert args.backend is None
args.backend = "optimus"
if args.quantization:
assert args.backend is None
args.backend = "torchao"
@ -4067,10 +4075,22 @@ def run(runner, args, original_dir=None):
runner.model_iter_fn = model_iter_fn_and_mark_step
optimize_ctx = torchao_optimize_ctx(args.quantization)
elif args.backend == "optimus":
from .optimus import get_baseline_ctx, get_optimus_optimize_ctx
baseline_ctx = get_baseline_ctx(
nopython=args.nopython, inductor_compile_mode=args.inductor_compile_mode
)
runner.model_iter_fn = baseline_ctx(runner.model_iter_fn)
optimize_ctx = get_optimus_optimize_ctx(
args.optimus, args.nopython, args.inductor_compile_mode
)
else:
optimize_ctx = torch._dynamo.optimize(args.backend, nopython=args.nopython)
experiment = (
speedup_experiment if args.backend != "torchao" else latency_experiment
speedup_experiment
if args.backend not in ["torchao", "optimus"]
else latency_experiment
)
if args.accuracy:
output_filename = f"accuracy_{args.backend}.csv"
@ -4091,7 +4111,12 @@ def run(runner, args, original_dir=None):
if args.only in runner.disable_cudagraph_models:
args.disable_cudagraphs = True
if args.inductor or args.backend == "inductor" or args.export_aot_inductor:
if (
args.inductor
or args.backend == "inductor"
or args.export_aot_inductor
or args.backend == "optimus"
):
inductor_config.triton.cudagraphs = not args.disable_cudagraphs
inductor_config.triton.persistent_reductions = (
not args.disable_persistent_reductions


@ -0,0 +1,62 @@
import functools
import torch
def get_baseline_ctx(nopython, inductor_compile_mode):
return functools.partial(
torch.compile,
backend="inductor",
fullgraph=nopython,
mode=inductor_compile_mode,
)
def get_optimus_optimize_ctx(config, nopython, inductor_compile_mode):
if config == "vertical_opt":
optimus_inductor_config = {
"pre_grad_fusion_options": {
"normalization_pass": {},
"merge_splits_pass": {},
"split_cat_pass": {},
"unbind_stack_pass": {},
"unbind_cat_to_view_pass": {},
}
}
elif config == "horizontal_opt":
optimus_inductor_config = {
"pre_grad_fusion_options": {
"normalization_pass": {},
"batch_linear": {},
"batch_layernorm": {},
},
}
elif config == "all":
optimus_inductor_config = {
"pre_grad_fusion_options": {
"normalization_pass": {},
"batch_linear": {},
"batch_layernorm": {},
"merge_splits_pass": {},
"split_cat_pass": {},
"unbind_stack_pass": {},
"unbind_cat_to_view_pass": {},
},
}
else:
raise RuntimeError(f"Unknown optimus config: {config}")
def _inner(fn):
if "pre_grad_fusion_options" in optimus_inductor_config:
torch._inductor.config.pre_grad_fusion_options = optimus_inductor_config[
"pre_grad_fusion_options"
]
if "post_grad_fusion_options" in optimus_inductor_config:
torch._inductor.config.post_grad_fusion_options = optimus_inductor_config[
"post_grad_fusion_options"
]
return torch.compile(
fn, backend="inductor", fullgraph=nopython, mode=inductor_compile_mode
)
return _inner


@ -2,6 +2,7 @@ import csv
import os
import re
import sys
from pathlib import Path
# This script takes the logs produced by the benchmark scripts (e.g.,
@ -15,8 +16,7 @@ import sys
# This script is not very well written, feel free to rewrite it as necessary
assert len(sys.argv) == 2
full_log = open(sys.argv[1]).read()
full_log = Path(sys.argv[1]).read_text()
# If the log contains a gist URL, extract it so we can include it in the CSV
gist_url = ""


@ -484,24 +484,106 @@ PyTorch,sum,sum_R256_V512_dim0_contiguousTrue_cpu,short,False,50.954394,0.000000
PyTorch,sum,sum_R256_V512_dim0_contiguousFalse_cpu,short,False,57.957757,0.000000
PyTorch,sum,sum_R256_V512_dim1_contiguousTrue_cpu,short,False,53.592068,0.000000
PyTorch,sum,sum_R256_V512_dim1_contiguousFalse_cpu,short,False,51.339726,0.000000
PyTorch,FloatToHalfTensorConversionBenchmark,FloatToHalfTensorConversionBenchmark_M8_N16_cpu,short,False,7.040985,0.000000
PyTorch,FloatToHalfTensorConversionBenchmark,FloatToHalfTensorConversionBenchmark_M8_N64_cpu,short,False,7.168604,0.000000
PyTorch,FloatToHalfTensorConversionBenchmark,FloatToHalfTensorConversionBenchmark_M8_N128_cpu,short,False,7.434442,0.000000
PyTorch,FloatToHalfTensorConversionBenchmark,FloatToHalfTensorConversionBenchmark_M16_N16_cpu,short,False,7.078318,0.000000
PyTorch,FloatToHalfTensorConversionBenchmark,FloatToHalfTensorConversionBenchmark_M16_N64_cpu,short,False,7.426670,0.000000
PyTorch,FloatToHalfTensorConversionBenchmark,FloatToHalfTensorConversionBenchmark_M16_N128_cpu,short,False,7.679027,0.000000
PyTorch,FloatToHalfTensorConversionBenchmark,FloatToHalfTensorConversionBenchmark_M32_N16_cpu,short,False,7.281365,0.000000
PyTorch,FloatToHalfTensorConversionBenchmark,FloatToHalfTensorConversionBenchmark_M32_N64_cpu,short,False,7.682783,0.000000
PyTorch,FloatToHalfTensorConversionBenchmark,FloatToHalfTensorConversionBenchmark_M32_N128_cpu,short,False,8.381938,0.000000
PyTorch,HalfToFloatTensorConversionBenchmark,HalfToFloatTensorConversionBenchmark_M8_N16_cpu,short,False,7.039854,0.000000
PyTorch,HalfToFloatTensorConversionBenchmark,HalfToFloatTensorConversionBenchmark_M8_N64_cpu,short,False,7.399855,0.000000
PyTorch,HalfToFloatTensorConversionBenchmark,HalfToFloatTensorConversionBenchmark_M8_N128_cpu,short,False,7.715193,0.000000
PyTorch,HalfToFloatTensorConversionBenchmark,HalfToFloatTensorConversionBenchmark_M16_N16_cpu,short,False,7.255140,0.000000
PyTorch,HalfToFloatTensorConversionBenchmark,HalfToFloatTensorConversionBenchmark_M16_N64_cpu,short,False,7.753522,0.000000
PyTorch,HalfToFloatTensorConversionBenchmark,HalfToFloatTensorConversionBenchmark_M16_N128_cpu,short,False,8.364281,0.000000
PyTorch,HalfToFloatTensorConversionBenchmark,HalfToFloatTensorConversionBenchmark_M32_N16_cpu,short,False,7.476377,0.000000
PyTorch,HalfToFloatTensorConversionBenchmark,HalfToFloatTensorConversionBenchmark_M32_N64_cpu,short,False,8.458564,0.000000
PyTorch,HalfToFloatTensorConversionBenchmark,HalfToFloatTensorConversionBenchmark_M32_N128_cpu,short,False,9.391939,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.bool_dtype_twotorch.bool,short,False,0.927,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.bool_dtype_twotorch.uint8,short,False,6.261,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.bool_dtype_twotorch.int8,short,False,6.351,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.bool_dtype_twotorch.int16,short,False,6.177,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.bool_dtype_twotorch.int32,short,False,6.333,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.bool_dtype_twotorch.int64,short,False,6.588,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.bool_dtype_twotorch.float16,short,False,8.117,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.bool_dtype_twotorch.bfloat16,short,False,9.358,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.bool_dtype_twotorch.float32,short,False,7.844,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.bool_dtype_twotorch.float64,short,False,8.097,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.uint8_dtype_twotorch.bool,short,False,6.159,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.uint8_dtype_twotorch.uint8,short,False,0.926,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.uint8_dtype_twotorch.int8,short,False,6.192,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.uint8_dtype_twotorch.int16,short,False,6.276,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.uint8_dtype_twotorch.int32,short,False,6.461,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.uint8_dtype_twotorch.int64,short,False,6.524,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.uint8_dtype_twotorch.float16,short,False,8.136,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.uint8_dtype_twotorch.bfloat16,short,False,6.854,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.uint8_dtype_twotorch.float32,short,False,6.446,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.uint8_dtype_twotorch.float64,short,False,6.829,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.int8_dtype_twotorch.bool,short,False,6.088,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.int8_dtype_twotorch.uint8,short,False,6.059,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.int8_dtype_twotorch.int8,short,False,0.922,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.int8_dtype_twotorch.int16,short,False,6.263,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.int8_dtype_twotorch.int32,short,False,6.330,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.int8_dtype_twotorch.int64,short,False,6.688,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.int8_dtype_twotorch.float16,short,False,8.176,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.int8_dtype_twotorch.bfloat16,short,False,6.959,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.int8_dtype_twotorch.float32,short,False,6.430,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.int8_dtype_twotorch.float64,short,False,6.818,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.int16_dtype_twotorch.bool,short,False,6.350,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.int16_dtype_twotorch.uint8,short,False,6.221,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.int16_dtype_twotorch.int8,short,False,6.193,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.int16_dtype_twotorch.int16,short,False,0.922,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.int16_dtype_twotorch.int32,short,False,6.263,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.int16_dtype_twotorch.int64,short,False,6.525,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.int16_dtype_twotorch.float16,short,False,7.960,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.int16_dtype_twotorch.bfloat16,short,False,6.801,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.int16_dtype_twotorch.float32,short,False,6.594,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.int16_dtype_twotorch.float64,short,False,7.089,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.int32_dtype_twotorch.bool,short,False,6.498,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.int32_dtype_twotorch.uint8,short,False,6.358,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.int32_dtype_twotorch.int8,short,False,6.390,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.int32_dtype_twotorch.int16,short,False,6.415,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.int32_dtype_twotorch.int32,short,False,0.925,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.int32_dtype_twotorch.int64,short,False,6.657,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.int32_dtype_twotorch.float16,short,False,7.954,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.int32_dtype_twotorch.bfloat16,short,False,6.930,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.int32_dtype_twotorch.float32,short,False,6.737,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.int32_dtype_twotorch.float64,short,False,6.948,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.int64_dtype_twotorch.bool,short,False,6.757,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.int64_dtype_twotorch.uint8,short,False,6.402,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.int64_dtype_twotorch.int8,short,False,6.550,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.int64_dtype_twotorch.int16,short,False,6.518,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.int64_dtype_twotorch.int32,short,False,6.766,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.int64_dtype_twotorch.int64,short,False,0.929,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.int64_dtype_twotorch.float16,short,False,8.557,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.int64_dtype_twotorch.bfloat16,short,False,9.045,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.int64_dtype_twotorch.float32,short,False,7.672,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.int64_dtype_twotorch.float64,short,False,7.276,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.float16_dtype_twotorch.bool,short,False,6.414,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.float16_dtype_twotorch.uint8,short,False,7.736,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.float16_dtype_twotorch.int8,short,False,7.889,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.float16_dtype_twotorch.int16,short,False,8.170,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.float16_dtype_twotorch.int32,short,False,7.783,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.float16_dtype_twotorch.int64,short,False,7.743,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.float16_dtype_twotorch.float16,short,False,0.927,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.float16_dtype_twotorch.bfloat16,short,False,7.018,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.float16_dtype_twotorch.float32,short,False,8.428,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.float16_dtype_twotorch.float64,short,False,6.767,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.bfloat16_dtype_twotorch.bool,short,False,6.479,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.bfloat16_dtype_twotorch.uint8,short,False,7.827,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.bfloat16_dtype_twotorch.int8,short,False,6.450,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.bfloat16_dtype_twotorch.int16,short,False,6.320,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.bfloat16_dtype_twotorch.int32,short,False,6.385,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.bfloat16_dtype_twotorch.int64,short,False,8.119,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.bfloat16_dtype_twotorch.float16,short,False,8.063,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.bfloat16_dtype_twotorch.bfloat16,short,False,0.925,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.bfloat16_dtype_twotorch.float32,short,False,8.629,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.bfloat16_dtype_twotorch.float64,short,False,6.638,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.float32_dtype_twotorch.bool,short,False,6.425,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.float32_dtype_twotorch.uint8,short,False,7.803,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.float32_dtype_twotorch.int8,short,False,6.502,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.float32_dtype_twotorch.int16,short,False,6.429,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.float32_dtype_twotorch.int32,short,False,6.549,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.float32_dtype_twotorch.int64,short,False,7.749,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.float32_dtype_twotorch.float16,short,False,7.301,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.float32_dtype_twotorch.bfloat16,short,False,7.682,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.float32_dtype_twotorch.float32,short,False,0.930,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.float32_dtype_twotorch.float64,short,False,6.738,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.float64_dtype_twotorch.bool,short,False,6.798,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.float64_dtype_twotorch.uint8,short,False,6.506,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.float64_dtype_twotorch.int8,short,False,6.494,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.float64_dtype_twotorch.int16,short,False,6.668,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.float64_dtype_twotorch.int32,short,False,6.696,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.float64_dtype_twotorch.int64,short,False,7.115,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.float64_dtype_twotorch.float16,short,False,7.910,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.float64_dtype_twotorch.bfloat16,short,False,7.410,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.float64_dtype_twotorch.float32,short,False,6.868,0.000000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.float64_dtype_twotorch.float64,short,False,0.924,0.000000
PyTorch,addcmul,addcmul_M1_N2_cpu_dtypetorch.float32,short,False,4.461410,0.000000
PyTorch,addcmul,addcmul_M1_N2_cpu_dtypetorch.bfloat16,short,False,4.560082,0.000000
PyTorch,addcmul,addcmul_M32_N64_cpu_dtypetorch.float32,short,False,5.141248,0.000000

View File

@@ -4,74 +4,84 @@ import torch
tensor_conversion_short_configs = op_bench.cross_product_configs(
M=(
8,
16,
32,
),
N=(
16,
64,
128,
),
M=[32],
N=[128],
device=["cpu", "cuda"],
dtype_one=[
torch.bool,
torch.uint8,
torch.int8,
torch.int16,
torch.int32,
torch.int64,
torch.half,
torch.bfloat16,
torch.float,
torch.double,
],
dtype_two=[
torch.bool,
torch.uint8,
torch.int8,
torch.int16,
torch.int32,
torch.int64,
torch.half,
torch.bfloat16,
torch.float,
torch.double,
],
tags=["short"],
)
tensor_conversion_long_configs = op_bench.cross_product_configs(
M=(
64,
128,
256,
512,
),
N=(
256,
512,
1024,
2048,
),
M=[1024],
N=[1024],
device=["cpu", "cuda"],
dtype_one=[
torch.bool,
torch.uint8,
torch.int8,
torch.int16,
torch.int32,
torch.int64,
torch.half,
torch.bfloat16,
torch.float,
torch.double,
],
dtype_two=[
torch.bool,
torch.uint8,
torch.int8,
torch.int16,
torch.int32,
torch.int64,
torch.half,
torch.bfloat16,
torch.float,
torch.double,
],
tags=["long"],
)
class FloatToHalfTensorConversionBenchmark(op_bench.TorchBenchmarkBase):
def init(self, M, N, device):
class TensorConversionBenchmark(op_bench.TorchBenchmarkBase):
def init(self, M, N, dtype_one, dtype_two, device):
self.inputs = {
"input": torch.rand(
M, N, device=device, requires_grad=False, dtype=torch.float
)
).to(dtype=dtype_one)
}
self.dtype_one = dtype_one
self.dtype_two = dtype_two
def forward(self, input):
return input.to(torch.half)
return input.to(dtype=self.dtype_two)
class HalfToFloatTensorConversionBenchmark(op_bench.TorchBenchmarkBase):
def init(self, M, N, device):
self.inputs = {
"input": torch.rand(
M, N, device=device, requires_grad=False, dtype=torch.half
)
}
def forward(self, input):
return input.to(torch.float)
op_bench.generate_pt_test(
tensor_conversion_short_configs, FloatToHalfTensorConversionBenchmark
)
op_bench.generate_pt_test(
tensor_conversion_long_configs, FloatToHalfTensorConversionBenchmark
)
op_bench.generate_pt_test(
tensor_conversion_short_configs, HalfToFloatTensorConversionBenchmark
)
op_bench.generate_pt_test(
tensor_conversion_long_configs, HalfToFloatTensorConversionBenchmark
)
op_bench.generate_pt_test(tensor_conversion_short_configs, TensorConversionBenchmark)
op_bench.generate_pt_test(tensor_conversion_long_configs, TensorConversionBenchmark)
if __name__ == "__main__":
op_bench.benchmark_runner.main()
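For a quick sanity check of any single conversion row above without the op_bench harness, here is a minimal sketch using torch.utils.benchmark; the 32x128 CPU shape and the int64 to bfloat16 pair mirror one case of the short config, but the Timer scaffolding is illustrative and not part of this diff.

```python
import torch
from torch.utils.benchmark import Timer

# One (dtype_one, dtype_two) case of the short config: a 32x128 CPU tensor
# is created as float, cast to dtype_one, and the timed statement converts
# it to dtype_two via Tensor.to(), as TensorConversionBenchmark.forward does.
x = torch.rand(32, 128, device="cpu", dtype=torch.float).to(dtype=torch.int64)

timer = Timer(
    stmt="x.to(dtype=torch.bfloat16)",
    globals={"x": x, "torch": torch},
)
# blocked_autorange() reports a median time per call; it is only roughly
# comparable to the Execution Time column, since harness and units differ.
print(timer.blocked_autorange())
```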

View File

@@ -349,24 +349,106 @@ PyTorch,sum,sum_R256_V512_dim0_contiguousTrue_cpu,short,FALSE,12.5841
PyTorch,sum,sum_R256_V512_dim0_contiguousFALSE_cpu,short,FALSE,20.8765
PyTorch,sum,sum_R256_V512_dim1_contiguousTrue_cpu,short,FALSE,15.4414
PyTorch,sum,sum_R256_V512_dim1_contiguousFALSE_cpu,short,FALSE,15.3287
PyTorch,FloatToHalfTensorConversionBenchmark,FloatToHalfTensorConversionBenchmark_M8_N16_cpu,short,FALSE,5.0499
PyTorch,FloatToHalfTensorConversionBenchmark,FloatToHalfTensorConversionBenchmark_M8_N64_cpu,short,FALSE,5.3229
PyTorch,FloatToHalfTensorConversionBenchmark,FloatToHalfTensorConversionBenchmark_M8_N128_cpu,short,FALSE,5.4418
PyTorch,FloatToHalfTensorConversionBenchmark,FloatToHalfTensorConversionBenchmark_M16_N16_cpu,short,FALSE,5.0868
PyTorch,FloatToHalfTensorConversionBenchmark,FloatToHalfTensorConversionBenchmark_M16_N64_cpu,short,FALSE,5.4495
PyTorch,FloatToHalfTensorConversionBenchmark,FloatToHalfTensorConversionBenchmark_M16_N128_cpu,short,FALSE,5.5578
PyTorch,FloatToHalfTensorConversionBenchmark,FloatToHalfTensorConversionBenchmark_M32_N16_cpu,short,FALSE,5.2631
PyTorch,FloatToHalfTensorConversionBenchmark,FloatToHalfTensorConversionBenchmark_M32_N64_cpu,short,FALSE,5.5646
PyTorch,FloatToHalfTensorConversionBenchmark,FloatToHalfTensorConversionBenchmark_M32_N128_cpu,short,FALSE,5.7898
PyTorch,HalfToFloatTensorConversionBenchmark,HalfToFloatTensorConversionBenchmark_M8_N16_cpu,short,FALSE,5.0228
PyTorch,HalfToFloatTensorConversionBenchmark,HalfToFloatTensorConversionBenchmark_M8_N64_cpu,short,FALSE,5.3692
PyTorch,HalfToFloatTensorConversionBenchmark,HalfToFloatTensorConversionBenchmark_M8_N128_cpu,short,FALSE,5.4006
PyTorch,HalfToFloatTensorConversionBenchmark,HalfToFloatTensorConversionBenchmark_M16_N16_cpu,short,FALSE,5.1107
PyTorch,HalfToFloatTensorConversionBenchmark,HalfToFloatTensorConversionBenchmark_M16_N64_cpu,short,FALSE,5.4119
PyTorch,HalfToFloatTensorConversionBenchmark,HalfToFloatTensorConversionBenchmark_M16_N128_cpu,short,FALSE,5.5583
PyTorch,HalfToFloatTensorConversionBenchmark,HalfToFloatTensorConversionBenchmark_M32_N16_cpu,short,FALSE,5.3818
PyTorch,HalfToFloatTensorConversionBenchmark,HalfToFloatTensorConversionBenchmark_M32_N64_cpu,short,FALSE,5.5742
PyTorch,HalfToFloatTensorConversionBenchmark,HalfToFloatTensorConversionBenchmark_M32_N128_cpu,short,FALSE,6.8414
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.bool_dtype_twotorch.bool,short,False,0.797
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.bool_dtype_twotorch.uint8,short,False,6.071
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.bool_dtype_twotorch.int8,short,False,6.031
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.bool_dtype_twotorch.int16,short,False,6.243
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.bool_dtype_twotorch.int32,short,False,7.231
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.bool_dtype_twotorch.int64,short,False,7.791
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.bool_dtype_twotorch.float16,short,False,12.661
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.bool_dtype_twotorch.bfloat16,short,False,11.225
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.bool_dtype_twotorch.float32,short,False,9.772
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.bool_dtype_twotorch.float64,short,False,9.872
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.uint8_dtype_twotorch.bool,short,False,6.033
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.uint8_dtype_twotorch.uint8,short,False,0.781
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.uint8_dtype_twotorch.int8,short,False,6.060
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.uint8_dtype_twotorch.int16,short,False,6.180
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.uint8_dtype_twotorch.int32,short,False,7.258
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.uint8_dtype_twotorch.int64,short,False,7.758
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.uint8_dtype_twotorch.float16,short,False,10.504
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.uint8_dtype_twotorch.bfloat16,short,False,6.749
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.uint8_dtype_twotorch.float32,short,False,7.679
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.uint8_dtype_twotorch.float64,short,False,7.797
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.int8_dtype_twotorch.bool,short,False,6.019
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.int8_dtype_twotorch.uint8,short,False,6.079
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.int8_dtype_twotorch.int8,short,False,0.785
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.int8_dtype_twotorch.int16,short,False,6.188
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.int8_dtype_twotorch.int32,short,False,7.288
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.int8_dtype_twotorch.int64,short,False,7.770
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.int8_dtype_twotorch.float16,short,False,10.466
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.int8_dtype_twotorch.bfloat16,short,False,6.676
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.int8_dtype_twotorch.float32,short,False,7.736
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.int8_dtype_twotorch.float64,short,False,7.780
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.int16_dtype_twotorch.bool,short,False,6.130
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.int16_dtype_twotorch.uint8,short,False,6.221
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.int16_dtype_twotorch.int8,short,False,6.101
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.int16_dtype_twotorch.int16,short,False,0.791
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.int16_dtype_twotorch.int32,short,False,6.254
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.int16_dtype_twotorch.int64,short,False,7.733
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.int16_dtype_twotorch.float16,short,False,10.562
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.int16_dtype_twotorch.bfloat16,short,False,6.704
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.int16_dtype_twotorch.float32,short,False,7.819
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.int16_dtype_twotorch.float64,short,False,8.276
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.int32_dtype_twotorch.bool,short,False,6.361
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.int32_dtype_twotorch.uint8,short,False,6.364
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.int32_dtype_twotorch.int8,short,False,6.309
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.int32_dtype_twotorch.int16,short,False,6.362
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.int32_dtype_twotorch.int32,short,False,0.791
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.int32_dtype_twotorch.int64,short,False,7.746
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.int32_dtype_twotorch.float16,short,False,9.462
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.int32_dtype_twotorch.bfloat16,short,False,6.678
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.int32_dtype_twotorch.float32,short,False,7.827
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.int32_dtype_twotorch.float64,short,False,8.200
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.int64_dtype_twotorch.bool,short,False,6.925
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.int64_dtype_twotorch.uint8,short,False,6.947
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.int64_dtype_twotorch.int8,short,False,6.962
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.int64_dtype_twotorch.int16,short,False,6.906
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.int64_dtype_twotorch.int32,short,False,7.664
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.int64_dtype_twotorch.int64,short,False,0.782
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.int64_dtype_twotorch.float16,short,False,10.528
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.int64_dtype_twotorch.bfloat16,short,False,10.123
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.int64_dtype_twotorch.float32,short,False,9.234
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.int64_dtype_twotorch.float64,short,False,8.694
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.float16_dtype_twotorch.bool,short,False,12.653
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.float16_dtype_twotorch.uint8,short,False,9.348
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.float16_dtype_twotorch.int8,short,False,8.774
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.float16_dtype_twotorch.int16,short,False,9.063
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.float16_dtype_twotorch.int32,short,False,10.012
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.float16_dtype_twotorch.int64,short,False,13.641
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.float16_dtype_twotorch.float16,short,False,0.788
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.float16_dtype_twotorch.bfloat16,short,False,13.757
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.float16_dtype_twotorch.float32,short,False,7.170
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.float16_dtype_twotorch.float64,short,False,12.511
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.bfloat16_dtype_twotorch.bool,short,False,6.516
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.bfloat16_dtype_twotorch.uint8,short,False,8.539
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.bfloat16_dtype_twotorch.int8,short,False,6.483
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.bfloat16_dtype_twotorch.int16,short,False,6.468
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.bfloat16_dtype_twotorch.int32,short,False,7.752
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.bfloat16_dtype_twotorch.int64,short,False,9.868
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.bfloat16_dtype_twotorch.float16,short,False,10.556
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.bfloat16_dtype_twotorch.bfloat16,short,False,0.792
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.bfloat16_dtype_twotorch.float32,short,False,7.577
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.bfloat16_dtype_twotorch.float64,short,False,8.267
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.float32_dtype_twotorch.bool,short,False,6.819
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.float32_dtype_twotorch.uint8,short,False,7.715
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.float32_dtype_twotorch.int8,short,False,6.754
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.float32_dtype_twotorch.int16,short,False,6.825
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.float32_dtype_twotorch.int32,short,False,7.790
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.float32_dtype_twotorch.int64,short,False,9.219
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.float32_dtype_twotorch.float16,short,False,5.977
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.float32_dtype_twotorch.bfloat16,short,False,7.069
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.float32_dtype_twotorch.float32,short,False,0.794
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.float32_dtype_twotorch.float64,short,False,8.301
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.float64_dtype_twotorch.bool,short,False,7.401
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.float64_dtype_twotorch.uint8,short,False,7.843
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.float64_dtype_twotorch.int8,short,False,7.117
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.float64_dtype_twotorch.int16,short,False,7.170
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.float64_dtype_twotorch.int32,short,False,8.000
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.float64_dtype_twotorch.int64,short,False,9.284
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.float64_dtype_twotorch.float16,short,False,7.179
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.float64_dtype_twotorch.bfloat16,short,False,7.645
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.float64_dtype_twotorch.float32,short,False,7.988
PyTorch,TensorConversionBenchmark,TensorConversionBenchmark_M32_N128_cpu_dtype_onetorch.float64_dtype_twotorch.float64,short,False,0.792
PyTorch,relu,"relu_dims(3,4,5)_contigFALSE_inplaceFALSE_dtypetorch.quint8",short,FALSE,9.4657
PyTorch,relu,"relu_dims(3,4,5)_contigFALSE_inplaceFALSE_dtypetorch.qint8",short,FALSE,9.4625
PyTorch,relu,"relu_dims(3,4,5)_contigFALSE_inplaceFALSE_dtypetorch.qint32",short,FALSE,9.4165

View File

@@ -83,10 +83,13 @@ if __name__ == "__main__":
if args.outfile == "stdout":
outfile = sys.stdout
need_close = False
elif args.outfile == "stderr":
outfile = sys.stderr
need_close = False
else:
outfile = open(args.outfile, "a")
need_close = True
test_count = args.test_count
m = args.m
@@ -147,3 +150,5 @@ if __name__ == "__main__":
time,
file=outfile,
)
if need_close:
outfile.close()
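This hunk, and the two similar ones that follow, route benchmark output to stdout, stderr, or an append-mode file and close only the handles the script itself opened. A small self-contained sketch of the resulting pattern (the argparse setup is illustrative; only the outfile branching mirrors the diff):

```python
import argparse
import sys

parser = argparse.ArgumentParser()
parser.add_argument("--outfile", default="stdout")
args = parser.parse_args()

# Reuse the interpreter's stdout/stderr without closing them; only a real
# file path is opened (append mode) and needs an explicit close afterwards.
if args.outfile == "stdout":
    outfile, need_close = sys.stdout, False
elif args.outfile == "stderr":
    outfile, need_close = sys.stderr, False
else:
    outfile, need_close = open(args.outfile, "a"), True

try:
    print("benchmark results would be written here", file=outfile)
finally:
    if need_close:
        outfile.close()
```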

View File

@@ -82,10 +82,13 @@ if __name__ == "__main__":
if args.outfile == "stdout":
outfile = sys.stdout
need_close = False
elif args.outfile == "stderr":
outfile = sys.stderr
need_close = False
else:
outfile = open(args.outfile, "a")
need_close = True
test_count = args.test_count
m = args.m
@@ -132,3 +135,5 @@ if __name__ == "__main__":
time_csr,
file=outfile,
)
if need_close:
outfile.close()

View File

@@ -179,10 +179,13 @@ if __name__ == "__main__":
if args.outfile == "stdout":
outfile = sys.stdout
need_close = False
elif args.outfile == "stderr":
outfile = sys.stderr
need_close = False
else:
outfile = open(args.outfile, "a")
need_close = True
ops = args.ops.split(",")
@@ -434,3 +437,5 @@ if __name__ == "__main__":
if op not in {"bsr_scatter_mm6", "bsr_dense_mm_with_meta"}:
# Break on operations that do not consume parameters
break
if need_close:
outfile.close()

c10/metal/error.h (new file, 111 lines added)
View File

@@ -0,0 +1,111 @@
#pragma once
#include <c10/metal/common.h>
namespace c10 {
namespace metal {
C10_METAL_CONSTEXPR unsigned error_message_count = 30;
struct ErrorMessage {
char file[128];
char func[128];
char message[250];
unsigned int line;
};
struct ErrorMessages {
#ifdef __METAL__
::metal::atomic<unsigned int> count;
#else
unsigned int count;
#endif
ErrorMessage msg[error_message_count];
};
#ifdef __METAL__
namespace detail {
static uint strncpy(device char* dst, constant const char* src, unsigned len) {
uint i = 0;
while (src[i] != 0 && i < len - 1) {
dst[i] = src[i];
i++;
}
dst[i] = 0;
return i;
}
inline uint print_arg(
device char* ptr,
unsigned len,
constant const char* arg) {
return strncpy(ptr, arg, len);
}
// Returns number length as string in base10
static inline uint base10_length(long num) {
uint rc = 1;
if (num < 0) {
num = -num;
rc += 1;
}
while (num > 9) {
num /= 10;
rc++;
}
return rc;
}
// Converts signed integer to string
inline uint print_arg(device char* ptr, unsigned len, long arg) {
const auto arg_len = base10_length(arg);
if (arg_len >= len)
return 0;
if (arg < 0) {
ptr[0] = '-';
arg = -arg;
}
uint idx = 1;
do {
ptr[arg_len - idx] = '0' + (arg % 10);
arg /= 10;
idx++;
} while (arg > 0);
ptr[arg_len] = 0;
return arg_len;
}
template <typename T>
inline void print_args(device char* ptr, unsigned len, T arg) {
print_arg(ptr, len, arg);
}
template <typename T, typename... Args>
inline void print_args(device char* ptr, unsigned len, T arg, Args... args) {
const auto rc = print_arg(ptr, len, arg);
print_args(ptr + rc, len - rc, args...);
}
} // namespace detail
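// Appends one error record to the shared buffer; the slot index is claimed
// atomically, and reports beyond error_message_count are silently dropped.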
template <typename... Args>
static void report_error(
device ErrorMessages* msgs,
constant const char* file,
int line,
constant const char* func,
Args... args) {
const auto idx =
atomic_fetch_add_explicit(&msgs->count, 1, ::metal::memory_order_relaxed);
if (idx >= error_message_count) {
return;
}
device auto* msg = &msgs->msg[idx];
detail::strncpy(msg->file, file, 128);
detail::strncpy(msg->func, func, 128);
detail::print_args(msg->message, 250, args...);
msg->line = line;
}
#define TORCH_REPORT_ERROR(buf, ...) \
::c10::metal::report_error(buf, __FILE__, __LINE__, __func__, __VA_ARGS__)
#endif
} // namespace metal
} // namespace c10
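
The header is compiled for both host and device: when `__METAL__` is defined, kernels append records into a shared `ErrorMessages` buffer via `TORCH_REPORT_ERROR`, while on the host the same struct is a plain POD that can be read back after the kernel finishes. Below is a minimal host-side sketch (not part of this change) of draining that buffer, assuming it lives in shared, CPU-visible storage; the function name and the choice of logging are illustrative only.

```cpp
// Host-side sketch: drain error messages reported by a Metal kernel.
// Assumes `buf` points at an ErrorMessages struct in shared storage that a
// kernel has already written into.
#include <c10/metal/error.h>

#include <algorithm>
#include <cstdio>

void drain_metal_errors(const c10::metal::ErrorMessages* buf) {
  // `count` keeps incrementing even when the buffer is full, so clamp it to
  // the number of slots that actually hold messages.
  const unsigned reported =
      std::min(buf->count, c10::metal::error_message_count);
  for (unsigned i = 0; i < reported; ++i) {
    const auto& msg = buf->msg[i];
    std::fprintf(
        stderr, "%s:%u (%s): %s\n", msg.file, msg.line, msg.func, msg.message);
  }
}
```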

View File

@ -1 +0,0 @@
#include <c10/util/Metaprogramming.h>

View File

@ -1,224 +1 @@
#pragma once
#include <c10/util/TypeList.h>
#include <type_traits>
namespace c10::guts {
/**
* Access information about result type or arguments from a function type.
* Example:
* using A = function_traits<int (float, double)>::return_type // A == int
* using A = function_traits<int (float, double)>::parameter_types::tuple_type
* // A == tuple<float, double>
*/
template <class Func>
struct function_traits {
static_assert(
!std::is_same_v<Func, Func>,
"In function_traits<Func>, Func must be a plain function type.");
};
template <class Result, class... Args>
struct function_traits<Result(Args...)> {
using func_type = Result(Args...);
using return_type = Result;
using parameter_types = typelist::typelist<Args...>;
static constexpr auto number_of_parameters = sizeof...(Args);
};
/**
* infer_function_traits: creates a `function_traits` type for a simple
* function (pointer) or functor (lambda/struct). Currently does not support
* class methods.
*/
template <typename Functor>
struct infer_function_traits {
using type = function_traits<
c10::guts::detail::strip_class_t<decltype(&Functor::operator())>>;
};
template <typename Result, typename... Args>
struct infer_function_traits<Result (*)(Args...)> {
using type = function_traits<Result(Args...)>;
};
template <typename Result, typename... Args>
struct infer_function_traits<Result(Args...)> {
using type = function_traits<Result(Args...)>;
};
template <typename T>
using infer_function_traits_t = typename infer_function_traits<T>::type;
/**
* make_function_traits: creates a `function_traits` type given a Return type
* and a typelist of Argument types
*
* Example:
* bool f(int, int);
*
* infer_function_traits_t<f> == make_function_traits_t<bool,
* typelist::typelist<int, int>>
*/
template <typename Result, typename ArgList>
struct make_function_traits {
static_assert(
false_t<ArgList>::value,
"In guts::make_function_traits<Result, TypeList>, the ArgList argument must be typelist<...>.");
};
template <typename Result, typename... Args>
struct make_function_traits<Result, typelist::typelist<Args...>> {
using type = function_traits<Result(Args...)>;
};
template <typename Result, typename ArgList>
using make_function_traits_t =
typename make_function_traits<Result, ArgList>::type;
/**
* make_offset_index_sequence<Start, N>
* Like make_index_sequence<N>, but starting from Start instead of 0.
*
* Example:
* make_offset_index_sequence<10, 3> == std::index_sequence<10, 11, 12>
*/
template <size_t Start, size_t N, size_t... Is>
struct make_offset_index_sequence_impl
: make_offset_index_sequence_impl<Start, N - 1, Start + N - 1, Is...> {
static_assert(
static_cast<int>(Start) >= 0,
"make_offset_index_sequence: Start < 0");
static_assert(static_cast<int>(N) >= 0, "make_offset_index_sequence: N < 0");
};
template <size_t Start, size_t... Is>
struct make_offset_index_sequence_impl<Start, 0, Is...> {
typedef std::index_sequence<Is...> type;
};
template <size_t Start, size_t N>
using make_offset_index_sequence =
typename make_offset_index_sequence_impl<Start, N>::type;
/**
* Use tuple_elements to extract a position-indexed subset of elements
* from the argument tuple into a result tuple.
*
* Example:
* std::tuple<int, const char*, double> t = std::make_tuple(0, "HEY", 2.0);
* std::tuple<int, double> result = tuple_elements(t, std::index_sequence<0,
* 2>());
*/
template <class Tuple, size_t... Is>
constexpr auto tuple_elements(Tuple t, std::index_sequence<Is...> /*unused*/) {
return std::tuple<std::tuple_element_t<Is, Tuple>...>(std::get<Is>(t)...);
}
/**
* Use tuple_take to extract the first or last n elements from the argument
* tuple into a result tuple.
*
* Example:
* std::tuple<int, const char*, double> t = std::make_tuple(0, "HEY", 2.0);
* std::tuple<int, const char*> first_two = tuple_take<decltype(t), 2>(t);
* std::tuple<const char*, double> last_two = tuple_take<decltype(t), -2>(t);
*/
template <class Tuple, int N, class Enable = void>
struct TupleTake {};
template <class Tuple, int N>
struct TupleTake<Tuple, N, std::enable_if_t<N >= 0, void>> {
static auto call(Tuple t) {
constexpr size_t size = std::tuple_size<Tuple>();
static_assert(N <= size, "tuple_take: N > size");
return tuple_elements(t, std::make_index_sequence<N>{});
}
};
template <class Tuple, int N>
struct TupleTake < Tuple,
N, std::enable_if_t<N<0, void>> {
static auto call(Tuple t) {
constexpr size_t size = std::tuple_size<Tuple>();
static_assert(-N <= size, "tuple_take: -N > size");
return tuple_elements(t, make_offset_index_sequence<size + N, -N>{});
}
};
template <class Tuple, int N>
auto tuple_take(Tuple t) {
return TupleTake<Tuple, N>::call(t);
}
/**
* Use tuple_slice to extract a contiguous subtuple from the argument.
*
* Example:
* std::tuple<int, const char*, double, bool> t = std::make_tuple(0,
* "HEY", 2.0, false); std::tuple<int, const char*> middle_two =
* tuple_slice<decltype(t), 1, 2>(t);
*/
template <class Tuple, size_t Start, size_t N>
constexpr auto tuple_slice(Tuple t) {
constexpr size_t size = std::tuple_size<Tuple>();
static_assert(Start + N <= size, "tuple_slice: Start + N > size");
return tuple_elements(t, make_offset_index_sequence<Start, N>{});
}
/**
* Use tuple_map to run a mapping function over a tuple to get a new tuple.
*
* Example 1:
* auto result = tuple_map(std::tuple<int32_t, int32_t, int32_t>(3, 4, 5), []
* (int32_t a) -> int16_t {return a+1;});
* // result == std::tuple<int16_t, int16_t, int16_t>(4, 5, 6)
*
* Example 2:
* struct Mapper {
* std::string operator()(int32_t a) const {
* return std::to_string(a);
* }
* int64_t operator()(const std::string& a) const {
* return atoi(a.c_str());
* }
* };
* auto result = tuple_map(std::tuple<int32_t, std::string>(3, "4"),
* Mapper());
* // result == std::tuple<std::string, int64_t>("3", 4)
*
* Example 3:
* struct A final {
* int32_t func() {
* return 5;
* }
* };
* struct B final {
* std::string func() {
* return "5";
* }
* };
* auto result = tuple_map(std::make_tuple(A(), B()), [] (auto a) { return
* a.func(); });
* // result == std::tuple<int32_t, std::string>(5, "5");
*/
namespace detail {
template <class Mapper, class... Args, size_t... Indices>
auto tuple_map(
// NOLINTNEXTLINE(cppcoreguidelines-rvalue-reference-param-not-moved)
std::tuple<Args...>&& tuple,
const Mapper& mapper,
std::index_sequence<Indices...> /*unused*/) {
return std::tuple<decltype(mapper(std::forward<Args>(std::get<Indices>(
tuple))))...>(mapper(std::forward<Args>(std::get<Indices>(tuple)))...);
}
} // namespace detail
template <class Mapper, class... Args>
auto tuple_map(std::tuple<Args...>&& tuple, const Mapper& mapper) {
return detail::tuple_map(
std::move(tuple), mapper, std::index_sequence_for<Args...>());
}
} // namespace c10::guts
#include <torch/headeronly/util/Metaprogramming.h>

View File

@ -1,515 +1 @@
#pragma once
#include <c10/util/TypeTraits.h>
#include <algorithm>
#include <cstddef>
#include <tuple>
#include <type_traits>
#include <utility>
namespace c10::guts {
template <class... T>
struct false_t : std::false_type {};
template <template <class> class... T>
struct false_higher_t : std::false_type {};
namespace typelist {
/**
* Type holding a list of types for compile time type computations
*/
template <class... Items>
struct typelist final {
public:
typelist() = delete; // not for instantiation
};
/**
* Returns the number of types in a typelist
* Example:
* 3 == size<typelist<int, int, double>>::value
*/
template <class TypeList>
struct size final {
static_assert(
false_t<TypeList>::value,
"In typelist::size<T>, T must be typelist<...>.");
};
template <class... Types>
struct size<typelist<Types...>> final {
static constexpr size_t value = sizeof...(Types);
};
/**
* Transforms a list of types into a tuple holding these types.
* Example:
* std::tuple<int, string> == to_tuple_t<typelist<int, string>>
*/
template <class TypeList>
struct to_tuple final {
static_assert(
false_t<TypeList>::value,
"In typelist::to_tuple<T>, T must be typelist<...>.");
};
template <class... Types>
struct to_tuple<typelist<Types...>> final {
using type = std::tuple<Types...>;
};
template <class TypeList>
using to_tuple_t = typename to_tuple<TypeList>::type;
/**
* Creates a typelist containing the types of a given tuple.
* Example:
* typelist<int, string> == from_tuple_t<std::tuple<int, string>>
*/
template <class Tuple>
struct from_tuple final {
static_assert(
false_t<Tuple>::value,
"In typelist::from_tuple<T>, T must be std::tuple<...>.");
};
template <class... Types>
struct from_tuple<std::tuple<Types...>> final {
using type = typelist<Types...>;
};
template <class Tuple>
using from_tuple_t = typename from_tuple<Tuple>::type;
/**
* Concatenates multiple type lists.
* Example:
* typelist<int, string, int> == concat_t<typelist<int, string>,
* typelist<int>>
*/
template <class... TypeLists>
struct concat final {
static_assert(
false_t<TypeLists...>::value,
"In typelist::concat<T1, ...>, the T arguments each must be typelist<...>.");
};
template <class... Head1Types, class... Head2Types, class... TailLists>
struct concat<typelist<Head1Types...>, typelist<Head2Types...>, TailLists...>
final {
using type =
typename concat<typelist<Head1Types..., Head2Types...>, TailLists...>::
type;
};
template <class... HeadTypes>
struct concat<typelist<HeadTypes...>> final {
using type = typelist<HeadTypes...>;
};
template <>
struct concat<> final {
using type = typelist<>;
};
template <class... TypeLists>
using concat_t = typename concat<TypeLists...>::type;
/**
* Filters the types in a type list by a type trait.
* Examples:
* typelist<int&, const string&&> == filter_t<std::is_reference,
* typelist<void, string, int&, bool, const string&&, int>>
*/
template <template <class> class Condition, class TypeList>
struct filter final {
static_assert(
false_t<TypeList>::value,
"In typelist::filter<Condition, TypeList>, the TypeList argument must be typelist<...>.");
};
template <template <class> class Condition, class Head, class... Tail>
struct filter<Condition, typelist<Head, Tail...>> final {
static_assert(
is_type_condition<Condition>::value,
"In typelist::filter<Condition, TypeList>, the Condition argument must be a condition type trait, i.e. have a static constexpr bool ::value member.");
using type = std::conditional_t<
Condition<Head>::value,
concat_t<
typelist<Head>,
typename filter<Condition, typelist<Tail...>>::type>,
typename filter<Condition, typelist<Tail...>>::type>;
};
template <template <class> class Condition>
struct filter<Condition, typelist<>> final {
static_assert(
is_type_condition<Condition>::value,
"In typelist::filter<Condition, TypeList>, the Condition argument must be a condition type trait, i.e. have a static constexpr bool ::value member.");
using type = typelist<>;
};
template <template <class> class Condition, class TypeList>
using filter_t = typename filter<Condition, TypeList>::type;
/**
* Counts how many types in the list fulfill a type trait
* Examples:
* 2 == count_if<std::is_reference, typelist<void, string, int&, bool, const
* string&&, int>>
*/
template <template <class> class Condition, class TypeList>
struct count_if final {
static_assert(
is_type_condition<Condition>::value,
"In typelist::count_if<Condition, TypeList>, the Condition argument must be a condition type trait, i.e. have a static constexpr bool ::value member.");
static_assert(
is_instantiation_of<typelist, TypeList>::value,
"In typelist::count_if<Condition, TypeList>, the TypeList argument must be typelist<...>.");
// TODO Direct implementation might be faster
static constexpr size_t value = size<filter_t<Condition, TypeList>>::value;
};
/**
* Checks if a typelist contains a certain type.
* Examples:
* contains<typelist<int, string>, string> == true_type
* contains<typelist<int, string>, double> == false_type
*/
namespace detail {
template <class TypeList, class Type, class Enable = void>
struct contains {};
template <class Type>
struct contains<typelist<>, Type, void> : std::false_type {};
template <class Type, class Head, class... Tail>
struct contains<
typelist<Head, Tail...>,
Type,
std::enable_if_t<std::is_same_v<Head, Type>>> : std::true_type {};
template <class Type, class Head, class... Tail>
struct contains<
typelist<Head, Tail...>,
Type,
std::enable_if_t<!std::is_same_v<Head, Type>>>
: contains<typelist<Tail...>, Type> {};
} // namespace detail
template <class TypeList, class Type>
using contains = typename detail::contains<TypeList, Type>::type;
/**
* Returns true iff the type trait is true for all types in the type list
* Examples:
* true == all<std::is_reference, typelist<int&, const float&&, const
* MyClass&>>::value false == all<std::is_reference, typelist<int&, const
* float&&, MyClass>>::value
*/
template <template <class> class Condition, class TypeList>
struct all {
static_assert(
false_t<TypeList>::value,
"In typelist::all<Condition, TypeList>, the TypeList argument must be typelist<...>.");
};
template <template <class> class Condition, class... Types>
struct all<Condition, typelist<Types...>>
: std::conjunction<Condition<Types>...> {
static_assert(
is_type_condition<Condition>::value,
"In typelist::all<Condition, TypeList>, the Condition argument must be a condition type trait, i.e. have a static constexpr bool ::value member.");
};
/**
* Returns true iff the type trait is true for any type in the type list
* Examples:
* true == true_for_any_type<std::is_reference, typelist<int, const
* float&&, const MyClass>>::value false ==
* true_for_any_type<std::is_reference, typelist<int, const float,
* MyClass>>::value
*/
template <template <class> class Condition, class TypeList>
struct true_for_any_type final {
static_assert(
false_t<TypeList>::value,
"In typelist::true_for_any_type<Condition, TypeList>, the TypeList argument must be typelist<...>.");
};
template <template <class> class Condition, class... Types>
struct true_for_any_type<Condition, typelist<Types...>> final
: std::disjunction<Condition<Types>...> {
static_assert(
is_type_condition<Condition>::value,
"In typelist::true_for_any_type<Condition, TypeList>, the Condition argument must be a condition type trait, i.e. have a static constexpr bool ::value member.");
};
/**
* Maps types of a type list using a type trait
* Example:
* typelist<int&, double&, string&> == map_t<std::add_lvalue_reference_t,
* typelist<int, double, string>>
*/
template <template <class> class Mapper, class TypeList>
struct map final {
static_assert(
false_t<TypeList>::value,
"In typelist::map<Mapper, TypeList>, the TypeList argument must be typelist<...>.");
};
template <template <class> class Mapper, class... Types>
struct map<Mapper, typelist<Types...>> final {
using type = typelist<Mapper<Types>...>;
};
template <template <class> class Mapper, class TypeList>
using map_t = typename map<Mapper, TypeList>::type;
/**
* Returns the first element of a type list.
* Example:
* int == head_t<typelist<int, string>>
*/
template <class TypeList>
struct head final {
static_assert(
false_t<TypeList>::value,
"In typelist::head<T>, the T argument must be typelist<...>.");
};
template <class Head, class... Tail>
struct head<typelist<Head, Tail...>> final {
using type = Head;
};
template <class TypeList>
using head_t = typename head<TypeList>::type;
/**
* Returns the first element of a type list, or the specified default if the
* type list is empty. Example: int == head_t<bool, typelist<int, string>>
* bool == head_t<bool, typelist<>>
*/
template <class Default, class TypeList>
struct head_with_default final {
using type = Default;
};
template <class Default, class Head, class... Tail>
struct head_with_default<Default, typelist<Head, Tail...>> final {
using type = Head;
};
template <class Default, class TypeList>
using head_with_default_t = typename head_with_default<Default, TypeList>::type;
/**
* Returns the N-th element of a type list.
* Example:
* int == element_t<1, typelist<float, int, char>>
*/
/// Base template.
template <size_t Index, class TypeList>
struct element final {
static_assert(
false_t<TypeList>::value,
"In typelist::element<T>, the T argument must be typelist<...>.");
};
/// Successful case, we have reached the zero index and can "return" the head
/// type.
template <class Head, class... Tail>
struct element<0, typelist<Head, Tail...>> {
using type = Head;
};
/// Error case, we have an index but ran out of types! It will only be selected
/// if `Ts...` is actually empty!
template <size_t Index, class... Ts>
struct element<Index, typelist<Ts...>> {
static_assert(
Index < sizeof...(Ts),
"Index is out of bounds in typelist::element");
};
/// Shave off types until we hit the <0, Head, Tail...> or <Index> case.
template <size_t Index, class Head, class... Tail>
struct element<Index, typelist<Head, Tail...>>
: element<Index - 1, typelist<Tail...>> {};
/// Convenience alias.
template <size_t Index, class TypeList>
using element_t = typename element<Index, TypeList>::type;
/**
* Returns the last element of a type list.
* Example:
* int == last_t<typelist<int, string>>
*/
template <class TypeList>
struct last final {
static_assert(
false_t<TypeList>::value,
"In typelist::last<T>, the T argument must be typelist<...>.");
};
template <class Head, class... Tail>
struct last<typelist<Head, Tail...>> final {
using type = typename last<typelist<Tail...>>::type;
};
template <class Head>
struct last<typelist<Head>> final {
using type = Head;
};
template <class TypeList>
using last_t = typename last<TypeList>::type;
static_assert(std::is_same_v<int, last_t<typelist<double, float, int>>>);
/**
* Take/drop a number of arguments from a typelist.
* Example:
* typelist<int, string> == take_t<typelist<int, string, bool>, 2>
* typelist<bool> == drop_t<typelist<int, string, bool>, 2>
*/
namespace detail {
template <class TypeList, size_t offset, class IndexSequence>
struct take_elements final {};
template <class TypeList, size_t offset, size_t... Indices>
struct take_elements<TypeList, offset, std::index_sequence<Indices...>> final {
using type = typelist<typename element<offset + Indices, TypeList>::type...>;
};
} // namespace detail
template <class TypeList, size_t num>
struct take final {
static_assert(
is_instantiation_of<typelist, TypeList>::value,
"In typelist::take<T, num>, the T argument must be typelist<...>.");
static_assert(
num <= size<TypeList>::value,
"Tried to typelist::take more elements than there are in the list");
using type = typename detail::
take_elements<TypeList, 0, std::make_index_sequence<num>>::type;
};
template <class TypeList, size_t num>
using take_t = typename take<TypeList, num>::type;
template <class TypeList, size_t num>
struct drop final {
static_assert(
is_instantiation_of<typelist, TypeList>::value,
"In typelist::drop<T, num>, the T argument must be typelist<...>.");
static_assert(
num <= size<TypeList>::value,
"Tried to typelist::drop more elements than there are in the list");
using type = typename detail::take_elements<
TypeList,
num,
std::make_index_sequence<size<TypeList>::value - num>>::type;
};
template <class TypeList, size_t num>
using drop_t = typename drop<TypeList, num>::type;
/**
* Like drop, but returns an empty list rather than an assertion error if `num`
* is larger than the size of the TypeList.
* Example:
* typelist<> == drop_if_nonempty_t<typelist<string, bool>, 2>
* typelist<> == drop_if_nonempty_t<typelist<int, string, bool>, 3>
*/
template <class TypeList, size_t num>
struct drop_if_nonempty final {
static_assert(
is_instantiation_of<typelist, TypeList>::value,
"In typelist::drop<T, num>, the T argument must be typelist<...>.");
using type = typename detail::take_elements<
TypeList,
std::min(num, size<TypeList>::value),
std::make_index_sequence<
size<TypeList>::value - std::min(num, size<TypeList>::value)>>::type;
};
template <class TypeList, size_t num>
using drop_if_nonempty_t = typename drop_if_nonempty<TypeList, num>::type;
/**
* Reverses a typelist.
* Example:
* typelist<int, string> == reverse_t<typelist<string, int>>
*/
template <class TypeList>
struct reverse final {
static_assert(
false_t<TypeList>::value,
"In typelist::reverse<T>, the T argument must be typelist<...>.");
};
template <class Head, class... Tail>
struct reverse<typelist<Head, Tail...>> final {
using type =
concat_t<typename reverse<typelist<Tail...>>::type, typelist<Head>>;
};
template <>
struct reverse<typelist<>> final {
using type = typelist<>;
};
template <class TypeList>
using reverse_t = typename reverse<TypeList>::type;
/**
* Find the index of the first type in a typelist fulfilling a type trait
* condition. Example:
*
* 2 == find_if<typelist<char, int, char&, int&>, std::is_reference>::value
*/
template <class TypeList, template <class> class Condition, class Enable = void>
struct find_if final {
static_assert(
false_t<TypeList>::value,
"In typelist::find_if<TypeList, Condition>, the TypeList argument must be typelist<...>.");
};
template <template <class> class Condition>
struct find_if<typelist<>, Condition, void> final {
static_assert(
false_higher_t<Condition>::value,
"In typelist::find_if<Type/List, Condition>, didn't find any type fulfilling the Condition.");
};
template <class Head, class... Tail, template <class> class Condition>
struct find_if<
typelist<Head, Tail...>,
Condition,
std::enable_if_t<Condition<Head>::value>>
final {
static constexpr size_t value = 0;
};
template <class Head, class... Tail, template <class> class Condition>
struct find_if<
typelist<Head, Tail...>,
Condition,
std::enable_if_t<!Condition<Head>::value>>
final {
static constexpr size_t value =
1 + find_if<typelist<Tail...>, Condition>::value;
};
/**
* Maps a list of types into a list of values.
* Examples:
* // Example 1
* auto sizes =
* map_types_to_values<typelist<int64_t, bool, uint32_t>>(
* [] (auto t) { return sizeof(decltype(t)::type); }
* );
* // sizes == std::tuple<size_t, size_t, size_t>{8, 1, 4}
*
* // Example 2
* auto shared_ptrs =
* map_types_to_values<typelist<int, double>>(
* [] (auto t) { return make_shared<typename decltype(t)::type>(); }
* );
* // shared_ptrs == std::tuple<shared_ptr<int>, shared_ptr<double>>()
*/
namespace detail {
template <class T>
struct type_ final {
using type = T;
};
template <class TypeList>
struct map_types_to_values final {
static_assert(
false_t<TypeList>::value,
"In typelist::map_types_to_values<T>, the T argument must be typelist<...>.");
};
template <class... Types>
struct map_types_to_values<typelist<Types...>> final {
template <class Func>
static auto call(Func&& func) {
return std::tuple{std::forward<Func>(func)(type_<Types>())...};
}
};
} // namespace detail
template <class TypeList, class Func>
auto map_types_to_values(Func&& func) {
return detail::map_types_to_values<TypeList>::call(std::forward<Func>(func));
}
} // namespace typelist
} // namespace c10::guts
#include <torch/headeronly/util/TypeList.h>

View File

@ -1,151 +1 @@
#pragma once
#include <functional>
#include <type_traits>
namespace c10::guts {
/**
* is_equality_comparable<T> is true_type iff the equality operator is defined
* for T.
*/
template <class T, class Enable = void>
struct is_equality_comparable : std::false_type {};
template <class T>
struct is_equality_comparable<
T,
std::void_t<decltype(std::declval<T&>() == std::declval<T&>())>>
: std::true_type {};
template <class T>
using is_equality_comparable_t = typename is_equality_comparable<T>::type;
/**
* is_hashable<T> is true_type iff std::hash is defined for T
*/
template <class T, class Enable = void>
struct is_hashable : std::false_type {};
template <class T>
struct is_hashable<T, std::void_t<decltype(std::hash<T>()(std::declval<T&>()))>>
: std::true_type {};
template <class T>
using is_hashable_t = typename is_hashable<T>::type;
/**
* is_function_type<T> is true_type iff T is a plain function type (i.e.
* "Result(Args...)")
*/
template <class T>
struct is_function_type : std::false_type {};
template <class Result, class... Args>
struct is_function_type<Result(Args...)> : std::true_type {};
template <class T>
using is_function_type_t = typename is_function_type<T>::type;
/**
* is_instantiation_of<T, I> is true_type iff I is a template instantiation of T
* (e.g. vector<int> is an instantiation of vector) Example:
* is_instantiation_of_t<vector, vector<int>> // true
* is_instantiation_of_t<pair, pair<int, string>> // true
* is_instantiation_of_t<vector, pair<int, string>> // false
*/
template <template <class...> class Template, class T>
struct is_instantiation_of : std::false_type {};
template <template <class...> class Template, class... Args>
struct is_instantiation_of<Template, Template<Args...>> : std::true_type {};
template <template <class...> class Template, class T>
using is_instantiation_of_t = typename is_instantiation_of<Template, T>::type;
namespace detail {
/**
* strip_class: helper to remove the class type from pointers to `operator()`.
*/
template <typename T>
struct strip_class {};
template <typename Class, typename Result, typename... Args>
struct strip_class<Result (Class::*)(Args...)> {
using type = Result(Args...);
};
template <typename Class, typename Result, typename... Args>
struct strip_class<Result (Class::*)(Args...) const> {
using type = Result(Args...);
};
template <typename T>
using strip_class_t = typename strip_class<T>::type;
} // namespace detail
/**
* Evaluates to true_type, iff the given class is a Functor
* (i.e. has a call operator with some set of arguments)
*/
template <class Functor, class Enable = void>
struct is_functor : std::false_type {};
template <class Functor>
struct is_functor<
Functor,
std::enable_if_t<is_function_type<
detail::strip_class_t<decltype(&Functor::operator())>>::value>>
: std::true_type {};
/**
* lambda_is_stateless<T> is true iff the lambda type T is stateless
* (i.e. does not have a closure).
* Example:
* auto stateless_lambda = [] (int a) {return a;};
* lambda_is_stateless<decltype(stateless_lambda)> // true
* auto stateful_lambda = [&] (int a) {return a;};
* lambda_is_stateless<decltype(stateful_lambda)> // false
*/
namespace detail {
template <class LambdaType, class FuncType>
struct is_stateless_lambda__ final {
static_assert(
!std::is_same_v<LambdaType, LambdaType>,
"Base case shouldn't be hit");
};
// implementation idea: According to the C++ standard, stateless lambdas are
// convertible to function pointers
template <class LambdaType, class C, class Result, class... Args>
struct is_stateless_lambda__<LambdaType, Result (C::*)(Args...) const>
: std::is_convertible<LambdaType, Result (*)(Args...)> {};
template <class LambdaType, class C, class Result, class... Args>
struct is_stateless_lambda__<LambdaType, Result (C::*)(Args...)>
: std::is_convertible<LambdaType, Result (*)(Args...)> {};
// case where LambdaType is not even a functor
template <class LambdaType, class Enable = void>
struct is_stateless_lambda_ final : std::false_type {};
// case where LambdaType is a functor
template <class LambdaType>
struct is_stateless_lambda_<
LambdaType,
std::enable_if_t<is_functor<LambdaType>::value>>
: is_stateless_lambda__<LambdaType, decltype(&LambdaType::operator())> {};
} // namespace detail
template <class T>
using is_stateless_lambda = detail::is_stateless_lambda_<std::decay_t<T>>;
/**
* is_type_condition<C> is true_type iff C<...> is a type trait representing a
* condition (i.e. has a constexpr static bool ::value member) Example:
* is_type_condition<std::is_reference> // true
*/
template <template <class> class C, class Enable = void>
struct is_type_condition : std::false_type {};
template <template <class> class C>
struct is_type_condition<
C,
std::enable_if_t<
std::is_same_v<bool, std::remove_cv_t<decltype(C<int>::value)>>>>
: std::true_type {};
/**
* is_fundamental<T> is true_type iff the lambda type T is a fundamental type
* (that is, arithmetic type, void, or nullptr_t). Example: is_fundamental<int>
* // true We define it here to resolve a MSVC bug. See
* https://github.com/pytorch/pytorch/issues/30932 for details.
*/
template <class T>
struct is_fundamental : std::is_fundamental<T> {};
} // namespace c10::guts
#include <torch/headeronly/util/TypeTraits.h>

View File

@ -24,6 +24,7 @@ set(C10_XPU_HEADERS
XPUCachingAllocator.h
XPUDeviceProp.h
XPUException.h
XPUEvent.h
XPUFunctions.h
XPUMacros.h
XPUStream.h

178
c10/xpu/XPUEvent.h Normal file
View File

@ -0,0 +1,178 @@
#pragma once
#include <c10/xpu/XPUStream.h>
namespace c10::xpu {
/*
* XPUEvent is a movable, non-copyable wrapper around a SYCL event. The
* underlying event is constructed lazily when the XPUEvent is first recorded.
* The event is bound to a device, acquired from the first recording stream;
* any stream that later records the event must be on the same device.
*
* Currently, XPUEvent does NOT support exporting an event handle to another
* process via inter-process communication (IPC), so event handles cannot be
* shared between processes. This may affect applications that rely on
* cross-process synchronization and communication.
*/
struct XPUEvent {
// Constructors
XPUEvent(bool enable_timing = false) noexcept
: enable_timing_{enable_timing} {}
~XPUEvent() {
if (isCreated()) {
const c10::impl::PyInterpreter* interp = c10::impl::GPUTrace::get_trace();
if (C10_UNLIKELY(interp)) {
(*interp)->trace_gpu_event_deletion(
c10::kXPU, reinterpret_cast<uintptr_t>(event_.get()));
}
}
}
C10_DISABLE_COPY_AND_ASSIGN(XPUEvent);
XPUEvent(XPUEvent&& other) = default;
XPUEvent& operator=(XPUEvent&& other) = default;
operator sycl::event&() const {
return event();
}
std::optional<c10::Device> device() const {
if (isCreated()) {
return c10::Device(c10::kXPU, device_index_);
} else {
return std::nullopt;
}
}
inline bool isCreated() const {
return (event_.get() != nullptr);
}
DeviceIndex device_index() const {
return device_index_;
}
sycl::event& event() const {
return *event_;
}
bool query() const {
using namespace sycl::info;
if (!isCreated()) {
return true;
}
return event().get_info<event::command_execution_status>() ==
event_command_status::complete;
}
void record() {
record(getCurrentXPUStream());
}
void recordOnce(const XPUStream& stream) {
if (!isCreated()) {
record(stream);
}
}
void record(const XPUStream& stream) {
if (!isCreated()) {
device_index_ = stream.device_index();
assignEvent(stream.queue());
const c10::impl::PyInterpreter* interp = c10::impl::GPUTrace::get_trace();
if (C10_UNLIKELY(interp)) {
(*interp)->trace_gpu_event_creation(
c10::kXPU, reinterpret_cast<uintptr_t>(event_.get()));
}
} else {
TORCH_CHECK(
device_index_ == stream.device_index(),
"Event device ",
device_index_,
" does not match recording stream's device ",
stream.device_index(),
".");
reassignEvent(stream.queue());
}
const c10::impl::PyInterpreter* interp = c10::impl::GPUTrace::get_trace();
if (C10_UNLIKELY(interp)) {
(*interp)->trace_gpu_event_record(
c10::kXPU,
reinterpret_cast<uintptr_t>(event_.get()),
reinterpret_cast<uintptr_t>(&stream.queue()));
}
}
void block(const XPUStream& stream) {
if (isCreated()) {
std::vector<sycl::event> event_list{event()};
// Make this stream wait until event_ is completed.
stream.queue().ext_oneapi_submit_barrier(event_list);
const c10::impl::PyInterpreter* interp = c10::impl::GPUTrace::get_trace();
if (C10_UNLIKELY(interp)) {
(*interp)->trace_gpu_event_wait(
c10::kXPU,
reinterpret_cast<uintptr_t>(event_.get()),
reinterpret_cast<uintptr_t>(&stream.queue()));
}
}
}
double elapsed_time(const XPUEvent& other) const {
TORCH_CHECK(
isCreated() && other.isCreated(),
"Both events must be recorded before calculating elapsed time.");
TORCH_CHECK(
query() && other.query(),
"Both events must be completed before calculating elapsed time.");
TORCH_CHECK(
enable_timing_ && other.enable_timing_,
"Both events must be created with argument 'enable_timing=True'.");
using namespace sycl::info::event_profiling;
// Block until both of the recorded events are completed.
uint64_t end_time_ns = other.event().get_profiling_info<command_end>();
uint64_t start_time_ns = event().get_profiling_info<command_end>();
// Return the elapsed time in milliseconds.
return 1e-6 *
(static_cast<double>(end_time_ns) - static_cast<double>(start_time_ns));
}
void synchronize() const {
if (isCreated()) {
const c10::impl::PyInterpreter* interp = c10::impl::GPUTrace::get_trace();
if (C10_UNLIKELY(interp)) {
(*interp)->trace_gpu_event_synchronization(
c10::kXPU, reinterpret_cast<uintptr_t>(event_.get()));
}
event().wait_and_throw();
}
}
private:
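// With timing enabled, the event is recorded via an experimental profiling
// tag so elapsed_time() can query profiling info; otherwise a barrier event
// on the in-order queue is used.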
void assignEvent(sycl::queue& queue) {
if (enable_timing_) {
event_ = std::make_unique<sycl::event>(
sycl::ext::oneapi::experimental::submit_profiling_tag(queue));
} else {
event_ = std::make_unique<sycl::event>(queue.ext_oneapi_submit_barrier());
}
}
void reassignEvent(sycl::queue& queue) {
event_.reset();
assignEvent(queue);
}
bool enable_timing_ = false;
c10::DeviceIndex device_index_ = -1;
// Only need to track the last event, as events in an in-order queue are
// executed sequentially.
std::unique_ptr<sycl::event> event_;
};
} // namespace c10::xpu
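
As a quick orientation to the API moved here, the following is a minimal usage sketch (not taken from the PR) that times work submitted to the current XPU stream; it assumes an XPU device is available and that the surrounding code enqueues the work to be measured.

```cpp
#include <c10/xpu/XPUEvent.h>
#include <c10/xpu/XPUStream.h>

// Times a region of work on the current XPU stream, returning milliseconds.
double time_region_ms() {
  c10::xpu::XPUEvent start(/*enable_timing=*/true);
  c10::xpu::XPUEvent stop(/*enable_timing=*/true);
  c10::xpu::XPUStream stream = c10::xpu::getCurrentXPUStream();

  start.record(stream);  // lazily creates the event on the stream's device
  // ... enqueue the work to be measured on `stream` here ...
  stop.record(stream);

  stop.synchronize();               // wait for the second event to complete
  return start.elapsed_time(stop);  // requires enable_timing on both events
}
```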

View File

@ -478,6 +478,7 @@ function(torch_update_find_cuda_flags)
endfunction()
include(CheckCXXCompilerFlag)
include(CheckCCompilerFlag)
include(CheckLinkerFlag)
##############################################################################
@ -501,6 +502,24 @@ function(append_cxx_flag_if_supported flag outputvar)
endif()
endfunction()
function(append_c_flag_if_supported flag outputvar)
string(TOUPPER "HAS${flag}" _FLAG_NAME)
string(REGEX REPLACE "[=-]" "_" _FLAG_NAME "${_FLAG_NAME}")
# GCC silences unknown -Wno-XXX flags, so test the corresponding -WXXX.
if(CMAKE_C_COMPILER_ID STREQUAL "GNU")
string(REGEX REPLACE "^Wno-" "W" new_flag "${flag}")
else()
set(new_flag "${flag}")
endif()
check_c_compiler_flag("${new_flag}" ${_FLAG_NAME})
if(${_FLAG_NAME})
string(APPEND ${outputvar} " ${flag}")
set(${outputvar} "${${outputvar}}" PARENT_SCOPE)
endif()
endfunction()
function(target_compile_options_if_supported target flag)
set(_compile_options "")
append_cxx_flag_if_supported("${flag}" _compile_options)

View File

@ -1308,8 +1308,319 @@ coverage_ignore_functions = [
# torch.onnx.symbolic_opset7
"max",
"min",
# torch.onnx.symbolic_opset8
"addmm",
"bmm",
"empty",
"empty_like",
"flatten",
"full",
"full_like",
"gt",
"lt",
"matmul",
"mm",
"ones",
"ones_like",
"prelu",
"repeat",
"zeros",
"zeros_like",
# torch.onnx.symbolic_opset9
"abs",
"acos",
"adaptive_avg_pool1d",
"adaptive_avg_pool2d",
"adaptive_avg_pool3d",
"adaptive_max_pool1d",
"adaptive_max_pool2d",
"adaptive_max_pool3d",
"add",
"addcmul",
"addmm",
"alias",
"amax",
"amin",
"aminmax",
"arange",
"argmax",
"argmin",
"as_strided",
"as_tensor",
"asin",
"atan",
"atan2",
"avg_pool1d",
"avg_pool2d",
"avg_pool3d",
"baddbmm",
"batch_norm",
"bernoulli",
"bitwise_not",
"bitwise_or",
"bmm",
"broadcast_tensors",
"broadcast_to",
"bucketize",
"cat",
"cdist",
"ceil",
"clamp",
"clamp_max",
"clamp_min",
"clone",
"constant_pad_nd",
"contiguous",
"conv1d",
"conv2d",
"conv3d",
"conv_tbc",
"conv_transpose1d",
"conv_transpose2d",
"conv_transpose3d",
"convert_element_type",
"convolution",
"cos",
"cosine_similarity",
"cross",
"cumsum",
"detach",
"dim",
"div",
"dot",
"dropout",
"elu",
"embedding",
"embedding_bag",
"empty",
"empty_like",
"eq",
"erf",
"exp",
"expand",
"expand_as",
"eye",
"fill",
"flatten",
"floor",
"floor_divide",
"floordiv",
"frobenius_norm",
"full",
"full_like",
"gather",
"ge",
"gelu",
"get_pool_ceil_padding",
"glu",
"group_norm",
"gru",
"gt",
"hann_window",
"hardshrink",
"hardsigmoid",
"hardswish",
"hardtanh",
"index",
"index_add",
"index_copy",
"index_fill",
"index_put",
"index_select",
"instance_norm",
"is_floating_point",
"is_pinned",
"isnan",
"item",
"kl_div",
"layer_norm",
"le",
"leaky_relu",
"lerp",
"lift",
"linalg_cross",
"linalg_matrix_norm",
"linalg_norm",
"linalg_vector_norm",
"linear",
"linspace",
"log",
"log10",
"log1p",
"log2",
"log_sigmoid",
"log_softmax",
"logical_and",
"logical_not",
"logical_or",
"logical_xor",
"logit",
"logsumexp",
"lstm",
"lstm_cell",
"lt",
"masked_fill",
"masked_fill_",
"matmul",
"max",
"max_pool1d",
"max_pool1d_with_indices",
"max_pool2d",
"max_pool2d_with_indices",
"max_pool3d",
"max_pool3d_with_indices",
"maximum",
"meshgrid",
"min",
"minimum",
"mish",
"mm",
"movedim",
"mse_loss",
"mul",
"multinomial",
"mv",
"narrow",
"native_layer_norm",
"ne",
"neg",
"new_empty",
"new_full",
"new_ones",
"new_zeros",
"nonzero",
"nonzero_numpy",
"noop_complex_operators",
"norm",
"numel",
"numpy_T",
"one_hot",
"ones",
"ones_like",
"onnx_placeholder",
"overload_by_arg_count",
"pad",
"pairwise_distance",
"permute",
"pixel_shuffle",
"pixel_unshuffle",
"pow",
"prelu",
"prim_constant",
"prim_constant_chunk",
"prim_constant_split",
"prim_data",
"prim_device",
"prim_dtype",
"prim_if",
"prim_layout",
"prim_list_construct",
"prim_list_unpack",
"prim_loop",
"prim_max",
"prim_min",
"prim_shape",
"prim_tolist",
"prim_tuple_construct",
"prim_type",
"prim_unchecked_cast",
"prim_uninitialized",
"rand",
"rand_like",
"randint",
"randint_like",
"randn",
"randn_like",
"reciprocal",
"reflection_pad",
"relu",
"relu6",
"remainder",
"repeat",
"repeat_interleave",
"replication_pad",
"reshape",
"reshape_as",
"rnn_relu",
"rnn_tanh",
"roll",
"rrelu",
"rsqrt",
"rsub",
"scalar_tensor",
"scatter",
"scatter_add",
"select",
"selu",
"sigmoid",
"sign",
"silu",
"sin",
"size",
"slice",
"softmax",
"softplus",
"softshrink",
"sort",
"split",
"split_with_sizes",
"sqrt",
"square",
"squeeze",
"stack",
"std",
"std_mean",
"sub",
"t",
"take",
"tan",
"tanh",
"tanhshrink",
"tensor",
"threshold",
"to",
"topk",
"transpose",
"true_divide",
"type_as",
"unbind",
"unfold",
"unsafe_chunk",
"unsafe_split",
"unsafe_split_with_sizes",
"unsqueeze",
"unsupported_complex_operators",
"unused",
"upsample_bilinear2d",
"upsample_linear1d",
"upsample_nearest1d",
"upsample_nearest2d",
"upsample_nearest3d",
"upsample_trilinear3d",
"var",
"var_mean",
"view",
"view_as",
"where",
"wrap_logical_op_with_cast_to",
"wrap_logical_op_with_negation",
"zero",
"zeros",
"zeros_like",
# torch.onnx.utils
"disable_apex_o2_state_dict_hook",
"export",
"export_to_pretty_string",
"exporter_context",
"is_in_onnx_export",
"model_signature",
"register_custom_op_symbolic",
"select_model_mode_for_export",
"setup_onnx_logging",
"unconvertible_ops",
"unpack_quantized_tensor",
"warn_on_static_input_change",
# torch.onnx.verification
"check_export_model_diff",
"verify",
"verify_aten_graph",
@ -1400,6 +1711,32 @@ coverage_ignore_functions = [
"noop_context_fn",
"set_checkpoint_early_stop",
"set_device_states",
# torch.utils.collect_env
"check_release_file",
"get_cachingallocator_config",
"get_clang_version",
"get_cmake_version",
"get_conda_packages",
"get_cpu_info",
"get_cuda_module_loading_config",
"get_cudnn_version",
"get_env_info",
"get_gcc_version",
"get_gpu_info",
"get_libc_version",
"get_lsb_version",
"get_mac_version",
"get_nvidia_driver_version",
"get_nvidia_smi",
"get_os",
"get_pip_packages",
"get_platform",
"get_pretty_env_info",
"get_python_platform",
"get_running_cuda_version",
"get_windows_version",
"is_xnnpack_available",
"pretty_str",
# torch.utils.cpp_backtrace
"get_cpp_backtrace",
# torch.utils.cpp_extension
@ -1463,6 +1800,52 @@ coverage_ignore_functions = [
"apply_shuffle_seed",
"apply_shuffle_settings",
"get_all_graph_pipes",
# torch.utils.flop_counter
"addmm_flop",
"baddbmm_flop",
"bmm_flop",
"conv_backward_flop",
"conv_flop",
"conv_flop_count",
"convert_num_with_suffix",
"get_shape",
"get_suffix_str",
"mm_flop",
"normalize_tuple",
"register_flop_formula",
"sdpa_backward_flop",
"sdpa_backward_flop_count",
"sdpa_flop",
"sdpa_flop_count",
"shape_wrapper",
"transpose_shape",
# torch.utils.hipify.hipify_python
"add_dim3",
"compute_stats",
"extract_arguments",
"file_add_header",
"file_specific_replacement",
"find_bracket_group",
"find_closure_group",
"find_parentheses_group",
"fix_static_global_kernels",
"get_hip_file_path",
"hip_header_magic",
"hipify",
"is_caffe2_gpu_file",
"is_cusparse_file",
"is_out_of_place",
"is_pytorch_file",
"is_special_file",
"match_extensions",
"matched_files_iter",
"openf",
"preprocess_file_and_save_result",
"preprocessor",
"processKernelLaunches",
"replace_extern_shared",
"replace_math_functions",
"str2bool",
# torch.utils.hooks
"unserializable_hook",
"warn_if_has_hooks",

View File

@ -0,0 +1,21 @@
# torch.mtia.mtia_graph
The MTIA backend is implemented out of tree; only the interfaces are defined here.
```{eval-rst}
.. automodule:: torch.mtia.mtia_graph
```
```{eval-rst}
.. currentmodule:: torch.mtia.mtia_graph
```
```{eval-rst}
.. autoclass:: MTIAGraph
:members:
```
```{eval-rst}
.. autoclass:: graph
:members:
```

View File

@ -14,6 +14,10 @@ Utils
sdpa_kernel
SDPBackend
register_flash_attention_impl
activate_flash_attention_impl
list_flash_attention_impls
current_flash_attention_impl
Submodules
----------

View File

@ -29,6 +29,7 @@ mps
xpu
mtia
mtia.memory
mtia.mtia_graph
meta
torch.backends <backends>
torch.export <export>

View File

@ -19,91 +19,6 @@
swap_tensors
```
# torch.utils.collect_env
```{eval-rst}
.. automodule:: torch.utils.collect_env
```
```{eval-rst}
.. currentmodule:: torch.utils.collect_env
```
```{eval-rst}
.. autosummary::
:toctree: generated
:nosignatures:
check_release_file
is_xnnpack_available
pretty_str
```
# torch.utils.flop_counter
```{eval-rst}
.. automodule:: torch.utils.flop_counter
```
```{eval-rst}
.. currentmodule:: torch.utils.flop_counter
```
```{eval-rst}
.. autosummary::
:toctree: generated
:nosignatures:
baddbmm_flop
bmm_flop
conv_backward_flop
conv_flop
conv_flop_count
register_flop_formula
sdpa_backward_flop
sdpa_backward_flop_count
sdpa_flop
sdpa_flop_count
shape_wrapper
```
# torch.utils.hipify.hipify_python
```{eval-rst}
.. automodule:: torch.utils.hipify.hipify_python
```
```{eval-rst}
.. currentmodule:: torch.utils.hipify.hipify_python
```
```{eval-rst}
.. autosummary::
:toctree: generated
:nosignatures:
compute_stats
extract_arguments
file_add_header
file_specific_replacement
find_bracket_group
find_closure_group
find_parentheses_group
fix_static_global_kernels
hip_header_magic
hipify
is_caffe2_gpu_file
is_cusparse_file
is_out_of_place
is_pytorch_file
is_special_file
openf
preprocess_file_and_save_result
preprocessor
processKernelLaunches
replace_extern_shared
replace_math_functions
str2bool
```
<!-- This module needs to be documented. Adding here in the meantime
for tracking purposes -->
```{eval-rst}
@ -128,6 +43,7 @@ for tracking purposes -->
.. py:module:: torch.utils.benchmark.utils.valgrind_wrapper.timer_interface
.. py:module:: torch.utils.bundled_inputs
.. py:module:: torch.utils.checkpoint
.. py:module:: torch.utils.collect_env
.. py:module:: torch.utils.cpp_backtrace
.. py:module:: torch.utils.cpp_extension
.. py:module:: torch.utils.data.backward_compatibility
@ -164,8 +80,10 @@ for tracking purposes -->
.. py:module:: torch.utils.data.sampler
.. py:module:: torch.utils.dlpack
.. py:module:: torch.utils.file_baton
.. py:module:: torch.utils.flop_counter
.. py:module:: torch.utils.hipify.constants
.. py:module:: torch.utils.hipify.cuda_to_hip_mappings
.. py:module:: torch.utils.hipify.hipify_python
.. py:module:: torch.utils.hipify.version
.. py:module:: torch.utils.hooks
.. py:module:: torch.utils.jit.log_extract

View File

@ -260,6 +260,7 @@ select = [
"TRY401", # verbose-log-message
"UP",
"YTT",
"S101",
]
[tool.ruff.lint.pyupgrade]
@ -339,6 +340,39 @@ keep-runtime-typing = true
"tools/linter/**" = [
"LOG015" # please fix
]
"benchmarks/**" = [
"S101"
]
"test/**" = [
"S101"
]
"torchgen/**" = [
"S101"
]
"torch/**" = [
"S101"
]
"tools/**" = [
"S101"
]
"setup.py" = [
"S101"
]
"functorch/**" = [
"S101"
]
"docs/**" = [
"S101"
]
"android/**" = [
"S101"
]
".github/**" = [
"S101"
]
".ci/**" = [
"S101"
]
[tool.codespell]
ignore-words = "tools/linter/dictionary.txt"

View File

@ -10,7 +10,7 @@ tp2_dir="$top_dir/third_party"
pip install ninja
# Install onnx
pip install --no-use-pep517 -e "$tp2_dir/onnx"
pip install -e "$tp2_dir/onnx"
# Install caffe2 and pytorch
pip install -r "$top_dir/caffe2/requirements.txt"

View File

@ -17,8 +17,11 @@ set(AOTI_ABI_CHECK_TEST_SRCS
${AOTI_ABI_CHECK_TEST_ROOT}/test_headeronlyarrayref.cpp
${AOTI_ABI_CHECK_TEST_ROOT}/test_macros.cpp
${AOTI_ABI_CHECK_TEST_ROOT}/test_math.cpp
${AOTI_ABI_CHECK_TEST_ROOT}/test_metaprogramming.cpp
${AOTI_ABI_CHECK_TEST_ROOT}/test_rand.cpp
${AOTI_ABI_CHECK_TEST_ROOT}/test_scalartype.cpp
${AOTI_ABI_CHECK_TEST_ROOT}/test_typelist.cpp
${AOTI_ABI_CHECK_TEST_ROOT}/test_typetraits.cpp
${AOTI_ABI_CHECK_TEST_ROOT}/test_vec.cpp
${AOTI_ABI_CHECK_TEST_ROOT}/test_vec_half.cpp
)

View File

@ -1,9 +1,8 @@
#include <c10/test/util/Macros.h>
#include <c10/util/Metaprogramming.h>
#include <gtest/gtest.h>
#include <torch/headeronly/util/Metaprogramming.h>
#include <cstdlib>
using namespace c10::guts;
using namespace torch::headeronly::guts;
// NOLINTBEGIN(modernize*, cppcoreguidelines-special-member-functions)
namespace {
@ -65,6 +64,15 @@ static_assert(
typename make_function_traits_t<void, typelist::typelist<int, float>>::
func_type>::value,
"");
struct Functor final {
std::string operator()(int64_t a, float b) const;
};
static_assert(
std::is_same<
std::string(int64_t, float),
typename infer_function_traits_t<Functor>::func_type>::value,
"");
} // namespace test_function_traits
struct MovableOnly {

View File

@ -1,8 +1,8 @@
#include <c10/util/TypeList.h>
#include <gtest/gtest.h>
#include <torch/headeronly/util/TypeList.h>
#include <memory>
using namespace c10::guts::typelist;
using namespace torch::headeronly::guts::typelist;
// NOLINTBEGIN(modernize-unary-static-assert)
namespace test_size {
class MyClass {};

View File

@ -1,7 +1,7 @@
#include <c10/util/TypeTraits.h>
#include <gtest/gtest.h>
#include <torch/headeronly/util/TypeTraits.h>
using namespace c10::guts;
using namespace torch::headeronly::guts;
// NOLINTBEGIN(modernize-unary-static-assert)
namespace {

View File

@ -1,5 +1,6 @@
#include <torch/csrc/inductor/aoti_torch/c/shim.h>
#include <torch/csrc/stable/accelerator.h>
#include <torch/csrc/stable/device.h>
#include <torch/csrc/stable/library.h>
#include <torch/csrc/stable/tensor.h>
#include <torch/csrc/stable/ops.h>
@ -528,6 +529,149 @@ STABLE_TORCH_LIBRARY_IMPL(libtorch_agnostic, CompositeExplicitAutograd, m) {
m.impl("make_tensor_clones_and_call_foreach", &boxed_make_tensor_clones_and_call_foreach);
}
// Test functions for torch::stable::Tensor device method
torch::stable::Device test_tensor_device(torch::stable::Tensor tensor) {
return tensor.device();
}
void boxed_test_tensor_device(
StableIValue* stack,
uint64_t num_args,
uint64_t num_outputs) {
torch::stable::Device res = test_tensor_device(
torch::stable::detail::to<torch::stable::Tensor>(stack[0]));
stack[0] = torch::stable::detail::from(res);
}
// Test functions for torch::stable::Device
torch::stable::Device test_device_constructor(
bool is_cuda,
torch::stable::DeviceIndex index,
bool use_str) {
using torch::stable::Device;
using torch::stable::DeviceType;
if (use_str) {
std::string device_str;
if (is_cuda) {
device_str = "cuda:" + std::to_string(index);
} else {
device_str = "cpu";
}
return Device(device_str);
} else {
if (is_cuda) {
return Device(DeviceType::CUDA, index);
} else {
return Device(DeviceType::CPU);
}
}
}
void boxed_test_device_constructor(
StableIValue* stack,
uint64_t num_args,
uint64_t num_outputs) {
torch::stable::Device res = test_device_constructor(
torch::stable::detail::to<bool>(stack[0]),
torch::stable::detail::to<torch::stable::DeviceIndex>(stack[1]),
torch::stable::detail::to<bool>(stack[2]));
stack[0] = torch::stable::detail::from(res);
}
bool test_device_equality(torch::stable::Device d1, torch::stable::Device d2) {
return d1 == d2;
}
void boxed_test_device_equality(
StableIValue* stack,
uint64_t num_args,
uint64_t num_outputs) {
bool res = test_device_equality(
torch::stable::detail::to<torch::stable::Device>(stack[0]),
torch::stable::detail::to<torch::stable::Device>(stack[1]));
stack[0] = torch::stable::detail::from(res);
}
torch::stable::Device test_device_set_index(
torch::stable::Device device,
torch::stable::DeviceIndex index) {
device.set_index(index);
return device;
}
void boxed_test_device_set_index(
StableIValue* stack,
uint64_t num_args,
uint64_t num_outputs) {
torch::stable::Device res = test_device_set_index(
torch::stable::detail::to<torch::stable::Device>(stack[0]),
torch::stable::detail::to<torch::stable::DeviceIndex>(stack[1]));
stack[0] = torch::stable::detail::from(res);
}
torch::stable::DeviceIndex test_device_index(torch::stable::Device device) {
return device.index();
}
void boxed_test_device_index(
StableIValue* stack,
uint64_t num_args,
uint64_t num_outputs) {
torch::stable::DeviceIndex res = test_device_index(
torch::stable::detail::to<torch::stable::Device>(stack[0]));
stack[0] = torch::stable::detail::from(res);
}
bool test_device_is_cuda(torch::stable::Device device) {
return device.is_cuda();
}
void boxed_test_device_is_cuda(
StableIValue* stack,
uint64_t num_args,
uint64_t num_outputs) {
bool res = test_device_is_cuda(
torch::stable::detail::to<torch::stable::Device>(stack[0]));
stack[0] = torch::stable::detail::from(res);
}
bool test_device_is_cpu(torch::stable::Device device) {
return device.is_cpu();
}
void boxed_test_device_is_cpu(
StableIValue* stack,
uint64_t num_args,
uint64_t num_outputs) {
bool res = test_device_is_cpu(
torch::stable::detail::to<torch::stable::Device>(stack[0]));
stack[0] = torch::stable::detail::from(res);
}
STABLE_TORCH_LIBRARY_FRAGMENT(libtorch_agnostic, m) {
m.def("test_tensor_device(Tensor t) -> Device");
m.def(
"test_device_constructor(bool is_cuda, DeviceIndex index, bool use_str) -> Device");
m.def("test_device_equality(Device d1, Device d2) -> bool");
m.def("test_device_set_index(Device device, DeviceIndex index) -> Device");
m.def("test_device_index(Device device) -> DeviceIndex");
m.def("test_device_is_cuda(Device device) -> bool");
m.def("test_device_is_cpu(Device device) -> bool");
}
STABLE_TORCH_LIBRARY_IMPL(libtorch_agnostic, CompositeExplicitAutograd, m) {
m.impl("test_tensor_device", &boxed_test_tensor_device);
m.impl("test_device_constructor", &boxed_test_device_constructor);
m.impl("test_device_equality", &boxed_test_device_equality);
m.impl("test_device_set_index", &boxed_test_device_set_index);
m.impl("test_device_index", &boxed_test_device_index);
m.impl("test_device_is_cuda", &boxed_test_device_is_cuda);
m.impl("test_device_is_cpu", &boxed_test_device_is_cpu);
}
// Test functions for torch::stable::accelerator APIs
#ifdef LAE_USE_CUDA
@ -617,3 +761,66 @@ STABLE_TORCH_LIBRARY_IMPL(libtorch_agnostic, CompositeExplicitAutograd, m) {
}
#endif // LAE_USE_CUDA
Tensor test_parallel_for(int64_t size, int64_t grain_size) {
AtenTensorHandle tensor_handle;
int64_t stride = 1;
aoti_torch_empty_strided(
1,
&size,
&stride,
aoti_torch_dtype_int64(),
aoti_torch_device_type_cpu(),
0,
&tensor_handle);
Tensor tensor(tensor_handle);
int64_t* data_ptr = reinterpret_cast<int64_t*>(tensor.data_ptr());
torch::stable::zero_(tensor);
// Use parallel_for to fill each element with its index
// If using a parallel path, the thread id is encoded in the upper 32 bits
torch::stable::parallel_for(
0, size, grain_size, [data_ptr](int64_t begin, int64_t end) {
for (auto i = begin; i < end; i++) {
STD_TORCH_CHECK(i <= UINT32_MAX);
uint32_t thread_id;
torch_get_thread_idx(&thread_id);
data_ptr[i] = i | (static_cast<int64_t>(thread_id) << 32);
}
});
return tensor;
}
void boxed_test_parallel_for(
StableIValue* stack,
uint64_t num_args,
uint64_t num_outputs) {
Tensor res = test_parallel_for(to<int64_t>(stack[0]), to<int64_t>(stack[1]));
stack[0] = from(res);
}
uint32_t test_get_num_threads() {
return torch::stable::get_num_threads();
}
void boxed_test_get_num_threads(
StableIValue* stack,
uint64_t num_args,
uint64_t num_outputs) {
uint32_t res = test_get_num_threads();
stack[0] = from(res);
}
STABLE_TORCH_LIBRARY_FRAGMENT(libtorch_agnostic, m) {
m.def("test_parallel_for(int size, int grain_size) -> Tensor");
m.def("test_get_num_threads() -> int");
}
STABLE_TORCH_LIBRARY_IMPL(libtorch_agnostic, CompositeExplicitAutograd, m) {
m.impl("test_parallel_for", &boxed_test_parallel_for);
m.impl("test_get_num_threads", &boxed_test_get_num_threads);
}

View File

@ -215,6 +215,18 @@ def test_default_constructor(defined) -> bool:
return torch.ops.libtorch_agnostic.test_default_constructor.default(defined)
def test_tensor_device(t):
"""
Tests Tensor device() method.
Args:
t: Tensor - tensor to get device from
Returns: Device - device of the tensor
"""
return torch.ops.libtorch_agnostic.test_tensor_device.default(t)
def my_pad(t) -> Tensor:
"""
Pads the input tensor with hardcoded padding parameters.
@ -375,3 +387,103 @@ def make_tensor_clones_and_call_foreach(t1, t2) -> list[Tensor]:
return torch.ops.libtorch_agnostic.make_tensor_clones_and_call_foreach.default(
t1, t2
)
def test_device_constructor(is_cuda, index, use_str):
"""
Tests creating a Device from DeviceType and index, or from a string.
Args:
is_cuda: bool - if True, creates CUDA device; if False, creates CPU device
index: int - device index
use_str: bool - if True, constructs from string; if False, constructs from DeviceType
Returns: Device - A device with the specified type and index
"""
return torch.ops.libtorch_agnostic.test_device_constructor.default(
is_cuda, index, use_str
)
def test_device_equality(d1, d2) -> bool:
"""
Tests Device equality operator.
Args:
d1: Device - first device
d2: Device - second device
Returns: bool - True if devices are equal
"""
return torch.ops.libtorch_agnostic.test_device_equality.default(d1, d2)
def test_device_set_index(device, index):
"""
Tests Device set_index() method.
Args:
device: Device - device to modify
index: int - new device index
Returns: Device - device with updated index
"""
return torch.ops.libtorch_agnostic.test_device_set_index.default(device, index)
def test_device_index(device) -> int:
"""
Tests Device index() method.
Args:
device: Device - device to query
Returns: int - device index
"""
return torch.ops.libtorch_agnostic.test_device_index.default(device)
def test_device_is_cuda(device) -> bool:
"""
Tests Device is_cuda() method.
Args:
device: Device - device to check
Returns: bool - True if device is CUDA
"""
return torch.ops.libtorch_agnostic.test_device_is_cuda.default(device)
def test_device_is_cpu(device) -> bool:
"""
Tests Device is_cpu() method.
Args:
device: Device - device to check
Returns: bool - True if device is CPU
"""
return torch.ops.libtorch_agnostic.test_device_is_cpu.default(device)
def test_parallel_for(size, grain_size) -> Tensor:
"""
Tests the parallel_for functionality by using it to fill a tensor with indices.
Args:
size: int - size of the tensor to create
grain_size: int - grain size for parallel_for
Returns: Tensor - a 1D int64 tensor where each element contains its index
(if multiple threads are used, the thread id is encoded in the upper 32 bits)
"""
return torch.ops.libtorch_agnostic.test_parallel_for.default(size, grain_size)
def test_get_num_threads() -> int:
"""
Tests the get_num_threads functionality by returning the number of threads
for the parallel backend.
Returns: int - the number of threads for the parallel backend
"""
return torch.ops.libtorch_agnostic.test_get_num_threads.default()

View File

@ -418,6 +418,113 @@ if not IS_WINDOWS:
self.assertEqual(result[0], t1 * t1)
self.assertEqual(result[1], t2 * t2)
@onlyCUDA
def test_device(self, device):
import libtorch_agnostic
cuda_device = libtorch_agnostic.ops.test_device_constructor(
is_cuda=True, index=1, use_str=False
)
self.assertEqual(cuda_device, torch.device("cuda:1"))
cuda_device = libtorch_agnostic.ops.test_device_constructor(
is_cuda=True, index=1, use_str=True
)
self.assertEqual(cuda_device, torch.device("cuda:1"))
self.assertEqual(libtorch_agnostic.ops.test_device_index(cuda_device), 1)
self.assertTrue(
libtorch_agnostic.ops.test_device_equality(
cuda_device, torch.device("cuda:1")
)
)
self.assertFalse(
libtorch_agnostic.ops.test_device_equality(
cuda_device, torch.device("cuda:0")
)
)
self.assertFalse(libtorch_agnostic.ops.test_device_is_cpu(cuda_device))
self.assertTrue(libtorch_agnostic.ops.test_device_is_cuda(cuda_device))
cuda_0_device = libtorch_agnostic.ops.test_device_set_index(cuda_device, 0)
self.assertEqual(cuda_0_device, torch.device("cuda:0"))
cpu_device = libtorch_agnostic.ops.test_device_constructor(False, 0, False)
self.assertEqual(cpu_device, torch.device("cpu"))
self.assertTrue(
libtorch_agnostic.ops.test_device_equality(
cpu_device, torch.device("cpu")
)
)
self.assertTrue(libtorch_agnostic.ops.test_device_is_cpu(cpu_device))
self.assertFalse(libtorch_agnostic.ops.test_device_is_cuda(cpu_device))
self.assertFalse(
libtorch_agnostic.ops.test_device_equality(cpu_device, cuda_device)
)
with self.assertRaisesRegex(
RuntimeError, "Device index 129 is out of range for int8_t"
):
libtorch_agnostic.ops.test_device_constructor(
is_cuda=True, index=129, use_str=False
)
with self.assertRaisesRegex(
RuntimeError, "Device index 129 is out of range for int8_t"
):
libtorch_agnostic.ops.test_device_set_index(cuda_device, 129)
@onlyCUDA
@deviceCountAtLeast(2)
def test_tensor_device(self, device):
import libtorch_agnostic
t = torch.randn(2, 3)
self.assertEqual(libtorch_agnostic.ops.test_tensor_device(t), t.device)
t_cuda = torch.randn(2, 3, device="cuda")
self.assertEqual(
libtorch_agnostic.ops.test_tensor_device(t_cuda), t_cuda.device
)
t_cuda_1 = torch.randn(2, 3, device="cuda:1")
self.assertEqual(
libtorch_agnostic.ops.test_tensor_device(t_cuda_1), t_cuda_1.device
)
@onlyCPU
# TODO: Debug this:
# Dynamo failed to run FX node with fake tensors:
# call_function libtorch_agnostic.test_parallel_for.default(*(100, 10), **{}):
# got RuntimeError('libtorch_agnostic::test_parallel_for() expected at most
# 2 argument(s) but received 3 argument(s).
# Declaration: libtorch_agnostic::test_parallel_for(int size, int grain_size) -> Tensor')
@xfailIfTorchDynamo
def test_parallel_for(self, device):
import libtorch_agnostic
num_threads = torch.get_num_threads()
size = 100
grain_size = 10
expected_num_threads_used = min(
(size + grain_size - 1) // grain_size, num_threads
)
result = libtorch_agnostic.ops.test_parallel_for(size, grain_size)
result_thread_ids = torch.unique(torch.bitwise_right_shift(result, 32))
result_values = torch.bitwise_and(result, 0xFFFFFFFF)
expected = torch.arange(size, dtype=torch.int64)
self.assertEqual(result_values, expected)
self.assertEqual(result_thread_ids, torch.arange(expected_num_threads_used))
@onlyCPU
def test_get_num_threads(self, device):
import libtorch_agnostic
num_threads = libtorch_agnostic.ops.test_get_num_threads()
expected_num_threads = torch.get_num_threads()
self.assertEqual(num_threads, expected_num_threads)
instantiate_device_type_tests(TestLibtorchAgnostic, globals(), except_for=None)
if __name__ == "__main__":

View File

@ -140,6 +140,11 @@ static void initDeviceStreamState(DeviceIndex device_index) {
static void initOpenRegStreamsOnce() {
c10::call_once(init_flag, initGlobalStreamState);
for (const auto i : c10::irange(num_devices)) {
c10::call_once(
device_flags[i], initDeviceStreamState, static_cast<DeviceIndex>(i));
}
if (current_streams) {
return;
}
@ -202,8 +207,6 @@ OpenRegStream getStreamFromPool(const int priority, DeviceIndex device_index) {
if (device_index == -1) {
device_index = current_device();
}
c10::call_once(
device_flags[device_index], initDeviceStreamState, device_index);
auto pri_idx =
std::clamp(priority, 0, max_compile_time_stream_priorities - 1);
const auto idx = get_idx(priority_counters[device_index][pri_idx]);

View File

@ -180,6 +180,47 @@ class TestTrackerFullyShard1DTrainingCore(FSDPTest):
del model
del optim
def _test_tracker_multihandler_hook(self):
"""Should run without KeyError."""
class TestModule(nn.Module):
def __init__(self, dim: int):
super().__init__()
self.norm1 = nn.RMSNorm(dim)
self.output1 = nn.Linear(dim, dim)
self.norm2 = nn.RMSNorm(dim)
self.output2 = nn.Linear(dim, dim)
def forward(self, x: torch.Tensor) -> torch.Tensor:
x = self.norm1(x)
x = self.output1(x)
x = self.norm2(x)
x = self.output2(x)
return x
gc.collect()
torch.manual_seed(42)
dev = torch.device(torch.accelerator.current_device_index())
with torch.device(dev):
model = TestModule(128)
mesh = init_device_mesh(dev.type, (self.world_size,))
fully_shard([model.norm1, model.output1], mesh=mesh)
fully_shard([model.norm2, model.output2], mesh=mesh)
fully_shard(model, mesh=mesh)
fmt = FSDPMemTracker(model)
with fmt:
inp = torch.randn(16, 128, device=dev)
y = model(inp)
loss = y.sum()
loss.backward()
del inp
del model
class TestTrackerFullyShard1DTrainingCompose(FSDPTest):
@property

View File

@ -0,0 +1,44 @@
# Owner(s): ["oncall: r2p"]
# This is a helper script for
# test_run.py::ElasticLaunchTest::test_virtual_local_rank. It prints out the
# generated inductor output for a simple function.
import os
from unittest.mock import patch
import torch
import torch.distributed as dist
from torch._inductor import codecache
@torch.compile
def myfn(x: torch.Tensor) -> torch.Tensor:
return x + x
dist.init_process_group(backend="nccl")
local_rank = int(os.environ.get("LOCAL_RANK", "0"))
torch.cuda.set_device(local_rank)
def print_output_code(original_fn):
def wrapper(msg, *args, **kwargs):
# Check if this is the "Output code:" message
if args and "Output code:" in msg:
print(args[0])
return wrapper
x = torch.rand(2, 2, device="cuda")
with patch.object(
codecache.output_code_log,
"debug",
side_effect=print_output_code(codecache.output_code_log.debug),
):
y = myfn(x)
dist.destroy_process_group()

View File

@ -16,7 +16,7 @@ import sys
import tempfile
import uuid
from contextlib import closing, redirect_stderr, redirect_stdout
from unittest import mock
from unittest import mock, skipIf
from unittest.mock import MagicMock, Mock, patch
import torch.distributed.run as launch
@ -28,6 +28,7 @@ from torch.distributed.elastic.utils.distributed import get_free_port
from torch.testing._internal.common_utils import (
run_tests,
skip_but_pass_in_sandcastle_if,
TEST_CUDA,
TEST_WITH_DEV_DBG_ASAN,
TestCase,
)
@ -677,6 +678,96 @@ class ElasticLaunchTest(TestCase):
for i in range(nproc_per_node):
self.assertTrue(f"[rank{i}]: creating " in captured_out.getvalue())
@skip_but_pass_in_sandcastle_if(
TEST_WITH_DEV_DBG_ASAN, "test incompatible with dev/dbg asan"
)
@skipIf(not TEST_CUDA, "requires CUDA")
def test_virtual_local_rank(self):
"""
Test that virtual-local-rank ensures consistent device IDs across ranks.
Without it, ranks may compile to different devices, leading to different code.
"""
run_id = str(uuid.uuid4().int)
nnodes = 1
nproc_per_node = 2
# Helper function to run and capture output
def run_test(use_virtual_local_rank):
args = [
f"--nnodes={nnodes}",
f"--nproc-per-node={nproc_per_node}",
f"--rdzv-id={run_id}",
"--monitor-interval=1",
"--start-method=spawn",
"--redirect=3",
"--tee=3",
]
if use_virtual_local_rank:
args.append("--virtual-local-rank")
args.append(path("script_deviceid.py"))
captured_out = io.StringIO()
captured_err = io.StringIO()
with redirect_stdout(captured_out), redirect_stderr(captured_err):
launch.main(args)
return captured_out.getvalue()
def split_ranks(output):
default0 = []
default1 = []
for line in output.splitlines():
if "cuda:" not in line:
continue
if line.startswith("[default0]:"):
default0.append(line[11:])
elif line.startswith("[default1]:"):
default1.append(line[11:])
return default0, default1
# First, run WITHOUT virtual-local-rank - outputs should differ
output = run_test(use_virtual_local_rank=False)
rank0, rank1 = split_ranks(output)
# Verify we actually captured compiled code from both ranks
self.assertGreater(
len(rank0), 0, "Expected to capture compiled code from rank 0"
)
self.assertGreater(
len(rank1), 0, "Expected to capture compiled code from rank 1"
)
# Without virtual-local-rank, the ranks should have DIFFERENT compiled code
# because they see different device IDs (cuda:0 vs cuda:1)
self.assertNotEqual(
rank0,
rank1,
"Expected different compiled code without --virtual-local-rank",
)
# Now run WITH virtual-local-rank - outputs should be identical
output = run_test(use_virtual_local_rank=True)
rank0, rank1 = split_ranks(output)
# Verify we actually captured compiled code from both ranks
self.assertGreater(
len(rank0),
0,
"Expected to capture compiled code from rank 0 with --virtual-local-rank",
)
self.assertGreater(
len(rank1),
0,
"Expected to capture compiled code from rank 1 with --virtual-local-rank",
)
# With virtual-local-rank, both ranks should have IDENTICAL compiled code
# because they both see cuda:0 during compilation
self.assertEqual(
rank0, rank1, "Expected identical compiled code with --virtual-local-rank"
)
if __name__ == "__main__":
run_tests()

View File

@ -1,6 +1,7 @@
# Owner(s): ["oncall: distributed"]
import contextlib
import unittest
import torch
import torch.distributed as dist
@ -23,8 +24,15 @@ from torch.testing._internal.common_utils import (
TestCase,
)
from torch.testing._internal.distributed.fake_pg import FakeStore
from torch.utils._debug_mode import _OpCall, _RedistributeCall, DebugMode
from torch.testing._internal.inductor_utils import GPU_TYPE, HAS_GPU
from torch.utils._debug_mode import (
_OpCall,
_RedistributeCall,
_TritonKernelCall,
DebugMode,
)
from torch.utils._python_dispatch import TorchDispatchMode
from torch.utils._triton import has_triton_package
@requires_cuda
@ -434,6 +442,110 @@ class TestDTensorDebugMode(TestCase):
][-1]
self.assertTrue("self.l2(self.l1(x))" in sum_op.fwd_stack_trace)
@unittest.skipIf(not HAS_GPU, "requires GPU")
@unittest.skipIf(not has_triton_package(), "requires triton")
def test_triton_kernel_logs(self):
import triton
from torch.testing._internal.triton_utils import add_kernel_autotuned
def call_triton(x, y):
output = torch.zeros_like(x)
n_elements = output.numel()
grid = lambda meta: (triton.cdiv(n_elements, meta["BLOCK_SIZE"]),) # noqa: E731
add_kernel_autotuned[grid](x, y, output, n_elements)
return output
x = torch.randn(128, device=GPU_TYPE)
y = torch.randn(128, device=GPU_TYPE)
with DebugMode() as debug_mode:
torch.compile(call_triton)(x, y)
triton_calls = [
op for op in debug_mode.operators if isinstance(op, _TritonKernelCall)
]
self.assertGreater(len(triton_calls), 0)
self.assertIn("[triton]", triton_calls[0].render([]))
def test_check_hash_mismatches(self):
x = torch.randn(64, 64, device=GPU_TYPE)
x_different = torch.randn(64, 64, device=GPU_TYPE)
# Identical runs should have no mismatches
with DebugMode() as dm1, DebugMode.log_tensor_hashes():
x.sin().sum()
with DebugMode() as dm2, DebugMode.log_tensor_hashes():
x.sin().sum()
mismatches = DebugMode.check_hash_mismatches(dm1.logs, dm2.logs)
self.assertEqual(len(mismatches), 0)
# Different inputs should produce hash mismatches
with DebugMode() as dm3, DebugMode.log_tensor_hashes():
x_different.sin().sum()
# Check that mismatches are detected
mismatches = DebugMode.check_hash_mismatches(dm1.logs, dm3.logs)
self.assertEqual(len(mismatches), 2)
self.assertEqual(
[call["call"] for call in mismatches], ["aten::sin", "aten::sum"]
)
@unittest.skipIf(not HAS_GPU, "requires GPU")
@unittest.skipIf(not has_triton_package(), "requires triton")
def test_check_triton_hash_mismatches(self):
import triton
from torch.testing._internal.triton_utils import add_kernel_autotuned
def call_triton(x, y):
output = torch.zeros_like(x)
n_elements = output.numel()
grid = lambda meta: (triton.cdiv(n_elements, meta["BLOCK_SIZE"]),) # noqa: E731
add_kernel_autotuned[grid](x, y, output, n_elements)
return output
a = torch.randn(128, device=GPU_TYPE)
b = torch.randn(128, device=GPU_TYPE)
c = torch.randn(128, device=GPU_TYPE)
# Run with hash logging to verify triton kernels can be hashed
with DebugMode() as dm_t1, DebugMode.log_tensor_hashes(hash_inputs=True):
torch.compile(call_triton)(a, b)
# Different inputs should have different hashes in triton kernels
with DebugMode() as dm_t2, DebugMode.log_tensor_hashes(hash_inputs=True):
torch.compile(call_triton)(a, c)
# Compare triton kernel hashes
mismatches = DebugMode.check_hash_mismatches(
dm_t1.logs, dm_t2.logs, compare_inputs=True
)
triton_mismatches = [m for m in mismatches if m["call_type"] == "triton kernel"]
self.assertGreater(len(triton_mismatches), 0)
# check both input & output hash mismatches are detected
self.assertGreater(len([m for m in triton_mismatches if m["is_input_hash"]]), 0)
self.assertGreater(
len([m for m in triton_mismatches if not m["is_input_hash"]]), 0
)
def test_check_structure_mismatches(self):
x = torch.randn(32, 32, device=self.device_type)
with DebugMode() as dm1, DebugMode.log_tensor_hashes():
x.sin()
with DebugMode() as dm2, DebugMode.log_tensor_hashes():
x.cos()
with DebugMode() as dm3, DebugMode.log_tensor_hashes():
x.sin().cos()
with self.assertRaisesRegex(ValueError, "Operators don't match"):
DebugMode.check_hash_mismatches(dm1.logs, dm2.logs)
with self.assertRaisesRegex(ValueError, "Log lengths don't match"):
DebugMode.check_hash_mismatches(dm1.logs, dm3.logs)
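For orientation, a minimal sketch of the comparison workflow these tests exercise, assuming hash logging behaves the same off-GPU as in the GPU runs above; the API names and the "call" key are taken from the tests, everything else is illustrative:

```python
import torch
from torch.utils._debug_mode import DebugMode

x = torch.randn(64, 64)
y = torch.randn(64, 64)

# Two runs of the same operator sequence on different inputs.
with DebugMode() as run_a, DebugMode.log_tensor_hashes():
    x.sin().sum()
with DebugMode() as run_b, DebugMode.log_tensor_hashes():
    y.sin().sum()

# Same operators, different tensor hashes -> one entry per mismatching call.
for m in DebugMode.check_hash_mismatches(run_a.logs, run_b.logs):
    print(m["call"])  # e.g. "aten::sin", "aten::sum"
```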
def test_pretty_print_dtensor_make_fx(self):
mesh = DeviceMesh(self.device_type, list(range(self.world_size)))

View File

@ -535,7 +535,7 @@ class DTensorExportTest(TestCase):
self.assertEqual(fn(z), gm(z)[0])
def test_dtensor_data_dependent_index(self):
def test_dtensor_data_dependent_index_and_slice(self):
device_mesh = init_device_mesh(self.device_type, mesh_shape=(self.world_size,))
class Foo(torch.nn.Module):
@ -548,6 +548,35 @@ class DTensorExportTest(TestCase):
y_dt = distribute_tensor(y, device_mesh, placements=[Replicate()])
_dynamo_graph_capture_for_export(Foo())(x_dt, y_dt)
class Bar(torch.nn.Module):
def forward(self, x):
val = torch.clamp(x.max(), min=1).item()
torch._check(val >= 1)
return x[:val]
x = torch.randint(1000, (4, 64, 16))
x_dt = distribute_tensor(x, device_mesh, placements=[Replicate()])
gm = _dynamo_graph_capture_for_export(Bar())(x_dt)
self.assertExpectedInline(
"""\
graph():
%l_flat_args_0_ : [num_users=2] = placeholder[target=arg_0]
%max_1 : [num_users=1] = call_method[target=max](args = (%l_flat_args_0_,), kwargs = {})
%clamp : [num_users=1] = call_function[target=torch.clamp](args = (%max_1,), kwargs = {min: 1})
%item : [num_users=2] = call_method[target=item](args = (%clamp,), kwargs = {})
%ge_1 : [num_users=1] = call_function[target=operator.ge](args = (%item, 1), kwargs = {})
%_assert_scalar_default : [num_users=0] = call_function[target=torch.ops.aten._assert_scalar.default](args = (%ge_1, Runtime assertion failed for expression u0 >= 1 on node 'ge_1'), kwargs = {})
%res : [num_users=2] = call_function[target=operator.getitem](args = (%l_flat_args_0_, slice(None, item, None)), kwargs = {})
%getattr_1 : [num_users=1] = call_function[target=builtins.getattr](args = (%res, _local_tensor), kwargs = {})
%sym_size_int : [num_users=2] = call_function[target=torch.ops.aten.sym_size.int](args = (%getattr_1, 0), kwargs = {})
%ge_2 : [num_users=1] = call_function[target=operator.ge](args = (%sym_size_int, 0), kwargs = {})
%_assert_scalar_default_1 : [num_users=0] = call_function[target=torch.ops.aten._assert_scalar.default](args = (%ge_2, Runtime assertion failed for expression u2 >= 0 on node 'ge_2'), kwargs = {})
%le : [num_users=1] = call_function[target=operator.le](args = (%sym_size_int, 4), kwargs = {})
%_assert_scalar_default_2 : [num_users=0] = call_function[target=torch.ops.aten._assert_scalar.default](args = (%le, Runtime assertion failed for expression u2 <= 4 on node 'le'), kwargs = {})
return (res,)""", # noqa: B950
str(gm.graph).strip(),
)
instantiate_parametrized_tests(DTensorExportTest)

View File

@ -706,11 +706,11 @@ class DistTensorOpsTest(DTensorTestBase):
@with_comms
def test_dtensor_dtype_conversion(self):
from torch.distributed.tensor.debug import (
_clear_sharding_prop_cache,
_get_sharding_prop_cache_info,
_clear_fast_path_sharding_prop_cache,
_get_fast_path_sharding_prop_cache_stats,
)
_clear_sharding_prop_cache()
_clear_fast_path_sharding_prop_cache()
device_mesh = self.build_device_mesh()
shard_spec = [Shard(0)]
# by default we start from bf16 dtype
@ -730,13 +730,13 @@ class DistTensorOpsTest(DTensorTestBase):
self.assertEqual(bf16_sharded_dtensor1.to_local().dtype, torch.bfloat16)
# by this point we only have cache misses
hits, misses, _, _ = _get_sharding_prop_cache_info()
hits, misses = _get_fast_path_sharding_prop_cache_stats()
self.assertEqual(hits, 0)
self.assertEqual(misses, 2)
# convert to fp32 again and see if there's cache hit
bf16_sharded_dtensor1.float()
hits, misses, _, _ = _get_sharding_prop_cache_info()
hits, misses = _get_fast_path_sharding_prop_cache_stats()
# by now we should have cache hit
self.assertEqual(hits, 1)
self.assertEqual(misses, 2)

View File

@ -999,13 +999,25 @@ class TestExplicitRedistribute(LocalTensorTestBase):
dx = distribute_tensor(x, device_mesh, [Shard(0)])
dA = distribute_tensor(A, device_mesh, [Replicate()])
with ExplicitRedistributionContext():
with ExplicitRedistributionContext(strict=True):
dY = torch.matmul(dx, dA_repl)
loss = dY.sum()
# we now see the error during backwards
with self.assertRaisesRegex(RuntimeError, "Implicit redistribution"):
loss.backward()
loss.backward(retain_graph=True)
with ExplicitRedistributionContext(strict=False):
# but since it's a 'free' redistribute, we can still do it under non-strict mode
loss.backward(retain_graph=True)
with ExplicitRedistributionContext(enable=False):
# and we can disable
loss.backward(retain_graph=True)
# and re-enable
with self.assertRaisesRegex(RuntimeError, "Implicit redistribution"):
loss.backward(retain_graph=True)
if __name__ == "__main__":

View File

@ -1062,6 +1062,307 @@ class TestComputeCommReorderingBucketing(TestComputeCommReorderingMultiProc):
self.assertTrue(same(out, correct))
def get_toy_model(device_type: str):
"""
Helper to construct a small multi-layer ToyModel
"""
class ToyBlock(torch.nn.Module):
def __init__(self):
super().__init__()
self.wq = torch.nn.Linear(4, 4)
self.wk = torch.nn.Linear(4, 4)
self.proj = torch.nn.Linear(4, 4)
def forward(self, x):
attn = self.wq(x) + self.wk(x)
return self.proj(torch.nn.functional.relu(attn))
class ToyModel(torch.nn.Module):
def __init__(self):
super().__init__()
self.layers = torch.nn.ModuleList([ToyBlock() for _ in range(2)])
self.norm = torch.nn.LayerNorm(4)
def forward(self, x):
for blk in self.layers:
x = blk(x)
return self.norm(x)
model = ToyModel().to(device_type)
return model
def apply_manual_reordering_and_get_graph(graph, module_bucket_plans, out_li) -> None:
gm = graph.owning_module
from torch._inductor.fx_passes.overlap_manual_scheduling import (
ManualOverlapScheduler,
)
for node in list(gm.graph.nodes):
if (
node.name == "all_gather_into_tensor"
or node.name == "all_gather_into_tensor_1"
or node.name == "wait_tensor"
or node.name == "wait_tensor_1"
):
node.meta["nn_module_stack"] = {"test": ["module_1", ""]}
if (
node.name == "all_gather_into_tensor_2"
or node.name == "all_gather_into_tensor_3"
or node.name == "wait_tensor_2"
or node.name == "wait_tensor_3"
):
node.meta["nn_module_stack"] = {"test": ["module_2", ""]}
overlapped_gm = ManualOverlapScheduler(
gm, module_bucket_plans, insert_overlap_deps=False
).run()
overlapped_gm.graph.lint()
out_li.append(overlapped_gm.graph)
def run_and_get_manual_aten_graph(fn, module_bucket_plans, *inputs):
li = []
apply = functools.partial(
apply_manual_reordering_and_get_graph,
module_bucket_plans=module_bucket_plans,
out_li=li,
)
with torch._inductor.config.patch(post_grad_custom_post_pass=apply):
out = fn(*inputs)
return out, li[0]
class TestManualOverlapBucketing(TestComputeCommReorderingMultiProc):
"""
Tests for manual overlap scheduling and subgraph utilities.
"""
@unittest.skipIf(not HAS_GPU, "Inductor+gpu needs triton and recent GPU arch")
def test_make_graph_view_and_get_subgraph_by_path(self):
from torch._inductor.fx_passes.graph_view import (
get_subgraph_by_path,
make_graph_view,
)
model = get_toy_model(device_type)
gm = torch.fx.symbolic_trace(model)
graph_view = make_graph_view(gm.graph)
# Fetch subgraph for first transformer layer
sub_nodes = get_subgraph_by_path(graph_view, "layers.0.wq")
self.assertEqual([n.name for n in sub_nodes], ["layers_0_wq"])
# Fetch multiple paths at once
multi_nodes = get_subgraph_by_path(graph_view, ["layers.0.wq", "layers.0.proj"])
self.assertEqual(
[n.name for n in multi_nodes], ["layers_0_wq", "layers_0_proj"]
)
# Fetch non-existent paths
non_exist_nodes = get_subgraph_by_path(graph_view, "nonexistent.module.path")
self.assertEqual(non_exist_nodes, [])
# Fetch a mix of existing and non-existent paths
mixed_nodes = get_subgraph_by_path(
graph_view, ["layers.0.wq", "nonexistent.module.path"]
)
self.assertEqual([n.name for n in mixed_nodes], ["layers_0_wq"])
@unittest.skipIf(not HAS_GPU, "Inductor+gpu needs triton and recent GPU arch")
def test_manual_reordering_bucketing_pass_separate_buckets(
self,
):
def func(a, b, c, d, *, ranks):
# All 4 all-gathers are independent - COULD be bucketed together
ag1 = _functional_collectives.all_gather_tensor(a, 0, ranks)
ag2 = _functional_collectives.all_gather_tensor(b, 0, ranks)
ag3 = _functional_collectives.all_gather_tensor(c[:4], 0, ranks)
ag4 = _functional_collectives.all_gather_tensor(d[:4], 0, ranks)
# First compute - can hide ag1 and ag2
e = a * 5 # Use a to avoid fusion
mm1 = torch.matmul(e, e.T)
# Force ag1/ag2 to complete before mm2 (but ag3/ag4 can still be deferred)
# Use first 8x8 elements to match mm1's shape
intermediate = ag1[:8, :8] + ag2[:8, :8]
# Second compute - depends on ag1/ag2 through intermediate, can hide ag3/ag4
mm2 = torch.matmul(mm1 + intermediate, c[:8])
# Use all results
result = (
ag1.sum() * 1.1
+ ag2.sum() * 1.2
+ ag3.sum() * 1.3
+ ag4.sum() * 1.4
+ mm1.sum()
+ mm2.sum()
)
return result
with _dynamo_dist_per_rank_init(
self.rank,
self.world_size,
self.backend(device_type),
fake_pg=not at_least_x_gpu(2),
):
a = torch.ones(8, 8, dtype=torch.float, device=device_type)
b = torch.ones(8, 8, dtype=torch.float, device=device_type) * 2
c = torch.ones(8, 8, dtype=torch.float, device=device_type) * 3
d = torch.ones(8, 8, dtype=torch.float, device=device_type) * 4
ranks = list(range(self.world_size))
func_c = functools.partial(func, ranks=ranks)
compiled = torch.compile(func_c)
out, aten_graph = run_and_get_manual_aten_graph(
compiled, ["module_1", "module_2"], a, b, c, d
)
(
FileCheck()
.check("_pre_bucket_all_gather")
.check("all_gather_into_tensor_out")
.check("_pre_bucket_all_gather_1")
.check("all_gather_into_tensor_out_1")
.check("wait_tensor_4")
.check("wait_tensor_5")
.run(str(aten_graph))
)
correct = func(a, b, c, d, ranks=ranks)
self.assertTrue(same(out, correct))
@unittest.skipIf(not HAS_GPU, "Inductor+gpu needs triton and recent GPU arch")
def test_bucketing_reordering_pass_no_bucket(
self,
):
def func(a, b, c, d, *, ranks):
# All 4 all-gathers are independent - COULD be bucketed together
ag1 = _functional_collectives.all_gather_tensor(a, 0, ranks)
ag2 = _functional_collectives.all_gather_tensor(b, 0, ranks)
ag3 = _functional_collectives.all_gather_tensor(c[:4], 0, ranks)
ag4 = _functional_collectives.all_gather_tensor(d[:4], 0, ranks)
# First compute - can hide ag1 and ag2
e = a * 5 # Use a to avoid fusion
mm1 = torch.matmul(e, e.T)
# Force ag1/ag2 to complete before mm2 (but ag3/ag4 can still be deferred)
# Use first 8x8 elements to match mm1's shape
intermediate = ag1[:8, :8] + ag2[:8, :8]
# Second compute - depends on ag1/ag2 through intermediate, can hide ag3/ag4
mm2 = torch.matmul(mm1 + intermediate, c[:8])
# Use all results
result = (
ag1.sum() * 1.1
+ ag2.sum() * 1.2
+ ag3.sum() * 1.3
+ ag4.sum() * 1.4
+ mm1.sum()
+ mm2.sum()
)
return result
with _dynamo_dist_per_rank_init(
self.rank,
self.world_size,
self.backend(device_type),
fake_pg=not at_least_x_gpu(2),
):
a = torch.ones(8, 8, dtype=torch.float, device=device_type)
b = torch.ones(8, 8, dtype=torch.float, device=device_type) * 2
c = torch.ones(8, 8, dtype=torch.float, device=device_type) * 3
d = torch.ones(8, 8, dtype=torch.float, device=device_type) * 4
ranks = list(range(self.world_size))
func_c = functools.partial(func, ranks=ranks)
compiled = torch.compile(func_c)
out, aten_graph = run_and_get_manual_aten_graph(compiled, [], a, b, c, d)
(
FileCheck()
.check("all_gather_into_tensor")
.check("all_gather_into_tensor_1")
.check("all_gather_into_tensor_2")
.check("all_gather_into_tensor_3")
.check("wait_tensor")
.check("wait_tensor_1")
.check("wait_tensor_2")
.check("wait_tensor_3")
.run(str(aten_graph))
)
correct = func(a, b, c, d, ranks=ranks)
self.assertTrue(same(out, correct))
@unittest.skipIf(not HAS_GPU, "Inductor+gpu needs triton and recent GPU arch")
def test_bucketing_reordering_pass_single_bucket(
self,
):
def func(a, b, c, d, *, ranks):
# All 4 all-gathers are independent - COULD be bucketed together
ag1 = _functional_collectives.all_gather_tensor(a, 0, ranks)
ag2 = _functional_collectives.all_gather_tensor(b, 0, ranks)
ag3 = _functional_collectives.all_gather_tensor(c[:4], 0, ranks)
ag4 = _functional_collectives.all_gather_tensor(d[:4], 0, ranks)
# First compute - can hide ag1 and ag2
e = a * 5 # Use a to avoid fusion
mm1 = torch.matmul(e, e.T)
# Force ag1/ag2 to complete before mm2 (but ag3/ag4 can still be deferred)
# Use first 8x8 elements to match mm1's shape
intermediate = ag1[:8, :8] + ag2[:8, :8]
# Second compute - depends on ag1/ag2 through intermediate, can hide ag3/ag4
mm2 = torch.matmul(mm1 + intermediate, c[:8])
# Use all results
result = (
ag1.sum() * 1.1
+ ag2.sum() * 1.2
+ ag3.sum() * 1.3
+ ag4.sum() * 1.4
+ mm1.sum()
+ mm2.sum()
)
return result
with _dynamo_dist_per_rank_init(
self.rank,
self.world_size,
self.backend(device_type),
fake_pg=not at_least_x_gpu(2),
):
a = torch.ones(8, 8, dtype=torch.float, device=device_type)
b = torch.ones(8, 8, dtype=torch.float, device=device_type) * 2
c = torch.ones(8, 8, dtype=torch.float, device=device_type) * 3
d = torch.ones(8, 8, dtype=torch.float, device=device_type) * 4
ranks = list(range(self.world_size))
func_c = functools.partial(func, ranks=ranks)
compiled = torch.compile(func_c)
out, aten_graph = run_and_get_manual_aten_graph(
compiled, [["module_1", "module_2"]], a, b, c, d
)
(
FileCheck()
.check("_pre_bucket_all_gather")
.check("all_gather_into_tensor_out")
.check("wait_tensor_4")
.run(str(aten_graph))
)
correct = func(a, b, c, d, ranks=ranks)
self.assertTrue(same(out, correct))
if __name__ == "__main__":
from torch._dynamo.test_case import run_tests

View File

@ -1341,13 +1341,11 @@ class TestCollectivesInductor(DynamoDistributedSingleProcTestCase):
assert counter.op_count == 3 # It generates 2 getattr to unpack the array
assert same(out, correct)
# This doesn't work in all cases, and we now raise a loud error instead.
# See: https://github.com/pytorch/pytorch/issues/151240
# When differentiable funcols are implemented can revert.
@unittest.expectedFailure
def test_backwards(self):
"""
It's probably not that common to need backwards support for collectives.
However, I wanted to at least see if it was possible to support it as a design goal.
"""
def func(inp):
ar = _functional_collectives.all_reduce(inp, "sum", "0")
return ar

View File

@ -950,7 +950,7 @@ SeqNr|OrigAten|SrcFn|FwdSrcFn
2|aten.threshold_backward.default||relu
1|aten.native_batch_norm_backward.default||batch_norm
0|aten.convolution_backward.default||conv2d
11|aten.add.Tensor||l1_loss
11|aten.add.Tensor||
"""
),
)

View File

@ -2363,6 +2363,34 @@ class FunctionTests(torch._dynamo.test_case.TestCase):
self.assertTrue(same(output, expected))
assert cnt.frame_count == 1
@unittest.skipIf(sys.version_info < (3, 13), "math.fma introduced in python 3.13")
def test_math_fma(self):
def fma_func(a, b, c):
return math.fma(a, b, c)
# Test with scalar constants (constant folding path)
cnt = torch._dynamo.testing.CompileCounter()
cfma_scalars = torch._dynamo.optimize_assert(cnt)(fma_func)
assert cnt.frame_count == 0
expected = fma_func(2.0, 3.0, 4.0)
output = cfma_scalars(2.0, 3.0, 4.0)
self.assertEqual(output, expected)
assert cnt.frame_count == 0
# Test with tensors (Inductor path)
cnt2 = torch._dynamo.testing.CompileCounter()
cfma_tensors = torch._dynamo.optimize_assert(cnt2)(fma_func)
assert cnt2.frame_count == 0
x = torch.tensor(2.0)
y = torch.tensor(3.0)
z = torch.tensor(4.0)
expected_tensors = x * y + z
output_tensors = cfma_tensors(x, y, z)
torch.testing.assert_close(output_tensors, expected_tensors)
assert cnt2.frame_count == 1
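For reference, math.fma(a, b, c) evaluates a * b + c with a single rounding step (available since Python 3.13, per the skip condition above); a minimal sketch of the value the scalar path constant-folds:

```python
import math

# Hedged sketch: fused multiply-add on scalars; the tensor path above is
# checked against the unfused x * y + z instead.
assert math.fma(2.0, 3.0, 4.0) == 10.0
```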
@make_test
def test_numpy_meshgrid(x, y):
r1, r2 = np.meshgrid(x.numpy(), y.numpy())

View File

@ -5788,6 +5788,20 @@ utils_device.CURRENT_DEVICE == None""".split("\n"):
self.assertTrue(torch.allclose(dynamo_output, output))
def test_repr(self):
class Config:
def __repr__(self):
return "Config()"
def forward(x, config):
return x * len(repr(config))
config = Config()
x = torch.randn(2, 2)
compiled = torch.compile(forward, fullgraph=True)
compiled(x, config)
def test_nn_functional_reduction(self):
def fn(loss, reduction):
reduction_enum = F._Reduction.get_enum(reduction)

View File

@ -335,6 +335,59 @@ class <lambda>(torch.nn.Module):
""",
)
@requires_cuda
@requires_multigpu()
def test_new_event_api(self) -> None:
from torch._dynamo.graph_bytecode_inputs import get_external_object_by_index
from torch._dynamo.variables.streams import new_event
def event_generation_backend(gm, *args, **kwargs): # type: ignore[no-untyped-def]
e0_ind = new_event()
with torch.Stream(device="cuda:1"):
get_external_object_by_index(e0_ind).record()
e1_ind = new_event()
self.assertNotEqual(e0_ind, e1_ind)
self.assertNotEqual(
get_external_object_by_index(e0_ind),
get_external_object_by_index(e1_ind),
)
with gm.graph.inserting_after(next(iter(gm.graph.nodes))):
gm.graph.call_function(
get_external_object_by_index, args=(1,), kwargs={}
)
return gm
@torch.compile(backend=event_generation_backend)
def fn(x):
return x + 1
fn(torch.ones(2, 2, device="cuda:0"))
@requires_cuda
def test_new_stream_api(self) -> None:
from torch._dynamo.graph_bytecode_inputs import get_external_object_by_index
from torch._dynamo.variables.streams import new_stream
def stream_generation_backend(gm, *args, **kwargs): # type: ignore[no-untyped-def]
s0_ind = new_stream()
s1_ind = new_stream()
self.assertNotEqual(s0_ind, s1_ind)
self.assertNotEqual(
get_external_object_by_index(s0_ind),
get_external_object_by_index(s1_ind),
)
with gm.graph.inserting_after(next(iter(gm.graph.nodes))):
gm.graph.call_function(
get_external_object_by_index, args=(1,), kwargs={}
)
return gm
@torch.compile(backend=stream_generation_backend)
def fn(x):
return x + 1
fn(torch.ones(2, 2, device="cuda:0"))
@requires_cuda
def test_stream_with_mutation(self):
def fn(x, y):
@ -523,6 +576,23 @@ class <lambda>(torch.nn.Module):
torch.accelerator.set_stream(original_stream)
reset_user_object_tracking()
@requires_cuda
def test_run_opcheck_wait_record_stream(self):
from torch._dynamo.variables.streams import wait_stream
from torch.library import opcheck
s0 = torch.Stream()
s1 = torch.Stream()
s2 = torch.Stream()
store_user_object_weakrefs(s0, s1, s2)
sample_inputs = [
(0, 1),
(2, 0),
]
for args in sample_inputs:
opcheck(wait_stream, args)
@requires_cuda
def test_inductor_lowering(self):
with patch("torch._inductor.config.implicit_fallbacks", False):

View File

@ -331,7 +331,12 @@ class TestDynamismExpression(TestCase):
return torch.ops.aten.slice.Tensor(*args)
inp = (torch.rand((10, 3, 224, 224)), 0, 0, 9223372036854775807)
dynamic_shapes = (({0: Dim("dim")}, None, None, None),)
dynamic_shapes = (
{0: Dim("dim")},
None,
None,
None,
)
torch.export.export(
Slice(),
inp,
@ -585,6 +590,7 @@ class TestExport(TestCase):
inp = ([torch.ones(1, 3)], torch.ones(1, 3))
self._test_export_same_as_eager(f, inp)
@testing.expectedFailureStrictV2
@skipIfCrossRef
def test_custom_tag_metadata_re_export(self):
class Foo(torch.nn.Module):
@ -1021,6 +1027,7 @@ graph():
dynamic_shapes = {"x": (dim0_x, dim1_x)}
export(Foo(), inputs, dynamic_shapes=dynamic_shapes)
@testing.expectedFailureStrictV2
def test_no_tensor_computation(self):
class Module(torch.nn.Module):
def forward(self, x, y):
@ -1356,6 +1363,7 @@ def forward(self, primals, tangents):
# instead of the scripted function, so we get x.sin()
self.assertEqual(res, x.sin())
@testing.expectedFailureStrictV2
def test_no_tensor_computation_2(self):
class Module(torch.nn.Module):
def forward(self, x, y):
@ -1374,6 +1382,7 @@ graph():
return (x,)""",
)
@testing.expectedFailureStrictV2
def test_no_tensor_computation_3(self):
class Module(torch.nn.Module):
def forward(self, x, y):
@ -1392,6 +1401,7 @@ graph():
return (5,)""",
)
@testing.expectedFailureStrictV2
def test_no_tensor_computation_4(self):
class Module(torch.nn.Module):
def forward(self, x, y):
@ -1934,6 +1944,7 @@ graph():
for vr_upper in vr_upper_bounds:
self.assertEqual(vr_upper, 1)
@testing.expectedFailureStrictV2
def test_detect_leak_strict(self):
class Foo(torch.nn.Module):
def __init__(self):
@ -2682,6 +2693,7 @@ class GraphModule(torch.nn.Module):
gm = export(m, (torch.rand(64, 64),))
torch.export.unflatten(gm)
@testing.expectedFailureStrictV2
def test_unflatten_closure(self):
class Dummy(torch.nn.Module):
def forward(self, fn, x):
@ -4187,6 +4199,7 @@ def forward(self, p_linear_weight, p_linear_bias, x):
if str(sym) in ["u0", "s0"]:
self.assertEqual(vr.lower, 1)
@testing.expectedFailureStrictV2
def test_duplicate_modules_with_non_persistent_buffers(self):
class FooWithBuf(torch.nn.Module):
def __init__(self):
@ -4830,6 +4843,7 @@ def forward(self, p_conv_weight, p_conv_bias, p_conv1d_weight, p_conv1d_bias, b_
table.materialize()
self.assertFalse(torch.ops.mylib.foo123.default in table)
@testing.expectedFailureStrictV2
def test_if_post_autograd_op_preserved(self):
class Foo(torch.nn.Module):
def forward(self, x):
@ -5533,21 +5547,11 @@ def forward(self, p_linear_weight, p_linear_bias, b_buffer, x):
w = Wrapped()
if is_retracebility_test(self._testMethodName):
with self.assertRaisesRegex(
torch._dynamo.exc.UserError,
"Detected mismatch between the structure of `inputs` and `dynamic_shapes`"
": `inputs` has 2 elements, but `dynamic_shapes` has 1 elements",
):
export(w, args, dynamic_shapes={"args": ({0: batch}, {0: batch})})
else:
compiled = export(
w, args, dynamic_shapes={"args": ({0: batch}, {0: batch})}
)
expected = w(*args)
mod = compiled.module()
got = mod(*args)
self.assertTrue(torch.allclose(expected, got))
compiled = export(w, args, dynamic_shapes=({0: batch}, {0: batch}))
expected = w(*args)
mod = compiled.module()
got = mod(*args)
self.assertTrue(torch.allclose(expected, got))
def test_dynamic_shapes_builder_basic(self):
class M(torch.nn.Module):
@ -7228,6 +7232,7 @@ def forward(self, p_linear_weight, p_linear_bias, b_buffer, x):
@testing.expectedFailureSerDer # we don't save placeholder metadata
@testing.expectedFailureCppSerDes # we don't save placeholder metadata
@testing.expectedFailureSerDerNonStrict
@testing.expectedFailureStrictV2
def test_linear_conv(self):
strict = True
@ -8826,6 +8831,7 @@ def forward(self, x):
)
)
@testing.expectedFailureStrictV2
def test_automatic_constrain_size(self):
class M(torch.nn.Module):
def forward(self, x, y):
@ -8937,6 +8943,7 @@ def forward(self, x):
):
ep.graph_module.while_loop_body_graph_0(torch.tensor([5]), torch.zeros(1))
@testing.expectedFailureStrictV2
def test_constrain_decomp(self) -> None:
class M(torch.nn.Module):
def __init__(self) -> None:
@ -9575,6 +9582,7 @@ def forward(self, b_a_buffer, x):
self.assertTrue(torch.allclose(ep.module()(xs), module_out))
@requires_cuda_and_triton
@testing.expectedFailureStrictV2
def test_export_associative_scan_lifted_buffers(self):
if "cpp_runtime_nonstrict" in self.id():
self.skipTest("TODO Unexpected success in OSS but not in fbcode.")
@ -9665,6 +9673,7 @@ def forward(self, b_a_buffer, x):
len([node for node in gm.graph.nodes if node.op == "placeholder"]), 2
)
@testing.expectedFailureStrictV2
def test_no_check_is_size_error(self):
class Module(torch.nn.Module):
def forward(self, x):
@ -9818,6 +9827,7 @@ def forward(self, b_a_buffer, x):
self.assertEqual(len(ep.graph_signature.input_specs), 4)
self.assertTrue(torch.allclose(ep.module()(*inp), transform.module()(*inp)))
@testing.expectedFailureStrictV2
def test_tensor_attribute_zero_args(self):
class Foo(torch.nn.Module):
def __init__(self, value):
@ -9831,6 +9841,7 @@ def forward(self, b_a_buffer, x):
ep = export(m, ())
self.assertEqual(ep.graph_signature.lifted_tensor_constants, ["x"])
@testing.expectedFailureStrictV2
def test_preserve_shape_dynamism_for_unused_inputs(self):
torch.export.register_dataclass(
Inp3,
@ -10000,6 +10011,7 @@ def forward(self, p_lin_weight, p_lin_bias, x):
)
@unittest.skipIf(IS_FBCODE, "We can't customize decomp in fbcode")
@testing.expectedFailureStrictV2
def test_export_decomp_torture_case_2(self):
class MyLinear(torch.nn.Module):
def __init__(self) -> None:
@ -10135,6 +10147,7 @@ def forward(self, p_conv_weight, p_conv_bias, p_conv1d_weight, p_conv1d_bias, c_
# expected 4, but got 7
ep_v2.module()(*test_inp)
@testing.expectedFailureStrictV2
def test_constant_output(self):
class ModuleConstant(torch.nn.Module):
def __init__(self) -> None:
@ -10219,6 +10232,7 @@ def forward(self, p_conv_weight, p_conv_bias, p_conv1d_weight, p_conv1d_bias, c_
# expected >= 3, but got 2
ep.module()(*test_inp)
@testing.expectedFailureStrictV2
def test_nested_module(self):
class M1(torch.nn.Module):
def forward(self, x):
@ -10256,6 +10270,7 @@ graph():
unflattened = unflatten(ep)
self.assertTrue(torch.allclose(unflattened(*inps), M2()(*inps)))
@testing.expectedFailureStrictV2
def test_nested_module_with_init_buffer(self):
class M1(torch.nn.Module):
def __init__(self) -> None:
@ -10383,6 +10398,7 @@ graph():
ep = export(m, sample_inputs)
self.assertEqual(ep.module()(*sample_inputs), m(*sample_inputs))
@testing.expectedFailureStrictV2
def test_lazy_module_kwargs(self):
class LazyModule(torch.nn.modules.lazy.LazyModuleMixin, torch.nn.Module):
def initialize_parameters(self, *args, **kwargs):
@ -12256,6 +12272,7 @@ graph():
ep.module()(x)
@testing.expectedFailureCppRuntime
@testing.expectedFailureStrictV2
def test_symint_input_basic(self):
class M(torch.nn.Module):
def forward(self, x, y):
@ -12975,6 +12992,7 @@ def forward(self, c_submod_params, x):
ufm = torch.export.unflatten(ep)
self.assertTrue(torch.allclose(ufm(*inp), epm(*inp)))
@testing.expectedFailureStrictV2
def test_unflatten_multiple_graphs_shared_submodule(self):
class N(torch.nn.Module):
def forward(self, x, b):
@ -14026,6 +14044,7 @@ def forward(self, x):
return (foo_functional,)""",
)
@testing.expectedFailureStrictV2
def test_placeholder_naming_order(self):
# See https://github.com/pytorch/pytorch/issues/143732
@ -14077,6 +14096,7 @@ def forward(self, x):
).run_decompositions()
ep.module()(torch.ones(4, 4), **kwargs)
@testing.expectedFailureStrictV2
def test_placeholder_naming_order_variadic(self):
class Mod(torch.nn.Module):
def forward(self, a, b, c, **kwargs):
@ -14101,6 +14121,7 @@ def forward(self, x):
):
export(Foo(), (torch.randn(4, 4),), strict=False)
@testing.expectedFailureStrictV2
def test_placeholder_naming_collisions(self):
# test collisions between nested user inputs
class Foo(torch.nn.Module):
@ -14173,6 +14194,7 @@ def forward(self, x):
self.assertEqual(expected_names_and_ops, real_names_and_ops)
@skipIfCrossRef # Dynamo changes the order of ops under Torch function modes
@testing.expectedFailureStrictV2
def test_placeholder_naming_collisions_hoo_subgraphs(self):
# test collisions between user inputs, top-level nodes, and HOO subgraph nodes
class Foo(torch.nn.Module):
@ -14250,6 +14272,7 @@ def forward(self, x):
]
self.assertEqual(expected_getattr_names, real_getattr_names)
@testing.expectedFailureStrictV2
def test_constant_input_naming(self):
class Foo(torch.nn.Module):
def forward(self, x, y, div="floor"):
@ -14941,6 +14964,7 @@ graph():
]
self.assertEqual(len(repeat_nodes), 0)
@testing.expectedFailureStrictV2
def test_checks_to_constrain_range(self):
class Foo(torch.nn.Module):
def forward(self, x, y):
@ -15275,6 +15299,7 @@ graph():
Block(torch.randn(4, 4), torch.randn(4, 4))
)
@testing.expectedFailureStrictV2
def test_enum_str(self):
class TensorDim(str, enum.Enum):
DDP = "ddp"
@ -15436,6 +15461,7 @@ def forward(self, x):
return (getitem_3, cos_1)""",
)
@testing.expectedFailureStrictV2
def test_run_decompositions_keep_metadata(self):
"""Make sure the metadata is kept after exported program run_decompositions."""
@ -15465,6 +15491,7 @@ def forward(self, x):
for node in decomposed_program.graph.nodes:
self.assertEqual(node.meta["custom"]["my_field"], "dummy")
@testing.expectedFailureStrictV2
def test_run_decompositions_keep_tensor_constant_metadata(self):
"""Make sure the metadata of tensor constants are kept after run_decompositions."""
@ -16096,6 +16123,7 @@ def forward(self, x):
@testing.expectedFailureSerDer # T195866111
@testing.expectedFailureSerDerNonStrict
@testing.expectedFailureStrictV2
def test_hints_wrapper(self):
strict = True
@ -16670,6 +16698,7 @@ def forward(self, args_0):
return (abs_1,)""",
)
@testing.expectedFailureStrictV2
def test_sdpa_gqa(self):
from torch.nn.attention import sdpa_kernel, SDPBackend
@ -17504,6 +17533,105 @@ def forward(self, x):
exported_param_names = [name for name, _ in gm.named_parameters()]
self.assertEqual(original_param_names, exported_param_names)
def test_export_compiled_model_with_nested_dynamic_shapes(self):
class M(torch.nn.Module):
def forward(self, data_batch):
return data_batch["a1"] + data_batch["a2"]
m = M()
compiled_m = torch.compile(m)
example_args = (
{
"a1": torch.ones(3, 3),
"a2": torch.ones(3, 3),
},
)
dynamic_shapes = (
{
"a1": {0: Dim.DYNAMIC},
"a2": {0: Dim.DYNAMIC},
},
)
ep = export(
compiled_m, example_args, dynamic_shapes=dynamic_shapes, strict=True
)
gm = ep.module()
self.assertEqual(gm(*example_args), compiled_m(*example_args))
def test_export_model_with_nested_dynamic_shapes(self):
class M(torch.nn.Module):
def forward(self, data_batch):
return data_batch["a1"] + data_batch["a2"]
m = M()
example_args = (
{
"a1": torch.ones(3, 3),
"a2": torch.ones(3, 3),
},
)
B = torch.export.Dim("batch", min=1, max=65536)
dynamic_shapes = (
{
"a1": {0: B},
"a2": {0: B},
},
)
ep = export(m, example_args, dynamic_shapes=dynamic_shapes, strict=True)
gm = ep.module()
self.assertEqual(gm(*example_args), m(*example_args))
def test_export_compiled_model_with_kwargs_dynamic_shapes(self):
class M(torch.nn.Module):
def forward(self, a1, a2):
return a1 + a2
m = M()
compiled_m = torch.compile(m)
example_args = ()
example_kwargs = {
"a1": torch.ones(3, 3),
"a2": torch.ones(3, 3),
}
dynamic_shapes = {
"a1": {0: Dim.DYNAMIC},
"a2": {0: Dim.DYNAMIC},
}
ep = export(
compiled_m,
example_args,
kwargs=example_kwargs,
dynamic_shapes=dynamic_shapes,
strict=True,
)
gm = ep.module()
self.assertEqual(gm(**example_kwargs), compiled_m(**example_kwargs))
def test_export_model_with_kwargs_dynamic_shapes(self):
class M(torch.nn.Module):
def forward(self, a1, a2):
return a1 + a2
m = M()
example_args = ()
example_kwargs = {
"a1": torch.ones(3, 3),
"a2": torch.ones(3, 3),
}
dynamic_shapes = {
"a1": {0: Dim.DYNAMIC},
"a2": {0: Dim.DYNAMIC},
}
ep = export(
m,
example_args,
kwargs=example_kwargs,
dynamic_shapes=dynamic_shapes,
strict=True,
)
gm = ep.module()
self.assertEqual(gm(**example_kwargs), m(**example_kwargs))
@unittest.skipIf(not torchdynamo.is_dynamo_supported(), "dynamo doesn't support")
class TestExportCustomClass(TorchTestCase):

View File

@ -15,7 +15,7 @@ test_classes = {}
def mocked_strict_export_v2(*args, **kwargs):
# If user already specified strict, don't make it strict
with config.patch(use_new_tracer_experimental=True):
with config.patch(use_legacy_dynamo_graph_capture=False):
if "strict" in kwargs:
return export(*args, **kwargs)
return export(*args, **kwargs, strict=True)

View File

@ -1092,6 +1092,57 @@ class inner_f(torch.nn.Module):
)
self.assertEqual(joint._aot_state.fw_metadata.static_input_indices, [0, 1])
def test_no_annotation_on_gradient_acc_nodes(self):
"""Test basic linear module with aot_export_joint_with_descriptors"""
class SimpleLinear(nn.Module):
def __init__(self):
super().__init__()
self.linear = nn.Linear(3, 2)
self.linear2 = nn.Linear(3, 2)
def forward(self, x):
with fx_traceback.annotate({"test": 1}):
return self.linear(x) - self.linear2(x)
model = SimpleLinear()
inputs = (torch.randn(4, 3, requires_grad=True),)
graph_module = graph_capture(model, inputs, True)
add_nodes = graph_module.graph.find_nodes(
op="call_function", target=torch.ops.aten.add.Tensor
)
self.assertEqual(len(add_nodes), 1)
gradient_acc_node = add_nodes[0]
self.assertTrue(gradient_acc_node.meta["is_gradient_acc"])
self.assertEqual(gradient_acc_node.meta.get("custom", {}), {})
custom_metadata = fx_traceback._get_custom_metadata(graph_module)
self.assertExpectedInline(
str(custom_metadata),
"""\
('call_function', 't', {'test': 1})
('call_function', 'addmm', {'test': 1})
('call_function', 't_1', {'test': 1})
('call_function', 'addmm_1', {'test': 1})
('call_function', 'sub', {'test': 1})
('call_function', 'neg', {'test': 1})
('call_function', 't_2', {'test': 1})
('call_function', 'mm', {'test': 1})
('call_function', 't_3', {'test': 1})
('call_function', 'mm_1', {'test': 1})
('call_function', 't_4', {'test': 1})
('call_function', 'sum_1', {'test': 1})
('call_function', 'view', {'test': 1})
('call_function', 't_5', {'test': 1})
('call_function', 't_6', {'test': 1})
('call_function', 'mm_2', {'test': 1})
('call_function', 't_7', {'test': 1})
('call_function', 'mm_3', {'test': 1})
('call_function', 't_8', {'test': 1})
('call_function', 'sum_2', {'test': 1})
('call_function', 'view_1', {'test': 1})
('call_function', 't_9', {'test': 1})""",
)
if __name__ == "__main__":
run_tests()

View File

@ -274,7 +274,10 @@ class TestUtils(TestCase):
class TestAnalysis(TestCase):
@skipIf(not SM80OrLater, "Requires SM80")
@skipIf(
(not torch.xpu.is_available()) and (not SM80OrLater),
"Requires XPU or CUDA SM80",
)
def test_noop(self):
with (
patch("sys.stdout", new_callable=StringIO) as mock_stdout,
@ -283,7 +286,10 @@ class TestAnalysis(TestCase):
main()
self.assertEqual(mock_stdout.getvalue(), "")
@skipIf(not SM80OrLater, "Requires SM80")
@skipIf(
(not torch.xpu.is_available()) and (not SM80OrLater),
"Requires XPU or CUDA SM80",
)
@dtypes(torch.float, torch.double, torch.float16)
def test_diff(self, device, dtype):
"""
@ -334,7 +340,11 @@ class TestAnalysis(TestCase):
expected_flops = [4096000, 4096000, 223552896, 223552896, 0, 0, 0]
verify_flops(self, expected_flops, out_profile)
@skipIf(not SM80OrLater, "Requires SM80")
@skipIf(
(not torch.xpu.is_available()) and (not SM80OrLater),
"Requires XPU or CUDA SM80",
)
@skipXPUIf(TEST_WITH_SLOW, "Skip because test too slow on XPU")
@dtypes(torch.float, torch.double, torch.float16)
@parametrize(
"maxat",
@ -504,7 +514,11 @@ class TestAnalysis(TestCase):
self.assertTrue(seen_baddbmm)
self.assertTrue(seen_conv)
@skipIf(not SM80OrLater, "Requires SM80")
@skipIf(
(not torch.xpu.is_available()) and (not SM80OrLater),
"Requires XPU or CUDA SM80",
)
@skipXPUIf(TEST_WITH_SLOW, "Skip because test too slow on XPU")
@dtypes(torch.float, torch.float16)
@parametrize(
"maxat",
@ -554,7 +568,10 @@ class TestAnalysis(TestCase):
if event["name"] == "triton_poi_fused_add_randn_sin_0":
event["args"]["kernel_num_gb"] = 0.002097168
@skipIf(not SM80OrLater, "Requires SM80")
@skipIf(
(not torch.xpu.is_available()) and (not SM80OrLater),
"Requires XPU or CUDA SM80",
)
@dtypes(torch.float, torch.float16)
def test_combine_profiles(self, device, dtype):
"""
@ -630,7 +647,10 @@ class TestAnalysis(TestCase):
# Verify device properties are present
self.assertIn("deviceProperties", combined_profile)
self.assertGreater(len(combined_profile["deviceProperties"]), 0)
# XPU currently does not report deviceProperties the way CUDA does.
# See https://github.com/intel/torch-xpu-ops/issues/2247
if torch.cuda.is_available():
self.assertGreater(len(combined_profile["deviceProperties"]), 0)
# Verify some trace events from each original profile are present
combined_event_names = {
@ -648,7 +668,7 @@ class TestAnalysis(TestCase):
self.assertTrue(profile3_event_names.intersection(combined_event_names))
instantiate_device_type_tests(TestAnalysis, globals())
instantiate_device_type_tests(TestAnalysis, globals(), allow_xpu=True)
if __name__ == "__main__":
run_tests()

View File

@ -7522,6 +7522,38 @@ class AOTInductorTestsTemplate:
eager_outputs = model(*example_inputs)
torch.testing.assert_close(eager_outputs, compiled_outputs)
@requires_gpu
def test_mixed_device_1(self):
if self.device != GPU_TYPE:
raise unittest.SkipTest("Mixed-device test requires GPU")
class Model(torch.nn.Module):
def __init__(self):
super().__init__()
# Buffers are on CPU
self.register_buffer(
"index", torch.tensor([1, 4, 1, 7], device="cpu", dtype=torch.int64)
)
self.register_buffer(
"src", torch.ones(4, device="cpu", dtype=torch.int64)
)
def forward(self, matrix, vector):
# Inputs are on CUDA
# 1. Operation on CPU tensors
z = torch.zeros((vector.shape[0],), device="cpu", dtype=torch.int64)
scatter_result = z.scatter_add(0, self.index, self.src)
# 2. Move result to CUDA and continue on CUDA
v = vector + scatter_result.to(vector.dtype).to(GPU_TYPE)
return torch.matmul(matrix, v)
example_inputs = (
torch.randn(10, 10, device=self.device),
torch.randn(10, device=self.device),
)
self.check_model(Model(), example_inputs, move_model_to_device=False)
class AOTInductorLoggingTest(LoggingTestCase):
@make_logging_test(dynamic=logging.DEBUG)

Some files were not shown because too many files have changed in this diff.