Compare commits


476 Commits

Author SHA1 Message Date
1a938b98a8 Update
[ghstack-poisoned]
2025-11-11 11:03:15 -08:00
7b53a1b265 Update (base update)
[ghstack-poisoned]
2025-11-11 11:03:15 -08:00
474f0febf2 Update
[ghstack-poisoned]
2025-11-11 10:59:08 -08:00
384c2162c2 Update (base update)
[ghstack-poisoned]
2025-11-11 10:59:08 -08:00
bd99ae3315 [Docs] Add warning that torch.export.load uses pickle (#167557)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/167557
Approved by: https://github.com/zhxchen17, https://github.com/angelayi
2025-11-11 18:47:14 +00:00
ce8672c24f Fix use of TORCH_CHECK in torch/csrc/stable (#167495)
Tested by above PR

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167495
Approved by: https://github.com/janeyx99
ghstack dependencies: #166579, #166694, #166695, #167362
2025-11-11 17:58:30 +00:00
402c465030 [ARM] Improve LLM performance & mem usage using int4-bf16 KleidiAI kernels (#158250)
Co-authored-by: Nikhil Gupta [nikhil.gupta2@arm.com](mailto:nikhil.gupta2@arm.com)

This PR enables the use of KleidiAI INT4 kernels that directly produce BF16 outputs within PyTorch to boost LLM prefill & decode performance

**This change improves decode throughput by ~15% and reduces the memory required for model inference by 50%**

### Benchmark Setup
```
Model: meta-llama/Llama-3.1-8B
Test Platform: Neoverse V2
```
### Detailed Results

| Metric                           | With `--compile`         | Without `--compile`      |
|----------------------------------|---------------------------|---------------------------|
| Quantization Scheme              | INT4 symmetric channelwise | INT4 symmetric channelwise |
| Input Precision                  | BF16                      | BF16                      |
| Number of Layers Quantized       | 32                        | 32                        |
| Average Compression Ratio        | 87.49%                    | 87.49%                    |
| Total Quantization Time (s)      | 9.62                      | 10.32                     |
| Compile Time (First) (s)         | 134.48                    | 1.69                      |
| Compile Time (Second) (s)        | 80.44                     | 1.60                      |
| Compile Time (Subsequent) (s)    | 0.19                      | 0.22                      |
| Prefill Tokens                   | 54                        | 54                        |
| Decoded Tokens                   | 33                        | 33                        |
| Prefill Time (s)                 | 0.19                      | 0.22                      |
| Decode Time (s)                  | 0.76                      | 1.38                      |
| E2E Generation Time (s)          | 0.95                      | 1.60                      |
| Prefill Throughput (tokens/s)    | 288.13                    | 249.91                    |
| Decode Throughput (tokens/s)     | 43.42                     | 23.83                     |
Pull Request resolved: https://github.com/pytorch/pytorch/pull/158250
Approved by: https://github.com/malfet, https://github.com/aditew01, https://github.com/fadara01

Co-authored-by: Nikhil Gupta <nikhil.gupta2@arm.com>
Co-authored-by: Nikita Shulga <2453524+malfet@users.noreply.github.com>
2025-11-11 17:50:22 +00:00
573a79fffa [OpenReg] Initialize device stream states for all devices in initOpenRegStreamsOnce (#167528)
Fixes #167527

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167528
Approved by: https://github.com/fffrog
2025-11-11 16:53:22 +00:00
4945180468 Add empty tensor check for _pad_packed_sequence (#167521)
That prevents a null pointer dereference

Fixes https://github.com/pytorch/pytorch/issues/149622
Pull Request resolved: https://github.com/pytorch/pytorch/pull/167521
Approved by: https://github.com/albanD
2025-11-11 16:46:13 +00:00
1df723e6f5 [inductor] Fix constant creation (#167398)
We ran into this issue when debugging inductor-lite. Calling `torch.tensor` within a fake mode (which is the case inside of inductor) will create a FakeTensor, which causes this FakeTensor to be used as a constant within inductor.
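A minimal repro sketch of the behavior described above (illustration only, not code from this PR):

```python
# Under an active FakeTensorMode (as is the case inside inductor),
# torch.tensor produces a FakeTensor rather than a real constant tensor.
import torch
from torch._subclasses.fake_tensor import FakeTensorMode

with FakeTensorMode():
    t = torch.tensor([1.0, 2.0])
    print(type(t))  # expected: <class 'torch._subclasses.fake_tensor.FakeTensor'>
```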

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167398
Approved by: https://github.com/eellison, https://github.com/BoyuanFeng
2025-11-11 16:30:46 +00:00
f9b81e23e4 [ROCm] Disable group gemm CK path when composable kernel (CK) is not enabled (#167403)
For ROCm builds without CK support, ensure use_fast_path is false so that the CK path is not triggered, since CK is currently not available in this configuration.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167403
Approved by: https://github.com/Skylion007, https://github.com/ScottTodd, https://github.com/jeffdaily
2025-11-11 16:15:51 +00:00
ffe6cc39c7 [inductor] Optimize cold compile time when cudagraphs-partition is enabled (#167132)
Summary: When cudagraphs-partition is enabled, we have seen an increase of cold compile time in the vllm benchmark (see https://github.com/vllm-project/vllm/issues/27080). After some profiling, we found Triton compilation time increased the most. Further investigation reveals it was caused by duplicated Triton kernels not being shared among different partitions. This PR fixes the issue by reusing the Triton kernel source code cache at the top-level PythonWrapperCodegen.

In theory we could further reduce the compilation time by completely skipping compilation of duplicated partitions. That can come as a future improvement.
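A conceptual sketch of the sharing idea (names are hypothetical; this is not the actual PythonWrapperCodegen code):

```python
# One source-code cache owned by the top-level wrapper codegen, consulted by
# every partition, so a kernel body that appears in several partitions is
# defined (and later compiled) only once.
class KernelSourceCache:
    def __init__(self):
        self._by_src = {}  # kernel source -> kernel name

    def define(self, src: str) -> str:
        name = self._by_src.get(src)
        if name is None:
            name = f"triton_kernel_{len(self._by_src)}"  # first partition defines it
            self._by_src[src] = name
        return name  # later partitions reuse the existing definition

cache = KernelSourceCache()
assert cache.define("SRC_A") == cache.define("SRC_A")  # deduplicated across partitions
```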

Some vllm benchmarking data,

```
VLLM_USE_STANDALONE_COMPILE=0 VLLM_DISABLE_COMPILE_CACHE=1 vllm bench latency -O.cudagraph_mode=PIECEWISE -O.use_inductor_graph_partition=True --model meta-llama/Meta-Llama-3.1-8
```
Before:
```
torch.compile takes 69.18 s in total
```
After:
```
torch.compile takes 26.81 s in total
```

As a reference, this is the compile time when turning off inductor graph partition. Looks like we still have some gap to close.
```
VLLM_USE_STANDALONE_COMPILE=0 VLLM_DISABLE_COMPILE_CACHE=1 vllm bench latency -O.cudagraph_mode=PIECEWISE -O.use_inductor_graph_partition=False --model meta-llama/Meta-Llama-3.1-8B

torch.compile takes 19.41 s in total
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167132
Approved by: https://github.com/eellison
ghstack dependencies: #167131
2025-11-11 15:54:31 +00:00
db1f3f6901 [inductor] Only generate compile-time auto-tuning block in the main graph (#167131)
Summary: When cudagraphs partition and autotune_at_compile_time are enabled, currently each subgraph will generate its own auto-tuning code block and run them one by one. This PR improves it by only generating one auto-tuning code block at the main graph level and executing it once to auto-tune all the kernels.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167131
Approved by: https://github.com/eellison
2025-11-11 15:54:31 +00:00
43041f0a43 Remove superfluous/misplaced TestFailure specs (#165989)
The tests are in class `TestInductorDynamic`, which isn't affected by the `test_failures` dict; that dict is only used as an argument to `copy_tests` for the `CommonTemplate` defined in another file.

So those have no effect.

Idea: Enhance `copy_tests` to detect unused keys

Pull Request resolved: https://github.com/pytorch/pytorch/pull/165989
Approved by: https://github.com/benjaminglass1, https://github.com/ezyang
2025-11-11 15:36:43 +00:00
dc00842b81 [ROCm][CI] trigger magma build with gfx950 for ROCm7.1 (#167390)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/167390
Approved by: https://github.com/jeffdaily
2025-11-11 15:17:37 +00:00
f1a129a6d0 Clarify that crashes/OOB accesses are not security threats (#167519)
Added a note on crashes and out-of-bounds access in PyTorch.

Addresses https://github.com/pytorch/pytorch/issues/166881#issuecomment-3513245388

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167519
Approved by: https://github.com/albanD
2025-11-11 15:14:51 +00:00
fad48ffa62 [ROCm][CI] Match workflow names with workflow file names (#167483)
Fixes issue with uploading artifacts, which was inadvertently disabled for some renamed workflows via https://github.com/pytorch/pytorch/pull/167225

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167483
Approved by: https://github.com/jeffdaily
2025-11-11 14:45:44 +00:00
3e7a66fae1 [BugFix][Refactor] fix several instances which use f = open(...) without a corresponding f.close() (#167423)
This pattern can lead to file descriptor leaks, which can cause resource exhaustion or other unpredictable issues.
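For illustration, the general pattern being fixed (generic example, not one of the exact call sites touched by this PR):

```python
import os
import tempfile

path = os.path.join(tempfile.gettempdir(), "example.txt")
with open(path, "w") as f:  # create a small file for the demo
    f.write("hello")

# Leak-prone pattern: if an exception is raised before close(), the fd leaks.
f = open(path)
data = f.read()
f.close()

# Preferred pattern: the context manager closes the file even on error.
with open(path) as f:
    data = f.read()
print(data)
```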

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167423
Approved by: https://github.com/cyyever, https://github.com/Skylion007
2025-11-11 11:27:59 +00:00
5f0a563dc8 [pallas backend] implement complex indexing (#167493)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/167493
Approved by: https://github.com/jansel
ghstack dependencies: #167426
2025-11-11 10:32:37 +00:00
678915d5f1 Update Arm copyright dates in LICENSE file (#167529)
Arm has made contributions to PyTorch this year; however, the top-level `LICENSE` file has not yet been updated to reflect this, which this PR addresses.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/167529
Approved by: https://github.com/mlazos
2025-11-11 10:25:08 +00:00
daed97afff [Inductor] fix CppTile2DKernel for fp8 datatype (#167451)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/167451
Approved by: https://github.com/Xia-Weiwen, https://github.com/jansel
2025-11-11 09:25:14 +00:00
53947adb1f [Inductor] optimize the heuristics of sum reduction (#163144)
Fix https://github.com/pytorch/pytorch/issues/151400.
**Summary:**
Optimize the heuristics of sum reduction by reducing the chunk size of cascade sum to improve numerical stability.
I ran the Inductor benchmark with this PR on CPU, and no performance regression was seen.

**Example:**
Take https://github.com/pytorch/pytorch/issues/151400 as an example:
```
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch._inductor import config

config.fallback_random = True
torch.set_grad_enabled(False)
torch.manual_seed(0)

class Model(torch.nn.Module):

    def __init__(self):
        super().__init__()

    def forward(self, x):
        vec = x.flatten()
        vec_one = torch.ones_like(vec)
        x = torch.outer(vec, vec_one)
        return torch.mean(x, dim=1)

model = Model()

x = torch.randn(3, 8, 64, 64)  # error will be amplified as the input tensor gets larger

inputs = [x]

def run_test(model, inputs, backend):
    if backend != "eager":
        model = torch.compile(model, backend=backend)
    torch.manual_seed(0)
    output = model(*inputs)
    return output

output = run_test(model, inputs, 'eager')
c_output = run_test(model, inputs, 'inductor')
fp64 = run_test(model.to(dtype=torch.float64), [inputs[0].to(dtype=torch.float64)], 'eager')

print(torch.allclose(output, c_output, rtol=1e-3, atol=1e-3))
print(torch.max(torch.abs(c_output - output)))
print(torch._dynamo.utils.same(output, c_output, fp64))

```

**logs:**
- Before
```
False
tensor(0.0052)
False
```

- After
```
True
tensor(0.0004)
True
```

**Generated code:**
- Before
```
cpp_fused_mean_mul_ones_like_view_0 = async_compile.cpp_pybinding(['float*', 'const float*'], '''
#include <torch/csrc/inductor/cpp_prefix.h>
extern "C"  void  kernel(float* in_out_ptr0,
                       const float* in_ptr0)
{
    auto out_ptr0 = in_out_ptr0;
    #pragma omp parallel num_threads(240)
    {
        int tid = omp_get_thread_num();
        {
            #pragma omp for
            for(int64_t x0=static_cast<int64_t>(0L); x0<static_cast<int64_t>(98304L); x0+=static_cast<int64_t>(16L))
            {
                {
                    float tmp_acc0 = 0;
                    at::vec::Vectorized<float> tmp_acc0_vec = at::vec::Vectorized<float>(0);
                    for(int64_t x1=static_cast<int64_t>(0L); x1<static_cast<int64_t>(98304L); x1+=static_cast<int64_t>(1L))
                    {
                        {
                            if(C10_LIKELY(x0 >= static_cast<int64_t>(0) && x0 < static_cast<int64_t>(98304L)))
                            {
                                auto tmp0 = at::vec::Vectorized<float>::loadu(in_ptr0 + static_cast<int64_t>(x0), static_cast<int64_t>(16));
                                auto tmp1 = static_cast<float>(1.0);
                                auto tmp2 = at::vec::Vectorized<float>(tmp1);
                                auto tmp3 = tmp0 * tmp2;
                                tmp_acc0_vec = tmp_acc0_vec + tmp3;
                            }
                        }
                    }
                    if(C10_LIKELY(x0 >= static_cast<int64_t>(0) && x0 < static_cast<int64_t>(98304L)))
                    {
                        tmp_acc0_vec.store(out_ptr0 + static_cast<int64_t>(x0));
                    }
                }
                {
                    if(C10_LIKELY(x0 >= static_cast<int64_t>(0) && x0 < static_cast<int64_t>(98304L)))
                    {
                        auto tmp0 = at::vec::Vectorized<float>::loadu(out_ptr0 + static_cast<int64_t>(x0), static_cast<int64_t>(16));
                        auto tmp1 = static_cast<float>(98304.0);
                        auto tmp2 = at::vec::Vectorized<float>(tmp1);
                        auto tmp3 = tmp0 / tmp2;
                        tmp3.store(in_out_ptr0 + static_cast<int64_t>(x0));
                    }
                }
            }
        }
    }
}
''')

async_compile.wait(globals())
del async_compile

class Runner:
    def __init__(self, partitions):
        self.partitions = partitions

    def recursively_apply_fns(self, fns):
        new_callables = []
        for fn, c in zip(fns, self.partitions):
            new_callables.append(fn(c))
        self.partitions = new_callables

    def call(self, args):
        arg0_1, = args
        args.clear()
        assert_size_stride(arg0_1, (3, 8, 64, 64), (32768, 4096, 64, 1))
        buf0 = empty_strided_cpu((98304, ), (1, ), torch.float32)
        buf1 = buf0; del buf0  # reuse
        # [Provenance debug handles] cpp_fused_mean_mul_ones_like_view_0:1
        cpp_fused_mean_mul_ones_like_view_0(buf1, arg0_1)
        del arg0_1
        return (buf1, )
```

- After
```
cpp_fused_mean_mul_ones_like_view_0 = async_compile.cpp_pybinding(['float*', 'const float*'], '''
#include <torch/csrc/inductor/cpp_prefix.h>
extern "C"  void  kernel(float* in_out_ptr0,
                       const float* in_ptr0)
{
    auto out_ptr0 = in_out_ptr0;
    #pragma omp parallel num_threads(240)
    {
        int tid = omp_get_thread_num();
        {
            #pragma omp for
            for(int64_t x0=static_cast<int64_t>(0L); x0<static_cast<int64_t>(98304L); x0+=static_cast<int64_t>(16L))
            {
                {
                    float tmp_acc0 = 0;
                    at::vec::Vectorized<float> tmp_acc0_vec = at::vec::Vectorized<float>(0);
                    at::vec::Vectorized<float> masked_tmp_acc0_vec = at::vec::Vectorized<float>(0);
                    CascadeSumHelper<float, 4096> scalar_cascade_helper0(static_cast<int64_t>(98304L));
                    CascadeSumHelper<at::vec::Vectorized<float>, 4096> cascade_helper0(static_cast<int64_t>(98304L));
                    CascadeSumHelper<at::vec::Vectorized<float>, 4096> masked_cascade_helper0(static_cast<int64_t>(0L));
                    for(int64_t x1=static_cast<int64_t>(0L); x1<static_cast<int64_t>(98304L); x1+=static_cast<int64_t>(1L))
                    {
                        {
                            if(C10_LIKELY(x0 >= static_cast<int64_t>(0) && x0 < static_cast<int64_t>(98304L)))
                            {
                                auto tmp0 = at::vec::Vectorized<float>::loadu(in_ptr0 + static_cast<int64_t>(x0), static_cast<int64_t>(16));
                                auto tmp1 = static_cast<float>(1.0);
                                auto tmp2 = at::vec::Vectorized<float>(tmp1);
                                auto tmp3 = tmp0 * tmp2;
                                tmp_acc0_vec = cascade_sum_combine(tmp3, &cascade_helper0);
                            }
                        }
                    }
                    tmp_acc0 = cascade_sum_final(&scalar_cascade_helper0);
                    tmp_acc0_vec = cascade_sum_final(&cascade_helper0);
                    masked_tmp_acc0_vec = cascade_sum_final(&masked_cascade_helper0);
                    if(C10_LIKELY(x0 >= static_cast<int64_t>(0) && x0 < static_cast<int64_t>(98304L)))
                    {
                        tmp_acc0_vec = tmp_acc0_vec + masked_tmp_acc0_vec;
                        tmp_acc0_vec.store(out_ptr0 + static_cast<int64_t>(x0));
                    }
                }
                {
                    if(C10_LIKELY(x0 >= static_cast<int64_t>(0) && x0 < static_cast<int64_t>(98304L)))
                    {
                        auto tmp0 = at::vec::Vectorized<float>::loadu(out_ptr0 + static_cast<int64_t>(x0), static_cast<int64_t>(16));
                        auto tmp1 = static_cast<float>(98304.0);
                        auto tmp2 = at::vec::Vectorized<float>(tmp1);
                        auto tmp3 = tmp0 / tmp2;
                        tmp3.store(in_out_ptr0 + static_cast<int64_t>(x0));
                    }
                }
            }
        }
    }
}
''')

async_compile.wait(globals())
del async_compile

class Runner:
    def __init__(self, partitions):
        self.partitions = partitions

    def recursively_apply_fns(self, fns):
        new_callables = []
        for fn, c in zip(fns, self.partitions):
            new_callables.append(fn(c))
        self.partitions = new_callables

    def call(self, args):
        arg0_1, = args
        args.clear()
        assert_size_stride(arg0_1, (3, 8, 64, 64), (32768, 4096, 64, 1))
        buf0 = empty_strided_cpu((98304, ), (1, ), torch.float32)
        buf1 = buf0; del buf0  # reuse
        # [Provenance debug handles] cpp_fused_mean_mul_ones_like_view_0:1
        cpp_fused_mean_mul_ones_like_view_0(buf1, arg0_1)
        del arg0_1
        return (buf1, )
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/163144
Approved by: https://github.com/CaoE, https://github.com/mingfeima, https://github.com/jansel
2025-11-11 09:25:00 +00:00
c297b02f12 [DTensor] statically_known_true for slice strategy (#166990)
Avoids data-dependent errors for out-of-bounds & redundant slice checks.

The sharding logic that immediately depends on this only checks for redundant slices, and is saying: "it's safe to reuse the input placements if a) the slicing dimension isn't sharded, or b) the slice is redundant, so just pretend this op didn't happen".

This has a slight effect on output placements when a slice is performed on a sharded dim and dynamic shapes are involved (size/start/end/step). Now, if the slice isn't obviously redundant, we won't immediately consider the input placements valid (even if they could be for very particular runtime shapes), and will instead select strategies valid for the general case - in this case, unsharding the slicing dim.

For backed symbols, we could choose to recompile when the redundant case is hit, by switching to `guard_or_false`, but it's not obvious how desirable this is.
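A small illustration of the kind of check involved (a sketch under assumptions, not the actual DTensor sharding-rule code):

```python
# statically_known_true returns False when the answer cannot be determined
# statically, instead of raising a data-dependent error or adding a guard,
# so a possibly-redundant slice with dynamic sizes simply falls back to the
# general-case strategy.
from torch.fx.experimental.symbolic_shapes import statically_known_true

def slice_is_statically_redundant(start, end, step, dim_size) -> bool:
    return (
        statically_known_true(start == 0)
        and statically_known_true(end >= dim_size)
        and statically_known_true(step == 1)
    )

print(slice_is_statically_redundant(0, 10, 1, 10))  # True for plain ints
```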

Pull Request resolved: https://github.com/pytorch/pytorch/pull/166990
Approved by: https://github.com/laithsakka
2025-11-11 08:04:09 +00:00
bd24774f50 [XPU][Test] Enable XPU tests in inductor/test_analysis.py (#166840)
This PR enables XPU devices in test_analysis.py.

For performance reasons, it skips some slow tests; the full scope can be enabled by using:

```
export PYTORCH_TEST_WITH_SLOW=1
```

**PR Stack:**

- https://github.com/pytorch/pytorch/pull/166840 : This PR enables the tests and skips the ones that failed
- https://github.com/pytorch/pytorch/pull/166839 : This fixes the bug and enables the full tests for XPU

**Some skipped test time:**

```
test_augment_trace_against_flop_counter_maxat0_xpu_float16 [49.0863s]
test_augment_trace_against_flop_counter_maxat0_xpu_float32 [18.2268s]
test_augment_trace_against_flop_counter_maxat1_xpu_float16 [85.6549s]
test_augment_trace_against_flop_counter_maxat1_xpu_float32 [329.0832s]
test_augment_trace_against_flop_counter_maxat2_xpu_float16 [24.4825s]
test_augment_trace_against_flop_counter_maxat2_xpu_float32 [19.0688s]
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/166840
Approved by: https://github.com/guangyey, https://github.com/jansel
2025-11-11 07:49:07 +00:00
525eb9fab9 Fix command injection vulnerability in PCH compilation (#167502)
Fixed a command injection vulnerability in PreCompiled Header (PCH) compilation where extra_cflags were passed to subprocess with shell=True, allowing arbitrary command execution through malicious compiler flags.

Changed subprocess.check_output(pch_cmd, shell=True) to use shlex.split() to safely parse the command without shell interpretation. This prevents shell metacharacters (;, |, &, etc.) in extra_cflags from being executed as shell commands.
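A minimal sketch of the before/after pattern (illustrative command, not the exact PCH command built by cpp_extension):

```python
import shlex
import subprocess
import sys

# Stand-in for the real PCH compile command assembled from extra_cflags.
pch_cmd = f"{shlex.quote(sys.executable)} --version"

# Before (vulnerable): subprocess.check_output(pch_cmd, shell=True) would let
# shell metacharacters smuggled into extra_cflags execute as shell commands.
# After: split into argv tokens and run without a shell.
out = subprocess.check_output(shlex.split(pch_cmd))
print(out.decode().strip())
```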

Added test case test_pch_command_injection that verifies:
1. PCH compilation attempts with malicious payloads in extra_cflags
2. Shell commands embedded in flags are not executed
3. Exploit file is not created, proving no shell execution occurred

Note: On RHEL/Fedora and other systems with versioned GCC compilers, the test depends on https://github.com/pytorch/pytorch/pull/167501 being merged first, otherwise the test will be skipped due to GCC detection issues.

Fixes #167480

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167502
Approved by: https://github.com/malfet
2025-11-11 07:27:41 +00:00
7886070fc5 Use stable topological sort in fuse_by_partitions (#167397)
legalize_graph() performs a topo sort that shuffles the nodes in a global way, making the result unpredictable.
We should avoid this in graph passes in general.
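A generic sketch of the property relied on here, i.e. a topological sort that keeps original order whenever dependencies allow (illustration only, not the FX implementation):

```python
def stable_topo_sort(nodes, deps):
    # nodes: list in original order; deps: node -> set of prerequisite nodes.
    # Always pick the first ready node in original order, so unrelated parts
    # of the graph are not shuffled.
    placed, order = set(), []
    remaining = list(nodes)
    while remaining:
        for i, n in enumerate(remaining):
            if deps.get(n, set()) <= placed:
                order.append(n)
                placed.add(n)
                del remaining[i]
                break
        else:
            raise ValueError("cycle detected")
    return order

print(stable_topo_sort(["a", "b", "c", "d"], {"c": {"a"}, "d": {"b", "c"}}))
# ['a', 'b', 'c', 'd'] -- original order preserved where valid
```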

This problem was discovered when testing regional_inductor: a single fused region triggered the global reordering.

Before
https://www.internalfb.com/intern/diffing/?before_paste_number=2029217728&after_paste_number=2029218006&regex_remove_pattern=&enable_regex_remove=0&strip_empty_lines=0&line_wrap=0&selected_tab=plain_diff

After
https://www.internalfb.com/intern/diffing/?paste_number=2029162294&regex_remove_pattern=&enable_regex_remove=0&strip_empty_lines=0&line_wrap=0&selected_tab=plain_diff

Left is gm before regional_inductor, right is after.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167397
Approved by: https://github.com/ezyang
2025-11-11 07:14:02 +00:00
87d17e9dee [pallas backend] Implementing Strided/Scatter Access (#167426)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/167426
Approved by: https://github.com/yarongmu-google, https://github.com/jansel
2025-11-11 06:32:25 +00:00
53422e6bc8 [MPS] Add mechanism for reporting asserts from kernels (#166615)
Allocate an ErrorMessages buffer associated with the MPSStream and introduce a `c10::metal::report_error` method (and `TORCH_REPORT_ERROR` macro) that can be used to preserve up to `c10::metal::error_message_count` messages

Add test that detects those

As a result, an attempt to run something like
```python
import torch
x=torch.rand(10, 1, 10, device='mps')
y=x[:, [1]]
torch.mps.synchronize()
```
will raise `torch.AcceleratorError: index 1 is out of bounds for dimension 0 with size 1`

Fixes https://github.com/pytorch/pytorch/issues/111669
Pull Request resolved: https://github.com/pytorch/pytorch/pull/166615
Approved by: https://github.com/manuelcandales, https://github.com/dcci
ghstack dependencies: #167444, #167445
2025-11-11 06:28:14 +00:00
c34b743eac [Dynamo] Support for xor (#166065)
Add missing support for xor (and maybe some other binary ops later on)
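A small usage sketch exercising `^` inside a compiled function (generic example; the exact previously failing case is in the linked issue):

```python
import torch

@torch.compile(fullgraph=True)
def f(x, a: bool, b: bool):
    # `^` on Python values inside the compiled region goes through dynamo's
    # binary-op handling for xor.
    return x + 1 if (a ^ b) else x - 1

print(f(torch.zeros(2), True, False))  # tensor([1., 1.])
```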

Fixes #146688

Pull Request resolved: https://github.com/pytorch/pytorch/pull/166065
Approved by: https://github.com/ezyang
2025-11-11 05:44:08 +00:00
db250fa895 Revert "Expose THPVariable_Wrap() with a type argument (#167488)"
This reverts commit 52a6b5a4cc9f938b9cda102fb506fd0e4b32ecad.

Reverted https://github.com/pytorch/pytorch/pull/167488 on behalf of https://github.com/pytorch-auto-revert due to Reverted automatically by pytorch's autorevert, to avoid this behaviour add the tag autorevert: disable ([comment](https://github.com/pytorch/pytorch/pull/167488#issuecomment-3515070469))
2025-11-11 05:39:40 +00:00
52231a7974 show current env before running lint (#166860)
There seems to be some discrepancy between CI and local runs for Pyrefly, so we log these to be able to check for different dependency versions
Pull Request resolved: https://github.com/pytorch/pytorch/pull/166860
Approved by: https://github.com/janeyx99
2025-11-11 05:28:31 +00:00
cf71c53eae Fix check_compiler_is_gcc to detect versioned GCC compilers (#167501)
The function was only returning True for compilers named 'c++', but failed to detect g++, gcc, g++-13, g++-14, etc. This fixes the detection to work for any GCC variant by checking for both COLLECT_GCC and 'gcc version' in the compiler output.

The previous implementation used os.path.basename() on the resolved compiler path and only checked if it exactly matched 'c++'. This caused false negatives for versioned GCC installations and direct g++ usage.

The fix simplifies the logic: if both COLLECT_GCC is present in the output (indicating GCC toolchain) and 'gcc version' appears in the version string, it's GCC.
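A simplified sketch of the detection idea (not the exact code in torch.utils.cpp_extension):

```python
# Any GCC variant (g++, gcc, g++-13, ...) prints "COLLECT_GCC" and
# "gcc version" when invoked with -v, regardless of the binary's name.
import subprocess

def looks_like_gcc(compiler: str) -> bool:
    try:
        result = subprocess.run(
            [compiler, "-v"], capture_output=True, text=True, check=False
        )
    except FileNotFoundError:
        return False
    version_info = result.stdout + result.stderr  # gcc writes -v output to stderr
    return "COLLECT_GCC" in version_info and "gcc version" in version_info

print(looks_like_gcc("g++"))  # True on a GCC toolchain, False for clang or missing
```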

Fixes #167499

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167501
Approved by: https://github.com/ezyang, https://github.com/malfet
2025-11-11 05:14:05 +00:00
f9caae42ed [MPS] Move dispatch_sync_with_rethrow to MPSStream (#167445)
And wrap dispatches to copy sync with rethrow-wrapped method

Needed if an exception could be raised during the sync, for example when surfacing async errors
Pull Request resolved: https://github.com/pytorch/pytorch/pull/167445
Approved by: https://github.com/Skylion007, https://github.com/manuelcandales, https://github.com/dcci
ghstack dependencies: #167444
2025-11-11 05:03:50 +00:00
52a6b5a4cc Expose THPVariable_Wrap() with a type argument (#167488)
For torchdistx, which is only *mostly* dead

Differential Revision: D86712979

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167488
Approved by: https://github.com/soulitzer
2025-11-11 04:56:50 +00:00
94f6f79e27 [3/N] Use Python 3.10 typing (#167431)
This PR applies new Union and Optional typing syntax to some files.
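For illustration, the kind of rewrite applied (requires Python 3.10+ at runtime):

```python
from typing import Optional, Union

def old(x: Optional[int], y: Union[int, str]) -> Optional[str]: ...

# becomes

def new(x: int | None, y: int | str) -> str | None: ...
```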

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167431
Approved by: https://github.com/ezyang
2025-11-11 04:40:05 +00:00
5676de1157 [PT2] Supply index to fake tensors on mtia device (#167457)
Summary:
When PT2 sees an operation like `torch.empty_strided(8, device="cuda")` it returns a fake tensor on a specific device index, such as `"cuda:0"`. This is implemented via a list of multi-index devices in the fake tensor constructor. If a device supports indices but is not in this list, we can hit fake tensor propagation errors as `"cuda"` is not considered to be the same device as `"cuda:0"`.

This PR adds the `"mtia"` device to this list, to resolve some fake tensor propagation errors we're seeing with the full Inductor backend. (Internal task: T243176905)
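For reference, the device-equality behavior behind those propagation errors:

```python
import torch

# A bare device and an indexed device do not compare equal, which is what
# trips fake tensor propagation when one side carries an index and the
# other does not.
print(torch.device("cuda") == torch.device("cuda:0"))  # False
```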

Test Plan: Tests are stacked on the internal diff D86605248.

Reviewed By: StellarrZ

Differential Revision: D86605248

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167457
Approved by: https://github.com/eellison, https://github.com/mlazos
2025-11-11 04:31:28 +00:00
2ca0b3f70a [simplefsdp] fix autobucketing pass that takes comm op as input (#167484)
Fix for issue: https://github.com/pytorch/torchtitan/issues/2004

The root cause is that we are scheduling comm ops that are used as input to bwd graph.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167484
Approved by: https://github.com/eellison
2025-11-11 04:09:02 +00:00
b06453c7cf Make PT2 compile backprop through custom op without autograd key a hard error (#166367)
Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/166367
Approved by: https://github.com/bdhirsh
2025-11-11 03:16:30 +00:00
f0fa39a7e4 Revert "[inductor, 3.14] fix itertools.product pickle error in test_cpu_repro (#167382)"
This reverts commit 5320ca3725c4ccf2811c211b48af1ddebb2b471f.

Reverted https://github.com/pytorch/pytorch/pull/167382 on behalf of https://github.com/jeanschmidt due to merged stack seems to have introduced regressions on windows: test/test_dataloader.py::TestDataLoaderPersistentWorkers::test_worker_init_fn_forkserver [GH job link](https://github.com/pytorch/pytorch/actions/runs/19245763893/job/55023427916) [HUD commit link](fe0bb7cf60) ([comment](https://github.com/pytorch/pytorch/pull/167382#issuecomment-3514782788))
2025-11-11 02:57:09 +00:00
b5142f74f9 Revert "[inductor, 3.14] catch pickle.PicklingError exceptions (#167383)"
This reverts commit ad7db3617ec5cc3aa384bd4408fcfbc2acac1a98.

Reverted https://github.com/pytorch/pytorch/pull/167383 on behalf of https://github.com/jeanschmidt due to merged stack seems to have introduced regressions on windows: test/test_dataloader.py::TestDataLoaderPersistentWorkers::test_worker_init_fn_forkserver [GH job link](https://github.com/pytorch/pytorch/actions/runs/19245763893/job/55023427916) [HUD commit link](fe0bb7cf60) ([comment](https://github.com/pytorch/pytorch/pull/167382#issuecomment-3514782788))
2025-11-11 02:57:09 +00:00
a14452bfce Revert "[dynamo, 3.14] enable dynamo in 3.14 (#167384)"
This reverts commit 17e70ae459c45d85ef77afa4d19efe5f8b44f573.

Reverted https://github.com/pytorch/pytorch/pull/167384 on behalf of https://github.com/jeanschmidt due to merged stack seems to have introduced regressions on windows: test/test_dataloader.py::TestDataLoaderPersistentWorkers::test_worker_init_fn_forkserver [GH job link](https://github.com/pytorch/pytorch/actions/runs/19245763893/job/55023427916) [HUD commit link](fe0bb7cf60) ([comment](https://github.com/pytorch/pytorch/pull/167382#issuecomment-3514782788))
2025-11-11 02:57:09 +00:00
619f329a4b Revert "[3.14, dataloader] handle forkserver default mp start method in 3.14 (#167387)"
This reverts commit cf63b212e330836c2be92bef903f5a5d0dc2c7e9.

Reverted https://github.com/pytorch/pytorch/pull/167387 on behalf of https://github.com/jeanschmidt due to merged stack seems to have introduced regressions on windows: test/test_dataloader.py::TestDataLoaderPersistentWorkers::test_worker_init_fn_forkserver [GH job link](https://github.com/pytorch/pytorch/actions/runs/19245763893/job/55023427916) [HUD commit link](fe0bb7cf60) ([comment](https://github.com/pytorch/pytorch/pull/167382#issuecomment-3514782788))
2025-11-11 02:57:09 +00:00
7a48db0809 Revert "[export, 3.14] handle patching methods with functools.partial correctly in non-strict export (#167396)"
This reverts commit fe0bb7cf6001532b14bba14d686baa1ff0b98de0.

Reverted https://github.com/pytorch/pytorch/pull/167396 on behalf of https://github.com/jeanschmidt due to merged stack seems to have introduced regressions on windows: test/test_dataloader.py::TestDataLoaderPersistentWorkers::test_worker_init_fn_forkserver [GH job link](https://github.com/pytorch/pytorch/actions/runs/19245763893/job/55023427916) [HUD commit link](fe0bb7cf60) ([comment](https://github.com/pytorch/pytorch/pull/167382#issuecomment-3514782788))
2025-11-11 02:57:08 +00:00
406f2943d2 Revert "Rework PyObject preservation (#166342)"
This reverts commit 6ca8cc6edf30b5ca882d4871af617e674b6cdd47.

Reverted https://github.com/pytorch/pytorch/pull/166342 on behalf of https://github.com/jeanschmidt due to seems to have introduced test/test_reductions.py::TestReductionsCPU::test_dim_reduction_fns_fn_name_var_cpu_int8 [GH job link](https://github.com/pytorch/pytorch/actions/runs/19247187935/job/55027440149) [HUD commit link](6ca8cc6edf) ([comment](https://github.com/pytorch/pytorch/pull/166342#issuecomment-3514771276))
2025-11-11 02:54:00 +00:00
c3bc56c8b4 [xpu][fix] Format XPU c10 and aten code (#167298)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/167298
Approved by: https://github.com/cyyever, https://github.com/albanD, https://github.com/Skylion007
2025-11-11 02:07:37 +00:00
b2be4d24c0 [DTensor] Make ExplicitRedistributeContext strict/non-strict mode (#167370)
Also support nesting, enable/disable, and make the class use a
thread-local for storage so independent threads do not confuse each
other.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167370
Approved by: https://github.com/ezyang
ghstack dependencies: #166593
2025-11-11 01:19:16 +00:00
8d5cceeb6a [torchbench][optimus] Add backend optimus (#167357)
Summary: `--optimus [all | vertical_opt | horizontal_opt]` will kick off inductor compile with different fusion strategies.

Test Plan:
TorchBench Runner:

```
$ buck2 run mode/opt //pytorch/benchmark:run -- customized_optimus_illustrative -t train -d cuda
GPU Time per batch:   56.254 milliseconds
CPU Wall Time per batch:  56.326 milliseconds
CPU Wall Time:        56.326 milliseconds
Time to first batch:          420.0777 ms
GPU 0 Peak Memory:              0.0695 GB
CPU Peak Memory:              359.6362 GB
```

PT2 Benchmark Runner (comparing with eager):

```
buck2 run mode/opt //pytorch/benchmark:pt2 -- --only customized_optimus_illustrative  --performance --training --inductor

running benchmark: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 30/30 [00:02<00:00, 14.37it/s]
4.509x
```

eager latency: ~56 ms
inductor latency: ~11 ms

Optimus backend:

```
$ buck2 run mode/opt //pytorch/benchmark:pt2 -- --only customized_optimus_illustrative --performance --training --optimus all
11.02923508733511 ms, 13.884015614166856 ms, 0.794x
```

```
$ buck2 run mode/opt //pytorch/benchmark:pt2 -- --only customized_optimus_illustrative --performance --training --optimus vertical_opt
12.47156853787601 ms, 10.699485195800662 ms, 1.166x
```

```
$ buck2 run mode/opt //pytorch/benchmark:pt2 -- --only customized_optimus_illustrative --performance --training --optimus horizontal_opt
11.078484123572707 ms, 10.797873372212052 ms, 1.026x
```

optimus latency ~10 ms

Differential Revision: D86524903

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167357
Approved by: https://github.com/mengluy0125
2025-11-11 00:35:30 +00:00
f6331192b4 Enable Doc builds for: Minor Releases RCs. Minor and Patch Releases final RC (#167478)
Enable Doc builds for
1. Minor Releases RCs
2. Minor and Patch Releases final RC

This is done to prevent publishing doc for patch releases when building rcs.
See:
https://github.com/pytorch/docs/pull/57

Followup after: https://github.com/pytorch/pytorch/pull/153973
Pull Request resolved: https://github.com/pytorch/pytorch/pull/167478
Approved by: https://github.com/svekars, https://github.com/seemethere
2025-11-11 00:34:08 +00:00
f8d408d24a [CP] Correctly compile create_cp_block_mask (#167153)
Currently we re-compile create_block_mask every time, which is not very efficient, and the global compilation also causes some issues. This PR lazily compiles create_block_mask and does it only once.
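A rough sketch of the lazy, compile-once idea using the public create_block_mask API (not the actual context-parallel implementation):

```python
from functools import lru_cache

import torch
from torch.nn.attention.flex_attention import create_block_mask

@lru_cache(maxsize=1)
def _compiled_create_block_mask():
    # torch.compile runs only on first use; later calls reuse the cached callable.
    return torch.compile(create_block_mask)

def causal(b, h, q_idx, kv_idx):
    return q_idx >= kv_idx

mask = _compiled_create_block_mask()(causal, 1, 1, 256, 256, device="cpu")
print(type(mask).__name__)  # BlockMask
```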

Fixes https://github.com/pytorch/pytorch/issues/167064

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167153
Approved by: https://github.com/drisspg, https://github.com/XilunWu
2025-11-11 00:03:06 +00:00
5a85b6eaf8 Migrate TypeTraits, TypeList, Metaprogramming to torch:: headeronly (#167386)
Taking over #163634; adding tests/headeronly APIs

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167386
Approved by: https://github.com/albanD, https://github.com/mikaylagawarecki
2025-11-11 00:02:20 +00:00
e3d6896d08 [MTIAGraph][Pytorch][3/n] Implement mtia_graph python wrapper in pytorch (#166964)
- Add python module `mtia_graph.py`, which is a wrapper on top of the c++ logic implemented in previous PRs/diffs
- Add python level integration tests

[Doc](https://docs.google.com/document/d/1Q3xdZAIqhBvuy2HxGDfJyXVmxYXUEeYSZSwsp7bcJF8/edit?tab=t.osb46a42t6wb)

Differential Revision: [D84673488](https://our.internmc.facebook.com/intern/diff/D84673488/)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/166964
Approved by: https://github.com/malfet
2025-11-10 23:49:20 +00:00
9d9e7c7b1c [Pytorch] Extend OSS conversion benchmarks (#167099)
Summary: We are extending the OSS conversion benchmarks to include all combinations between types

Test Plan: CI

Differential Revision: D86315975

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167099
Approved by: https://github.com/mcfi
2025-11-10 23:36:57 +00:00
4c3721fe70 allow sym_stride, and sym_size lowering in inductor to return ints (#167345)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/167345
Approved by: https://github.com/eellison
2025-11-10 23:29:23 +00:00
8ef4099313 Revert "Add min/max support for barebones uint types (#166813)"
This reverts commit 9ffc480c5a928eaccb4ac0e1755a1c596674d884.

Reverted https://github.com/pytorch/pytorch/pull/166813 on behalf of https://github.com/jeanschmidt due to It was reverted internally 6 days ago, but not reverted on OSS, this is causing conflicts ([comment](https://github.com/pytorch/pytorch/pull/166813#issuecomment-3514328895))
2025-11-10 23:25:22 +00:00
de773364be Support AC in default partitioner when functionalization is enabled (#166610)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/166610
Approved by: https://github.com/SherlockNoMad
ghstack dependencies: #166536
2025-11-10 23:09:01 +00:00
47da714b8b [inductor][determinism] type errors + use odc to dump imc on exit (#167136)
Summary: fix some type errors; also, instead of manually creating a filelock when dumping dcache's imc to a file, we simply use an odc (since this is the intended behavior of odc anyway)

Test Plan:
```
buck test fbcode//mode/opt caffe2/test/inductor:caching
```

Differential Revision: D86345594

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167136
Approved by: https://github.com/aorenste
2025-11-10 22:51:03 +00:00
69ab1f93e4 Add shim for at::get_num_threads (#167362)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/167362
Approved by: https://github.com/janeyx99
ghstack dependencies: #166579, #166694, #166695
2025-11-10 22:21:14 +00:00
232baa33b3 Redo add parallel_for to torch/csrc/stable (#166695)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/166695
Approved by: https://github.com/malfet
ghstack dependencies: #166579, #166694
2025-11-10 22:21:14 +00:00
6f0182495f Add stable::Tensor.device() (#166694)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/166694
Approved by: https://github.com/janeyx99
ghstack dependencies: #166579
2025-11-10 22:21:14 +00:00
7da82b84e2 Add torch::stable::Device (#166579)
Prior to this PR, the IValue <-> StableIValue conversion for `DeviceObjType` (aka c10::Device) was to pack it into the leading bits of the StableIValue (which is a uint64_t)

After this PR, the IValue <-> StableIValue conversion for `DeviceObjType` expects DeviceType to be packed into the upper 32 bits of StableIValue and DeviceIndex to be packed into the lower 32 bits
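An illustrative sketch of the packing scheme in Python arithmetic (the real conversion lives in the C++ shim; the device-type id used here is just an example):

```python
def pack_device(device_type: int, device_index: int) -> int:
    # DeviceType in the upper 32 bits, DeviceIndex in the lower 32 bits.
    return ((device_type & 0xFFFFFFFF) << 32) | (device_index & 0xFFFFFFFF)

def unpack_device(stable_ivalue: int):
    return stable_ivalue >> 32, stable_ivalue & 0xFFFFFFFF

device_type, device_index = 1, 3  # example values only
assert unpack_device(pack_device(device_type, device_index)) == (device_type, device_index)
```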

Pull Request resolved: https://github.com/pytorch/pytorch/pull/166579
Approved by: https://github.com/janeyx99
2025-11-10 22:21:14 +00:00
cda7604434 [ez] Remove spammy deprecation log (#167470)
"
/packages/pytorch_latest_sixlib_conda/conda/lib/python3.12/site-packages/torch/_dynamo/variables/user_defined.py:1815: FutureWarning: `isinstance(treespec, LeafSpec)` is deprecated, use `isinstance(treespec, TreeSpec) and treespec.is_leaf()` instead.
  return ctor(*args, **kwargs)"

is too spammy

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167470
Approved by: https://github.com/tugsbayasgalan
2025-11-10 21:49:23 +00:00
6ca8cc6edf Rework PyObject preservation (#166342)
Make the PyObject preservation scheme thread-safe with free threaded (nogil) Python. The general idea is:

* Python Tensor and Storage objects always hold a strong reference to their underlying c10 object
* c10 objects hold a strong reference to their Python objects if there's at least one other reference to the c10 object

This is implemented in `intrusive_ptr`:

* The topmost bit (`kHasPyObject`) of the weakref count is now used to indicate whether the `intrusive_ptr_target` has an associated PyObject. So `kHasPyObject` is one bit, the weakref count is now 31 bits, and the strong refcount remains 32 bits (see the bit-layout sketch after this list).
* When the reference count increases from one to two and `kHasPyObject` is set, we incref the associated Python object to ensure that it's kept alive.
* When the reference count decreases from two to one (i.e., there are no C++ references to the `intrusive_ptr_target` other than from the Python object), we decref the associated Python object to break the cycle.
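An illustrative decoding of the bit layout described above, assuming the counts are viewed as one packed 64-bit value (sketch only, not the intrusive_ptr implementation):

```python
K_HAS_PYOBJECT = 1 << 63  # topmost bit

def decode(combined: int):
    has_pyobject = bool(combined & K_HAS_PYOBJECT)
    weak = (combined >> 32) & 0x7FFFFFFF  # 31 bits
    strong = combined & 0xFFFFFFFF        # 32 bits
    return has_pyobject, weak, strong

combined = K_HAS_PYOBJECT | (1 << 32) | 2  # has PyObject, weak=1, strong=2
print(decode(combined))  # (True, 1, 2)
```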

Other benefits:

* We can delete a lot of the copypasta from Python internal `subtype_dealloc`
* This fixes the weakref and GC bugs we had in the previous scheme. Python weakrefs on Tensors and Storages should just work as expected now.

Risks:

* Extra branch for reference count operations on `intrusive_ptr<TensorImpl>`, `intrusive_ptr<StorageImpl>`, and the generic `intrusive_ptr<intrusive_ptr_target>` even when we're not using Python.
* It's a big change
Pull Request resolved: https://github.com/pytorch/pytorch/pull/166342
Approved by: https://github.com/albanD
2025-11-10 21:47:53 +00:00
bb37483464 Use c7i.2xlarge for B200 build (#167078)
The build system is oversized for what is necessary. Reduce the size to optimize costs. The default workflow runner is `linux.c7i.2xlarge` so we are just removing the runner definition in the workflow so that it uses the default.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167078
Approved by: https://github.com/nWEIdia, https://github.com/seemethere
2025-11-10 21:45:45 +00:00
2751b1d3c3 Support repr on user defined objects (#167372)
Fixes: https://github.com/pytorch/pytorch/issues/167369

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167372
Approved by: https://github.com/anijain2305
2025-11-10 21:06:37 +00:00
fe0bb7cf60 [export, 3.14] handle patching methods with functools.partial correctly in non-strict export (#167396)
Note: dynamo is not affected by this since patching class methods is not supported right now.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167396
Approved by: https://github.com/angelayi
ghstack dependencies: #167382, #167383, #167384, #167387
2025-11-10 20:52:05 +00:00
cf63b212e3 [3.14, dataloader] handle forkserver default mp start method in 3.14 (#167387)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/167387
Approved by: https://github.com/malfet
ghstack dependencies: #167382, #167383, #167384
2025-11-10 20:52:05 +00:00
17e70ae459 [dynamo, 3.14] enable dynamo in 3.14 (#167384)
dynamo tests are passing in the CI PR above - so we could probably just enable dynamo right now.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167384
Approved by: https://github.com/Skylion007, https://github.com/mlazos
ghstack dependencies: #167382, #167383
2025-11-10 20:52:05 +00:00
ad7db3617e [inductor, 3.14] catch pickle.PicklingError exceptions (#167383)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/167383
Approved by: https://github.com/aorenste
ghstack dependencies: #167382
2025-11-10 20:52:04 +00:00
5320ca3725 [inductor, 3.14] fix itertools.product pickle error in test_cpu_repro (#167382)
`inductor/test_cpu_cpp_wrapper` was failing since it was attempting to pickle `itertools.product`, which is no longer picklable in 3.14. We work around this by eagerly generating a list.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167382
Approved by: https://github.com/atalman, https://github.com/malfet
2025-11-10 20:52:04 +00:00
3e4faca130 [torch.export] Refactor placeholder_naming_pass to reduce CCN (#166600)
Summary: Reduced the CCN of the placeholder_naming_pass method from 37 to 28

Test Plan: Existing tests

Differential Revision: D85820388

Pull Request resolved: https://github.com/pytorch/pytorch/pull/166600
Approved by: https://github.com/angelayi
2025-11-10 20:44:18 +00:00
0c2f206ded Typo fix - baddbmm_strategy (#166963)
This function is registered via a decorator, so it is not called directly. For clarity, add the "b" for "batch" to the function name.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/166963
Approved by: https://github.com/janeyx99
2025-11-10 20:35:42 +00:00
6cf21fa331 Fix -ffunction-sections, -fdata-sections not being added on aarch64. (#166407)
Preferred solution to #166380

Changes:

- Moved summary print to bottom of CMakeLists.txt
- Fixed the problem that 'add_compile_options' should be called before targets are defined, so opted for `append_cxx_flag_if_supported` and `append_c_flag_if_supported` (new).
- Added extra verbosity so it can be seen when linker script added.

(Unfortunately the linker script has to be added per-target rather than globally due to ninja/cmake dependency tracking.)

Also move summary print to bottom of CMakeLists.txt and improve logging
Pull Request resolved: https://github.com/pytorch/pytorch/pull/166407
Approved by: https://github.com/Aidyn-A, https://github.com/atalman
2025-11-10 20:32:08 +00:00
cdc8460f2c Use c7i.2xlarge for H100 build (#167466)
The build system may be oversized for what is necessary. Reduce the size to optimize costs. The default workflow runner is linux.c7i.2xlarge so we are just removing the runner definition in the workflow so that it uses the default.

Relates to pytorch/test-infra#7175.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167466
Approved by: https://github.com/seemethere
2025-11-10 20:20:54 +00:00
86130aa2ca Fix flaky memory profiler test [2] (#167268)
Fixes #167037

Move the module definition outside of the unit test so when we run the unit test multiple times, the module is not re-compiled.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/167268
Approved by: https://github.com/angelayi
2025-11-10 19:51:38 +00:00
9491830c79 move subgraph_has_impure_ops from node.is_impure into const_fold to unblock production (#167443)
Summary:
https://github.com/pytorch/pytorch/pull/166609 updates `node.is_impure` to consider a submodule as impure if submodule contains impure node. This in turn changes `graph.eliminate_dead_code()` function behavior, which does not eliminate nodes with side effects, see [pytorch documentation](https://docs.pytorch.org/docs/stable/fx.html#torch.fx.Graph.eliminate_dead_code)
> Remove all dead code from the graph, based on each node’s number of users, and whether the nodes have any side effects.

While it is correct that a submodule containing side-effectful ops is itself side-effectful and should not be dead-code eliminated, some customers rely on dead code elimination to eliminate submodules that contain impure ops, which was the behavior before the #166609 fix.

Due to production environment constraints, we have to revert https://github.com/pytorch/pytorch/pull/166609 and move the side-effectful submodule check logic to `const_fold.py`, which will correctly **not** const-fold a submodule that contains impure ops.

NOTE other call sites that use `node.is_impure()` to make decisions are still incorrectly eliminating side-effectful submodules, but we can't safely change that today.

## This pr
- move `_subgraph_has_impure_op` into `fx/experimental/const_fold.py`, and check for and prevent const-folding of an impure submodule (see the sketch after this list)
- added a note in `node.is_impure` to highlight the incorrect behavior and context in case people go looking in the future.
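A conceptual sketch of the check described above (not the exact const_fold code):

```python
from torch import fx

def subgraph_has_impure_op(gm: fx.GraphModule, node: fx.Node) -> bool:
    # A call_module node counts as impure if its submodule's graph
    # contains any impure node.
    if node.op != "call_module":
        return False
    submod = gm.get_submodule(node.target)
    if not isinstance(submod, fx.GraphModule):
        return False
    return any(n.is_impure() for n in submod.graph.nodes)
```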

Test Plan: run test_fx_const_fold and all tests pass

Differential Revision: D86641994

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167443
Approved by: https://github.com/jfix71
2025-11-10 19:29:54 +00:00
04a85b4c21 [compile-on-one-rank] Step 1: DeviceId (#166680)
Add a "--virtual-local-rank" mode to torchrun. When used instead of passing the
local rank in LOCAL_RANK it uses a LOCAL_RANK of "0" and adjusts
CUDA_VISIBLE_DEVICES to reflect the desired GPU index.
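A conceptual sketch of what the mode does per worker (hypothetical helper, not the torchrun implementation):

```python
import os

def apply_virtual_local_rank(local_rank: int) -> None:
    # Every worker sees itself as local rank 0, and device selection is
    # steered through CUDA_VISIBLE_DEVICES instead, so traced graphs
    # reference device index 0 on every rank.
    os.environ["CUDA_VISIBLE_DEVICES"] = str(local_rank)
    os.environ["LOCAL_RANK"] = "0"

apply_virtual_local_rank(3)
print(os.environ["LOCAL_RANK"], os.environ["CUDA_VISIBLE_DEVICES"])  # 0 3
```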

Testing:
(tweaked run_train.sh to use `--log-dir`)
```
export NGPU=8
export CONFIG_FILE="./torchtitan/models/llama3/train_configs/debug_model.toml"
with-proxy ./run_train.sh --model.name compiler_toolkit.llama3 --compile.enable --parallelism.data_parallel_shard_degree=2 --parallelism.tensor_parallel_degree=4
```

And then comparing ranks:

Without --virtual-local-rank there are a lot of differences like:
```
 [rank#]:        mul_1: "f32[8, 512, 256]" = torch.ops.aten.mul.Tensor(mul, view_9);  mul = None
-[rank#]:        _to_copy_3: "bf16[8, 512, 256]" = torch.ops.aten._to_copy.default(mul_1, dtype = torch.bfloat16, layout = torch.strided, device = device(type='cuda', index=0));  mul_1 = None
+[rank#]:        _to_copy_3: "bf16[8, 512, 256]" = torch.ops.aten._to_copy.default(mul_1, dtype = torch.bfloat16, layout = torch.strided, device = device(type='cuda', index=1));  mul_1 = None
 [rank#]:        detach: "f32[8, 512, 1]" = torch.ops.aten.detach.default(rsqrt);  rsqrt = None
```

With --virtual-local-rank, those differences go away.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/166680
Approved by: https://github.com/ezyang
2025-11-10 18:47:31 +00:00
a4437d76f0 Add some labeler rules that used to be in the autolabel bot (#167330)
See https://github.com/pytorch/test-infra/pull/7446 for the paths

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167330
Approved by: https://github.com/huydhn
2025-11-10 18:38:42 +00:00
3ea829a337 Fix torch.cond HOP device in inductor (#167354)
Fixes #166918

The output device may not be on the same device as the predicate device.

```
python test/inductor/test_control_flow.py -k test_output_on_different_device
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167354
Approved by: https://github.com/ydwu4, https://github.com/zou3519
2025-11-10 18:19:38 +00:00
3966b5ad05 [BE] Fix out-of-bounds index_put in test_mps.py (#167444)
Discovered while enabling assertions on out-of-bounds accesses. Otherwise test fails with
```
ERROR: test_sdpa_mask_fp16_L6_S17_NH23_HS121 (__main__.TestSDPA.test_sdpa_mask_fp16_L6_S17_NH23_HS121)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/Users/malfet/git/pytorch/pytorch/torch/testing/_internal/common_utils.py", line 3334, in wrapper
    method(*args, **kwargs)
    ~~~~~~^^^^^^^^^^^^^^^^^
  File "/Users/malfet/git/pytorch/pytorch/build/../test/test_mps.py", line 9494, in test_sdpa_mask_fp16_L6_S17_NH23_HS121
    self._test_sdpa_mask(torch.float16, 7, 17, 23, 121)
    ~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/malfet/git/pytorch/pytorch/build/../test/test_mps.py", line 9478, in _test_sdpa_mask
    y_ref = F.scaled_dot_product_attention(q.cpu(), k.cpu(), v.cpu(), attn_mask=mask.cpu(), dropout_p=0.0, is_causal=False)
                                           ~~~~~^^
torch.AcceleratorError: index out of range

```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/167444
Approved by: https://github.com/Skylion007, https://github.com/manuelcandales
2025-11-10 18:19:28 +00:00
f6a79b2a4a [inductor] Wrap pallas_call in jax.jit (#167441)
My understanding is this is needed for performance.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167441
Approved by: https://github.com/oulgen
2025-11-10 17:29:56 +00:00
2fcf41dd8e Add the ruff rule and skip everything for now (#167360)
Part of https://github.com/pytorch/pytorch/issues/164878
We can start narrowing the skips and remove them as PRs keep landing.

This PR is just to set up the scaffolding; the fix will be in a follow-up
Pull Request resolved: https://github.com/pytorch/pytorch/pull/167360
Approved by: https://github.com/janeyx99
2025-11-10 17:10:15 +00:00
31ccd8f13e [AOTI] Fix a mixed-device bug for scatter_add (#167341)
Summary: Fix https://github.com/pytorch/pytorch/issues/166841. AOTI incorrectly generates a call to aoti_torch_cuda_scatter_reduce_two_out while the op should actually run on CPU. Fix by using the correct device when calling _generate_scatter_fallback in the wrapper codegen.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167341
Approved by: https://github.com/yushangdi
2025-11-10 16:59:44 +00:00
59307ca1bc [BE] adding documentation (#167334)
`torch.ao.quantization` and `torch.fx.experimental`

<img width="833" height="518" alt="Screenshot 2025-11-07 at 3 20 54 PM" src="https://github.com/user-attachments/assets/47b72f28-29bd-4bab-b41f-24d97419e411" />
<img width="892" height="560" alt="Screenshot 2025-11-07 at 3 20 45 PM" src="https://github.com/user-attachments/assets/129825ab-6706-41f2-964d-8774debab18c" />

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167334
Approved by: https://github.com/janeyx99
2025-11-10 14:46:42 +00:00
c28475db7c Update slow tests (#166844)
This PR is auto-generated weekly by [this action](https://github.com/pytorch/pytorch/blob/main/.github/workflows/weekly.yml).
Update the list of slow tests.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/166844
Approved by: https://github.com/pytorchbot
2025-11-10 12:39:27 +00:00
74aec83841 [xla hash update] update the pinned xla hash (#167452)
This PR is auto-generated nightly by [this action](https://github.com/pytorch/pytorch/blob/main/.github/workflows/nightly.yml).
Update the pinned xla hash.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/167452
Approved by: https://github.com/pytorchbot
2025-11-10 12:03:01 +00:00
52e744d68a [DTensor] Support convert StridedShard to shard order and vice versa (#166740)
We plan to use `StridedShard` to express `shard_order`. This PR adds the function to support the conversion between `StridedShard` and `shard_order`.

I moved some test-related functions into torch/testing/_internal/common_utils.py. We may only care about **_dtensor_spec.py** and **test_utils.py** in this PR for the review.

### How to convert shard order to StridedShard:
Considering the example:
- placements = $[x_0, x_1, x_2, x_3, x_4]$, where all $x_i$ are sharded on the same tensor dim.

Let's see how the shard order will impact the split_factor (sf). We loop from right to left over the placements to construct the split_factor, assuming different shard orders. Starting from $x_4$: this should be a normal shard.

Then $x_3$. There are two possibilities: $x_3$'s order can be before $x_4$'s. If so, $x_3$'s sf = 1, because $x_3$ is before $x_4$ in the placements. Otherwise, $x_3$'s order is after $x_4$'s, and $x_3$'s sf should be the mesh dim size of $x_4$, which is $T(x_4)$:
<img width="820" height="431" alt="image" src="https://github.com/user-attachments/assets/f53b4b24-2523-42cc-ad6f-41f3c280db70" />

We can use this method to decide on the split factor for $x_2$, $x_1$ and so on.

### How to convert StridedShard to shard order:
This follows the same method as above. We check all possible paths and use the real split_factor to see which path matches it. If there is no such match, the StridedShard cannot be converted to a shard order.

---

Pull Request resolved: https://github.com/pytorch/pytorch/pull/166740
Approved by: https://github.com/ezyang
2025-11-10 09:35:10 +00:00
2178d7e5c9 Update
[ghstack-poisoned]
2025-11-09 22:13:41 -08:00
58a65c73a0 Update (base update)
[ghstack-poisoned]
2025-11-09 22:13:09 -08:00
219aef9c15 Update
[ghstack-poisoned]
2025-11-09 22:13:09 -08:00
3cfbf98ea9 [xpu][feature] Add XPU support on torch.accelerator.get_memory_info (#162564)
# Motivation
Support XPU for `torch.accelerator.get_memory_info`.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/162564
Approved by: https://github.com/albanD
ghstack dependencies: #156812
2025-11-10 05:34:49 +00:00
47db55258b [MPS] sparse sparse mm (#167013)
Sparse sparse mm op implementation

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167013
Approved by: https://github.com/malfet
2025-11-10 05:27:49 +00:00
50af6f3393 [MPS] erfinv for sparse mps (#166711)
Should be merged after #166708
Pull Request resolved: https://github.com/pytorch/pytorch/pull/166711
Approved by: https://github.com/Skylion007, https://github.com/malfet
2025-11-10 05:25:31 +00:00
e545ba2d34 [DTensor] Fix Conv behavior for replicate strategy (#167402)
Pass `dim_map` to `_requires_data_exchange` and return False if both spatial and channels dimensions are replicated

Modify `test_conv1d` and `test_conv3d` to check values rather than just shape, and replicate `conv3d` across batch dimension

In general, it feels like the current Convolution implementation was written to work only if the tensor is sharded across the last dimension.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167402
Approved by: https://github.com/ezyang
2025-11-10 05:13:42 +00:00
3dafba8ecf Update (base update)
[ghstack-poisoned]
2025-11-09 21:06:25 -08:00
8d62869290 Update
[ghstack-poisoned]
2025-11-09 21:06:25 -08:00
a058bbdd6f [xpu][test] Enable profiler test for XPU (#165423)
Fixes #165130

Pull Request resolved: https://github.com/pytorch/pytorch/pull/165423
Approved by: https://github.com/EikanWang, https://github.com/atalman, https://github.com/mlazos
2025-11-10 04:02:59 +00:00
2c78080ec0 Register functorch XPU/HPU dispatch keys (#167095)
Fixes TestOperatorsXPU.test_data_write_errors_under_transform_xpu https://github.com/intel/torch-xpu-ops/issues/2237

Tests on other devices throw the runtime error "_mutating directly with `.data` inside functorch transform is not allowed._", but XPU/HPU fail earlier in `_has_compatible_shallow_copy_type`. The check only fails when `tensor.data` is accessed inside a functorch transform.

```cpp
bool _has_compatible_shallow_copy_type(const Tensor& self, const Tensor& from) {
  return self.unsafeGetTensorImpl()->has_compatible_shallow_copy_type(
      from.key_set());
}
```
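For context, a hedged repro sketch of the pattern behind the failing test (assumes an XPU-enabled build; `self` is the wrapped xpu tensor and `from` is the CPU tensor being assigned, matching the tables below):

```python
import torch
from torch.func import grad

def f(x):
    x.data = torch.ones(3)  # `from` is a CPU tensor, `self` lives on xpu
    return x.sum()

# Expected to raise "mutating directly with `.data` inside functorch transform
# is not allowed"; without the XPU dispatch keys registered it failed earlier
# inside _has_compatible_shallow_copy_type.
grad(f)(torch.randn(3, device="xpu"))
```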

### t.data
| Tensor | Device | Dispatch Keys |
|--------|---------|---------------|
| `self` | `xpu` | `XPU, ADInplaceOrView, AutogradXPU, AutocastXPU` |
| `from` | `cpu` | `CPU, ADInplaceOrView, AutogradCPU, AutocastCPU` |

### t.data inside functorch transform
| Tensor | Device | Dispatch Keys |
|--------|---------|---------------|
| `self` | `xpu` | `ADInplaceOrView, AutogradOther, FuncTorchGradWrapper` |
| `from` | `cpu` | `CPU, ADInplaceOrView, AutogradCPU, AutocastCPU, FuncTorchGradWrapper` |

### t.data inside functorch transform + XPU dispatch key
| Tensor | Device | Dispatch Keys |
|--------|---------|---------------|
| `self` | `xpu` | `XPU, ADInplaceOrView, AutogradXPU, AutocastXPU, FuncTorchGradWrapper` |
| `from` | `cpu` | `CPU, ADInplaceOrView, AutogradCPU, AutocastCPU, FuncTorchGradWrapper` |
Pull Request resolved: https://github.com/pytorch/pytorch/pull/167095
Approved by: https://github.com/guangyey, https://github.com/albanD
2025-11-10 03:10:22 +00:00
fe6615e397 Swap pallas test shard to 12.8 (#167428)
Getting some weird failures building CUDA 13, so let's stick to what we know works.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/167428
Approved by: https://github.com/jansel
2025-11-10 02:42:35 +00:00
abf31db2cc Introduce a new API torch.accelerator.get_memory_info (#156812)
# Motivation
`torch.cuda.mem_get_info` and `torch.xpu.mem_get_info` are widely used in other popular repos, such as
- 076313bd09/python/sglang/srt/utils.py (L378),
- 7ecc2d7f39/src/accelerate/utils/modeling.py (L822),
- 7ba34b1241/vllm/worker/worker.py (L150).
This PR introduces a unified API `torch.accelerator.get_memory_info` to cover this scenario.
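A usage sketch of the unified API; the return format is assumed here to mirror `mem_get_info` (free and total bytes), which may differ in detail from the final signature.

```python
import torch

if torch.accelerator.is_available():
    # assumed to return (free_bytes, total_bytes) for the current accelerator
    free, total = torch.accelerator.get_memory_info()
    print(f"{free / 1e9:.1f} GB free of {total / 1e9:.1f} GB")
```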

Pull Request resolved: https://github.com/pytorch/pytorch/pull/156812
Approved by: https://github.com/albanD
2025-11-10 01:57:39 +00:00
a4c7856112 [Inductor][Grouped Gemm] Add Blackwell CuTeDSL Kernel (#167340)
Summary: This is a reland of https://github.com/pytorch/pytorch/pull/165036, which previously contained a minor bug in the logic that determined whether the kernel should be enabled. As a result, it was incorrectly activated on non-Blackwell GPUs.

Test Plan:
Inductor test (fbcode):
`INDUCTOR_TEST_DISABLE_FRESH_CACHE=1 TORCHINDUCTOR_CACHE_DIR=~/cutetest buck2 run mode/opt //caffe2/test/inductor:cutedsl_grouped_mm -c fbcode.nvcc_arch=b200a -c fbcode.enable_gpu_sections=true -c fbcode.platform010_cuda_version=12.8 -m "ovr_config//third-party/pypi/nvidia-cutlass-dsl/constraints:4.2.1"`

Tritonbench (fbcode):
`clear; CUDA_VISIBLE_DEVICES=7 TRITON_PRINT_AUTOTUNING=1 TRITON_ALWAYS_COMPILE=1 TORCH_LOGS=+inductor TORCHINDUCTOR_FORCE_DISABLE_CACHES=1 TORCHINDUCTOR_MAX_AUTOTUNE_GEMM=1 buck2 run mode/opt //pytorch/tritonbench:run -c fbcode.nvcc_arch=b200a -c fbcode.enable_gpu_sections=true -c fbcode.platform010_cuda_version=12.8 -m "ovr_config//third-party/pypi/nvidia-cutlass-dsl/constraints:4.2.1" -- --op grouped_gemm --only aten_grouped_mm,preprocessed_pt2_cute_grouped_mm --precision bf16  --num-inputs 1 --metrics tflops,accuracy`

Tritonbench(oss):
`clear; CUDA_VISIBLE_DEVICES=2 TRITON_PRINT_AUTOTUNING=1 TRITON_ALWAYS_COMPILE=1 TORCH_LOGS=+inductor TORCHINDUCTOR_FORCE_DISABLE_CACHES=1 TORCHINDUCTOR_MAX_AUTOTUNE_GEMM=1 python run.py --op grouped_gemm --only aten_grouped_mm,preprocessed_pt2_triton_grouped_mm --precision bf16  --num-inputs 1 --metrics tflops,accuracy`

Unit Tests(oss):
`clear; python test/inductor/test_cutedsl_grouped_mm.py`

Differential Revision: D86537373

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167340
Approved by: https://github.com/jananisriram
2025-11-10 00:29:07 +00:00
afb014541b Separately handle null data_ptr storages when creating unique ID (#167405)
## Summary
Previously, fake/functionalized tensors with a `null` storage data pointer could segfault when checking `.expired()` on the weak storage ref, so `nullptr` storages are now handled separately, without checking their weakrefs.

Diagnosis and PR created by codex
------
[Codex Task](https://chatgpt.com/codex/tasks/task_e_690ea8790054832f90eaffb37ee0d8c8)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/167405
Approved by: https://github.com/Skylion007
2025-11-09 23:13:56 +00:00
b91a2ab892 [2/N] Use context managers (#167404)
This PR fixes more context manager usage in Python code.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167404
Approved by: https://github.com/mlazos
2025-11-09 13:38:14 +00:00
14a845a4ec [2/N] Use Python 3.10 typing (#167167)
This PR applies new `Union` and `Optional` typing syntax to some files.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167167
Approved by: https://github.com/XuehaiPan, https://github.com/mlazos
2025-11-09 12:11:45 +00:00
5135ace3a3 Enable ruff UP035 rule (#167307)
This PR enables `UP035` rule of ruff.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167307
Approved by: https://github.com/Lucaskabela
2025-11-09 06:40:03 +00:00
e7c1905837 Fix test_fsdp_logging (#167312)
- The logger name in test_fully_shard_logging.py was wrong so the logs didn't happen.
- The `device` variable in test_fully_shard_logging is expected to be a string, so quote it
- `unittest.skipIf` is used so importing `unittest` instead of `unittest.mock` is required

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167312
Approved by: https://github.com/Skylion007, https://github.com/cyyever
2025-11-09 05:38:11 +00:00
9cf623a209 Update inductor-unittest.yml (#167417)
I see failures like https://github.com/pytorch/pytorch/actions/runs/19189378182/job/54865171317?pr=167389

Maybe this will fix it.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/167417
Approved by: https://github.com/yf225
2025-11-09 05:08:00 +00:00
06aa3ef3d3 Move types from typing_extensions to typing (#167185)
This PR moves some implemented types from typing_extensions to typing due to the recent update to Python 3.10.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167185
Approved by: https://github.com/janeyx99
2025-11-09 02:50:18 +00:00
0384104e23 Update pythoncapi_compat.h to 11cb80f2652cb2fe5231bf60b9dd98c83a4e25f4 (#167413)
Second attempt for https://github.com/pytorch/pytorch/pull/167138 with fixes for name conflicts in downstream packages.

Should slightly simplify https://github.com/pytorch/pytorch/pull/166342
Pull Request resolved: https://github.com/pytorch/pytorch/pull/167413
Approved by: https://github.com/Skylion007
2025-11-09 02:14:33 +00:00
325ec98009 [13/N] Apply ruff UP035 rule (#167048)
This PR continues to apply ruff UP035 rule to test code and some remaining torch files.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167048
Approved by: https://github.com/Skylion007
2025-11-09 01:47:38 +00:00
47acdea74a another version of fixing CachingHostAllocatorImpl destructor (#167408)
Another version of #167347 that won't break xpu and should correctly handle runtime changes of `pinned_use_background_threads()`
Pull Request resolved: https://github.com/pytorch/pytorch/pull/167408
Approved by: https://github.com/yingufan, https://github.com/Skylion007
2025-11-09 00:20:54 +00:00
71606b289c [BugFix] Fix compute_error in coo_mean_time and csr_mean_time (#166795)
The CSR timing loop was nested inside the COO loop, producing duplicated and inconsistent measurements.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/166795
Approved by: https://github.com/cyyever, https://github.com/ezyang
2025-11-08 23:57:15 +00:00
e342a7509a [pallas backend] add cpu backend and parametrize the tests (#167388)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/167388
Approved by: https://github.com/jansel
2025-11-08 23:11:35 +00:00
27ac58bd70 Optimize global save-plan validation (#166820)
## Summary
- Fixes #163548 by replacing the quadratic chunk-overlap scan in `_validate_global_plan` with a sweep-line pass that sorts chunk intervals and keeps an active set via `bisect_right`, giving O(n log n) behavior for metadata validation (a condensed sketch follows this list).
- Add focused tests in `TestValidateGlobalPlan` covering overlapping and non-overlapping shard layouts to lock in the faster path.
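
A condensed sketch of the sweep-line idea for 1-D intervals (the real `_validate_global_plan` works on chunk metadata with offsets and sizes; this only illustrates the sorted-intervals-plus-active-set structure):

```python
from bisect import bisect_right, insort

def has_overlap(chunks):
    """chunks: iterable of (offset, length) pairs along one dimension."""
    active_ends = []  # sorted end offsets of intervals still "open"
    for start, end in sorted((off, off + length) for off, length in chunks):
        # close every interval that ends at or before this start
        del active_ends[:bisect_right(active_ends, start)]
        if active_ends:           # an interval is still open -> overlap
            return True
        insort(active_ends, end)
    return False
```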

## Testing
- python test/distributed/checkpoint/test_planner.py -k ValidateGlobalPlan

## Benchmarks
| chunks | old runtime | new runtime |
|--------|-------------|-------------|
| 1 024  | 0.121 s     | 0.0014 s    |
| 2 048  | 0.486 s     | 0.0027 s    |
| 4 096  | 2.474 s     | 0.0058 s    |
| 8 192  | 8.014 s     | 0.0126 s    |
| 16 384 | 32.740 s    | 0.026 s     |

@ezyang

Pull Request resolved: https://github.com/pytorch/pytorch/pull/166820
Approved by: https://github.com/LucasLLC, https://github.com/Skylion007
2025-11-08 20:59:44 +00:00
406719c3da [MPS] SparseMps mv op (#166708)
Should be merged after #166561
Pull Request resolved: https://github.com/pytorch/pytorch/pull/166708
Approved by: https://github.com/Skylion007
2025-11-08 20:03:49 +00:00
957570e4a3 [dynamo][guards] 1/N Guard selectively for DTensor (#165824)
A few internal jobs are observing very high guard overhead for DTensor.
Since we own DTensor, we can make those guards way faster.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/165824
Approved by: https://github.com/Lucaskabela, https://github.com/bdhirsh
2025-11-08 19:28:28 +00:00
eeb6c96a89 [vision hash update] update the pinned vision hash (#167391)
This PR is auto-generated nightly by [this action](https://github.com/pytorch/pytorch/blob/main/.github/workflows/nightly.yml).
Update the pinned vision hash.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/167391
Approved by: https://github.com/pytorchbot
2025-11-08 05:58:11 +00:00
0b12e49795 [Inductor] Decouple flags for optimization and debug symbols (#167385)
Summary:
What: Decouple flags for optimization and debug symbols

Why: The current flag for debug symbols only compiles the .so binary in unoptimized mode

Differential Revision: D86363355

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167385
Approved by: https://github.com/hl475, https://github.com/jansel
2025-11-08 05:13:38 +00:00
87646e5db4 [dynamo][ac] Return all intermediates as outputs for AC Hop (#167192)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/167192
Approved by: https://github.com/zou3519
2025-11-08 03:56:39 +00:00
29d6bb79e1 Use context managers (SIM115) (#166928)
This PR changes code to use context managers if possible.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/166928
Approved by: https://github.com/Lucaskabela
2025-11-08 03:09:16 +00:00
c2924bbafa [dynamo] replace raise Unsupported(...) with unimplemented(...) (#167255)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/167255
Approved by: https://github.com/Lucaskabela, https://github.com/mlazos, https://github.com/zou3519
ghstack dependencies: #167150
2025-11-08 02:01:12 +00:00
a2f109dcc3 [dynamo] rename unimplemented_v2 -> unimplemented (#167150)
Also force the new `unimplemented`/old `unimplemented_v2` to explicitly specify the `gb_type`, `context`, `explanation`, and `hints` args.
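A hypothetical call shape after the rename, just to show the now-mandatory fields; the values are made up and the import path is an assumption:

```python
from torch._dynamo.exc import unimplemented  # assumed location post-rename

def reject_builtin(fn):
    unimplemented(
        gb_type="unsupported_builtin",
        context=f"builtin {fn.__name__}",
        explanation="Dynamo does not support tracing this builtin.",
        hints=["Remove the call, or allow a graph break here."],
    )
```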

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167150
Approved by: https://github.com/mlazos, https://github.com/zou3519
2025-11-08 01:49:53 +00:00
ba5ffa2dca [5/N] Use key in dict for existence checks (#167311)
This PR uses `key in dict` expressions for existence checks of dict elements in Python code. This operation is more efficient than `key in dict.keys()`.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167311
Approved by: https://github.com/janeyx99, https://github.com/Lucaskabela
2025-11-08 01:43:12 +00:00
c131e4b390 Revert "[CP] Correctly compile create_cp_block_mask (#167153)"
This reverts commit 5a9ae7cefe679ff925a0aa7b9f5782fc93d4ef29.

Reverted https://github.com/pytorch/pytorch/pull/167153 on behalf of https://github.com/donigian due to breaking internal tests D86529123 ([comment](https://github.com/pytorch/pytorch/pull/167153#issuecomment-3505563239))
2025-11-08 01:33:13 +00:00
7fd15aa2bd Additional fix on top of D85172267 (#167267) (#167279)
Summary:

It seems D80948073 has caused an issue with a lowering pkg built on trunk: https://fburl.com/mlhub/o6p60pno (error log: P2001933683), which we were able to lower successfully with an older ien pkg: https://fburl.com/mlhub/1ro094zo

D85172267 fixed this issue for the if branch, but the issue still exists for the else branch. The logic is moved right before the if-else to cover both cases.

Test Plan:
checkout D85605372

buck2 run -c fbcode.enable_gpu_sections=true -c fbcode.platform=platform010 -c fbcode.nvcc_arch=a100,h100 -c fbcode.split-dwarf=true -c fbcode.dwp=true -c fbcode.enable_distributed_thinlto=true -c fbcode.use_link_groups=true fbcode//inference_enablement/model_processing/infra/components/lowering/re:re_cinder -- -r "$(cat ./fbcode/minimal_viable_ai/umia_v1/ig/ss_omni_exp/re_lower_aoti.json)"

with the diff, no issue was encountered.

Reviewed By: tissue3

Differential Revision: D86474796

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167279
Approved by: https://github.com/pianpwk
2025-11-08 01:28:49 +00:00
c45c966031 subproc_pool: Fix quiesce waitcounter (#167350)
Summary:
I was inspecting running jobs, and the quiesce waitcounter wasn't showing up.
Turns out this was a bad copy paste.

Test Plan: Primarily inspection

Reviewed By: masnesral

Differential Revision: D86457409

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167350
Approved by: https://github.com/aorenste, https://github.com/masnesral
2025-11-08 01:12:18 +00:00
d18c742779 [HOP][print]Add make_fx for the proxy with graph module print (#166920)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/166920
Approved by: https://github.com/angelayi
ghstack dependencies: #166660
2025-11-08 00:34:24 +00:00
4957ae5838 Add API to annotate disjoint backward and handle in AC (#166536)
This adds zero-bubble / DualPipeV support for (S)AC

Before:
- AC will always retrigger recompute upon every distinct backward.

After:
- Any checkpointed regions encountered by backward under the same instance of this context manager will only trigger recompute at most once, even if there are multiple calls to backward.
- Backward calls under the same instance of this context manager must execute over non-overlapping regions of the backward graph even if retain_graph=True.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/166536
Approved by: https://github.com/albanD
2025-11-08 00:21:25 +00:00
31d6d3ef5c [easy] Add new torch/csrc/stable/c/shim.h to existing nitpick (#167367)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/167367
Approved by: https://github.com/janeyx99, https://github.com/malfet
2025-11-08 00:13:03 +00:00
2325c511e7 [dynamo] Make sym node vt creation via SymNodeVariable create (#167189)
This will help in the next PRs.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167189
Approved by: https://github.com/williamwen42, https://github.com/zou3519
ghstack dependencies: #167160
2025-11-07 23:58:13 +00:00
d865156967 [dynamo][hops] Overwrite proxy of the original VT to the subgraph outputs (#167160)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/167160
Approved by: https://github.com/zou3519
2025-11-07 23:58:13 +00:00
fbc0bd2e90 [DTensor][be] getting rid of unneccesary Partial check for norm functions (#167247)
**Summary:** While the implementation is correct, these checks are just a subset of the Partial placement checks that are done in https://github.com/pytorch/pytorch/pull/165962. This means for ops aten.linalg_vector_norm.default and aten._foreach_norm.Scalar, we're unnecessarily checking for Partial placements twice.

**Test Cases**
1. pytest test/distributed/tensor/test_math_ops.py -k test_vector_norm_partial
2. pytest test/distributed/tensor/test_math_ops.py -k test_foreach_norm_partial
3. pytest test/distributed/tensor/test_math_ops.py -k test_partial_reduction_ops

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167247
Approved by: https://github.com/XilunWu
2025-11-07 23:49:29 +00:00
70f5f55abf [Inductor-FX] Allocate tensors on device type instead of indexed device (#167358)
# Problem
The FX backend currently allocates tensors on an exact device index, such as `"cuda:0"`. In contrast, the Python backend allocates on a device type, such as `"cuda"`. This avoids edge cases where fake tensor propagation can fail due to mismatched devices.

# Fix
Allocate tensors on `device.type` instead of the device.
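
The change itself boils down to the difference below (a sketch, assuming a CUDA device is present):

```python
import torch

dev = torch.device("cuda:0")
# Before: buffers were allocated on the indexed device ("cuda:0").
# After:  they are allocated on the device type only ("cuda").
buf = torch.empty(16, device=dev.type)  # dev.type == "cuda"
```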

# Test plan
Added a CI test passing in sample inputs on an indexed device, and checking that the output device in the generated FX graph is not indexed.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167358
Approved by: https://github.com/mlazos, https://github.com/nandesuka, https://github.com/eellison
2025-11-07 23:48:54 +00:00
69ecb562e7 [PT2 Compiler] Add annotation for dynamo disabled callables (#166341)
Summary: To make torch.export compatible with PT2 compile (which is done on top of the exported model), we need to store torch._dynamo.disable attributes in the exported model and restore them after unflattening the exported model. This diff adds annotations to all nodes with torch._dynamo.disable, which will be preserved during export.

Test Plan:
```
buck test mode/opt caffe2/test:test_export -- 'test_dynamo_disable_annotations'
```
https://www.internalfb.com/intern/testinfra/testrun/6473924770741560

Differential Revision: D85302730

Pull Request resolved: https://github.com/pytorch/pytorch/pull/166341
Approved by: https://github.com/williamwen42, https://github.com/angelayi
2025-11-07 23:28:00 +00:00
5062abe4e7 [CI][serialization] Fix exception regexes with Python-3.14 (#167333)
Not sure why, but running some tests (for example `test_weights_only_safe_globals_build`) with `pytest` in 3.14 makes the global name `test_serialization.ClassThatUsesBuildInstruction` instead of the expected `__main__.ClassThatUsesBuildInstruction`.
Also, change expected exception type from `AttributeError` to `PicklingError`
Pull Request resolved: https://github.com/pytorch/pytorch/pull/167333
Approved by: https://github.com/atalman
2025-11-07 23:22:36 +00:00
c7007e7584 Update Kineto Submodule (#167343)
Summary: Title

Test Plan: CI

Differential Revision: D86538778

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167343
Approved by: https://github.com/Skylion007, https://github.com/aaronenyeshi
2025-11-07 23:06:58 +00:00
09705ca9b2 [dynamo][guards] Fix mem leak in tensor subclass metadata guard (#167352)
Use the cls instead of the object. Earlier, the metadata guard was holding on to the DTensor, causing a memory leak.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167352
Approved by: https://github.com/Skylion007
2025-11-07 23:01:15 +00:00
ea6b0b5d0f add missing cpp standard lib in HeaderOnlyArrayRef.h (#167337)
Fixes #167315
Pull Request resolved: https://github.com/pytorch/pytorch/pull/167337
Approved by: https://github.com/janeyx99
2025-11-07 23:00:08 +00:00
bbf852d87f Revert "Remove python workaround for ContextDecorator (#167049)"
This reverts commit 13d2cc7bd26e32cafff0377dda1c5ddc8d04c4ce.

Reverted https://github.com/pytorch/pytorch/pull/167049 on behalf of https://github.com/donigian due to breaking internal tests D86342845 ([comment](https://github.com/pytorch/pytorch/pull/167049#issuecomment-3505251296))
2025-11-07 22:32:45 +00:00
6392b986e7 Revert "[13/N] Apply ruff UP035 rule (#167048)"
This reverts commit ea44f12bce3eb05eaa9fa34943a3ffae04647fa5.

Reverted https://github.com/pytorch/pytorch/pull/167048 on behalf of https://github.com/donigian due to breaking internal tests D86342860 ([comment](https://github.com/pytorch/pytorch/pull/167048#issuecomment-3505232522))
2025-11-07 22:25:01 +00:00
32d30d96cf [ROCm][CI] unconditionally add gfx950, gfx115x to PYTORCH_ROCM_ARCH (#167299)
Included gfx950, gfx1150, and gfx1151 unconditionally in PYTORCH_ROCM_ARCH. Removed the ROCm 7.0 version check and refactored the architecture list.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167299
Approved by: https://github.com/jeffdaily
2025-11-07 21:47:59 +00:00
46516efa85 [BE] use undeprecated from/to in libtorch_agnostic tests (#167126)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/167126
Approved by: https://github.com/Skylion007
ghstack dependencies: #164991, #165152, #165153, #165953
2025-11-07 21:31:30 +00:00
84b2147b85 Introducing the StableIValue representation of list :D (#165953)
Some important notes:
a) Just like IValues steal the ownership of ArrayRefs and any std::vectors in order to convert the inner elements into IValues, we do the same thing with StableIValue. This O(N) traversal is unavoidable.
b) As a result, since StableIValues are owning and our contract is that to<T>(StableIValue) transfers ownership, you cannot ever convert from StableIValue to a nonowning HeaderOnlyArrayRef<V>.

We handle memory similar to AtenTensorHandle, but we have a StableListHandle!

Pull Request resolved: https://github.com/pytorch/pytorch/pull/165953
Approved by: https://github.com/malfet
ghstack dependencies: #164991, #165152, #165153
2025-11-07 21:31:30 +00:00
1727a71cb6 Create pallas test shard (#167143)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/167143
Approved by: https://github.com/malfet
ghstack dependencies: #167243
2025-11-07 21:05:54 +00:00
fb9e10fe25 Revert "Update pythoncapi_compat.h (#167138)"
This reverts commit c90a976370945af052bb7b0db86240fa6f321cd6.

Reverted https://github.com/pytorch/pytorch/pull/167138 on behalf of https://github.com/donigian due to Sorry but this is breaking internally. See diff D86458778 for details. ([comment](https://github.com/pytorch/pytorch/pull/167138#issuecomment-3504895388))
2025-11-07 20:53:14 +00:00
4e277e6323 inductor: compile_worker - Fix potential race condition with quiesce waitcounters (#167025)
Summary:
If quiesce ends up called twice (which is likely not possible with the timer based implementation, but possible with either manual calls, or with the context manager implementation), this assertion fires.

Instead, make this assertion tolerant of re-entrant calls to quiesce.

Test Plan: Added an explicit test which calls quiesce twice.

Differential Revision: D86251534

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167025
Approved by: https://github.com/masnesral
2025-11-07 20:49:34 +00:00
ba327b7a5c [BE][Typing][Dynamo] Type torch/_dynamo/variables/functions.py (#167103)
Provides type coverage to torch/_dynamo/variables/functions.py

Coverage report:
`mypy torch/_dynamo/variables/functions.py --linecount-report /tmp/coverage_log`

Compare before to after - we go from 0 lines and 0 funcs covered to 2698 lines and 166 funcs covered

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167103
Approved by: https://github.com/mlazos, https://github.com/fxdawnn
2025-11-07 20:38:08 +00:00
8eb21304ab [DTensor] ignore fresh unbacked symbols in shard prop (#166989)
This fixes 2 issues with the DTensor data-dependent test case:

1) ShapeEnv not found when doing shard prop on data-dependent ops - fix was to detect the outer tracing fake mode. Maybe ShardingPropagator should just own a FakeMode & ShapeEnv for these purposes? The previous behavior was to initialize a new fake mode on every call.

2) Pending unbacked symbols not found. This happens because DTensor dispatch runs fake prop twice, once while figuring out the output sharding: 2bba37309b/torch/distributed/tensor/_sharding_prop.py (L175) and again to actually get the resulting local tensor: 2bba37309b/torch/distributed/tensor/_dispatch.py (L254-L255) With data-dependent ops, both calls will produce an unbacked symbol, but symbols in the first invocation are never surfaced, producing this error, so we ignore pending symbols from this site.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/166989
Approved by: https://github.com/ezyang
2025-11-07 20:18:41 +00:00
b83a3f6e87 compile time comm benchmarking (#167100)
Adds an option to do compile-time collective benchmarking for comms/compute overlap scheduling. As with the comm benchmarks, these are all-gathered, and each rank uses the median result to ensure consistency. Thanks to @ruisizhang123, who had done this previously.

We log the compile time benchmark, the inductor analytic result, and the nccl estimator result to tlparse.

TODO:
- mechanism to seed collective estimates with the existing tlparse (or perfetto) to use for deterministic, pgo'd estimates
- interpolate results between powers of 2, and also do the actual benchmarking for latency calculation. both of these need to be meta aware since reduce scatter needs to be divisible by group_size, not hard but leaving for a subsequent pr.

Example output tlparse: https://manifold.edge.x2p.facebook.net/v0/read/tree/logs/eellison/custom/rank_0/-_0_0_0/node_runtime_estimation_10.json?bucketName=tlparse_reports&apiKey=tlparse_reports-key&withPayload=1&timeoutMsec=10000

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167100
Approved by: https://github.com/IvanKobzarev
2025-11-07 20:13:37 +00:00
289b47e657 [MPS] empty matrix x vec mul fix (#166561)
Fixes empty matrix x vector. Discovered when implementing an op for sparse tensors
Pull Request resolved: https://github.com/pytorch/pytorch/pull/166561
Approved by: https://github.com/eqy, https://github.com/albanD
2025-11-07 20:05:46 +00:00
c20308b79e [Test CI] Bump ruff to 0.14.4 (#167286)
This PR bumps ruff to 0.14.4.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/167286
Approved by: https://github.com/janeyx99, https://github.com/Skylion007
2025-11-07 20:05:10 +00:00
4c41e9bde7 making TORCH_CHECK_{COND} non-fatal (#167004)
TORCH_CHECK is non-fatal by design, but the TORCH_CHECK_{COND} macros are fatal. This is confusing, and we should limit fatality to the set of debug macros.

Differential Revision: D86168955

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167004
Approved by: https://github.com/malfet
2025-11-07 19:48:19 +00:00
2f5223564e [ez] Remove experiment for uploading all test runs (#167133)
Reverts #165484.

After #166988 they are just uploaded while it's running.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/167133
Approved by: https://github.com/malfet
2025-11-07 19:38:36 +00:00
28615a765d Fix: list index out of range with softmax when using 0 dim (#166547)
Fixes #163971

Problem:
PyTorch's inductor compiler crashed with IndexError: list index out of range when compiling code that uses 0-dimensional tensors with operations like torch.softmax(scalar_tensor, dim=0).

A 0-dim tensor has shape = torch.Size([]) (empty shape)

```
ndim = 0 (zero dimensions)

len(shape) = 0 (no indices to access)

# Line 972: Pad other_shape to match inp dimensions
other_shape = [1] * (inp_ndim - len(other_shape)) + list(other_shape)

# For scalar tensors:
# inp_ndim = 0  # as input is scalar
# other_shape = []
# Result: [1] * (0 - 0) + [] = [] (still empty!)

dim = match.kwargs["dim"]  # dim = 0
if isinstance(dim, int):
    dim = (dim,)

# crash is happening here!
return all(statically_known_true(other_shape[d] == 1) for d in dim)
#                                 ^^^^^^^^^^^^^^^^
#                                 Tries other_shape[0] but other_shape = [] (empty!)
#                                 → IndexError: list index out of range
```

The function _other_is_broadcasted_in_dim() is an optimization check for a softmax fusion pattern. It verifies whether it's safe to rewrite:

```
# From
scaled = inp * other
result = scaled - scaled.amax(dim, keepdim=True)

# To this more stable form:
result = (inp - inp.amax(dim, keepdim=True)) * other
```

The optimization is only valid if other is constant across the reduction dimension (i.e., broadcasted to size 1 in that dimension). Otherwise, scaling changes which element is the maximum.
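
A hedged repro sketch based on the description above (the exact triggering code in the issue may differ):

```python
import torch

@torch.compile
def f(inp, other):
    scaled = inp * other
    return torch.softmax(scaled, dim=0)

# 0-dim inputs: previously the fusion check crashed with IndexError,
# now this compiles and matches eager.
print(f(torch.tensor(2.0), torch.tensor(0.5)))
```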

Pull Request resolved: https://github.com/pytorch/pytorch/pull/166547
Approved by: https://github.com/jansel
2025-11-07 19:32:43 +00:00
d1446ad75c Register floor_divide.out for MTIA (#167280)
Differential Revision: D86468749

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167280
Approved by: https://github.com/albanD
2025-11-07 19:31:51 +00:00
e401a56b96 [ez] Remove some dead code from test artifact related files (#166966)
Remove circle ci path since it's no longer used

Remove function that is not used
Pull Request resolved: https://github.com/pytorch/pytorch/pull/166966
Approved by: https://github.com/malfet

Co-authored-by: Nikita Shulga <2453524+malfet@users.noreply.github.com>
2025-11-07 18:14:44 +00:00
22650c89fb [ROCm] Update skip_if_lt_x_gpu to work with MultiProcContinuous class (#167281)
- Since the MultiProcContinuous class spawns one process per GPU and runs the UT in each of the processes, we need to ensure that the exit code associated with a skip is propagated all the way to the main worker thread that spawned all the child processes.
- This commit also updates several UTs that are meant for 4 GPUs but incorrectly calls skip_if_lt_x_gpu with 2 as an input. Examples:
    - test_replicate_with_fsdp.py
    - test_dtensor_resharding.py
    - test_state_dict.py
    - test_functional_api.py: Fix typo: multi-accelerator doesn't exist, replaced with multi-gpu
    - test_op_strategy.py: world_size was hardcoded
    - test_math_ops.py: UT written for 4 GPU, so skipping for anything less
    - test_schedule_multiproc.py: All UTs in this suite are required to run on 2+ GPUs, therefore, adding skips if less than 4 GPUs are supplied

Fixes https://github.com/pytorch/pytorch/issues/166875

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167281
Approved by: https://github.com/jeffdaily
2025-11-07 18:11:48 +00:00
c62a17a2fb [ez] Remove some unused vars in common_utils.py (#166453)
I can't find where these are used
Pull Request resolved: https://github.com/pytorch/pytorch/pull/166453
Approved by: https://github.com/malfet
2025-11-07 18:09:40 +00:00
713e289ae7 [dynamo][pytree] support more optree functions by polyfill the underlying CXX functions directly (#167292)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/167292
Approved by: https://github.com/Lucaskabela
ghstack dependencies: #167221, #167211
2025-11-07 18:09:19 +00:00
69784a0dbe [dynamo][pytree] add polyfills for optree path APIs (#167211)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/167211
Approved by: https://github.com/Lucaskabela
ghstack dependencies: #167221
2025-11-07 17:53:32 +00:00
3c2409c465 Refactor recursive call of collect_temp_source (#166714)
A recursive function call creates a reference cycle: closure <- function <- cell inside the closure.
Capturing self (the PyCodegen instance) in the same closure prolongs its life until the next gc.collect(), which can result in worse resource management.

After the introduction of e9209e0, OOM issues have been observed. Searching for reference cycles uncovered one that prolongs the lifetime of tensors, which can in turn cause OOM. The following dependency chain was found:
<img width="1059" height="540" alt="image" src="https://github.com/user-attachments/assets/359a8534-e7cd-491f-be40-547c2af5cbbc" />

At the end of it is a reference cycle consisting of the closure of the function collect_temp_source, the function itself, and a cell object inside the closure that points back to the function due to the recursive call.

The issue can be resolved either by removing the recursion or by removing the PyCodegen instance from the closure.
Another precaution is to explicitly empty the f_locals dict; this cuts the tensor out of the chain leading to the reference cycle.
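
A minimal, self-contained illustration of the cycle pattern (class and method names here are hypothetical stand-ins for PyCodegen and its nested collect_temp_source):

```python
class CodegenSketch:
    """Hypothetical shape of the cycle: closure <- function <- cell."""

    def __init__(self):
        self.f_locals = {"big": list(range(10**6))}  # stands in for tensors

    def walk(self, node):
        def collect_temp_source(source):
            self.f_locals                      # the closure also captures `self`
            for child in getattr(source, "children", ()):
                collect_temp_source(child)     # recursive call: the closure's
                                               # cell points back at the function
        collect_temp_source(node)
        # The cell/function/closure cycle keeps `self` (and everything it
        # references) alive until the next gc.collect().
```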

Fixes #166721

Pull Request resolved: https://github.com/pytorch/pytorch/pull/166714
Approved by: https://github.com/Lucaskabela, https://github.com/Skylion007, https://github.com/jeromean, https://github.com/williamwen42, https://github.com/mlazos
2025-11-07 17:52:54 +00:00
724cd32b0c [PT2 Compiler] Add flag in dynamo disable wrapper to indicate reursive disable (#165790)
Summary: After torch._dynamo.disable is applied, the wrapped method does not carry any flag indicating whether it was disabled recursively or not. This flag is needed to preserve dynamo-disabled methods in a torch.export-ed model.

Test Plan:
```
buck test mode/opt caffe2/test/dynamo:test_dynamo -- 'test_disable_recursive_flags'
````
https://www.internalfb.com/intern/testinfra/testrun/7599824674075603

Differential Revision: D84949143

Pull Request resolved: https://github.com/pytorch/pytorch/pull/165790
Approved by: https://github.com/angelayi, https://github.com/williamwen42
2025-11-07 17:48:20 +00:00
b62935d1a5 fix alpha beta in decomp (#167317)
fix for https://github.com/pytorch/pytorch/issues/167313

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167317
Approved by: https://github.com/zou3519
ghstack dependencies: #161404
2025-11-07 17:42:13 +00:00
ccc8c117dc Codeowner/Labeler updates post-Blas-reorgs (#167130)
Summary:

Previous PRs have split out scaled/grouped Blas routines into
their own files. This updates the codeowners and labeler to reflect
those changes.

Test Plan:

Reviewers:

Subscribers:

Tasks:

Tags:
Signed-off-by: Simon Layton <simonlayton@meta.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/167130
Approved by: https://github.com/drisspg
2025-11-07 17:27:41 +00:00
86db4de10f [PP] PP Runtime Features for supporting Graph Based execution (#167277)
Allow overriding UNSHARD, RESHARD and REDUCE_GRAD actions.
Enable running pp backward without torch.grad.is_enabled().

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167277
Approved by: https://github.com/wconstab
2025-11-07 17:11:14 +00:00
12860892f8 Revert "[Inductor][Grouped Gemm] Add Blackwell CuTeDSL Kernel (#167182)"
This reverts commit 77b70970f70d53de71b9703ad4c3199d714c535a.

Reverted https://github.com/pytorch/pytorch/pull/167182 on behalf of https://github.com/NikhilAPatel due to breaks local source build ([comment](https://github.com/pytorch/pytorch/pull/167182#issuecomment-3503598156))
2025-11-07 16:45:23 +00:00
694592ac1e Move enrich_profiler_metadata config import out of gm.recompile() (#167114)
Fixes T243967987

Move `enrich_profiler_metadata` from `torch._dynamo.config` to `torch.fx.experimental._config`.

We cannot import anything inside recompile(); it caused a perf regression internally. We move the config so we can import it at the top of `graph_module.py` without causing any circular import.

We also cannot delete the old config right now because some internal tests rely on copies of the old `graph_module.py` cpp file in unit tests. But I think we should be able to delete the old config soon after this PR lands.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167114
Approved by: https://github.com/angelayi
2025-11-07 16:12:47 +00:00
285748e838 fix the cpp_builder error under riscv (#167071)
**fix the cpp_builder error under riscv**

`g++: error: ‘-march=native’: ISA string must begin with rv32 or rv64`

(EngineCore_DP0 pid=14414) ERROR 11-04 18:36:01 [core.py:779]   File "/usr/local/lib64/python3.11/site-packages/torch/_inductor/cpp_builder.py", line 1718, in build
(EngineCore_DP0 pid=14414) ERROR 11-04 18:36:01 [core.py:779]     run_compile_cmd(build_cmd, cwd=_build_tmp_dir)
(EngineCore_DP0 pid=14414) ERROR 11-04 18:36:01 [core.py:779]   File "/usr/local/lib64/python3.11/site-packages/torch/_inductor/cpp_builder.py", line 401, in run_compile_cmd
(EngineCore_DP0 pid=14414) ERROR 11-04 18:36:01 [core.py:779]     _run_compile_cmd(cmd_line, cwd)
(EngineCore_DP0 pid=14414) ERROR 11-04 18:36:01 [core.py:779]   File "/usr/local/lib64/python3.11/site-packages/torch/_inductor/cpp_builder.py", line 396, in _run_compile_cmd
(EngineCore_DP0 pid=14414) ERROR 11-04 18:36:01 [core.py:779]     raise exc.CppCompileError(cmd, output) from e
(EngineCore_DP0 pid=14414) ERROR 11-04 18:36:01 [core.py:779] torch._inductor.exc.InductorError: CppCompileError: C++ compile error
(EngineCore_DP0 pid=14414) ERROR 11-04 18:36:01 [core.py:779]
(EngineCore_DP0 pid=14414) ERROR 11-04 18:36:01 [core.py:779] Command:
(EngineCore_DP0 pid=14414) ERROR 11-04 18:36:01 [core.py:779] g++ /tmp/tmpv8qz53jp/header.hpp -D TORCH_INDUCTOR_CPP_WRAPPER -D STANDALONE_TORCH_HEADER -D C10_USING_CUSTOM_GENERATED_MACROS -fPIC -O3 -DNDEBUG -fno-trapping-math -funsafe-math-optimizations -ffinite-math-only -fno-signed-zeros -fno-math-errno -fexcess-precision=fast -fno-finite-math-only -fno-unsafe-math-optimizations -ffp-contract=off -fno-tree-loop-vectorize -march=native -Wall -std=c++17 -Wno-unused-variable -Wno-unknown-pragmas -fopenmp -I/usr/include/python3.11 -I/usr/local/lib64/python3.11/site-packages/torch/include -I/usr/local/lib64/python3.11/site-packages/torch/include/torch/csrc/api/include -D_GLIBCXX_USE_CXX11_ABI=1 -E -P -o /tmp/tmpv8qz53jp/header.i
(EngineCore_DP0 pid=14414) ERROR 11-04 18:36:01 [core.py:779]
(EngineCore_DP0 pid=14414) ERROR 11-04 18:36:01 [core.py:779] Output:
(EngineCore_DP0 pid=14414) ERROR 11-04 18:36:01 [core.py:779] g++: error: ‘-march=native’: ISA string must begin with rv32 or rv64

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167071
Approved by: https://github.com/malfet
2025-11-07 16:01:30 +00:00
192034c41b [easy][dynamo][pytree] simplify pytree polyfill module by move out the guard-if (#167221)
Move the guard-if in `polyfills.pytree` to `polyfills.loader` and dedent the code in the if-branch.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167221
Approved by: https://github.com/Lucaskabela
2025-11-07 15:23:03 +00:00
5bfce8f345 Unit test for torch.compile bmm dtype (#167140)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/167140
Approved by: https://github.com/atalman, https://github.com/mlazos
2025-11-07 14:59:00 +00:00
edd611f3b0 [CI] Upgrade Ubuntu 24.04 for XPU CI tests (#162475)
As the title

Pull Request resolved: https://github.com/pytorch/pytorch/pull/162475
Approved by: https://github.com/EikanWang, https://github.com/atalman
2025-11-07 14:05:16 +00:00
aded2ebb90 [3/N] Add return types of Python functions (#167287)
This PR adds return types to some Python functions.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167287
Approved by: https://github.com/mlazos
2025-11-07 13:50:33 +00:00
5bda7afa05 [9/N] Fix unused loop variables in tests (#167290)
This PR fixes unused loop variables in tests.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167290
Approved by: https://github.com/mlazos
2025-11-07 11:45:31 +00:00
341e924981 [4/N] Use key in dict for existence checks (#167285)
This PR uses `key in dict` expressions for existence checks of dict elements in Python code. This operation is more efficient than `key in dict.keys()`.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167285
Approved by: https://github.com/mlazos
2025-11-07 09:47:17 +00:00
5a9ae7cefe [CP] Correctly compile create_cp_block_mask (#167153)
Currently we re-compile create_block_mask every time, which is not very efficient, and the global compilation also causes some issues. This PR compiles create_block_mask lazily and does it only once.

Fixes https://github.com/pytorch/pytorch/issues/167064

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167153
Approved by: https://github.com/drisspg, https://github.com/XilunWu
2025-11-07 09:31:45 +00:00
3d59e8aadf [14/N] Apply ruff UP035 rule (#167208)
This PR continues to apply the `UP035` ruff rule and add `collections.abc` to dynamo checks.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167208
Approved by: https://github.com/mlazos
2025-11-07 09:21:51 +00:00
4cf1d1af22 [Inductor][Tritonparse] Ensure inductor meta has config_args (#167261)
Summary: Before calling the tritonparse hook with `config_args`, ensure that we set `config_args` within `inductor_meta`. This way, even if it is not set, the hook still gets run and we can at least get the launch arguments.

Test Plan: Tritonparse tests

Differential Revision: D86463732

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167261
Approved by: https://github.com/FindHao
2025-11-07 08:55:47 +00:00
05b8214e6a Added a couple of utils for Pallas TPU backend. (#167264)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/167264
Approved by: https://github.com/oulgen
2025-11-07 08:23:02 +00:00
35d2da32bd [ROCm][CI] Separate out rocm from slow workflow (#167262)
Running slow.yml on every commit is straining our limited MI200 capacity. Reducing the frequency in line with other MI200-based workflows as per https://github.com/pytorch/pytorch/pull/167220

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167262
Approved by: https://github.com/jeffdaily

Co-authored-by: Jithun Nair <37884920+jithunnair-amd@users.noreply.github.com>
2025-11-07 07:38:59 +00:00
0968e74266 [ROCm][CI] Run PR-Based workflow runs on mi300 nodes. (#167225)
This PR is meant to swap the PR-based ciflow tags from the mi200 nodes (less stable) to the mi300 nodes (more stable). This will ensure that developers see consistent testing on their PRs as well as on main. This PR does all of the following:

- Rename rocm.yml to rocm-mi200.yml : for clarity
- Add ciflow/rocm-mi200 trigger to rocm-mi200.yml : for devs who want to opt-in to single-GPU unit tests on MI200
- Move ciflow/rocm trigger from rocm-mi200.yml to rocm-mi300.yml : so PRs target MI300 runners by default

- Rename inductor-rocm.yml to inductor-rocm-mi200.yml : for clarity
- Remove ciflow/inductor-rocm trigger from inductor-rocm-mi200.yml : prevent MI200 inductor config unit tests being triggered by default
- Add ciflow/inductor-rocm-mi200 trigger to inductor-rocm-mi200.yml : for devs who want to opt-in to inductor config unit tests on MI200
- Move ciflow/periodic trigger from periodic-rocm-mi200.yml to periodic-rocm-mi300.yml : so PRs target MI300 runners by default

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167225
Approved by: https://github.com/jeffdaily, https://github.com/huydhn

Co-authored-by: Jithun Nair <jithun.nair@amd.com>
2025-11-07 07:37:34 +00:00
57dd6a0656 [OC][Torch] Extend autotune options for OC OBA 200x shapes (#166931)
Summary:
Add four best configs for shapes of the OC OBA 200x model:
```
M=2048 N=2048 K=12288
triton_mm_35 0.1526 ms 100.0% ACC_TYPE='tl.float32', ALLOW_TF32=False, BLOCK_K=128, BLOCK_M=64, BLOCK_N=128, EVEN_K=True, GROUP_M=8, USE_FAST_ACCUM=True, kpack=2, matrix_instr_nonkdim=16, waves_per_eu=0, num_stages=2, num_warps=4, num_consumer_groups=0, num_buffers_warp_spec=0

M=2048 N=52416 K=1536
triton_mm_12 0.4604 ms 100.0% ACC_TYPE='tl.float32', ALLOW_TF32=False, BLOCK_K=128, BLOCK_M=128, BLOCK_N=128, EVEN_K=True, GROUP_M=8, USE_FAST_ACCUM=True, kpack=2, matrix_instr_nonkdim=16, waves_per_eu=0, num_stages=2, num_warps=4, num_consumer_groups=0, num_buffers_warp_spec=0

M=2048 N=12288 K=2048
triton_mm_9 0.1444 ms 100.0% ACC_TYPE='tl.float32', ALLOW_TF32=False, BLOCK_K=128, BLOCK_M=256, BLOCK_N=128, EVEN_K=True, GROUP_M=8, USE_FAST_ACCUM=True, kpack=2, matrix_instr_nonkdim=16, waves_per_eu=0, num_stages=2, num_warps=8, num_consumer_groups=0, num_buffers_warp_spec=0

M=2048 N=2048 K=52416
triton_mm_35 0.6505 ms 100.0% ACC_TYPE='tl.float32', ALLOW_TF32=False, BLOCK_K=128, BLOCK_M=64, BLOCK_N=128, EVEN_K=False, GROUP_M=8, USE_FAST_ACCUM=True, kpack=2, matrix_instr_nonkdim=16, waves_per_eu=0, num_stages=2, num_warps=4, num_consumer_groups=0, num_buffers_warp_spec=0
```

Test Plan:
Run tritonbench for torch fp8(_scaled_mm) for all above shapes, e.g.

```
TRITON_PRINT_AUTOTUNING=1 buck2 run mode/opt-amd-gpu -c fbcode.enable_gpu_sections=true //pytorch/tritonbench:run -- --op fp8_gemm --only pt2_fp8_gemm --metrics tflops,accuracy --m 2048 --n 2048 --k 12288
```

Differential Revision: D86158497

Pull Request resolved: https://github.com/pytorch/pytorch/pull/166931
Approved by: https://github.com/jananisriram
2025-11-07 07:08:48 +00:00
7318ed627b [user-streams] Trace events with the new ops (#167177)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/167177
Approved by: https://github.com/anijain2305
ghstack dependencies: #167175, #167176, #167180, #167195, #167260
2025-11-07 06:25:35 +00:00
5b2ad2d5dc [user-streams] Add fallbacks for record and wait event (#167260)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/167260
Approved by: https://github.com/shunting314
ghstack dependencies: #167175, #167176, #167180, #167195
2025-11-07 06:25:35 +00:00
faba6e205f [pallas backend] use dlpack directly (#167243)
previous version does not work on jax 0.8

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167243
Approved by: https://github.com/yf225, https://github.com/jansel
2025-11-07 05:54:51 +00:00
3261149aa3 [dynamo] remove old unimplemented() call (#167149)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/167149
Approved by: https://github.com/Lucaskabela, https://github.com/mlazos
ghstack dependencies: #167001, #167146, #167159
2025-11-07 05:30:40 +00:00
bd7e18bc57 [dynamo] unimplemented -> unimplemented_v2 in torch/_subclasses/meta_utils.py (#167159)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/167159
Approved by: https://github.com/Lucaskabela, https://github.com/mlazos
ghstack dependencies: #167001, #167146
2025-11-07 05:30:40 +00:00
643b3bc8f3 [dynamo] unimplemented -> unimplemented_v2 in variables/higher_order_ops.py (#167146)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/167146
Approved by: https://github.com/Lucaskabela, https://github.com/mlazos
ghstack dependencies: #167001
2025-11-07 05:30:40 +00:00
91b626e2ef [dynamo] unimplemented -> unimplemented_v2 for the rest of variables/misc.py (#167001)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/167001
Approved by: https://github.com/Lucaskabela, https://github.com/mlazos
2025-11-07 05:30:40 +00:00
bf8297afe0 [inductor] let mix-order-red tune XBLOCK and num-stages (#167161)
A few improvements for autotuning:
- While testing mix-order reduction for internal workloads, Paul found that tuning num-stages could be very helpful for the triton kernel. The idea is illustrated in his diff: https://www.internalfb.com/diff/D86341591
- When rnumel is small, a larger XBLOCK can help perf.

This PR adds the ability to autotune num-stages and XBLOCK. This brings a further 19% speedup for RMSNorm BWD on B200.

Testing result:

  eager 11 data points
  compiled 11 data points, 17.07x speedup (was 14.39x before the PR. The PR brings further 19% speedup)
  quack 11 data points, 12.72x speedup
  liger 11 data points, 11.75x speedup
  compiled-no-fusion 11 data points, 9.93x speedup

<img width="3564" height="2368" alt="RMSNormBackward_bench" src="https://github.com/user-attachments/assets/3e415242-a988-42bf-8a47-4ed5f11148a3" />

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167161
Approved by: https://github.com/jansel
ghstack dependencies: #166669, #166938
2025-11-07 04:49:53 +00:00
3f03f84ce2 [inductor] fix dashbaord regression due to mix order reduction (#166938)
The PR includes a misc list of fixes for the regressions I see from the dashboard:
1. The dashboard may use very small shapes for rmsnorm backward. The data set can then be fully cached in L2, so mix-order reduction does not show much benefit and may even have worse perf. Disable mix-order reduction for small workloads.
2. Disable the autotuning of split size by default to avoid the compilation-time hit.
3. Avoid mix-order reduction if there is non-contiguous memory access. Previously the check was only done for shared buffers accessed by both reductions; it turns out to be necessary to extend the check to buffers accessed by only one reduction. See test test_avoid_non_coalesced_access, which is simplified from a TIMM model. Note that a larger XBLOCK could fix the perf problem and keep mix-order reduction applicable, but I don't think that's high priority: with a larger XBLOCK the kernel would consume much more shared memory/registers, which could also cause perf issues.

Dashboard result [here](https://hud.pytorch.org/benchmark/compilers?dashboard=torchinductor&startTime=Wed%2C%2029%20Oct%202025%2003%3A40%3A22%20GMT&stopTime=Wed%2C%2005%20Nov%202025%2004%3A40%3A22%20GMT&granularity=hour&mode=training&dtype=amp&deviceName=cuda%20(h100)&lBranch=gh/shunting314/257/head&lCommit=b6f4a24ea5f7574d6b1d3b854022aa09d70593db&rBranch=main&rCommit=22a745737a09b0600bb0b85b4c0bbb9fb627f137).

<img width="1484" height="531" alt="Screenshot 2025-11-04 at 10 58 48 PM" src="https://github.com/user-attachments/assets/60cda211-3cc4-4fe1-9eaf-d2fb2c7d15a1" />

- The perf drop for TIMM (default) is not real; it's due to one more model passing the accuracy test.
- The perf drop for HF (cudagraphs) is not real. I checked each individual model that showed a regression on the dashboard, and they fall into the following categories:
   - showed a regression, but absolute execution time actually decreased, e.g. OPTForCausalLM
   - showed a regression, but has a slight speedup on an h100 dev server: MobileBertForMaskedLM, sped up from 57.847709 ms to 56.711640 ms
   - showed a regression, but the PR does not change the generated kernels (mix-order reduction skipped due to small workload or other reasons), e.g. XGLMForCausalLM, AlbertForMaskedLM

Note that the neutral result on the dashboard is expected due to the small workload sizes. For large workloads, we see about a 1.5x geomean speedup for rmsnorm/layernorm backward on average and 2.2x for some shapes used by an internal model. For 8-GPU torchtitan training on llama3, we see a 4% TPS (tokens per second) improvement.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/166938
Approved by: https://github.com/jansel
ghstack dependencies: #166669
2025-11-07 04:49:53 +00:00
8a72188828 Raise error for 1D (size > 1) -> 0D parameter loads (#166335)
Fixes #165873

# Title
Fix load_state_dict: raise error for 1D (size > 1) -> 0D parameter loads

## Summary
This PR fixes a bug where loading a 1D tensor (size > 1) into a scalar (0D) parameter would silently take the first element instead of raising an error. The fix preserves backward compatibility for 1D tensors of size 1 while catching genuine shape mismatches.

## Motivation
Previously, loading a 1D tensor like torch.randn(32000) into a 0D scalar parameter would silently slice the first element, leading to silent data loss and potential bugs. This change ensures users get a clear error when there's a genuine shape mismatch.

## Behavior change

Before:
1D tensor (any length) -> 0D scalar -> silently coerced using input_param[0]

After:
- 1D tensor (size == 1) -> 0D scalar -> allowed (backward compatibility)
- 1D tensor (size > 1) -> 0D scalar -> raises RuntimeError with size mismatch message

In torch/nn/modules/module.py, _load_from_state_dict, added input_param.shape[0] == 1 check to the backward compatibility condition to only allow single-element 1D tensors.
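
A behavior sketch of the new check (the exact error message wording may differ):

```python
import torch
import torch.nn as nn

class M(nn.Module):
    def __init__(self):
        super().__init__()
        self.scale = nn.Parameter(torch.tensor(1.0))  # 0-dim parameter

m = M()
m.load_state_dict({"scale": torch.tensor([2.0])})     # 1-D, size 1: still allowed
try:
    m.load_state_dict({"scale": torch.randn(32000)})  # 1-D, size > 1
except RuntimeError as err:
    print(err)                                        # size mismatch for "scale"
```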

## Tests
Added test_scalar_param_1d_tensor_raises to verify that loading 1D tensors of size > 1 raises an error, while size 1 loads successfully.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/166335
Approved by: https://github.com/mikaylagawarecki
2025-11-07 04:43:11 +00:00
d325aa1877 [vision hash update] update the pinned vision hash (#167032)
This PR is auto-generated nightly by [this action](https://github.com/pytorch/pytorch/blob/main/.github/workflows/nightly.yml).
Update the pinned vision hash.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/167032
Approved by: https://github.com/pytorchbot
2025-11-07 04:22:57 +00:00
7aedf3a576 Update torch-xpu-ops commit pin (#166945)
Update the torch-xpu-ops commit to [intel/torch-xpu-ops@9aac5a](9aac5a1ddf), which includes:

- Enable FP8 concat/where/flip/index_put/index.Tensor on XPU backend
- Remove BUILD_SPLIT_KERNEL_LIB flag
- Fix the initialization order of ProcessGroupXCCL
- Separates communication initialization logic from getXCCLComm
- Fix segmentation fault in NLLLoss kernel

Pull Request resolved: https://github.com/pytorch/pytorch/pull/166945
Approved by: https://github.com/EikanWang
2025-11-07 03:49:42 +00:00
eaf4815c1f Remove workarounds for older Python (#167173)
This PR removes workarounds for older Python.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167173
Approved by: https://github.com/albanD
2025-11-07 03:37:56 +00:00
a913b2bb93 [2/N] Add return types of Python functions (#167203)
This PR adds return types of some Python functions. Most of them return `None`. The types were added automatically by ruff ANN rules.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167203
Approved by: https://github.com/Skylion007

Co-authored-by: Aaron Gokaslan <aaronGokaslan@gmail.com>
2025-11-07 03:22:57 +00:00
1632876edf [3/N] Use key in dict for existence checks (#167214)
This PR uses `key in dict` expressions for existence checks of dict elements in Python code. This operation is more efficient than `key in dict.keys()`.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167214
Approved by: https://github.com/Lucaskabela
2025-11-07 02:49:15 +00:00
0e1f76f77e Add two new docker images with Python 3.11/3.12 (#167092)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/167092
Approved by: https://github.com/malfet, https://github.com/atalman
2025-11-07 02:44:03 +00:00
ae67a5a9d3 [ROCm] Specialized binary elementwise broadcast kernel for mixed dtypes with float/bfloat16/half (#167233)
* `c10::fetch_and_cast` and `c10::cast_and_store` produce branchy code since they support all datatypes
* So we add special handling for binary elementwise broadcast with mixed dtypes of float/bfloat16/half (see the sketch below)
* This improves performance
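
For reference, a sketch of the kind of op that hits this specialized path (on ROCm builds the device type is still "cuda"):

```python
import torch

# Binary elementwise broadcast with mixed float dtypes (float32 + bfloat16):
a = torch.randn(1024, 1024, device="cuda", dtype=torch.float32)
b = torch.randn(1024, 1, device="cuda", dtype=torch.bfloat16)
out = a + b  # broadcasts over dim 1 with differing dtypes; previously this
             # went through the branchy fetch_and_cast / cast_and_store path
```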

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167233
Approved by: https://github.com/jeffdaily
2025-11-07 02:42:09 +00:00
292bd62c71 Introduce TEST_ACCELERATOR and TEST_MULTIACCELERATOR to simplify UT (#167196)
# Motivation
This PR aims to introduce two variables (`TEST_ACCELERATOR` and `TEST_MULTIACCELERATOR`) to simplify UT generalization. Since out-of-tree backends may be imported later, these variables are defined as lazy values.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167196
Approved by: https://github.com/albanD
2025-11-07 01:51:18 +00:00
0e512ee9f0 Make pyrefly installable by lintrunner on Python-3.14 (#167270)
By pinning numpy to 2.3.4 for 3.14

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167270
Approved by: https://github.com/huydhn
2025-11-07 01:43:25 +00:00
31ac764239 Revert "Move enrich_profiler_metadata config import out of gm.recompile() (#167114)"
This reverts commit d144382dc96f109a6254c38734779e0a09fb7134.

Reverted https://github.com/pytorch/pytorch/pull/167114 on behalf of https://github.com/jeffdaily due to broke rocm ([comment](https://github.com/pytorch/pytorch/pull/167114#issuecomment-3500057321))
2025-11-07 01:21:15 +00:00
b228f6d180 Revert "[ROCm] Enable StaticCudaLauncher for ROCm (#166492)"
This reverts commit ba2e6b0b4f1718767762d7b20558d4de943be71b.

Reverted https://github.com/pytorch/pytorch/pull/166492 on behalf of https://github.com/jeffdaily due to test/inductor/test_ck_backend.py::TestCKBackend::test_max_autotune_precompile_matmul_dynamic_max_autotune_gemm_backends_CK_autotune_in_subproc_True [GH job link](https://github.com/pytorch/pytorch/actions/runs/19147453561/job/54731084387) [HUD commit link](ba2e6b0b4f) ([comment](https://github.com/pytorch/pytorch/pull/166492#issuecomment-3500049276))
2025-11-07 01:17:13 +00:00
e678450a69 [cuDNN][SDPA][Convolution] Expose cuDNN runtime version in CUDA hooks (#167111)
cuDNN dispatching heuristics rely on version checks, but currently only the compile-time version is exposed. If we want to allow users to resolve https://github.com/pytorch/pytorch/issues/166643 on their end by updating their cuDNN version locally, we need to check the runtime version rather than the compile-time version.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167111
Approved by: https://github.com/Skylion007
2025-11-07 01:15:18 +00:00
552c3f3e18 Add THO_DISPATCH_V2 macro (#166629)
The THO_DISPATCH_V2 macro is the same as AT_DISPATCH_V2 but usable in headeronly contexts or stable ABI code. The main difference is that AT_DISPATCH_V2 supports selective build while THO_DISPATCH_V2 does not.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/166629
Approved by: https://github.com/janeyx99, https://github.com/albanD
ghstack dependencies: #165856
2025-11-07 01:13:55 +00:00
5b36e4e30f Move AT_DISPATCH_V2 helper macros to headeronly and add THO_DISPATCH_V2_TMPL (#165856)
Problem: the migration of `AT_DISPATCH_V2` macros to headeronly cannot be a simple copy-paste of macro definitions from one header file to another because the macros `AT_DISPATCH_SWITCH` and `AT_DISPATCH_CASE` may use functions that cannot be migrated to headeronly, e.g. when a selective build feature is enabled, there will be functions that are generated. On the other hand, when not using selective build, the dtype-dispatch macros are perfectly suitable for migrating to headeronly.

In this PR, the migration problem above is tackled by refactoring `AT_DISPATCH` related macros into headeronly macros and non-headeronly macros while preserving the current API and semantics. For instance, consider the current V2 macro definitions:
```c++
#define AT_DISPATCH_V2(TYPE, NAME, BODY, ...) \
  AT_DISPATCH_SWITCH(TYPE, NAME, AT_AP_VAR(AT_WRAP(BODY), TYPE, __VA_ARGS__))
#define AT_AP_VAR(N, T, ...) \
  AT_EXPAND(AT_CONCAT(AT_AP, AT_NUM_ARGS(__VA_ARGS__))(AT_WRAP(N), __VA_ARGS__))
#define AT_AP1(N, _1) AT_DISPATCH_CASE(_1, N)
...
```
where the headeronly-migration-problematic parts are using AT_DISPATCH_SWITCH and AT_DISPATCH_CASE macros (defined in ATen/Dispatch.h). In this PR, we introduce parametric versions of the `AT_DISPATCH_V2` and `AT_AP1` macros that have `_TMPL` suffixes, take DISPATCH_SWITCH and DISPATCH_CASE arguments, and are defined in `torch/headeronly/core/Dispatch_v2.h`:
```c++
#define THO_DISPATCH_V2_TMPL(                               \
    DISPATCH_SWITCH, DISPATCH_CASE, TYPE, NAME, BODY, ...) \
  DISPATCH_SWITCH(                                         \
      TYPE,                                                \
      NAME,                                                \
      THO_AP_VAR_TMPL(DISPATCH_CASE, AT_WRAP(BODY), TYPE, __VA_ARGS__))
#define THO_AP_VAR_TMPL(C, N, T, ...) \
  AT_EXPAND(                         \
      AT_CONCAT(THO_AP, AT_NUM_ARGS(__VA_ARGS__))(C, AT_WRAP(N), __VA_ARGS__))
#define THO_AP1(C, N, _1) C(_1, N)
...
```
so that the original V2 macro definition, defined in `ATen/Dispatch_v2.h`, becomes:
```c++
#define AT_DISPATCH_V2(TYPE, NAME, BODY, ...) \
  THO_DISPATCH_V2_TMPL(                        \
      AT_DISPATCH_SWITCH,                     \
      AT_DISPATCH_CASE,                       \
      TYPE,                                   \
      NAME,                                   \
      AT_WRAP(BODY),                          \
      __VA_ARGS__)
```
that has exactly the same API and semantics as the original definition.

Note 1: ~we have changed the definition of `AT_AP1(N, _1) ...` to `AT_AP1(C, N, _1) ...` without renaming `AT_AP1` because `AT_AP1` is a helper macro that is not a part of public API (for instance, nothing in pytorch explicitly uses `AT_AP1`).~ UPDATE: restored the original `AT_AP` macros and introduced new `THO_AP` macros.

Note 2: this PR introduces a new API macro THO_DISPATCH_V2_TMPL that will be available to stable ABI users, who can use it by providing custom versions of the `AT_DISPATCH_SWITCH` and `AT_DISPATCH_CASE` macros, say, with selective build features removed.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/165856
Approved by: https://github.com/janeyx99
2025-11-07 01:13:55 +00:00
cd6d06a22b Revert "[BE][Typing][Dynamo] Type torch/_dynamo/variables/functions.py (#167103)"
This reverts commit 9a86ef763201e27f031469f0866c893707e9cf38.

Reverted https://github.com/pytorch/pytorch/pull/167103 on behalf of https://github.com/pytorch-auto-revert due to Reverted automatically by pytorch's autorevert, to avoid this behaviour add the tag autorevert: disable ([comment](https://github.com/pytorch/pytorch/pull/167103#issuecomment-3500023910))
2025-11-07 01:06:34 +00:00
669cf21a6b Added Validation for batch_norm eps value (#166756)
Fixes #166405.
I've fixed this by adding epsilon validation in `torch.nn.functional.batch_norm` to reject non-positive values before they cause undefined behavior. Also added a test case `test_batchnorm_invalid_eps` to verify the fix works correctly.
While working on this, I noticed that `layer_norm`, `group_norm`, and `instance_norm` also don't validate epsilon and could have the same issue. Should I add validation for those in this PR as well?
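A minimal sketch of the kind of check described above, written as an out-of-tree wrapper; names and defaults here are illustrative, not the in-tree change:

```python
import torch
import torch.nn.functional as F


def checked_batch_norm(x, running_mean, running_var, weight=None, bias=None,
                       training=False, momentum=0.1, eps=1e-5):
    # Reject non-positive eps before it reaches the kernel (illustrative wrapper,
    # not the exact validation added to torch.nn.functional.batch_norm)
    if eps <= 0:
        raise ValueError(f"batch_norm: eps must be positive, got {eps}")
    return F.batch_norm(x, running_mean, running_var, weight, bias,
                        training, momentum, eps)
```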

Pull Request resolved: https://github.com/pytorch/pytorch/pull/166756
Approved by: https://github.com/mikaylagawarecki
2025-11-07 00:53:58 +00:00
9a86ef7632 [BE][Typing][Dynamo] Type torch/_dynamo/variables/functions.py (#167103)
Provides type coverage to torch/_dynamo/variables/functions.py

Coverage report:
`mypy torch/_dynamo/variables/functions.py --linecount-report /tmp/coverage_log`

Compare before to after - we go from 0 lines and 0 funcs covered to 2698 lines and 166 funcs covered

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167103
Approved by: https://github.com/mlazos, https://github.com/fxdawnn
2025-11-07 00:40:49 +00:00
f47cadf75d [BE][Typing][Dynamo] Type torch/_dynamo/variables/lists.py (#167156)
Provides type coverage to torch/_dynamo/variables/lists.py

Coverage report:
`mypy torch/_dynamo/variables/lists.py --linecount-report /tmp/coverage_log`

Compare before to after - we go from 0 lines and 0 funcs covered to 1759 lines and 102 funcs covered

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167156
Approved by: https://github.com/Skylion007, https://github.com/rtimpe
2025-11-07 00:15:40 +00:00
2923b02c6e [DTensor] add explicit mode (ExplicitRedistributionContext) (#166593)
usage:

```python
dx = distribute_tensor(x, device_mesh, [Shard(0)])
dA = distribute_tensor(A, device_mesh, [Shard(0)])
with ExplicitRedistributionContext():
    with self.assertRaisesRegex(RuntimeError, "Implicit redistribution"):
        # Shard(0) @ Shard(0) requires a redistribution
        torch.matmul(dx, dA)
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/166593
Approved by: https://github.com/ezyang
2025-11-07 00:04:19 +00:00
4b9ba0fb26 [user-streams] Add requires cuda to all test cases (#167195)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/167195
Approved by: https://github.com/Lucaskabela
ghstack dependencies: #167175, #167176, #167180
2025-11-06 23:13:47 +00:00
106d34c80a [user-streams] add requires cuda decorator (#167180)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/167180
Approved by: https://github.com/donigian, https://github.com/Lucaskabela, https://github.com/Skylion007
ghstack dependencies: #167175, #167176
2025-11-06 23:13:47 +00:00
0b06109412 [user-streams] Fix bug in object bytecode construction (#167176)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/167176
Approved by: https://github.com/Lucaskabela
ghstack dependencies: #167175
2025-11-06 23:13:47 +00:00
2073af5790 [user-streams] Refactor user object index in streams (#167175)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/167175
Approved by: https://github.com/Lucaskabela
2025-11-06 23:13:47 +00:00
9b4ac45d2f Revert "[Inductor] addmm with bias -> unfuse bias if there is a pointwise/reduction consumer (#166165)"
This reverts commit eefa16342c9f322b56c7c0cd6d309c3ed8f0b882.

Reverted https://github.com/pytorch/pytorch/pull/166165 on behalf of https://github.com/jeanschmidt due to Breaking internal tests D86216934 ([comment](https://github.com/pytorch/pytorch/pull/166165#issuecomment-3499645688))
2025-11-06 22:34:48 +00:00
a45a17f65e Fix boxcox to return same result for same input in one batch (#166986)
Summary:
The SIMD path uses the SLEEF version of pow, which is slightly different from std::pow. The fix is to use the same vectorized code (with partial load and store) for the trailing data as well, to ensure consistency between results.

Deploy:
Need to make a hotfix in waas to monitor release signals, since this diff can cause testing failures in veloski and waas release correctness tests.

Test Plan: Sandcastle.

Differential Revision: D86218207

Pull Request resolved: https://github.com/pytorch/pytorch/pull/166986
Approved by: https://github.com/swolchok
2025-11-06 22:33:26 +00:00
c5593e75b3 Fix flaky memory profiler test (#167168)
Fixes #167037

Do not check the exact number of frames.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/167168
Approved by: https://github.com/angelayi
2025-11-06 21:39:44 +00:00
c90a976370 Update pythoncapi_compat.h (#167138)
Update to commit 44c8e14bbbb5d5135ae90957036a61397e4df577.

Should slightly simplify https://github.com/pytorch/pytorch/pull/166342
Pull Request resolved: https://github.com/pytorch/pytorch/pull/167138
Approved by: https://github.com/albanD
2025-11-06 21:31:58 +00:00
d144382dc9 Move enrich_profiler_metadata config import out of gm.recompile() (#167114)
Fixes T243967987

Move `enrich_profiler_metadata` from `torch._dynamo.config` to `torch.fx.experimental._config`.

We cannot import anything inside recompile(); it caused a perf regression internally. We move the config so we can import it at the top of `graph_module.py` without causing any circular import.

We also cannot delete the old config right now because some internal tests rely on copies of the old `graph_module.py` cpp file in unit tests. But I think we should be able to delete the old config soon after this PR lands.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167114
Approved by: https://github.com/angelayi
2025-11-06 21:21:40 +00:00
78827c5e00 Distributed Autotuning (#163369)
This is the initial prototype of distributed autotuning. It's intended to be a basis for iteration rather than the final end product.

Currently, when we run an SPMD program we compile the ranks independently. As a result, the autotuning is repeated on every rank, so for an 8-GPU program with 8 matmul operators we'll autotune 64 (8*8) times.

Distributed autotuning uses collectives to distribute the autotuning across the ranks so each rank autotunes 1/world_size of the total operators. So in our 8-GPU example we would only perform 8 autotunes total (one on each rank) rather than 64.

There are several advantages:
1. Faster autotuning times - each CPU/GPU does less work total
2. Better determinism - currently it's possible for two ranks to choose different algorithms for the same operator. With distributed autotuning we choose the algorithm once for the entire program.

Results:

In testing with Llama3 8B on torchtitan, max-autotune time was reduced from 52s to 26s and exhaustive autotuning from 2009s to 613s.

Usage:

The feature is controlled by the environment variable TORCHINDUCTOR_DISTRIBUTED_AUTOTUNE.
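A hedged usage sketch; only the env var name comes from this PR, the model and flag value are illustrative:

```python
import os

# Must be set before compilation so Inductor picks it up
os.environ["TORCHINDUCTOR_DISTRIBUTED_AUTOTUNE"] = "1"

import torch

model = torch.nn.Linear(1024, 1024).cuda()  # placeholder model
compiled = torch.compile(model, mode="max-autotune")
```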

Co-authored-by: @PaulZhang12

Pull Request resolved: https://github.com/pytorch/pytorch/pull/163369
Approved by: https://github.com/PaulZhang12
2025-11-06 21:10:21 +00:00
ab1e734cd7 [ez] avoid log spam when random data is generated (#166919)
It's annoying to see a full screen of this warning when running fx_graph_runnable files saved in tlparse.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/166919
Approved by: https://github.com/eellison
2025-11-06 21:05:20 +00:00
888958ad6c Prevent torch._check causing graph breaks (#164676)
Handle `torch._check` in `TorchInGraphFunctionVariable.call_function`. Basically, it has two arguments: a predicate (bool) and a message (callable). If the predicate is a constant, we evaluate `torch._check` eagerly: if it is true, compilation simply proceeds and nothing happens; if it is false, `torch._check` raises an exception.

If the predicate is not constant, we manually emit a proxy. I tried to build `as_proxy()` inside NestedUserFunctionVariable but failed, which is why I create it here. I also try to extract the message: if it's a function, I retrieve it; otherwise I set it to None. Maybe we could extract it if the message is a closure, but I'm not sure how.
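A small sanity-check sketch of the behavior this enables (the function and shapes are illustrative; `fullgraph=True` is used so any graph break would surface as an error):

```python
import torch


@torch.compile(fullgraph=True)  # would fail here if torch._check caused a graph break
def f(x):
    torch._check(x.shape[0] > 1, lambda: "expected more than one row")
    return x.sum()


print(f(torch.ones(4, 4)))
```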

Fixes #163668

Pull Request resolved: https://github.com/pytorch/pytorch/pull/164676
Approved by: https://github.com/williamwen42, https://github.com/mlazos

Co-authored-by: William Wen <william.wen42@gmail.com>
2025-11-06 21:00:48 +00:00
d19f36bea1 [BE][Ez]: Update fmtlib submodule to 12.1.0 (#166983)
Fixes some compiler idiosyncrasies, improves C++ support, and includes bugfixes and performance optimizations. This is a header-only, minor library change, so it should be low risk and improve the performance of our formatting/loggers. It also allows fmtlib to be used in more constexpr contexts.

Full changelog here: https://github.com/fmtlib/fmt/releases/tag/12.1.0
Pull Request resolved: https://github.com/pytorch/pytorch/pull/166983
Approved by: https://github.com/atalman
2025-11-06 20:39:00 +00:00
096c9356de [CUDA][cuBLASLt] addmm -- enable 2D bias in the Lt path when followed by an activation (#165548)
As per title.
This one is based off [#163955](https://github.com/pytorch/pytorch/pull/163955), but I will rebase once it is merged.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/165548
Approved by: https://github.com/eqy
2025-11-06 20:29:32 +00:00
03dea563f4 Add guidance on how to migrate kernels to the libtorch stable ABI (#167112)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/167112
Approved by: https://github.com/janeyx99
2025-11-06 20:27:27 +00:00
2e83ae2de7 [pp] Add reduce_grad Action (#166449)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/166449
Approved by: https://github.com/wconstab, https://github.com/sanketpurandare
2025-11-06 20:02:46 +00:00
77b70970f7 [Inductor][Grouped Gemm] Add Blackwell CuTeDSL Kernel (#167182)
Summary: This is a reland of https://github.com/pytorch/pytorch/pull/165036, which previously contained a minor bug in the logic that determined whether the kernel should be enabled. As a result, it was incorrectly activated on non-Blackwell GPUs.

Test Plan:
Inductor test (fbcode):
`INDUCTOR_TEST_DISABLE_FRESH_CACHE=1 TORCHINDUCTOR_CACHE_DIR=~/cutetest buck2 run mode/opt //caffe2/test/inductor:cutedsl_grouped_mm -c fbcode.nvcc_arch=b200a -c fbcode.enable_gpu_sections=true -c fbcode.platform010_cuda_version=12.8 -m "ovr_config//third-party/pypi/nvidia-cutlass-dsl/constraints:4.2.1"`

Tritonbench (fbcode):
`clear; CUDA_VISIBLE_DEVICES=7 TRITON_PRINT_AUTOTUNING=1 TRITON_ALWAYS_COMPILE=1 TORCH_LOGS=+inductor TORCHINDUCTOR_FORCE_DISABLE_CACHES=1 TORCHINDUCTOR_MAX_AUTOTUNE_GEMM=1 buck2 run mode/opt //pytorch/tritonbench:run -c fbcode.nvcc_arch=b200a -c fbcode.enable_gpu_sections=true -c fbcode.platform010_cuda_version=12.8 -m "ovr_config//third-party/pypi/nvidia-cutlass-dsl/constraints:4.2.1" -- --op grouped_gemm --only aten_grouped_mm,preprocessed_pt2_cute_grouped_mm --precision bf16  --num-inputs 1 --metrics tflops,accuracy`

Tritonbench(oss):
`clear; CUDA_VISIBLE_DEVICES=2 TRITON_PRINT_AUTOTUNING=1 TRITON_ALWAYS_COMPILE=1 TORCH_LOGS=+inductor TORCHINDUCTOR_FORCE_DISABLE_CACHES=1 TORCHINDUCTOR_MAX_AUTOTUNE_GEMM=1 python run.py --op grouped_gemm --only aten_grouped_mm,preprocessed_pt2_triton_grouped_mm --precision bf16  --num-inputs 1 --metrics tflops,accuracy`

Unit Tests(oss):
`clear; python test/inductor/test_cutedsl_grouped_mm.py`

Differential Revision: D86376880

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167182
Approved by: https://github.com/mlazos, https://github.com/jananisriram
2025-11-06 19:55:38 +00:00
c9b2db73ca [Sigmoid][Delta Update][2/N] update delta update api to load original value first before casting to target dtype (#167039)
Summary: The current delta update makes a strong assumption that the non-lowered weights share the same tensor dtype as the lowered version. This is not true by design. When the dtypes mismatch, data loading will load the data into an unexpected dtype, which introduces undefined behavior. This diff closes the gap by always loading a tensor in its original dtype first and then casting to the desired dtype.

Test Plan:
No more NaN values!

{P2022339213}

Reviewed By: kqfu

Differential Revision: D86181685

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167039
Approved by: https://github.com/henryoier
2025-11-06 19:31:18 +00:00
ba2e6b0b4f [ROCm] Enable StaticCudaLauncher for ROCm (#166492)
This PR enables ROCm/HIP support for PyTorch's StaticCudaLauncher, which provides static compilation and launching of Triton kernels. The implementation has been tested on AMD MI300 and MI200 hardware.

**Changes**

**Python (torch/_inductor/runtime/)**
- static_cuda_launcher.py: Added ROCm detection, .hsaco binary support, and ROCm-specific scratch parameter handling
- triton_heuristics.py: Updated device type checks to support both cuda and hip

**C++ (torch/csrc/)**
- Module.cpp: Enabled StaticCudaLauncher for ROCm builds
- inductor/static_cuda_launcher.cpp: Added HIP API equivalents for all CUDA driver calls
- inductor/static_cuda_launcher.h: Updated header guard

**Tests (test/inductor/)**
- test_static_cuda_launcher.py: Removed @skipIfRocm decorators and updated binary file handling

**Enabled Unit Tests**
All tests in test/inductor/test_static_cuda_launcher.py now pass on ROCm:
1. test_basic
2. test_unsigned_integers
3. test_signed_integers
4. test_basic_1arg
5. test_constexpr
6. test_implied_constant
7. test_kernel_no_args
8. test_high_shared_mem
9. test_too_high_shared_mem
10. test_kernel_empty_tensor
11. test_kernel_many_args
12. test_basic_compile
13. test_incompatible_code
14. test_static_launch_user_defined_triton_kernels
15. test_empty_tensor
16. test_any
17. test_disable_static_cuda_launcher

In addition to this, the following tests from test/inductor/test_codecache.py also pass:
1. test_remote_cache_load_function_device_cuda_float32_dynamic_False_bundle_triton_False_use_static_cuda_launcher_False
2. test_remote_cache_load_function_device_cuda_float32_dynamic_False_bundle_triton_True_use_static_cuda_launcher_False
3. test_remote_cache_load_function_device_cuda_float32_dynamic_False_bundle_triton_True_use_static_cuda_launcher_True
4. test_remote_cache_load_function_device_cuda_bfloat16_dynamic_False_bundle_triton_False_use_static_cuda_launcher_False
5. test_remote_cache_load_function_device_cuda_bfloat16_dynamic_False_bundle_triton_True_use_static_cuda_launcher_False
6. test_remote_cache_load_function_device_cuda_bfloat16_dynamic_False_bundle_triton_True_use_static_cuda_launcher_True

The following tests are skipped since triton bundling is necessary for StaticCudaLauncher:
1. test_remote_cache_load_function_device_cuda_float32_dynamic_False_bundle_triton_False_use_static_cuda_launcher_True
2. test_remote_cache_load_function_device_cuda_bfloat16_dynamic_False_bundle_triton_False_use_static_cuda_launcher_True

Pull Request resolved: https://github.com/pytorch/pytorch/pull/166492
Approved by: https://github.com/jeffdaily
2025-11-06 19:29:35 +00:00
8523a64c4b Fix python -m build: error: unrecognized arguments: --no-build-isolation (#166848)
Fixes #166326

The PR fixes the following error:
```
python -m build: error: unrecognized arguments: --no-build-isolation
```

The regression was introduced in commit 50d418f69f (diff-e5a6ba9ea3717e5913cd885e81f143937ea727282edd6939479a2a60b1051bf5R73), in the scope of [PR #156712](https://github.com/pytorch/pytorch/pull/156712).

Pull Request resolved: https://github.com/pytorch/pytorch/pull/166848
Approved by: https://github.com/seemethere
2025-11-06 19:13:37 +00:00
9fef18e31d [ROCm] Enable multi-arch compilation and unit tests for AOT Inductor (#166357)
## Summary
This PR adds multi-architecture kernel compilation support for ROCm in PyTorch's AOT Inductor module, enabling a single compiled model to run across multiple AMD GPU architectures (MI200, MI300, MI350, etc.) without recompilation.

## Implementation
- **Multi-arch compilation pipeline**: Compiles LLVM IR to multiple GPU architectures and bundles them using `clang-offload-bundler`
- **Architecture detection**: Automatically detects target architectures from `torch.cuda.get_arch_list()`, with overrides via `PYTORCH_ROCM_ARCH` environment variable
- **ROCm-specific utilities**: New `rocm_multiarch_utils.py` module handles ROCm toolchain integration
- **Test infrastructure**: Adapted AOT Inductor tests to support both CUDA and ROCm compilation paths

## Testing
Successfully tested on:
- MI200
- MI300

**Enabled tests:**
- `test_simple_multi_arch`
- `test_compile_after_package_multi_arch`
- `test_compile_with_exporter`
- `test_compile_with_exporter_weights`

Pull Request resolved: https://github.com/pytorch/pytorch/pull/166357
Approved by: https://github.com/jeffdaily
2025-11-06 19:08:15 +00:00
aaea391b62 [annotate][export] Add annotation to assertion nodes in export (#167171)
Fixes #166906

```
 python test/export/test_export.py -k test_annotate_on_assert
```

The assertions are not marked with annotation because these nodes are created in `apply_runtime_assertion_pass`. Currently the annotation will only be added if the nodes are created during tracing. So we need to manually add the annotation.

Nodes added in `apply_runtime_assertion_pass` will have the same annotation as the input node to the assertion.

Output graph:

Note that `_assert_scalar_default_1` is not annotated because it's an assertion on the size of `x`, which is not annotated.

```
ExportedProgram:
    class GraphModule(torch.nn.Module):
        def forward(self, x: "f32[s77]", y: "i64[]"):
            # No stacktrace found for following nodes
            sym_size_int_1: "Sym(s77)" = torch.ops.aten.sym_size.int(x, 0)

            # Annotation: {'moo': 0} File: /data/users/shangdiy/pytorch/test/export/test_export.py:729 in forward, code: x = torch.cat([x, x])
            cat: "f32[2*s77]" = torch.ops.aten.cat.default([x, x]);  x = None

            # Annotation: {'moo': 0} File: /data/users/shangdiy/pytorch/test/export/test_export.py:730 in forward, code: b = y.item()
            item: "Sym(u0)" = torch.ops.aten.item.default(y);  y = None
            ge_1: "Sym(u0 >= 4)" = item >= 4
            _assert_scalar_default = torch.ops.aten._assert_scalar.default(ge_1, "Runtime assertion failed for expression u0 >= 4 on node 'ge_1'");  ge_1 = _assert_scalar_default = None

            # No stacktrace found for following nodes
            mul_1: "Sym(2*s77)" = 2 * sym_size_int_1;  sym_size_int_1 = None
            le: "Sym(2*s77 <= u0)" = mul_1 <= item;  mul_1 = None
            _assert_scalar_default_1 = torch.ops.aten._assert_scalar.default(le, "Runtime assertion failed for expression 2*s77 <= u0 on node 'le'");  le = _assert_scalar_default_1 = None

            # Annotation: {'moo': 0} File: /data/users/shangdiy/pytorch/test/export/test_export.py:732 in forward, code: return x * b
            mul: "f32[2*s77]" = torch.ops.aten.mul.Tensor(cat, item);  cat = item = None
            return (mul,)

```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167171
Approved by: https://github.com/angelayi
2025-11-06 18:57:30 +00:00
7206668f7c Update torch.var documentation to use modern API (#167209)
## Summary
Fix outdated unbiased parameter references in normalization module documentation. Replace deprecated torch.var(input, unbiased=False/True) with modern torch.var(input, correction=0/1) API throughout BatchNorm, InstanceNorm, LayerNorm, and GroupNorm docstrings.

## Changes
- torch/nn/modules/batchnorm.py: Updated 4 instances across BatchNorm1d, BatchNorm2d, BatchNorm3d, and SyncBatchNorm
- torch/nn/modules/instancenorm.py: Updated 3 instances across InstanceNorm1d, InstanceNorm2d, and InstanceNorm3d
- torch/nn/modules/normalization.py: Updated 2 instances in LayerNorm and GroupNorm

## Test plan
Mathematical behavior remains identical: unbiased=False ≡ correction=0 (biased estimator), unbiased=True ≡ correction=1 (unbiased estimator). Documentation now uses consistent modern API terminology with no functional changes to code behavior.
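A quick check of the stated equivalence (illustrative):

```python
import torch

x = torch.randn(8)
# Biased estimator: old unbiased=False == new correction=0
assert torch.allclose(torch.var(x, unbiased=False), torch.var(x, correction=0))
# Unbiased estimator: old unbiased=True == new correction=1 (the default)
assert torch.allclose(torch.var(x, unbiased=True), torch.var(x, correction=1))
```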

Fixes #166804
Pull Request resolved: https://github.com/pytorch/pytorch/pull/167209
Approved by: https://github.com/albanD
2025-11-06 18:52:22 +00:00
7729de07d3 Build libgomp (gcc-13) from src on AArch64 (#166549)
This improves thread-scaling on AArch64 (see details on #155795)
Fixes: #155795

Pull Request resolved: https://github.com/pytorch/pytorch/pull/166549
Approved by: https://github.com/malfet
2025-11-06 18:31:03 +00:00
73078f305f Add missing super().setUp() (#167163)
In a trunk failure today, we saw the same test running on both trunk and slow shards. The reason is that this test didn't invoke `super().setUp()`, so test features like slow and disabled tests didn't apply to it.

I used Claude to find all test classes with a `setUp()` method that didn't call `super().setUp()` and patched all of them.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167163
Approved by: https://github.com/malfet
2025-11-06 17:55:23 +00:00
ea7add4837 fix static_input_indices subclass remapping under training (#167127)
We have some logic to figure out, given which inputs have static indices in the pre-subclass-desugaring graph, the static indices in the post-subclass-desugaring graph, and it was busted for training.

Separately, we should probably not have to do this logic at all - as @eellison mentioned, inputs/outputs in the graph are less likely to be tweaked through graph passes, so it would be more convenient and less hassle if we just stashed whether a given input was static directly on its Descriptor. I did not end up doing that in this PR though.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167127
Approved by: https://github.com/ezyang
2025-11-06 17:34:35 +00:00
0ed4119420 [ROCm][CI] Run rocm.yml and inductor-rocm.yml every 3rd hour (#167220)
Even after [reducing frequency of rocm.yml and inductor-rocm.yml to per hour](https://github.com/pytorch/pytorch/pull/166870), we are still observing queueing on MI2xx runners as of Nov 6 2025 10:30AM CST:
<img width="470" height="191" alt="{DFECE929-174D-4EE4-9448-D43AA1AF0B53}" src="https://github.com/user-attachments/assets/014b2266-7c60-44e5-9a32-3ebea64232b6" />

We think it's because we had to move the periodic.yml workflow runs to the MI210 runners in light of the Cirrascale runners not being available: https://github.com/pytorch/pytorch/issues/166866. We observe [increased queueing](https://hud.pytorch.org/queue_time_analysis?dateRange=7&startDate=2025-10-30T16%3A00%3A48.381Z&endDate=2025-11-06T16%3A00%3A48.381Z&granularity=hour&chartType=bar&repos=pytorch%2Fpytorch&category=machine_type&machineTypes=linux.rocm.gpu.2&items=linux.rocm.gpu.2) after the point where we added periodic jobs to the MI210 runners.

<img width="453" height="252" alt="linux rocm gpu 2_queueing" src="https://github.com/user-attachments/assets/532984cf-046b-4a02-a096-f17364632da3" />

This PR temporarily changes the rocm.yml and inductor-rocm.yml workflows to run on a 3-hourly basis rather than every hour, until the Cirrascale outage is resolved.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167220
Approved by: https://github.com/jeffdaily
2025-11-06 17:23:23 +00:00
03fd2b796e [Flight Recorder] Reverted to include stack traces for dump pipe triggered FR dump (#167023)

Summary:

We should also retry if including stack traces failed. The change was introduced in https://github.com/pytorch/pytorch/pull/164591

Test Plan: eyes

Reviewed By: fduwjj

Differential Revision: D86248484
2025-11-06 09:16:29 -08:00
fd7bf9ce10 [Inductor] Fix unbacked float symbol handling in kernel codegen (#166890)
When a fn compiled with `torch.compile` calls `.item()` on a float tensor arg (e.g., for thresholds in `torch.clamp`), the generated triton kernel references an unbacked float symbol (e.g., `zuf0`) that was never added to the kernel's parameter list, causing a compilation error.
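An illustrative repro in the spirit of the description above (not the exact test from the linked issues; `capture_scalar_outputs` is assumed so `.item()` stays in the graph):

```python
import torch

torch._dynamo.config.capture_scalar_outputs = True  # keep .item() in the graph


@torch.compile
def clamp_with_tensor_threshold(x, threshold):
    lo = threshold.item()  # produces an unbacked float symbol (e.g. zuf0) in the kernel
    return torch.clamp(x, min=lo)


print(clamp_with_tensor_threshold(torch.randn(8), torch.tensor(0.25)))
```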

Fixes: #166888 #163674

Pull Request resolved: https://github.com/pytorch/pytorch/pull/166890
Approved by: https://github.com/eellison, https://github.com/mlazos
2025-11-06 17:14:31 +00:00
41c9eeecec Update Sphinx dependencies (#164901)
This pull request updates the PyTorch documentation build system to support newer versions of Sphinx and its related dependencies, improves coverage checking for undocumented objects, and adds configuration enhancements to the docs build. The most important changes are grouped below.

**Dependency Upgrades and Compatibility:**

* Upgraded `sphinx` to version 7.2.6 and updated related documentation dependencies (`breathe`, `exhale`, `docutils`, `myst-nb`, `sphinx-design`, `myst-parser`, and others) in `.ci/docker/requirements-docs.txt` to ensure compatibility with Python 3.13 and improve documentation generation. [[1]](diffhunk://#diff-b5577a8e38a2e4c5d91865096b259738cc1dbcb97921abb73045dae0255b1479L1-L12) [[2]](diffhunk://#diff-b5577a8e38a2e4c5d91865096b259738cc1dbcb97921abb73045dae0255b1479L39-R45) [[3]](diffhunk://#diff-b5577a8e38a2e4c5d91865096b259738cc1dbcb97921abb73045dae0255b1479L59-R64)
* Replaced the editable install of `pytorch_sphinx_theme2` with a pinned version for stability in documentation builds.

**Documentation Coverage and Build Improvements:**

* Updated the coverage check logic in `.ci/pytorch/python_doc_push_script.sh` to parse the new Sphinx 7.2.6+ coverage report format, extracting the undocumented count from the statistics table for more reliable coverage validation.

**Configuration and Formatting Enhancements:**

* Introduced `autosummary_filename_map` in `docs/source/conf.py` to resolve duplicated autosummary output filenames for functions and classes with the same name, improving documentation clarity.

**Minor Documentation Formatting:**

* Removed an unused `:template:` directive from `docs/source/quantization-support.md` for cleaner autosummary output.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/164901
Approved by: https://github.com/albanD
2025-11-06 17:14:26 +00:00
bfc0ba4af9 nn.Linear: nD contiguous input + bias -- dispatch to addmm also when weight is sparse (#166071)
As per title.

It seems safe to generalize to arbitrary contiguous inputs, since `at::matmul` is likely to do the flattening to avoid `baddmm`.

Additionally, we guard for the bias to be 1D and contiguous, which guarantees it is fused with no copies.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/166071
Approved by: https://github.com/ngimel
2025-11-06 16:50:12 +00:00
3fdc5dbf1d Make CUDA preload logic more straightforward (#167046)
I.e., remove the distinction between the two cases and always preload the full set of libraries.
For some reason, when one uses `virtualenv` instead of `venv`,
preloading `cudart` works, but it fails to find cudnn or cublasLT later on

Fix it by getting rid of the partial preload logic for one of the cases and always preloading the full set of libraries.

Test plan on stock Ubuntu:
```
pip install virtualenv
virtualenv --symlinks -p python3.11 --prompt virtv venv-virt
source venv-virt/bin/activate
pip install torch
python -c 'import torch'
```

Fixes https://github.com/pytorch/pytorch/issues/165812
Pull Request resolved: https://github.com/pytorch/pytorch/pull/167046
Approved by: https://github.com/atalman
2025-11-06 16:30:16 +00:00
cc477f6009 [inductor] Use runtime estimations in iterative sink waits pass (#167081)
Split of https://github.com/pytorch/pytorch/pull/162469 to be under 2K
reorder iterative part

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167081
Approved by: https://github.com/eellison
ghstack dependencies: #167080
2025-11-06 16:14:48 +00:00
7b055a0103 Add per_process_memory_fraction to PYTORCH_CUDA_ALLOC_CONF (#161035)
torch.cuda.memory.set_per_process_memory_fraction allows setting
an upper bound on how much device memory is allocated. This PR
exposes this setting via an environment variable.

For example, PYTORCH_CUDA_ALLOC_CONF="per_process_memory_fraction:0.5"
will limit the device memory to half of the available memory.
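A usage sketch pairing the new env var with the existing programmatic API it mirrors (script structure is illustrative):

```python
import os

# Set before the first CUDA allocation so the allocator picks it up
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "per_process_memory_fraction:0.5"

import torch

# Equivalent programmatic form that was already available
torch.cuda.memory.set_per_process_memory_fraction(0.5)
```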

Pull Request resolved: https://github.com/pytorch/pytorch/pull/161035
Approved by: https://github.com/ngimel, https://github.com/eqy
2025-11-06 16:10:16 +00:00
da2eb31b82 [MTIA][PyTorch] Add mtia as native device for PyTorch tests (#167089)
Summary: Add MTIA as a native device type in PyTorch.

Test Plan: CI

Reviewed By: PatriceVignola

Differential Revision: D80111801

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167089
Approved by: https://github.com/andyanwang, https://github.com/nautsimon, https://github.com/albanD
2025-11-06 15:43:45 +00:00
2005b5f548 [inductor] Use runtime estimations in iterative reorder collectives pass (#167080)
Split of https://github.com/pytorch/pytorch/pull/162469 to be under 2K
reorder iterative part

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167080
Approved by: https://github.com/eellison
2025-11-06 14:20:49 +00:00
b2d72a4008 Revert "Don't hardcode double argument for reduction base (#166951)"
This reverts commit a74fe75c450277eb88a95c764e8b0a664a550a86.

Reverted https://github.com/pytorch/pytorch/pull/166951 on behalf of https://github.com/facebook-github-bot due to Diff reverted internally ([comment](https://github.com/pytorch/pytorch/pull/166951#issuecomment-3497253260))
2025-11-06 13:26:04 +00:00
80ec2ab78e [8/N] Fix unused loop variables in tests (#166921)
This PR continues to fix or remove unused loop variables in tests.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/166921
Approved by: https://github.com/mlazos
2025-11-06 12:20:00 +00:00
c724f0097d [2/N] Use key in dict for existence checks (#167174)
This PR uses `key in dict` expressions for existence checks of dict elements in Python code. This operation is more efficient than `key in dict.keys()`.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167174
Approved by: https://github.com/mlazos
2025-11-06 12:13:47 +00:00
a51208c656 Check cluster_dims attribute exists before access (#167187)
Error in Helion CI's AMD job: https://github.com/pytorch/helion/actions/runs/19118581048/job/54633730633
```
>                   (binary.metadata.num_ctas, *binary.metadata.cluster_dims)
                                                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
                    if hasattr(binary, "metadata")
                    else ()
                )
            ),
            "function": get_first_attr(binary, "function", "cu_function"),
            "runner": get_first_attr(binary, "run", "c_wrapper"),
            "math": math_lib,
            "torch": torch_lib,
            "triton": triton_lib,
        }
E       torch._inductor.exc.InductorError: AttributeError: 'KernelMetadata' object has no attribute 'cluster_dims'
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167187
Approved by: https://github.com/oulgen
2025-11-06 08:02:57 +00:00
ed4aa449b6 CustomOp Inline Fusion (#165952)
Add Inline Fusion Support for Custom Op Autotuning
--------------------------------------------------

This PR extends PyTorch Inductor's custom op autotuning with inline fusion capabilities, enabling the winning decomposition to be inlined directly into the computation graph for fusion with surrounding operations.

### Usage

```python

def decompose_k_implementation(
    a: torch.Tensor, b: torch.Tensor, k_splits: int = 4
) -> torch.Tensor:
    """Matrix multiply with k-way decomposition."""
    ...

@torch.library.custom_op("my_lib::matmul_relu", mutates_args={})
def custom_matmul_relu_dk(
    a: torch.Tensor, b: torch.Tensor, k_splits: int
) -> torch.Tensor:
    return torch.relu(decompose_k_implementation(a, b, k_splits))

register_custom_op_autotuning(
    custom_op=custom_matmul_relu_dk,
    configs=[
        CustomOpConfig(k_splits=2),
        CustomOpConfig(k_splits=4),
        CustomOpConfig(k_splits=8),
        CustomOpConfig(k_splits=32),
        CustomOpConfig(k_splits=64),
    ],
    name="decompose_k_autotuned",
    input_gen_fns={
        "a": lambda fake: torch.randn_like(fake, device='cuda'),
        "b": lambda fake: torch.randn_like(fake, device='cuda'),
    }
)
```

### How It Works
Enable optimizations from Inductor by inlining the best decomposition, allowing fusion with surrounding elementwise operations and other graph-level optimizations. This provides potentially better performance and memory efficiency.
During the custom op autotuning phase, we still benchmark all CustomOpConfigs to find the fastest implementation. Then, during inline fusion, Inductor inlines the decomposition into the main graph, converting the winning choice to individual ComputedBuffer IR nodes (fusable). At the end, Inductor automatically fuses the inlined operations with surrounding elementwise ops (e.g., bias add, ReLU, scaling). Note that the winning choice must be a SubgraphChoiceCaller (decomposition-based) rather than an ExternKernelChoice for inlining to work; if an ExternKernelChoice is returned, no inlining happens.

Performance Results
Benchmarked on matmul+relu workload with decompose-k fusion (H100 GPU, 15 test shapes):
<img width="782" height="377" alt="Screenshot 2025-11-04 at 12 43 11 AM" src="https://github.com/user-attachments/assets/22131d4c-a8ce-4f55-bdcd-ac758ddad8cd" />

Metric | Result
-- | --
Average Speedup vs ATen | 1.28x
Max Speedup vs ATen | 1.41x


The performance comparisons are detailed in the plots below. On most use cases, inline fusion achieves better performance than the ATen baseline and the current torch.compile.
<img width="4874" height="3545" alt="image" src="https://github.com/user-attachments/assets/190a1233-412f-4f34-84cd-9b7cb582f504" />

**Test**: `test_decompose_k_with_fusion` demonstrates decompose-k with inline fusion enabled.

--------------

### Integration into mm.py decomposeK with a flag enable_inline_subgraph_fusion=True in config (deprecated to avoid breaking async compilation; already removed from this PR)
FP32:
<img width="738" height="357" alt="Screenshot 2025-11-04 at 12 05 08 AM" src="https://github.com/user-attachments/assets/ee421d22-c426-42f2-8dcd-4dcc547d6219" />
FP16:
<img width="769" height="403" alt="Screenshot 2025-11-04 at 12 13 49 AM" src="https://github.com/user-attachments/assets/346d1ffc-15af-40b0-9378-cf9b297711c2" />

The TCF column represents torch compile fusion, which is close to custom_op decomposek. The difference might be due to different candidate k values.

#### Usage:
Note: this only applies when benchmark_epilogue_fusion is disabled, i.e., when not using multi_template_buffer.

```python
import torch
from torch._inductor import config  # assumed import; the original snippet references `config` directly

# Illustrative inputs (shapes/devices are assumptions, not from the original snippet)
a = torch.randn(1024, 1024, device="cuda")
b = torch.randn(1024, 1024, device="cuda")

# Define the matmul+relu function
def matmul_relu(x, y):
    return torch.nn.functional.relu(torch.matmul(x, y))

# Compile with inline subgraph fusion enabled
@torch.compile
def compiled_matmul_relu(x, y):
    return matmul_relu(x, y)

# Reset dynamo to ensure clean compilation
torch._dynamo.reset()

with config.patch(
    {
        "max_autotune": True,
        # CRITICAL: These two flags enable inline subgraph fusion
        "benchmark_epilogue_fusion": False,  # Must be False for inline fusion!
        "enable_inline_subgraph_fusion": True,  # Enable inline fusion
    }
):
    # Compile and run
    result = compiled_matmul_relu(a, b)
    torch.cuda.synchronize()

Pull Request resolved: https://github.com/pytorch/pytorch/pull/165952
Approved by: https://github.com/PaulZhang12, https://github.com/eellison
2025-11-06 06:59:10 +00:00
9eebda944d make narrow_tensor_symint DDE-free (#166379)
https://github.com/pytorch/pytorch/issues/158081

Pull Request resolved: https://github.com/pytorch/pytorch/pull/166379
Approved by: https://github.com/Lucaskabela
ghstack dependencies: #166361
2025-11-06 06:09:22 +00:00
09d8953fb4 Update tensorpipe submodule (#167108)
To pick up a single change, 2b4cd91092, that should fix compilation errors with clang-21.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/167108
Approved by: https://github.com/Skylion007
2025-11-06 06:08:13 +00:00
8b2365094d Expose torch.compiler.config.force_disable_caches as a public API (#166699)
Exposing this flag because some upstream frameworks (like vLLM) could benefit from knowing whether torch.compile caches are enabled in order to adjust their own caching behavior.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/166699
Approved by: https://github.com/oulgen, https://github.com/mlazos
2025-11-06 05:59:05 +00:00
7b423c2d21 [user-streams] Mark stream ops as side effectful (#167152)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/167152
Approved by: https://github.com/Lucaskabela
ghstack dependencies: #167141, #167151
2025-11-06 05:03:18 +00:00
46b3f913b3 [user-streams] Add record/wait ops (#167151)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/167151
Approved by: https://github.com/Lucaskabela
ghstack dependencies: #167141
2025-11-06 05:03:18 +00:00
f7b7f40a6f [user-streams] Enable stream ops to work in eager (#167141)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/167141
Approved by: https://github.com/Lucaskabela
2025-11-06 05:03:18 +00:00
91337ae3ff [audio hash update] update the pinned audio hash (#167031)
This PR is auto-generated nightly by [this action](https://github.com/pytorch/pytorch/blob/main/.github/workflows/nightly.yml).
Update the pinned audio hash.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/167031
Approved by: https://github.com/pytorchbot
2025-11-06 04:57:05 +00:00
eea951758f [dynamo, 3.14] disable dynamo cpython tests in 3.14 (again) (#167000)
The previous PR was not enough to prevent errors caused by cpython dynamo tests in 3.14
Pull Request resolved: https://github.com/pytorch/pytorch/pull/167000
Approved by: https://github.com/mlazos, https://github.com/guilhermeleobas
2025-11-06 04:34:33 +00:00
3feea296a5 torch.fx: add debug-level logging to Interpreter.run_node (#117351) (#166622)
### Summary
Adds a debug-level logging statement to torch.fx.Interpreter.run_node, as proposed in [#117351](https://github.com/pytorch/pytorch/issues/117351), to make FX graph execution traceable when debugging or instrumenting model transformations.

When debug logging is enabled, each executed node emits a single structured log line formatted via `LazyString(lambda: n.format_node())`, deferring string construction unless logging is active.

### Example Output
With `logging.DEBUG` enabled:

```
run_node x = x()
run_node add = _operator.add(x, 1)
run_node clamp = torch.clamp(add, min=0.0, max=5.0)
run_node output = output(clamp)
```

With `logging.DEBUG` disabled no additional output is produced (unchanged default behavior).
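A minimal way to trigger the output above, assuming a plain `logging` setup is enough to surface the `torch.fx` debug logger:

```python
import logging

import torch
import torch.fx as fx

logging.basicConfig(level=logging.DEBUG)  # assumption: default logging config surfaces the new debug lines


def f(x):
    return torch.clamp(x + 1, min=0.0, max=5.0)


gm = fx.symbolic_trace(f)
fx.Interpreter(gm).run(torch.zeros(3))
```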

### Test Plan

Verified locally with Python 3.11 on macOS using a PyTorch build from source.

- With `logging.DEBUG` enabled: each node emits a debug log via LazyString.
- With `logging.DEBUG` disabled: no additional output.
- Confirmed all `Interpreter` tests pass locally:
`pytest test/test_fx.py -k "Interpreter"`

Updated the example output to reflect the new `_format_fx_node` helper and inclusion of `kwargs`.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/166622
Approved by: https://github.com/aorenste
2025-11-06 04:33:09 +00:00
c3c3653418 [1/N] Add return types of Python functions (#167162)
This PR adds return types of some Python functions. Most of them return `None`. The types were added automatically by ruff `ANN` rules.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167162
Approved by: https://github.com/Lucaskabela
2025-11-06 04:32:14 +00:00
f72772b184 [PP] make runtime dbg log print custom actions (#167113)
Previously the log only printed if the default implementation for an action was used; now it prints before dispatching to custom registered actions.

Tested by running on the autoparallel graph runner and observing the forward pass action being logged.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167113
Approved by: https://github.com/sanketpurandare, https://github.com/Skylion007
2025-11-06 04:20:50 +00:00
981dd71893 Refactor: extract OperatorArgsKwargsView from parseIValuesToPyArgsKwargs (#166368)
Intended to make it easier to reuse this logic for processing operator arguments as IValues in following PR(s).

Testing: python test/test_python_dispatch.py (broke during development, seems to work now)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/166368
Approved by: https://github.com/albanD
2025-11-06 04:18:54 +00:00
d31599f40b [7/N] Fix unused loop variables in tests (#167043)
This PR continues to fix or remove unused loop variables in tests.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167043
Approved by: https://github.com/Lucaskabela
2025-11-06 03:36:59 +00:00
85fab6c9b0 Fix duplicate benchmarking entries for addmm (#166652)
There have been duplicate entries for addmm in the dashboard. This PR fixes the duplicate entries issue.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/166652
Approved by: https://github.com/yangw-dev
2025-11-06 03:25:03 +00:00
c08ce30d18 [ci][cpu] Update compiler to GCC-13 in jammy-aarch64 (#166849)
This is needed because manylinux has used GCC-13 since #152825.
As a result of the current compiler version mismatch, we've seen tests passing in jammy-aarch64 pre-commit CI but failing for wheels built in manylinux.
Related to: #166736

Pull Request resolved: https://github.com/pytorch/pytorch/pull/166849
Approved by: https://github.com/robert-hardwick, https://github.com/malfet, https://github.com/Skylion007, https://github.com/atalman
2025-11-06 03:14:16 +00:00
e1a1aeaf5b [1/N] Use key in dict for existence checks (#167035)
This PR uses `key in dict` expressions for existence checks of dict elements in Python code. This operation is more efficient than `key in dict.keys()`.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167035
Approved by: https://github.com/janeyx99
2025-11-06 02:25:10 +00:00
943227f57b [c10d] Fix split_group bug by having the parent pg option deep copied (#167125)
Summary: Inside the group_split API, we share the reference to the PG option with the parent PG if a PG option is not explicitly specified. This is bad because if we split the parent PG multiple times, we will run into errors.

Test Plan: UT + internal test.

Differential Revision: D86225394

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167125
Approved by: https://github.com/Skylion007
2025-11-06 02:08:05 +00:00
3a2d75a086 Change template 'Release highlight for proposed Feature'->'New Feature for Release' (#167145)
Makes it simpler and clearer.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167145
Approved by: https://github.com/huydhn
2025-11-06 02:01:57 +00:00
69af74972b Bugfix to forward autodiff causing different datatype 2 (#165784)
Fixes #160513

## The Problem Summary
The issue boiled down to data type promotion logic. The code base has two different functions that deal with dtype promotion. For purely multi-dimensional tensor operations, the cpp code gets triggered, which follows the numpy dtype promotion logic. That is why in #160513 NDim tensors are fine, as NDim dtypes get precedence. The issue came with python scalars and 0Dim tensors. When it detects "scalars", a python implementation of the dtype promotion logic gets triggered (torch/_prims_common/__init__.py:1544). Since this is in python, the implementation can't distinguish a wrapped tensor from a 0Dim tensor and thus just takes the highest dtype, which is that of the python double wrapped number.

## The Fix
The python implementation for dtype promotion had to know where the scalar came from; once the scalar can be distinguished, the appropriate dtype can be set. The first approach was to try to expose the `is_wrapped_number` method, but this came with a big issue. During `forward_ad`, the derivatives of those scalars turned out to be `ZeroTensor`s. The `ZeroTensor` internally uses a hack to initialize a meta dtype tensor, which skips expensive dispatch operations, but the copy would not grab everything, especially the `is_number_wrapped_` property. I thought about modifying the copy, but that seemed to go against the spirit of what the copy was intended for; moreover, the tests for `is_wrapped_number_` require `dim > 0`, and a scalar `ZeroTensor` is a meta dtype tensor, which complicates things.

So I chose the route of creating a new property called `was_wrapped_number` and exposed this property to the python tensor API. I had to modify the autograd code generation to set `was_wrapped_number` in the mul, add, and div operations in  `VariableType.cpp`.  Once this property was set, the dtype promotion logic could be updated to consider wrapped numbers and 0Dim numbers. Once that hierarchy was taken care of, the buggy behavior was fixed.

I wrote a new ops testing module `TestForwardADWithScalars`. This bug was unique and required a new testing paradigm. It only tests multiply, add, and divide; I chose these because all operations boil down to these three.

[edit]: Just used the `efficientzerotensor` meta and converted that to a python number. Since the wrapped number is converted back to a python number, dtype promotion is preserved. This was achieved by marking the forward grad zero tensor of a wrapped number with the wrapped-number flag, since the tangent of a wrapped number should still be a wrapped number. After that, this specific zero tensor was sent through as a meta type in `BinaryOps.cpp` to get the appropriate dtype for the resulting arithmetic.
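A repro-style sketch of the scenario (assumption: the expected behavior is that multiplying a float32 dual tensor by a Python scalar keeps float32 for both primal and tangent):

```python
import torch
import torch.autograd.forward_ad as fwAD

x = torch.tensor([1.0, 2.0], dtype=torch.float32)
t = torch.ones_like(x)

with fwAD.dual_level():
    dual = fwAD.make_dual(x, t)
    out = dual * 3.0  # Python scalar is a wrapped number; it should not promote to float64
    primal, tangent = fwAD.unpack_dual(out)
    assert primal.dtype == torch.float32
    assert tangent.dtype == torch.float32
```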

@ezyang @OihanJoyot

Pull Request resolved: https://github.com/pytorch/pytorch/pull/165784
Approved by: https://github.com/ezyang
2025-11-06 01:59:53 +00:00
7432676187 [MPS] Fix crash in BCELoss backwards with reduction="none" and inputs with trailing 1s in shape (#166786)
Fixes #166746 by removing squeezes that caused shape mismatches when calling backwards through `BCELoss(reduction='none')`.

Based on running these tests, it seems MPSGraph can handle inputs without squeezing.
```
python test/test_mps.py TestMPS -k test_bce
python test/test_mps.py TestConsistency -k binary_cross
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/166786
Approved by: https://github.com/malfet
2025-11-06 01:55:38 +00:00
fd5edda1ed Reland "Add model code stack trace to torch.profile (#166677)" (#167110)
```python
python test/test_fx.py -k profiler
```

Insert `torch._C._profiler._RecordFunctionFast` calls into fx graph codegen.

We post-process the profiler dump using `map_recorded_events_to_aten_ops_with_stack_trace` to add stack traces to the dumped trace.

`map_recorded_events_to_aten_ops_with_stack_trace` queries `fx.traceback._FX_METADATA_REGISTRY` for node metadata. Each graph module has a hash'd fake file name (e.g. `fx_generated__iv4zodvbcmdkhx77jrg7h2f2opebujhfmc6tf6nx7vioq244baw.py`), which is the key to the registry.

One can do `fx_g.enrich_profiler_metadata()` to add debugging info. Or `fx_g.enrich_profiler_metadata(enable=False)` to remove.

`aot_eager` calls `fx_g.enrich_profiler_metadata()` if TORCH_ENRICH_RPOFILER_STACK_TRACE is set or `_dynamo.config.enrich_profiler_metadata=True`.

<img width="1188" height="565" alt="Screenshot 2025-10-31 at 4 40 52 PM" src="https://github.com/user-attachments/assets/41e8113f-3e6d-439b-bffd-cfbf0c03a47a" />

Example code gen'd.
```
def forward(self, args_list):
    args_iter = iter(args_list)
    arg0_1 = next(args_iter)
    arg1_1 = next(args_iter)
    args_list.clear()
    _rf = torch._C._profiler._RecordFunctionFast('## fx_generated__iv4zodvbcmdkhx77jrg7h2f2opebujhfmc6tf6nx7vioq244baw.py ##'); _rf.__enter__()
    repeated_subgraph0 = self.repeated_subgraph0
    _rf_invoke_subgraph = torch._C._profiler._RecordFunctionFast('## 3 ##'); _rf_invoke_subgraph.__enter__()
    invoke_subgraph = torch.ops.higher_order.invoke_subgraph(repeated_subgraph0, 'subgraph_0', arg0_1, arg1_1);  repeated_subgraph0 = arg0_1 = arg1_1 = None
    _rf_invoke_subgraph.__exit__(None, None, None)
    _rf_getitem = torch._C._profiler._RecordFunctionFast('## 4 ##'); _rf_getitem.__enter__()
    getitem = invoke_subgraph[0];  invoke_subgraph = None
    _rf_getitem.__exit__(None, None, None)
    return (getitem,)
    _rf.__exit__(None, None, None)

def forward(self, arg0_1, arg1_1):
    _rf = torch._C._profiler._RecordFunctionFast('## fx_generated__ozpadpj5cxoalxeyopej33g2vvtvhxg4xsk7bhx7ldmcibtybyn.py ##'); _rf.__enter__()
    _rf_mul = torch._C._profiler._RecordFunctionFast('## 2 ##'); _rf_mul.__enter__()
    mul = torch.ops.aten.mul.Tensor(arg0_1, arg1_1);  arg0_1 = arg1_1 = None
    _rf_mul.__exit__(None, None, None)
    _rf_sin = torch._C._profiler._RecordFunctionFast('## 3 ##'); _rf_sin.__enter__()
    sin = torch.ops.aten.sin.default(mul);  mul = None
    _rf_sin.__exit__(None, None, None)
    _rf_add = torch._C._profiler._RecordFunctionFast('## 4 ##'); _rf_add.__enter__()
    add = torch.ops.aten.add.Tensor(sin, 5);  sin = None
    _rf_add.__exit__(None, None, None)
    return (add,)
    _rf.__exit__(None, None, None)

```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167110
Approved by: https://github.com/pianpwk
2025-11-06 01:14:27 +00:00
872d1daec2 Avoid DDE in narrow with unbacked start (#166361)
Slice knows how to handle an unbacked start; we do not need to offset start before calling slice, we can leave that to slice.
The only edge case is when start < 0 and start + length == 0: in that case slice and narrow would deviate,
so for that case we pass dim_size instead of start + length.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/166361
Approved by: https://github.com/aorenste
2025-11-06 01:04:19 +00:00
6cd57e6fc2 [cuBLAS] Force tensor-core-no-reduction algo in cuBLASLt for n=1 cases (#166735)
Ostensibly useful for batch-invariance purposes

Pull Request resolved: https://github.com/pytorch/pytorch/pull/166735
Approved by: https://github.com/ngimel
2025-11-06 00:50:42 +00:00
d29efba8fa Move almalinux docker image to DEVTOOLSET 13 (#167018)
1. Update general Almalinux image to Devtoolset 13.
2. Fix ROCm images that were missing devtoolset-13.
This image is used by Linux Job in test-infra.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/167018
Approved by: https://github.com/sudharssun, https://github.com/d4l3k
2025-11-06 00:34:40 +00:00
a344069f2a Add missing skipIf(not PLATFORM_SUPPORTS_MEM_EFF_ATTENTION) to test/test_transformers.py (#166969)
This PR adds missing skips for efficient attention tests.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/166969
Approved by: https://github.com/jeffdaily
2025-11-05 23:16:51 +00:00
af829c0dad [ROCm] Skip nvfp4 tests on ROCm (#167066)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/167066
Approved by: https://github.com/jeffdaily, https://github.com/slayton58
2025-11-05 23:15:17 +00:00
3869aa115b fix fr reset api (#166970)
Summary:
- there are various places that access fr's `entries_` field
- if we empty `entries_` on reset, those accesses can result in an error
- so we only perform a soft delete instead of clearing out the entries completely
  - only reset `id_` on the reset
  - keep track of a `reset_epoch` which increments every time reset is called
  - `dump_entries` only returns entries from the latest epoch
  - APIs that access entries also check if the reset epoch matches
- make `next_` always track the index in the circular buffer - this change was needed to make the soft delete's implementation easier (see the sketch below)
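A hedged, standalone sketch of the soft-delete idea; illustrative only, Flight Recorder's actual C++ implementation differs:

```python
class RingLog:
    def __init__(self, capacity: int):
        self.entries = [None] * capacity
        self.next_ = 0        # always the raw index into the circular buffer
        self.id_ = 0
        self.reset_epoch = 0  # bumped on reset instead of clearing entries

    def record(self, entry):
        self.entries[self.next_ % len(self.entries)] = (self.reset_epoch, entry)
        self.next_ += 1
        self.id_ += 1

    def reset(self):
        # soft delete: old entries stay in the buffer but belong to an older epoch
        self.id_ = 0
        self.reset_epoch += 1

    def dump_entries(self):
        # only return entries recorded since the latest reset
        return [entry for epoch, entry in filter(None, self.entries)
                if epoch == self.reset_epoch]
```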

---
[//]: # (BEGIN SAPLING FOOTER)
Stack created with [Sapling](https://sapling-scm.com). Best reviewed with [ReviewStack](https://reviewstack.dev/pytorch/pytorch/pull/166970).
* #166972
* #166971
* __->__ #166970

Pull Request resolved: https://github.com/pytorch/pytorch/pull/166970
Approved by: https://github.com/fduwjj
2025-11-05 23:06:00 +00:00
47eb34b7ac [ATEN][CUDA] Reduce register pressure in radix_sort_pairs to improve torch.sort performance (#167094)
# Summary
This PR improves `torch.sort` and `torch.unique` performance by **15% to 50%** on NVIDIA GPUs by optimizing CUDA register allocation in radix sort operations.

The key change: specialize `OpaqueType<N>` to use native integer types (uint8_t, uint16_t, uint32_t, uint64_t) for common sizes (1, 2, 4, 8 bytes) instead of `char data[N]`. This enables more efficient register allocation while preserving the template deduplication strategy.

The following table shows the speedup on various input shapes and GPUs. Sorting is performed on the last dimension, and baseline torch version is 2.9.0.

| GPU  | input shape | input dtype | **Before** **(ms)** | After (ms) | Speedup |
| ---- | ----------- | ----------- | ------------------- | ---------- | ------- |
| H100 | (16, 1e6)   | int32       | 1.61                | 1.37       | 1.18×   |
| H100 | (1, 1e8)    | int32       | 6.6                 | 5.0        | 1.3×    |
| H20  | (16, 1e6)   | int64       | 3.57                | 3.03       | 1.18×   |
| H20  | (1, 1e8)    | int64       | 19.3                | 13.0       | 1.48×   |
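
A rough sketch of how timings like these could be reproduced (the benchmarking harness below is an assumption, not taken from this PR; it requires a CUDA device):

```python
import torch

def time_sort(shape, dtype, iters=20):
    x = torch.randint(0, 2**31 - 1, shape, dtype=dtype, device="cuda")
    for _ in range(3):                      # warmup
        torch.sort(x, dim=-1)
    start = torch.cuda.Event(enable_timing=True)
    end = torch.cuda.Event(enable_timing=True)
    start.record()
    for _ in range(iters):
        torch.sort(x, dim=-1)
    end.record()
    torch.cuda.synchronize()
    return start.elapsed_time(end) / iters  # milliseconds per call

print(time_sort((16, 1_000_000), torch.int32))
print(time_sort((1, 100_000_000), torch.int32))
```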

# Analysis

`torch.sort` and `torch.unique` use `radix_sort_pairs`, which internally calls `cub::DeviceRadixSort::SortPairs`. Since values are only copied (never compared), we cast them to `OpaqueType<sizeof(value_t)>` to minimize template instantiations. For example, both `int32` and `float32` values map to the same `OpaqueType<4>`.

## The Problem

The previous `char data[N]` implementation causes inefficient register allocation. Here is one reason I found from the SASS code. For 8-byte types:

- `char data[8]`: Compiler may allocate 8 registers (one per byte)

- `uint64_t data`: Compiler allocates 2 registers (standard 64-bit handling)

This happens because the compiler doesn't recognize char[8] as a cohesive 64-bit value, treating each byte independently, which increases register pressure and reduces GPU occupancy.

From Nsight Compute, when using `char data[8]`, the registers per thread is 166, and corresponding theoretical occupancy is 18.75%. When using native `uint64_t`, the registers per thread is 80, and corresponding theoretical occupancy is 37.5%.

## The Solution

Specialize `OpaqueType<N>` for common sizes using native integer types:

```
// Before
template <int N> struct alignas(N) OpaqueType { char data[N]; };

// After
template <int N> struct alignas(N) OpaqueType { char data[N]; }; // fallback
template <> struct alignas(1) OpaqueType<1> { uint8_t data; };
template <> struct alignas(2) OpaqueType<2> { uint16_t data; };
template <> struct alignas(4) OpaqueType<4> { uint32_t data; };
template <> struct alignas(8) OpaqueType<8> { uint64_t data; };
```

This preserves the template deduplication strategy (all 8-byte types still use the same `OpaqueType<8>` instantiation) while enabling better register allocation.

# Testing & Compatibility
## Testing:
- Correctness tests pass for various input types (bfloat16, int32, float32, int64), shapes, and dimensions (1, 2, 3)
- Register usage reduction verified with Nsight Compute
- Linter passes
## Compatibility:
- No API/ABI changes
- Template instantiation count unchanged

# Reference
For detailed analysis, please refer to my previous blog: [Performance Optimization of torch.sort on GPU](https://yywangcs.notion.site/Performance-Optimization-of-torch-sort-on-GPU-192fc9f5d8058018a1bec1efa35da3f9)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167094
Approved by: https://github.com/ngimel, https://github.com/Skylion007
2025-11-05 22:34:19 +00:00
08200280ce [CP][BE][3/N] Add _templated_ring_attention to the backward compatibility stub (#166991)
While `_templated_ring_attention` is a private API, it is unfortunately used by some packages.
Add it to `__all__` so that people can still use it.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/166991
Approved by: https://github.com/XilunWu
ghstack dependencies: #166456, #166501
2025-11-05 22:22:55 +00:00
ad7a57262c [12/N] Apply ruff UP035 rule (#166929)
This PR continues to apply ruff UP035 rule to test code and some remaining torch files.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/166929
Approved by: https://github.com/Lucaskabela
2025-11-05 22:06:19 +00:00
711a775878 fix nccl estimations (#167093)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/167093
Approved by: https://github.com/kwen2501, https://github.com/eellison
2025-11-05 22:01:49 +00:00
e9a688f02e [DebugMode] output, tensor id annotations for DebugMode (#165076)
Adds optional "node" id for tensors, output info annotations to DebugMode, with `DebugMode(record_output=True, record_ids=True)`

Example output for `test_debug_mode_mm`, with both enabled:
```
  torch.mm(dt$0: f32[8, 8]| S(0), dt$1: f32[8, 32]| S(0))  ->  dt$12: f32[8, 32]| S(0)
    aten::mm(dt$2: f32[8, 8]| S(0), dt$3: f32[8, 32]| S(0))
      redistribute_input(1, S(0) -> R)
        redistribute_input(t$4: f32[1, 32], trace: S(0)->R)
          _c10d_functional::all_gather_into_tensor(t$5: f32[1, 32], 8, 0)  ->  t$6: f32[8, 32]
          _c10d_functional::wait_tensor(t$7: f32[8, 32])  ->  t$8: f32[8, 32]
      aten::mm(t$9: f32[1, 8], t$10: f32[8, 32])  ->  t$11: f32[1, 32]
  <method 'sum' of 'torch._C.TensorBase' objects>(dt$13: f32[8, 32]| S(0))  ->  dt$17: f32[]| P
    aten::sum(dt$14: f32[8, 32]| S(0))
      aten::sum(t$15: f32[1, 32])  ->  t$16: f32[]"""
```

Sadly the only way to get DTensor op outputs is to set `record_torchfunction=True`, as dispatch calls just defer to DTensor's dispatch logic.
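
A minimal usage sketch with plain tensors (the two flags come from the description above; everything else, including the `debug_string()` accessor, is assumed from existing DebugMode usage):

```python
import torch
from torch.utils._debug_mode import DebugMode

with DebugMode(record_output=True, record_ids=True) as dm:
    a = torch.randn(8, 8)
    b = torch.randn(8, 32)
    (a @ b).sum()

# Prints the annotated call log, with tensor ids and output annotations.
print(dm.debug_string())
```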

Pull Request resolved: https://github.com/pytorch/pytorch/pull/165076
Approved by: https://github.com/zpcore
2025-11-05 22:00:11 +00:00
e69aaaf45a [user-streams] Add backward test (#167021)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/167021
Approved by: https://github.com/Lucaskabela
ghstack dependencies: #167019
2025-11-05 21:24:44 +00:00
fd8f368d31 [user-streams] Add graph annotation checks (#167019)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/167019
Approved by: https://github.com/Lucaskabela
2025-11-05 21:24:44 +00:00
13d2cc7bd2 Remove python workaround for ContextDecorator (#167049)
This PR removes the import workaround for ContextDecorator because the import always succeeds in Py 3.10+.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/167049
Approved by: https://github.com/Skylion007
2025-11-05 20:56:04 +00:00
c6c913d18e Add torch::stable::Tensor sizes and strides (#165153)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/165153
Approved by: https://github.com/mikaylagawarecki
ghstack dependencies: #164991, #165152
2025-11-05 20:55:34 +00:00
ef3f953966 Revert "[DebugMode] output, tensor id annotations for DebugMode (#165076)"
This reverts commit a64c7d740428010d700b4bcd395af8a7b2d5c21f.

Reverted https://github.com/pytorch/pytorch/pull/165076 on behalf of https://github.com/wdvr due to Sorry but this is breaking internally. See diff [D86245252](https://l.workplace.com/l.php?u=https%3A%2F%2Fwww.internalfb.com%2Fdiff%2FD86245252&h=AT1oPbS1XTv6HjYeYdxmDMW1-jlT0pS8yBO2iSfbPfUB9ydsEjFXBNT56QhV1v5TKc4_QaQNxykNowSKmb4fgenjOyCv20NuL7oV_Id5fhh32hhv1IpjgsDJYK-PBFfSfv_miLIWfNgj902KcgXojbBgDcDzQeS9lNt0GQ) for details. To validate your fixes internally, you can follow the instructions here: https://fburl.com/fixing-ghfirst-reverts ([comment](https://github.com/pytorch/pytorch/pull/165076#issuecomment-3493358159))
2025-11-05 20:52:43 +00:00
ea44f12bce [13/N] Apply ruff UP035 rule (#167048)
This PR continues to apply ruff UP035 rule to test code and some remaining torch files.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167048
Approved by: https://github.com/Skylion007
2025-11-05 20:51:53 +00:00
a74fe75c45 Don't hardcode double argument for reduction base (#166951)
Fixes https://github.com/pytorch/pytorch/issues/43254

Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/166951
Approved by: https://github.com/ngimel, https://github.com/Skylion007
ghstack dependencies: #166813
2025-11-05 20:34:15 +00:00
6d30666bc1 Revert "[12/N] Apply ruff UP035 rule (#166929)"
This reverts commit 5863ba1b2e4de9ea0ae16a663465ec5d3d6f9f52.

Reverted https://github.com/pytorch/pytorch/pull/166929 on behalf of https://github.com/donigian due to Temporarily need to revert this to continue a revert for #165076. @cyyever Please re-merge after revert of #165076. ([comment](https://github.com/pytorch/pytorch/pull/166929#issuecomment-3493090596))
2025-11-05 20:02:47 +00:00
8e8cbb85ee Revert "[Inductor] Fix unbacked float symbol handling in kernel codegen (#166890)"
This reverts commit 0c7a4a6b48d49306eae8d0a9ee8d32b1899e5e23.

Reverted https://github.com/pytorch/pytorch/pull/166890 on behalf of https://github.com/malfet due to Looks like it broke torchfuzz tests, see fbd70fb84e/1 and same test on slow ([comment](https://github.com/pytorch/pytorch/pull/166890#issuecomment-3493011038))
2025-11-05 19:42:39 +00:00
fbd70fb84e Update typing docs to reference pyrefly (#166883)
Replacing the mypy documentation in the CONTRIBUTING.MD file with pyrefly references. I have made initial changes to the https://github.com/pytorch/pytorch/wiki/Guide-for-adding-type-annotations-to-PyTorch documentation, and will replace the script at the bottom with one tailored to the pyrefly tool as a follow-up.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/166883
Approved by: https://github.com/malfet
2025-11-05 19:35:38 +00:00
6c5db82584 [Inductor] Naive foreach autotune support (#162053)
Initial autotuning support for foreach kernels, giving a 4x improvement for some kernels in an internal workload. More improvements can surely be made here in the future. num_warps is removed from the definition to enable autotune support in the generated wrapper code.

Before:
triton_for_fused_18.kd 🔍 | 4.986 ms | 4.986 ms | 2.493 ms | 2 |
triton_for_fused_6.kd 🔍 | 0.098 ms | 0.098 ms | 0.049 ms | 2 |
triton_for_fused_7.kd 🔍 | 0.036 ms | 0.036 ms | 0.018 ms | 2 |

After:
triton_for_fused_18.kd 🔍 | 1.273 ms | 1.273 ms | 0.636 ms | 2 |
triton_for_fused_6.kd 🔍 | 0.044 ms | 0.044 ms | 0.022 ms | 2 |
triton_for_fused_7.kd 🔍 | 0.024 ms | 0.024 ms | 0.012 ms | 2 |

num_warps=8 default due to https://github.com/pytorch/pytorch/blob/main/torch/_inductor/codegen/triton_combo_kernel.py#L374

Pull Request resolved: https://github.com/pytorch/pytorch/pull/162053
Approved by: https://github.com/mlazos, https://github.com/naromero77amd, https://github.com/jeffdaily

Co-authored-by: Nichols A. Romero <nick.romero@amd.com>
2025-11-05 19:27:23 +00:00
6052a01b71 [BE][Typing][Dynamo] Type torch/_dynamo/variables/dicts.py (#167022)
Provides type coverage to torch/_dynamo/variables/dicts.py

Coverage report:
`mypy torch/_dynamo/variables/dicts.py --linecount-report /tmp/coverage_log`

Compare before to after - we go from 0 lines and 0 funcs covered to 1547 lines and 89 funcs covered

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167022
Approved by: https://github.com/Skylion007
2025-11-05 19:18:35 +00:00
14b153bcf2 include DTensor metadata when pretty-printing fx.Graphs (#166750)
Example below. You need to trace your function with DTensor inputs in order for the graph proxies to run on DTensor (and not the inner local tensor). You also need to run with `tracing_mode="fake"`, or with your own `FakeTensorMode`, to see the nice DTensor printing. If this doesn't feel very ergonomic then maybe we can find some better UX for printing a graph with DTensor in it:

<img width="1446" height="582" alt="image" src="https://github.com/user-attachments/assets/99ea5ce6-1008-4ba5-b58e-542cd34a340b" />

```
import torch
from torch.testing._internal.distributed.fake_pg import FakeStore
from torch.distributed.tensor import distribute_tensor, Shard, Replicate
from torch.utils._debug_mode import DebugMode
from torch.fx.experimental.proxy_tensor import make_fx
from torch.utils._python_dispatch import TorchDispatchMode
from torch.utils import _pytree as pytree

world_size = 8
device_type = "cpu"
fake_store = FakeStore()
torch.distributed.init_process_group("fake", store=fake_store, rank=0, world_size=world_size)
device_mesh = torch.distributed.init_device_mesh(device_type, (world_size,))
dim = 128

A = torch.randn(8, dim)
B = torch.randn(dim, dim)
dA = distribute_tensor(A, device_mesh, [Shard(0)]).requires_grad_()
dB = distribute_tensor(B, device_mesh, [Replicate()]).requires_grad_()

def f(dA, dB):
    dy = dA @ dB
    loss = dy.sum()
    loss.backward()
    return dA.grad, dB.grad

# We actually need the tracing_mode='fake' here, or to trace under a FakeTensorMode.
# make_fx has some logic to ensure we don't accidentally stash real tensors in the graph
# so we won't stash our DTensors properly if they don't hold Fake inner tensors
gm = make_fx(f, tracing_mode='fake')(dA, dB)
# DCE isn't necessary here, there were just a lot of dead detach() nodes that spammed the graph
gm.graph.eliminate_dead_code()
gm.recompile()
gm.print_readable(colored=True)
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/166750
Approved by: https://github.com/ezyang, https://github.com/wconstab, https://github.com/Skylion007
2025-11-05 18:58:54 +00:00
641de23c96 ci: Add aarch64 docker builds for modern clang (#166416)
Should enable us to build using some arm optimizations that are only
available on the newest versions of clang.

Signed-off-by: Eli Uriegas <eliuriegas@meta.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/166416
Approved by: https://github.com/malfet
2025-11-05 18:55:56 +00:00
89165c0a2b Update triton to 3.5.1 release (#166968)
This includes sm103 https://github.com/triton-lang/triton/pull/8485 fix

Pull Request resolved: https://github.com/pytorch/pytorch/pull/166968
Approved by: https://github.com/Lucaskabela, https://github.com/njriasan
2025-11-05 18:26:34 +00:00
dcc2ba4ca4 Add some code for exploring the space of accessible size/stride configs via plain views (#167076)
We are working on a translation from as_strided to view operations, but
only when the as_strided is representable as a plain view.  A useful
testing utility in this situation is the ability to enumerate all valid
views on an original tensor.  So we have a small test here that shows
it is possible.

To avoid an explosion of states, we don't handle permutes and size=1,
which are degenerate cases (you can always do a single permute and
a series of unsqueezes to get to the final desired state.)

Authored with claude code assistance.

Signed-off-by: Edward Z. Yang <ezyang@meta.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/167076
Approved by: https://github.com/albanD
ghstack dependencies: #166868, #166867
2025-11-05 18:25:19 +00:00
ad5c7c20e0 Revert "[cuDNN] Smoke-test runtime cuDNN version matches compile time version in CI (#165922)"
This reverts commit 1d3f5e19da068ec1340db041b7105b287a513578.

Reverted https://github.com/pytorch/pytorch/pull/165922 on behalf of https://github.com/atalman due to Introduces Segfault in linux-jammy-cuda12.8-py3.10-gcc11 ([comment](https://github.com/pytorch/pytorch/pull/165922#issuecomment-3492667312))
2025-11-05 18:13:57 +00:00
c86540f120 Revert "Add model code stack trace to torch.profile (#166677)"
This reverts commit c00696144dae1f02e04ce345480b55e46c7d32a8.

Reverted https://github.com/pytorch/pytorch/pull/166677 on behalf of https://github.com/jeffdaily due to broke rocm ([comment](https://github.com/pytorch/pytorch/pull/166677#issuecomment-3492658160))
2025-11-05 18:11:11 +00:00
c17aa0f113 [ROCm] Enable group gemm through CK (#166334)
Fixes #161366
All four dimension combinations are supported:
2d-2d, 2d-3d, 3d-3d, 3d-2d. The corresponding test cases in test_matmul_cuda pass
for both the forward and backward pass.
The CK path is enabled for gfx942 and gfx950.
ToDo: support on gfx90a still needs to be enabled, since the CK kernel used in this commit produces a GPU error there;
it might require a different CK kernel config, based on the profiler result on gfx90a.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/166334
Approved by: https://github.com/atalman
2025-11-05 18:03:59 +00:00
4ff068c33a [Code Clean] Replace assert with if statement and raise AssertionError (#166935)
Including:
- `torch/profiler/profiler.py`

Fixes part of #164878

Pull Request resolved: https://github.com/pytorch/pytorch/pull/166935
Approved by: https://github.com/fffrog, https://github.com/albanD
2025-11-05 17:59:16 +00:00
0c7a4a6b48 [Inductor] Fix unbacked float symbol handling in kernel codegen (#166890)
When a fn compiled with `torch.compile` calls `.item()` on a float tensor arg (e.g., for thresholds in `torch.clamp`), the generated triton kernel references an unbacked float symbol (e.g., `zuf0`) that was never added to the kernel's parameter list, causing a compilation error.

Fixes: #166888

Pull Request resolved: https://github.com/pytorch/pytorch/pull/166890
Approved by: https://github.com/eellison
2025-11-05 17:50:08 +00:00
f93ee16fb6 [CI] Parse xml and upload json while running (#166988)
Then we can point a ClickHouse ingestor at this S3 path and get the results into ClickHouse while the job is running.

Use filelock to make sure each JSON is uploaded once, so we don't end up with dups in ClickHouse.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/166988
Approved by: https://github.com/izaitsevfb
2025-11-05 17:19:24 +00:00
9c2c3dbc15 Revert "Update triton to 3.5.1 release (#166968)"
This reverts commit b4e4ee81d386db922d8f63359f9870eff1f44052.

Reverted https://github.com/pytorch/pytorch/pull/166968 on behalf of https://github.com/malfet due to It might have caused deadlock/test timeouts, see d4dcd0354c/1 ([comment](https://github.com/pytorch/pytorch/pull/166968#issuecomment-3492399396))
2025-11-05 17:12:30 +00:00
d4dcd0354c [pytree][dynamo] add test to ensure tree_map preserves dict order (#166236)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/166236
Approved by: https://github.com/mlazos
2025-11-05 17:04:40 +00:00
aba2fa3259 Fix clang-21 warnings (#166859)
Fixes compiler warnings thrown by Clang-21

Fixes #166755

Pull Request resolved: https://github.com/pytorch/pytorch/pull/166859
Approved by: https://github.com/aditew01, https://github.com/fadara01, https://github.com/malfet
2025-11-05 16:55:51 +00:00
d2d13bf62d Invert unary read and write for fusion (#161404)
For [this repro](https://gist.github.com/eellison/75a99616a0fcca0436316bbfd8987fae), this enables fusion of `to_blocked` with the prior `to_mx` calculation, so that there is only a single kernel per tensor, resulting in a 10% speedup of the non-conversion code (I need to update my local devserver to 12.9 to time the matmul as well).

The `to_mx` kernel has a contiguous write:

```Py
op6_op7: FusedSchedulerNode(SchedulerNode,SchedulerNode)
op6_op7.writes = [MemoryDep('buf6', c0, {c0: 2097152}), MemoryDep('buf7', c0, {c0: 67108864})]
op6_op7.unmet_dependencies = []
op6_op7.met_dependencies = [MemoryDep('arg1_1', c0, {c0: 67108864})]
op6_op7.outputs = [
    buf6: ComputedBuffer
    buf6.layout = FixedLayout('cuda:0', torch.float32, size=[8192, 256], stride=[256, 1])
    buf6.users = [
        NodeUser(node=SchedulerNode(name='op7'), can_inplace=False, is_weak=False),
        NodeUser(node=SchedulerNode(name='op9'), can_inplace=False, is_weak=False),
    ]
    buf7: ComputedBuffer
    buf7.layout = FixedLayout('cuda:0', torch.float8_e4m3fn, size=[8192, 256, 32], stride=[8192, 32, 1])
    buf7.users = [NodeUser(node=ExternKernelSchedulerNode(name='op10'), can_inplace=False, is_weak=False)]
]
```

While the `to_blocked` has a single discontiguous read and a single contiguous write.

```Py
op9: SchedulerNode(ComputedBuffer)
op9.writes = [MemoryDep('buf9', c0, {c0: 2097152})]
op9.unmet_dependencies = [   MemoryDep('buf6', 32768*((c0//32768)) + 8192*(((ModularIndexing(c0, 1, 16))//4)) + 256*(ModularIndexing(c0, 16, 32)) + 4*(ModularIndexing(c0, 512, 64)) + (ModularIndexing(ModularIndexing(c0, 1, 16), 1, 4)), {c0: 2097152})]
op9.met_dependencies = []
op9.outputs = [
    buf9: ComputedBuffer
    buf9.layout = FixedLayout('cuda:0', torch.float8_e8m0fnu, size=[2097152], stride=[1])
    buf9.users = [NodeUser(node=ExternKernelSchedulerNode(name='op10'), can_inplace=False, is_weak=False)]
]
```

To enable fusion, we invert the read, giving op9 a contiguous read and a discontiguous write. More explanation here: https://gist.github.com/eellison/6f9f4a7ec10a860150b15b719f9285a9

[Tlparse with this optimization](https://manifold.edge.x2p.facebook.net/v0/read/tree/logs/eellison/custom/index.html?bucketName=tlparse_reports&apiKey=tlparse_reports-key&withPayload=1&timeoutMsec=10000).

[Tlparse without this optimization](https://manifold.edge.x2p.facebook.net/v0/read/tree/logs/eellison/custom/index.html?bucketName=tlparse_reports&apiKey=tlparse_reports-key&withPayload=1&timeoutMsec=10000).

Pull Request resolved: https://github.com/pytorch/pytorch/pull/161404
Approved by: https://github.com/shunting314
2025-11-05 16:10:52 +00:00
7a6ff88196 Widen ops support to take in IntHOArrayRef vs only std::vec (#165152)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/165152
Approved by: https://github.com/mikaylagawarecki
ghstack dependencies: #164991
2025-11-05 16:00:24 +00:00
59563dfe56 Refactor out headeronly ArrayRef (#164991)
Differential Revision: [D85091961](https://our.internmc.facebook.com/intern/diff/D85091961)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/164991
Approved by: https://github.com/swolchok
2025-11-05 16:00:24 +00:00
5c639466f7 Revert "[Inductor][Grouped Gemm] Add Blackwell CuTeDSL Kernel (#167003)"
This reverts commit 658c5f879c37142b1df51c7eb6c5a5bb06318597.

Reverted https://github.com/pytorch/pytorch/pull/167003 on behalf of https://github.com/atalman due to regressed vllm signal: [GH job link](https://github.com/pytorch/pytorch/actions/runs/19093785744/job/54553796743) [HUD commit link](658c5f879c) ([comment](https://github.com/pytorch/pytorch/pull/167003#issuecomment-3491527704))
2025-11-05 14:30:15 +00:00
0b4dd08e04 [dynamo] Introduce _set_lru_cache (#167038)
Addresses the short-term plan for https://github.com/pytorch/pytorch/issues/166926. This PR can't be defaulted on; that would be terrible for cache lookup times.

There's a proper fix in the works by @williamwen42.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167038
Approved by: https://github.com/williamwen42
2025-11-05 09:05:11 +00:00
edd8d356b6 fixes keyerror when loading parameter with unsaved optimizer state (#165228)
Fixes #164257

Pull Request resolved: https://github.com/pytorch/pytorch/pull/165228
Approved by: https://github.com/fegin
2025-11-05 08:07:46 +00:00
658c5f879c [Inductor][Grouped Gemm] Add Blackwell CuTeDSL Kernel (#167003)
Summary: This is a reland of https://github.com/pytorch/pytorch/pull/165036?fbclid=IwY2xjawN3RL1leHRuA2FlbQIxMQBicmlkETExOEcxcnVhNVA1TzRSVmhiAR63GOEpJbZA-JhQ0CSj9ji8H_RHBUhDwYNDtxjOYfDol56OGqmC4r7jPP96Fw_aem_bWvtMfVifLQrnpv1YB_fJA, which previously contained a minor bug in the logic that determined whether the kernel should be enabled. As a result, it was incorrectly activated on non-Blackwell GPUs.

Test Plan:
Inductor test (fbcode):
`INDUCTOR_TEST_DISABLE_FRESH_CACHE=1 TORCHINDUCTOR_CACHE_DIR=~/cutetest buck2 run mode/opt //caffe2/test/inductor:cutedsl_grouped_mm -c fbcode.nvcc_arch=b200a -c fbcode.enable_gpu_sections=true -c fbcode.platform010_cuda_version=12.8 -m "ovr_config//third-party/pypi/nvidia-cutlass-dsl/constraints:4.2.1"`

Tritonbench (fbcode):
`clear; CUDA_VISIBLE_DEVICES=7 TRITON_PRINT_AUTOTUNING=1 TRITON_ALWAYS_COMPILE=1 TORCH_LOGS=+inductor TORCHINDUCTOR_FORCE_DISABLE_CACHES=1 TORCHINDUCTOR_MAX_AUTOTUNE_GEMM=1 buck2 run mode/opt //pytorch/tritonbench:run -c fbcode.nvcc_arch=b200a -c fbcode.enable_gpu_sections=true -c fbcode.platform010_cuda_version=12.8 -m "ovr_config//third-party/pypi/nvidia-cutlass-dsl/constraints:4.2.1" -- --op grouped_gemm --only aten_grouped_mm,preprocessed_pt2_cute_grouped_mm --precision bf16  --num-inputs 1 --metrics tflops,accuracy`

Tritonbench(oss):
`clear; CUDA_VISIBLE_DEVICES=2 TRITON_PRINT_AUTOTUNING=1 TRITON_ALWAYS_COMPILE=1 TORCH_LOGS=+inductor TORCHINDUCTOR_FORCE_DISABLE_CACHES=1 TORCHINDUCTOR_MAX_AUTOTUNE_GEMM=1 python run.py --op grouped_gemm --only aten_grouped_mm,preprocessed_pt2_triton_grouped_mm --precision bf16  --num-inputs 1 --metrics tflops,accuracy`

Unit Tests(oss):
`clear; python test/inductor/test_cutedsl_grouped_mm.py`

Differential Revision: D86231180

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167003
Approved by: https://github.com/jananisriram
2025-11-05 06:51:30 +00:00
59a6c83dfe [fx] Add strict argument validation to Interpreter.boxed_run (#166784)
# Summary

This PR fixes an issue where `torch.fx.Interpreter.boxed_run` would silently ignore extra input arguments instead of validating the argument count.

Previously, `boxed_run` would only consume as many inputs as there were placeholder nodes and then clear the entire `args_list`, hiding potential bugs. This change introduces a strict check to ensure `len(args_list)` matches the number of placeholder nodes, raising a `RuntimeError` on a mismatch.

Fixes #166583.

# Changes

* Validate `len(args_list)` against the number of placeholder nodes at the beginning of `boxed_run` (see the sketch after this list).
* Raise a `RuntimeError` with a clear message ("extra arguments" or "missing arguments") if the counts do not match.
* Move `args_list.clear()` to only execute after successful validation and environment setup. If an error is raised, `args_list` is preserved for debugging.
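
A sketch of the validation described in the first bullet, written as a standalone helper under stated assumptions rather than the exact upstream diff:

```python
from torch.fx import GraphModule

def check_boxed_args(gm: GraphModule, args_list) -> None:
    """Raise if args_list does not match the graph's placeholder count."""
    placeholders = [n for n in gm.graph.nodes if n.op == "placeholder"]
    if len(args_list) != len(placeholders):
        kind = ("extra arguments" if len(args_list) > len(placeholders)
                else "missing arguments")
        # args_list is intentionally left intact here for debugging
        raise RuntimeError(
            f"boxed_run received {len(args_list)} arguments but the graph "
            f"has {len(placeholders)} placeholders ({kind})"
        )
```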

# Testing

* Added `test_interpreter_boxed_run_argument_validation` to `test/test_fx.py`.
* This test covers three scenarios:
    1.  Correct number of arguments (succeeds, `args_list` is cleared).
    2.  Extra arguments (raises `RuntimeError`, `args_list` is preserved).
    3.  Missing arguments (raises `RuntimeError`, `args_list` is preserved).

# User-facing impact / BC notes

This is a bug fix. Code that was incorrectly passing the wrong number of arguments to `boxed_run` will now fail fast with a `RuntimeError` instead of executing silently with unintended inputs. Correctly written code is unaffected.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/166784
Approved by: https://github.com/ezyang, https://github.com/xmfan
2025-11-05 06:39:32 +00:00
431dfe8692 [dynamo] extend collections.defaultdict support with *args, **kwargs and custom default_factory (#166793)
Fixes #166238

Extend `collections.defaultdict` to accept `*args` and `**kwargs` in the constructor. And also support custom `default_factory`, such as `dd.default_factory` (a `GetAttrVariable`).
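
A minimal sketch of the newly supported pattern (the concrete example and shapes are assumptions, not taken from the linked issue):

```python
import collections
import torch

@torch.compile
def f(x):
    # constructor with *args (an initial mapping) and **kwargs,
    # plus a custom default_factory
    dd = collections.defaultdict(list, {"a": [1]}, b=[2])
    dd["c"].append(3)          # triggers the default_factory
    return x + len(dd["a"]) + len(dd["b"]) + len(dd["c"])

print(f(torch.ones(2)))
```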

Pull Request resolved: https://github.com/pytorch/pytorch/pull/166793
Approved by: https://github.com/guilhermeleobas
2025-11-05 06:09:39 +00:00
c00696144d Add model code stack trace to torch.profile (#166677)
```python
python test/test_fx.py -k profiler
```

Insert `torch._C._profiler._RecordFunctionFast` to fx graph codegen.

We post-process the profiler dump using `map_recorded_events_to_aten_ops_with_stack_trace` to add the stack trace to the dump'd trace.

`map_recorded_events_to_aten_ops_with_stack_trace` queries `fx.traceback._FX_METADATA_REGISTRY` for node metadata. Each graph module has a hash'd fake file name (e.g. `fx_generated__iv4zodvbcmdkhx77jrg7h2f2opebujhfmc6tf6nx7vioq244baw.py`), which is the key to the registry.

One can do `fx_g.enrich_profiler_metadata()` to add debugging info. Or `fx_g.enrich_profiler_metadata(enable=False)` to remove.

`aot_eager` calls `fx_g.enrich_profiler_metadata()` if TORCH_ENRICH_RPOFILER_STACK_TRACE is set or `_dynamo.config.enrich_profiler_metadata=True`.
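
A minimal usage sketch based on the description above (the traced module and the profiling details are assumptions):

```python
import torch
from torch.fx import symbolic_trace

def f(x, y):
    return (x * y).sin() + 5

gm = symbolic_trace(f)
gm.enrich_profiler_metadata()   # insert _RecordFunctionFast markers into codegen

with torch.profiler.profile() as prof:
    gm(torch.randn(4), torch.randn(4))
prof.export_chrome_trace("trace.json")  # then post-process to map events to stack traces
```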

<img width="1188" height="565" alt="Screenshot 2025-10-31 at 4 40 52 PM" src="https://github.com/user-attachments/assets/41e8113f-3e6d-439b-bffd-cfbf0c03a47a" />

Example code gen'd.
```
def forward(self, args_list):
    args_iter = iter(args_list)
    arg0_1 = next(args_iter)
    arg1_1 = next(args_iter)
    args_list.clear()
    _rf = torch._C._profiler._RecordFunctionFast('## fx_generated__iv4zodvbcmdkhx77jrg7h2f2opebujhfmc6tf6nx7vioq244baw.py ##'); _rf.__enter__()
    repeated_subgraph0 = self.repeated_subgraph0
    _rf_invoke_subgraph = torch._C._profiler._RecordFunctionFast('## 3 ##'); _rf_invoke_subgraph.__enter__()
    invoke_subgraph = torch.ops.higher_order.invoke_subgraph(repeated_subgraph0, 'subgraph_0', arg0_1, arg1_1);  repeated_subgraph0 = arg0_1 = arg1_1 = None
    _rf_invoke_subgraph.__exit__(None, None, None)
    _rf_getitem = torch._C._profiler._RecordFunctionFast('## 4 ##'); _rf_getitem.__enter__()
    getitem = invoke_subgraph[0];  invoke_subgraph = None
    _rf_getitem.__exit__(None, None, None)
    return (getitem,)
    _rf.__exit__(None, None, None)

def forward(self, arg0_1, arg1_1):
    _rf = torch._C._profiler._RecordFunctionFast('## fx_generated__ozpadpj5cxoalxeyopej33g2vvtvhxg4xsk7bhx7ldmcibtybyn.py ##'); _rf.__enter__()
    _rf_mul = torch._C._profiler._RecordFunctionFast('## 2 ##'); _rf_mul.__enter__()
    mul = torch.ops.aten.mul.Tensor(arg0_1, arg1_1);  arg0_1 = arg1_1 = None
    _rf_mul.__exit__(None, None, None)
    _rf_sin = torch._C._profiler._RecordFunctionFast('## 3 ##'); _rf_sin.__enter__()
    sin = torch.ops.aten.sin.default(mul);  mul = None
    _rf_sin.__exit__(None, None, None)
    _rf_add = torch._C._profiler._RecordFunctionFast('## 4 ##'); _rf_add.__enter__()
    add = torch.ops.aten.add.Tensor(sin, 5);  sin = None
    _rf_add.__exit__(None, None, None)
    return (add,)
    _rf.__exit__(None, None, None)

```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/166677
Approved by: https://github.com/ezyang
2025-11-05 06:08:34 +00:00
9ffc480c5a Add min/max support for barebones uint types (#166813)
Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/166813
Approved by: https://github.com/Skylion007
2025-11-05 04:44:21 +00:00
14956eaef4 [ROCm][CI] revert ROCm magma commit hash to last known good (#167044)
PR https://github.com/pytorch/pytorch/pull/166693 updated the magma commit hash but this has been linked to ROCm 7.1 CI failures.  Go back to last known working magma version.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167044
Approved by: https://github.com/jeffdaily

Co-authored-by: Jeff Daily <jeff.daily@amd.com>
2025-11-05 04:18:04 +00:00
066c5c57a9 Fix typo in gloo_hip library name (#166502)
The typo was never noticed; conditions to enable it require system gloo: `-DUSE_SYSTEM_GLOO=ON -DUSE_GLOO=ON -DUSE_DISTRIBUTED=ON -DUSE_ROCM=ON`.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/166502
Approved by: https://github.com/jerryzh168, https://github.com/cyyever
2025-11-05 04:14:01 +00:00
08ef852a4b [unified v2][apple] Clean up APPLETVOS from caffe2 (#166953)
Summary: This is not used, so delete it

Test Plan:
```
$ buck targets xplat/... > /dev/null
```

Reviewed By: dtolnay

Differential Revision: D86125712

Pull Request resolved: https://github.com/pytorch/pytorch/pull/166953
Approved by: https://github.com/seemethere
2025-11-05 03:09:56 +00:00
56fc99915b Fix typos in complex numbers docs (#166671)
This PR fixes two small typos in the complex numbers docs:

1. "numbercial" -> "numerical"
2. "easily to switch" -> "easily switch to"
Pull Request resolved: https://github.com/pytorch/pytorch/pull/166671
Approved by: https://github.com/jcaip, https://github.com/Arpitha781, https://github.com/mlazos, https://github.com/cyyever
2025-11-05 03:05:06 +00:00
5863ba1b2e [12/N] Apply ruff UP035 rule (#166929)
This PR continues to apply ruff UP035 rule to test code and some remaining torch files.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/166929
Approved by: https://github.com/Lucaskabela
2025-11-05 03:03:41 +00:00
a743f9eeb5 Revert "Avoid DDE in narrow with unbacked start (#166361)"
This reverts commit ed45c5f38df6aa419c67d139d932c2c94404223a.

Reverted https://github.com/pytorch/pytorch/pull/166361 on behalf of https://github.com/malfet due to Looks like it broke test_torchfuzz subtests, see 01e6e35c7f/1 ([comment](https://github.com/pytorch/pytorch/pull/166361#issuecomment-3488916766))
2025-11-05 02:39:55 +00:00
53b03f1a2b Revert "make narrow_tensor_symint DDE-free (#166379)"
This reverts commit d7e2d0ad301b5d0db049bf5d2a2fc7ff9c89c58c.

Reverted https://github.com/pytorch/pytorch/pull/166379 on behalf of https://github.com/malfet due to Need to revert previous PR in the stack ([comment](https://github.com/pytorch/pytorch/pull/166379#issuecomment-3488910172))
2025-11-05 02:36:46 +00:00
cd5d810c3a Annotation should be deepcopied (#167017)
The annotation should be deepcopied. Otherwise all nodes with the same `seq_nr` share the same underlying dict

Pull Request resolved: https://github.com/pytorch/pytorch/pull/167017
Approved by: https://github.com/yiming0416
2025-11-05 02:22:33 +00:00
01e6e35c7f Send / recv support in local tensor (#166595)
This change introduces LocalRunnerMode, which allows you to run multiple
SPMD functions concurrently. SPMD functions execute one at a time,
yielding execution while waiting for send or receive operations
to complete. Send and receive peer operations are only supported while running
under LocalRunnerMode.

The example test in this change demonstrates how ranks send data
to the next peer and receive data from the previous peer (ring).

Pull Request resolved: https://github.com/pytorch/pytorch/pull/166595
Approved by: https://github.com/wconstab, https://github.com/ezyang
2025-11-05 01:36:44 +00:00
bcd159bcdd Fix the vmap op fallback bug (#166032)
## The bug

In some environments, if run:

```py
    def inner_func(x):
      return x.to(torch.float32, memory_format=torch.channels_last)

    x = torch.randn(2, 2, 3, 4, device="cpu", dtype=torch.float64)
    torch.vmap(inner_func)(x)
```

we get:

```
E       RuntimeError: Batching rule not implemented for aten::to.dtype_layout; the fallback path doesn't work on out= or view ops.
```

Otherwise, it would always fall back and result in an error for ops like `to.dtype` and `to.dtype_layout`, even though the kernels are registered.

## The cause

The alias key of `FuncTorchBatchedDecomposition` is not properly translated to runtime dispatch keys when updating the dispatch table of `OperatorEntry::dispatchTable_`. [[link](984b096d10/aten/src/ATen/core/dispatch/OperatorEntry.cpp (L500-L501))]
The [`getRuntimeDispatchKeySet`](f3fa560dec/c10/core/DispatchKeySet.cpp (L62)) use if-else to translate all other alias keys but `FuncTorchBatchedDecomposition`.

This would result in not finding the kernel in many cases.

## The fix

This PR adds one more `if` statement to `getRuntimeDispatchKeySet` to map `FuncTorchBatchedDecomposition` to the corresponding runtime dispatch key, `FuncTorchBatched`.
So, that the dispatch table can be properly updated.

This fix allows people to use ops inside vmaps in more environments and across more compilers.

## Why does it work without the PR

As long as the `FuncTorchBatchedDecomposition` [[link](51319ca090/aten/src/ATen/functorch/BatchRulesDecompositions.cpp (L35))]
is registered before the fallback method of `FuncTorchBatched` [[link](d311a3d1dc/aten/src/ATen/functorch/LegacyBatchingRegistrations.cpp (L759))], everything runs fine.

In this case, it relies on the registration of the fallback method to update the dispatch table, which flushes all the kernels in `OperatorEntry::kernels_` into `dispatchTable_`, among which there are kernels registered with `FuncTorchBatchedDecomposition`.

## When does it fail

However, the order of the op registration and the fallback registration is not guaranteed at all.
It relies on the C++ static initialization order, which varies from environment to environment.
On our compiler, the fallback registration goes first; the alias-key kernels under `FuncTorchBatchedDecomposition` come later and do not get flushed into the dispatch table by the fallback registration.
Therefore, the dispatcher cannot find the kernel.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/166032
Approved by: https://github.com/albanD
2025-11-05 01:16:58 +00:00
64ae31c5d3 [HOP][print] Add HOP subclass for printing (#166660)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/166660
Approved by: https://github.com/angelayi, https://github.com/anijain2305

Co-authored-by: Angela Yi <yiangela7@gmail.com>
2025-11-05 01:16:49 +00:00
45da6e1fe1 [CD] Upload XPU inductor benchmark test reports to s3 (#166954)
As the title

Pull Request resolved: https://github.com/pytorch/pytorch/pull/166954
Approved by: https://github.com/atalman
2025-11-05 01:02:57 +00:00
39160dba0c shrink_group implementation to expose ncclCommShrink API (#164518)
Closes #164529

To expose the new [ncclCommShrink](https://docs.nvidia.com/deeplearning/nccl/user-guide/docs/api/comms.html#ncclcommshrink) API to PyTorch.

This is useful when you need to exclude certain GPUs or nodes from a collective operation, for example in fault tolerance scenarios or when dynamically adjusting resource utilization.

For more info:  [Shrinking a communicator](https://docs.nvidia.com/deeplearning/nccl/user-guide/docs/usage/communicators.html#shrinking-a-communicator)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/164518
Approved by: https://github.com/kwen2501
2025-11-05 00:54:40 +00:00
f2fbc81c50 [RFC] Add experimental Pallas TorchInductor backend (#166822)
Very simple Pallas TorchInductor backend
Given
```
import torch

def f(x, y):
    return x.sin() + y

torch._inductor.config.cuda_backend="pallas"

x = torch.randn(4).cuda()
y = torch.randn(4).cuda()

compiled = torch.compile(f, backend="inductor", fullgraph=True)
torch.testing.assert_close(compiled(x, y), f(x, y))
```
it outputs
```
import torch
import jax
import jax.numpy as jnp
from jax.experimental import pallas as pl
from torch.utils import dlpack as torch_dlpack
def pallas_fused_add_sin_56b646d2_kernel(in_ptr0, in_ptr1, out_ptr0):
    tmp0 = in_ptr0[...]
    tmp1 = jnp.sin(tmp0)
    tmp2 = in_ptr1[...]
    tmp3 = tmp1 + tmp2
    out_ptr0[...] = tmp3
def pallas_fused_add_sin_56b646d2_main(in_ptr0, in_ptr1, out_ptr0, stream=None):
    # Convert Torch -> JAX for inputs
    in_ptr0_jax = jax.dlpack.from_dlpack(torch_dlpack.to_dlpack(in_ptr0))
    in_ptr1_jax = jax.dlpack.from_dlpack(torch_dlpack.to_dlpack(in_ptr1))
    # Prepare output spec from PyTorch tensor
    # Map PyTorch dtype to JAX dtype string
    _torch_dtype_to_jax = {
        torch.float32: jnp.float32, torch.float64: jnp.float64, torch.float16: jnp.float16,
        torch.int32: jnp.int32, torch.int64: jnp.int64, torch.int16: jnp.int16, torch.int8: jnp.int8,
        torch.uint8: jnp.uint8, torch.bool: jnp.bool_,
    }
    out_spec = jax.ShapeDtypeStruct(out_ptr0.shape, _torch_dtype_to_jax[out_ptr0.dtype])
    compiled = pl.pallas_call(
        lambda *refs: pallas_fused_add_sin_56b646d2_kernel(*refs),
        out_shape=out_spec,
        grid=(1,),
    )
    res = compiled(in_ptr0_jax, in_ptr1_jax)
    # Copy result back into the provided torch output tensor
    res_t = torch_dlpack.from_dlpack(jax.dlpack.to_dlpack(res))
    out_ptr0.copy_(res_t)
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/166822
Approved by: https://github.com/jansel
ghstack dependencies: #166976, #166982
2025-11-05 00:52:41 +00:00
4271ffe918 don't produce invalid grid configs (#166974)
Proper fix for #164048, fixes gather too, reverts #164049
Pull Request resolved: https://github.com/pytorch/pytorch/pull/166974
Approved by: https://github.com/eqy
2025-11-05 00:20:27 +00:00
7eefcfb1db [BE][Typing][Dynamo] Type torch/_dynamo/variables/ctx_manager.py (#166878)
Provides type coverage to torch/_dynamo/variables/ctx_manager.py

Coverage report:
`mypy torch/_dynamo/variables/ctx_manager.py --linecount-report /tmp/coverage_log`

Compare before to after - we go from 0 lines and 0 funcs covered to 1541 lines and 144 funcs covered

Pull Request resolved: https://github.com/pytorch/pytorch/pull/166878
Approved by: https://github.com/Skylion007
2025-11-04 23:54:18 +00:00
4b12c0344d Add default .github/copilot-instructions.md and item in .gitignore for allowing local changes (#166864)
Fixes [#166850](https://github.com/pytorch/pytorch/issues/166850)

- Create a default `.github/copilot-instructions.md` file (used Claude Sonnet 4.5 in Copilot).
- Add `.github/copilot-instructions.md` to the `.gitignore` file.

The prompt used is below, which is preset by Copilot:
```
Analyze this codebase to generate or update `.github/copilot-instructions.md` for guiding AI coding agents.

Focus on discovering the essential knowledge that would help an AI agents be immediately productive in this codebase. Consider aspects like:
- The "big picture" architecture that requires reading multiple files to understand - major components, service boundaries, data flows, and the "why" behind structural decisions
- Critical developer workflows (builds, tests, debugging) especially commands that aren't obvious from file inspection alone
- Project-specific conventions and patterns that differ from common practices
- Integration points, external dependencies, and cross-component communication patterns

Source existing AI conventions from `**/{.github/copilot-instructions.md,AGENT.md,AGENTS.md,CLAUDE.md,.cursorrules,.windsurfrules,.clinerules,.cursor/rules/**,.windsurf/rules/**,.clinerules/**,README.md}` (do one glob search).

Guidelines (read more at https://aka.ms/vscode-instructions-docs):
- If `.github/copilot-instructions.md` exists, merge intelligently - preserve valuable content while updating outdated sections
- Write concise, actionable instructions (~20-50 lines) using markdown structure
- Include specific examples from the codebase when describing patterns
- Avoid generic advice ("write tests", "handle errors") - focus on THIS project's specific approaches
- Document only discoverable patterns, not aspirational practices
- Reference key files/directories that exemplify important patterns

Update `.github/copilot-instructions.md` for the user, then ask for feedback on any unclear or incomplete sections to iterate.
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/166864
Approved by: https://github.com/malfet

Co-authored-by: Nikita Shulga <2453524+malfet@users.noreply.github.com>
2025-11-04 23:53:56 +00:00
661b639663 use_cpp_bmm_template supports more use cases (#165469)
Summary: In certain scenarios, such as when the first stride is 0, the entire tensor may not be contiguous, but the 2D matrix within each batch can still be contiguous, allowing us to apply max autotune. This diff specifically checks for contiguity within the 2D matrix of each batch, enabling more use cases for the cpp bmm template.
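
An example of the layout this targets (sizes are arbitrary): the 3D tensor is not contiguous because the batch stride is 0, but each per-batch 2D matrix is.

```python
import torch

m = torch.randn(64, 32)
b = m.unsqueeze(0).expand(8, 64, 32)  # strides (0, 32, 1): batch stride is 0

print(b.is_contiguous())     # False: the whole 3D tensor is not contiguous
print(b[0].is_contiguous())  # True: the 2D matrix within each batch is
```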

Differential Revision: D84561331

Pull Request resolved: https://github.com/pytorch/pytorch/pull/165469
Approved by: https://github.com/desertfire
2025-11-04 23:47:17 +00:00
0cd809f60c [inductor][AMD] Filter out invalid Triton Configs for MI350X _scaled_mm (#166442)
Summary: Mirrors change done in D81180838 but for inductor. Without this change, running _scaled_mm on MI350X accelerator would crash.

Test Plan: HIP_VISIBLE_DEVICES=7 TORCHINDUCTOR_FORCE_DISABLE_CACHES=1 buck2 run mode/opt-amd-gpu   -m rocm70   -c fbcode.rocm_arch=mi350   scripts/jchunx/gemm:scaled_mm_microbench --   --csv_file /home/jchunx/scripts/fp8_shapes.csv   --backend triton,aten   --fast_accum=true   2>&1 | tee ~/logs/scaled_mm.log

Reviewed By: bilal

Differential Revision: D85694383

Pull Request resolved: https://github.com/pytorch/pytorch/pull/166442
Approved by: https://github.com/bilal
2025-11-04 23:47:11 +00:00
a96728d188 Clarify safety of CUDA graph memory pool sharing across graphs that are replayed in arbitrary order. (#166975)
Some users at the PyTorch conference asked me whether it is safe to share a memory pool among CUDA graphs that never run concurrently but may run in arbitrary order, as long as they don't depend on each other's outputs. Even though capture order doesn't match replay order in this situation, it is safe. However, our documents confusingly said this wasn't allowed. This update is intended to help with that. Since vLLM essentially depends upon this behavior, I call it out specifically.
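
A minimal sketch of the pattern in question (shapes and the exact capture setup are assumptions): two graphs sharing one pool, replayed in the opposite order from capture.

```python
import torch

pool = torch.cuda.graph_pool_handle()
static_a = torch.randn(1024, device="cuda")
static_b = torch.randn(1024, device="cuda")

g1, g2 = torch.cuda.CUDAGraph(), torch.cuda.CUDAGraph()
with torch.cuda.graph(g1, pool=pool):
    out1 = static_a * 2
with torch.cuda.graph(g2, pool=pool):
    out2 = static_b + 1

# Neither graph consumes the other's output and they never run concurrently,
# so replaying in a different order than capture is fine.
g2.replay()
g1.replay()
```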
Pull Request resolved: https://github.com/pytorch/pytorch/pull/166975
Approved by: https://github.com/eellison, https://github.com/BoyuanFeng
2025-11-04 23:36:03 +00:00
c1e91bd4c3 [export] Codemod unittests to use new graph capture API (#166957)
Summary:
as title.

Test Plan:
pytest test/functorch/test_aot_joint_with_descriptors.py
pytest test/higher_order_ops/test_local_map.py

Fixes #ISSUE_NUMBER

Pull Request resolved: https://github.com/pytorch/pytorch/pull/166957
Approved by: https://github.com/angelayi, https://github.com/yushangdi
2025-11-04 22:55:30 +00:00
d7e2d0ad30 make narrow_tensor_symint DDE-free (#166379)
https://github.com/pytorch/pytorch/issues/158081

Pull Request resolved: https://github.com/pytorch/pytorch/pull/166379
Approved by: https://github.com/Lucaskabela
ghstack dependencies: #166361
2025-11-04 22:43:15 +00:00
81038fd326 Revert "Add model code stack trace to torch.profile (#166677)"
This reverts commit e8052f2f99de1fb7284e38082ff5714e17cd9562.

Reverted https://github.com/pytorch/pytorch/pull/166677 on behalf of https://github.com/malfet due to Broke lint, please rebase, we've moved from mypy to pyrefly ([comment](https://github.com/pytorch/pytorch/pull/166677#issuecomment-3488219996))
2025-11-04 22:26:35 +00:00
e020fb3431 [Minor][Inductor] move some combo kernel log from warning to debug (#166993)
Combo kernels warn for long reductions and large pointwise ops. This becomes too spammy for users such as vLLM.

This PR moves these logs from warn to debug. I validated the spammy log is removed on llama-3.1-8B.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/166993
Approved by: https://github.com/zou3519, https://github.com/eellison
2025-11-04 22:09:27 +00:00
e8052f2f99 Add model code stack trace to torch.profile (#166677)
```python
python test/test_fx.py -k profiler
```

Insert `torch._C._profiler._RecordFunctionFast` to fx graph codegen.

We post-process the profiler dump using `map_recorded_events_to_aten_ops_with_stack_trace` to add the stack trace to the dump'd trace.

`map_recorded_events_to_aten_ops_with_stack_trace` queries `fx.traceback._FX_METADATA_REGISTRY` for node metadata. Each graph module has a hash'd fake file name (e.g. `fx_generated__iv4zodvbcmdkhx77jrg7h2f2opebujhfmc6tf6nx7vioq244baw.py`), which is the key to the registry.

One can do `fx_g.enrich_profiler_metadata()` to add debugging info. Or `fx_g.enrich_profiler_metadata(enable=False)` to remove.

`aot_eager` calls `fx_g.enrich_profiler_metadata()` if TORCH_ENRICH_RPOFILER_STACK_TRACE is set or `_dynamo.config.enrich_profiler_metadata=True`.

<img width="1188" height="565" alt="Screenshot 2025-10-31 at 4 40 52 PM" src="https://github.com/user-attachments/assets/41e8113f-3e6d-439b-bffd-cfbf0c03a47a" />

Example code gen'd.
```
def forward(self, args_list):
    args_iter = iter(args_list)
    arg0_1 = next(args_iter)
    arg1_1 = next(args_iter)
    args_list.clear()
    _rf = torch._C._profiler._RecordFunctionFast('## fx_generated__iv4zodvbcmdkhx77jrg7h2f2opebujhfmc6tf6nx7vioq244baw.py ##'); _rf.__enter__()
    repeated_subgraph0 = self.repeated_subgraph0
    _rf_invoke_subgraph = torch._C._profiler._RecordFunctionFast('## 3 ##'); _rf_invoke_subgraph.__enter__()
    invoke_subgraph = torch.ops.higher_order.invoke_subgraph(repeated_subgraph0, 'subgraph_0', arg0_1, arg1_1);  repeated_subgraph0 = arg0_1 = arg1_1 = None
    _rf_invoke_subgraph.__exit__(None, None, None)
    _rf_getitem = torch._C._profiler._RecordFunctionFast('## 4 ##'); _rf_getitem.__enter__()
    getitem = invoke_subgraph[0];  invoke_subgraph = None
    _rf_getitem.__exit__(None, None, None)
    return (getitem,)
    _rf.__exit__(None, None, None)

def forward(self, arg0_1, arg1_1):
    _rf = torch._C._profiler._RecordFunctionFast('## fx_generated__ozpadpj5cxoalxeyopej33g2vvtvhxg4xsk7bhx7ldmcibtybyn.py ##'); _rf.__enter__()
    _rf_mul = torch._C._profiler._RecordFunctionFast('## 2 ##'); _rf_mul.__enter__()
    mul = torch.ops.aten.mul.Tensor(arg0_1, arg1_1);  arg0_1 = arg1_1 = None
    _rf_mul.__exit__(None, None, None)
    _rf_sin = torch._C._profiler._RecordFunctionFast('## 3 ##'); _rf_sin.__enter__()
    sin = torch.ops.aten.sin.default(mul);  mul = None
    _rf_sin.__exit__(None, None, None)
    _rf_add = torch._C._profiler._RecordFunctionFast('## 4 ##'); _rf_add.__enter__()
    add = torch.ops.aten.add.Tensor(sin, 5);  sin = None
    _rf_add.__exit__(None, None, None)
    return (add,)
    _rf.__exit__(None, None, None)

```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/166677
Approved by: https://github.com/ezyang
ghstack dependencies: #166676
2025-11-04 22:05:36 +00:00
a64c7d7404 [DebugMode] output, tensor id annotations for DebugMode (#165076)
Adds optional "node" id for tensors, output info annotations to DebugMode, with `DebugMode(record_output=True, record_ids=True)`

Example output for `test_debug_mode_mm`, with both enabled:
```
  torch.mm(dt$0: f32[8, 8]| S(0), dt$1: f32[8, 32]| S(0))  ->  dt$12: f32[8, 32]| S(0)
    aten::mm(dt$2: f32[8, 8]| S(0), dt$3: f32[8, 32]| S(0))
      redistribute_input(1, S(0) -> R)
        redistribute_input(t$4: f32[1, 32], trace: S(0)->R)
          _c10d_functional::all_gather_into_tensor(t$5: f32[1, 32], 8, 0)  ->  t$6: f32[8, 32]
          _c10d_functional::wait_tensor(t$7: f32[8, 32])  ->  t$8: f32[8, 32]
      aten::mm(t$9: f32[1, 8], t$10: f32[8, 32])  ->  t$11: f32[1, 32]
  <method 'sum' of 'torch._C.TensorBase' objects>(dt$13: f32[8, 32]| S(0))  ->  dt$17: f32[]| P
    aten::sum(dt$14: f32[8, 32]| S(0))
      aten::sum(t$15: f32[1, 32])  ->  t$16: f32[]"""
```

Sadly the only way to get DTensor op outputs is to set `record_torchfunction=True`, as dispatch calls just defer to DTensor's dispatch logic.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/165076
Approved by: https://github.com/zpcore
2025-11-04 21:30:46 +00:00
cdca63db8c Fix quoting in pytest_cache.py invocations (#166955)
Especially the job identifier can contain spaces, so it needs to be quoted.

Fixes e.g. https://github.com/pytorch/pytorch/actions/runs/19063797853/job/54449422160#step:15:52

Pull Request resolved: https://github.com/pytorch/pytorch/pull/166955
Approved by: https://github.com/Skylion007
2025-11-04 21:28:19 +00:00
ed45c5f38d Avoid DDE in narrow with unbacked start (#166361)
Slice knows how to handle unbacked start, we do not need to offset start before calling slice, we can leave it for slice.
The only edge case is when start<0 and start+length ==0 in that case slice and narrow would deviate,
for that case we shall pass dim_size instead of start+length

Pull Request resolved: https://github.com/pytorch/pytorch/pull/166361
Approved by: https://github.com/aorenste
2025-11-04 21:24:57 +00:00
7f0e932136 [dynamo] don't use LocalSource for temp variables created by side_effects (#166917)
Fixes https://github.com/pytorch/pytorch/issues/166900

Implementation notes:
- I tried to disallow guard generation before side effect application in order to futureproof improper guard generation. However, this was not feasible since it is possible to realize lazy VTs while generating side effects (e.g. realizing a constant variable that is used in a deque update).
- `codegen_save_tempvars` now generates `TempLocalSource` for newly created temporary variables, so that they won't get confused with `LocalSource` - we should error out when we attempt to create guards for `TempLocalSource`. I considered using `SyntheticLocalSource`, but that has additional `subguards_allowed` behavior that we may not want to have for temp variables.
- We moved the guard installation for constant user-defined pytree objects from `as_python_constant` to `__init__`. Objects created outside the compile-region will be guarded, while objects created inside the compile-region will not be guarded.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/166917
Approved by: https://github.com/anijain2305
2025-11-04 21:16:18 +00:00
2673f8b007 Fix torch.linalg.eig inductor stride mismatch (#162484)
Fixes #159445

### Summary

- Fixed a stride layout issue in the `torch.linalg.eig` meta kernel that prevented successful compilation with the inductor backend. The meta kernel was producing incorrect row-major strides.

- LAPACK/BLAS libraries (underlying implementation) expect column-major layout
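
A minimal check of the behavior this fix targets (the input size is an assumption): compiled `torch.linalg.eig` should match eager output strides.

```python
import torch

A = torch.randn(4, 4)
eager_vals, eager_vecs = torch.linalg.eig(A)

compiled_eig = torch.compile(torch.linalg.eig)
comp_vals, comp_vecs = compiled_eig(A)

# With the meta-kernel stride fix, these should agree (column-major layout).
print(eager_vecs.stride(), comp_vecs.stride())
```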

Pull Request resolved: https://github.com/pytorch/pytorch/pull/162484
Approved by: https://github.com/isuruf
2025-11-04 21:06:58 +00:00
4e1bd16738 inductor: Switch quiesce to use timer based implementation. (#166581)
Major change is to switch to a timer based implementation. Additionally,
we get rid of the context manager for turning of the compile pool. We
still have the warmup calls.

Note that this only modifies the async_compile methods, the fx pool is
left running.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/166581
Approved by: https://github.com/masnesral
ghstack dependencies: #166467
2025-11-04 21:01:49 +00:00
871d0cd196 If USE_CUDA=1 is set, do not fallback to no CUDA (#166982)
So many times I build PyTorch only to notice that Chef nuked my nvcc and I wasted 30 minutes building a CPU-only version; let's hard-error fast instead.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/166982
Approved by: https://github.com/malfet
ghstack dependencies: #166976
2025-11-04 20:51:14 +00:00
2bba37309b [inductor] runtime estimations disable use_nccl_estimator by default (#166973)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/166973
Approved by: https://github.com/eellison, https://github.com/jathu
2025-11-04 20:48:22 +00:00
b4e4ee81d3 Update triton to 3.5.1 release (#166968)
This includes sm103 https://github.com/triton-lang/triton/pull/8485 fix

Pull Request resolved: https://github.com/pytorch/pytorch/pull/166968
Approved by: https://github.com/Lucaskabela, https://github.com/njriasan
2025-11-04 20:34:13 +00:00
3283eaa5ba Upload test stats for trunk/sha tag (#166916)
Noticed that workflow runs for `trunk/{sha}` tags (issued by autorevert) don't populate the test_run_s3 ClickHouse table.

This PR addresses this by changing the gate condition for uploading test stats.

See https://github.com/pytorch/pytorch/actions/runs/19054297956/job/54421254448#step:8:23
as evidence that HEAD_BRANCH is correctly populated for trunk tags.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/166916
Approved by: https://github.com/huydhn, https://github.com/clee2000
2025-11-04 20:33:56 +00:00
397d9fe2ae [inductor] coordesc not tune XBLOCK for mix-order-reduction (#166669)
For mix-order reduction, we currently force XBLOCK to be 1 to simplify codegen. Don't tune it in CDT.

Differential Revision: [D86224689](https://our.internmc.facebook.com/intern/diff/D86224689)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/166669
Approved by: https://github.com/jansel, https://github.com/mlazos, https://github.com/eellison, https://github.com/v0i0
2025-11-04 20:27:07 +00:00
d77c24caac Revert "[Inductor][Grouped Gemm] Add Blackwell CuTeDSL Kernel (#165036)"
This reverts commit 0e1a88904f4a5e30634b196678b56e1d6ec074f5.

Reverted https://github.com/pytorch/pytorch/pull/165036 on behalf of https://github.com/atalman due to regressed vllm signal: [GH job link](https://github.com/pytorch/pytorch/actions/runs/19059329909/job/54439919668) [HUD commit link](0e1a88904f) ([comment](https://github.com/pytorch/pytorch/pull/165036#issuecomment-3487846555))
2025-11-04 20:13:33 +00:00
cef98ae5cb [aotd] Compiled saved tensor hooks context (#166887)
Draft to expose compiled saved tensor hook context to selectively apply them.
Exposing node, fw_graph, bw_graph.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/166887
Approved by: https://github.com/bdhirsh
2025-11-04 20:07:00 +00:00
52ea135f77 [BE] Delete Python-3.9 stdlib definitions from torch.package (#166768)
And simplify the entire function to just assert and return

Pull Request resolved: https://github.com/pytorch/pytorch/pull/166768
Approved by: https://github.com/cyyever, https://github.com/atalman
2025-11-04 19:33:14 +00:00
a5f3035aaf More pyrefly local errors (#166976)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/166976
Approved by: https://github.com/maggiemoss, https://github.com/Skylion007
2025-11-04 18:51:35 +00:00
1d3f5e19da [cuDNN] Smoke-test runtime cuDNN version matches compile time version in CI (#165922)
Fix and regression test for https://github.com/pytorch/pytorch/issues/165801

Pull Request resolved: https://github.com/pytorch/pytorch/pull/165922
Approved by: https://github.com/malfet, https://github.com/atalman, https://github.com/Skylion007, https://github.com/drisspg

Co-authored-by: Nikita Shulga <2453524+malfet@users.noreply.github.com>
Co-authored-by: Andrey Talman <atalman@fb.com>
2025-11-04 18:46:43 +00:00
496277a8ff [ROCm][CI] Lower runner check gpu count for distributed jobs (#166961)
This is a PR to temporarily relieve the queueing that is caused by an mi250 node outage. See this ticket for more information:
https://github.com/pytorch/pytorch/issues/166866

It relaxes the GPU count check to allow distributed jobs to run on 2-GPU runners

Pull Request resolved: https://github.com/pytorch/pytorch/pull/166961
Approved by: https://github.com/jeffdaily
2025-11-04 18:44:21 +00:00
53f75cd5ba Fixed some syntax errors in SECURITY.md file. (#166718)
Fixed some syntax errors in the SECURITY.md file, including PyTorch capitalization problems, some grammatical inconsistencies, etc.
Fixes #ISSUE_NUMBER

Pull Request resolved: https://github.com/pytorch/pytorch/pull/166718
Approved by: https://github.com/mikaylagawarecki
2025-11-04 18:18:38 +00:00
527b1109a8 Delete deprecated fp32 precision warnings (#166956)
The deprecation warning led to warning spamming in PyTorch APIs, like
torch.compile. This is not how a deprecation warning should go: if we
add a deprecation warning, we'd better update our built-in APIs to
prevent warning spam.
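As a general illustration only (a minimal sketch of a warn-once pattern, not the code this PR touches): a module-level flag or `warnings.simplefilter("once", ...)` keeps a deprecation notice from firing on every call.

```python
import warnings

_warned = False

def set_fp32_precision_legacy(value):
    """Hypothetical legacy setter: warns about deprecation at most once per process."""
    global _warned
    if not _warned:
        warnings.warn(
            "this fp32 precision API is deprecated; use the newer precision API instead",
            FutureWarning,
            stacklevel=2,
        )
        _warned = True
    # ... apply the setting here ...
    return value
```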
Pull Request resolved: https://github.com/pytorch/pytorch/pull/166956
Approved by: https://github.com/albanD
2025-11-04 17:50:04 +00:00
clr
3144713325 subproc_pool: Add support for enabling quiesce via a timer (#166467)
This adds the capability to subproc pool to enable quiesce via a timer

Pull Request resolved: https://github.com/pytorch/pytorch/pull/166467
Approved by: https://github.com/masnesral
2025-11-04 17:37:41 +00:00
eefa16342c [Inductor] addmm with bias -> unfuse bias if there is a pointwise/reduction consumer (#166165)
Prefer the unfused addmm when there is at least one elementwise/reduction consumer.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/166165
Approved by: https://github.com/eellison
2025-11-04 17:23:04 +00:00
d02f68f484 [BE] Use [[maybe_unused]] (#166865)
Instead of the `(void) foo; // Unused parameter` trick, since `[[maybe_unused]]` is a C++17 standard feature

Will replace further repetitions of the same pattern soon after
Pull Request resolved: https://github.com/pytorch/pytorch/pull/166865
Approved by: https://github.com/mikaylagawarecki, https://github.com/Skylion007, https://github.com/janeyx99
2025-11-04 17:08:28 +00:00
68eb55c4b2 Add model code stack trace to cuda.memory._snapshot (#166676)
We store a mapping between generated fx graph code and original model code stack trace in `fx.traceback._FX_METADATA_REGISTRY`. And we do a post-processing on the memory snapshot to append the original model stack trace information.

To achieve this, the biggest change we had to make in `aot_eager` mode is to give each generated fx graph a unique stack trace, i.e. it cannot just be `<eval_with_key>`. We set co_filename to **pretend** that the code comes from the `co_filename` file. Now instead of `<eval_with_key>` in the stack trace, we get something like `fx_generated_3a4b5c6d7e8f9a0.py`.

`augment_with_fx_traces` arg is added to `torch.cuda.memory._snapshot` and `_dump_snapshot`. When the arg is set to True, a post-processing will run to populate the original model stack trace to the snapshot frames.

The new behavior of GraphModule can be controlled by `TORCH_ENRICH_RPOFILER_STACK_TRACE` or `_dynamo.config.enrich_profiler_metadata=True`.

Alternative:

Instead of setting co_filename, we can also do it like below:
Note that if we do it this way, we will need to dump the file to make the graph module torch-scriptable. TorchScript requires source access in order to carry out compilation, so we need to make sure original .py files are available.
```
        key = filename
        globals_copy = globals.copy()
        globals_copy["__file__"] = key
        globals_copy["__name__"] = key
        linecache.lazycache(key, globals_copy)
        exec(compile(src, key, "exec"), globals)
```

Other changes:

- Update `MemoryViz.js` to display fx node information and original model code if they exist

```
python test/test_fx.py -k test_lineno_map
python test/test_fx.py -k test_custom_traceback_raised
python test/test_public_bindings.py
python test/test_cuda.py -k test_fx_memory
python test/test_fx.py -k test_informative_co_filename
python test/test_fx.py -k test_autowrap_functions
python test/dynamo/test_utils.py -k test_inductor_provenance
```

```python
# Profile with memory snapshot
torch.cuda.memory._record_memory_history()

with  torch._dynamo.config.patch("enrich_profiler_stack_trace", True):
    compiled = torch.compile(mod, backend="aot_eager", fullgraph=True)
    result = compiled(torch.randn(10, 10, device="cuda:0"))

torch.cuda.memory._dump_snapshot("memory_snapshot.pickle", augment_with_fx_traces=True)
torch.cuda.memory._record_memory_history(enabled=None)
```

<img width="913" height="711" alt="Screenshot 2025-10-30 at 10 40 44 AM" src="https://github.com/user-attachments/assets/8d7a1833-f98d-4756-b666-1d63ab57b27b" />

Pull Request resolved: https://github.com/pytorch/pytorch/pull/166676
Approved by: https://github.com/albanD, https://github.com/ezyang
2025-11-04 17:01:02 +00:00
8d4b8ab430 [ez] Print some more test timing info in the logs (#166447)
You can just subtract timestamps, but this makes it easier
Pull Request resolved: https://github.com/pytorch/pytorch/pull/166447
Approved by: https://github.com/Skylion007
2025-11-04 16:45:22 +00:00
afd50bdd29 [CI] Use smaller amx + avx2 runners for inductor test? (#164989)
Results from CI:
No failures but generally takes longer, maybe ~20% increase in time?
But the smaller runner is ~25% of the cost of the current runner, so in terms of cost this is a decrease

If the 20% is too much, we can try the 4x larger runners, which are about half the cost of the current runner, so it would probably still result in cost savings with hopefully less impact to time

Pull Request resolved: https://github.com/pytorch/pytorch/pull/164989
Approved by: https://github.com/BoyuanFeng, https://github.com/huydhn
2025-11-04 16:43:06 +00:00
56dfd4c74b Add CUDA MXFP4 scaled mm support via FBGEMM (#166526)
Summary:

* Pull in `f4f4bf16` from FBGemm to provide MXFP4 support for CUDA
* Add testing

Signed-off-by: Simon Layton <simonlayton@meta.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/166526
Approved by: https://github.com/drisspg, https://github.com/ngimel
2025-11-04 15:53:16 +00:00
24db5c4451 [inductor] do not hard fail on FakePG with nccl estimator (#166869)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/166869
Approved by: https://github.com/eellison
ghstack dependencies: #166521
2025-11-04 15:22:38 +00:00
cc8bfd1206 Docker release build: Use 13.0.0 nvidia docker (#166904)
Forward fix for failing Docker release builds
Related to: https://github.com/pytorch/pytorch/issues/166897

Nightly Docker build failure https://github.com/pytorch/pytorch/actions/runs/18900508440/job/53946606434
Due to missing base image:
```
ERROR: failed to build: failed to solve: docker.io/nvidia/cuda:13.0.2-devel-ubuntu22.04: not found
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/166904
Approved by: https://github.com/tinglvv, https://github.com/malfet
2025-11-04 13:58:10 +00:00
c45b156605 Fix DeepSeek scaling tensor handling (#166752)
Summary:

cuBlasLt enforces size/stride requirements for 1x128 and 128x128 blockwise scaling
kernels, some of which weren't being handled, causing silent incorrect
answers especially for 128x128 scaling cases.

cuBlasLt enforces ([docs](https://docs.nvidia.com/cuda/cublas/#scaling-factors-layouts)) for deepseek-style
scaling, for `A: MxN`, `B: KxN` you have the following:

```Py
L = K // 128
L4 = round_up(L, 4)

1x128 x 128x128:
* A_scale: [M, K // 128], stride: [1, M]
* B_scale: [L4, N // 128], stride: [1, L4]

128x128 x 1x128:
* A_scale: [L4, M // 128], stride: [1, L4]
* B_scale: [N, K // 128], stride: [1, N]

1x128 x 1x128:
* A_scale: [M, K // 128], stride: [1, M]
* B_scale: [N, K // 128], stride: [1, N]
```

Notable here is the `L4` term, which means that we must round up to the nearest multiple of 4 blocks
in the `K` dimension. This wasn't enforced previously, and caused silent wrong answers
where `(K // 128) % 4 != 0`.
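A quick worked example of the rounding requirement described above (illustrative values only):

```python
def round_up(x, multiple):
    return ((x + multiple - 1) // multiple) * multiple

M, N, K = 512, 4096, 1920   # K // 128 == 15, which is not a multiple of 4

L = K // 128                # 15 scale blocks along K
L4 = round_up(L, 4)         # 16 -> the 128x128 scale tensor must be padded to 16 rows

# 1x128 x 128x128 case, following the layout rules above
A_scale_shape = (M, K // 128)    # (512, 15), stride (1, M)
B_scale_shape = (L4, N // 128)   # (16, 32), stride (1, L4) -- note L4, not L

print(L, L4, A_scale_shape, B_scale_shape)
```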

@vkuzo

Signed-off-by: Simon Layton <simonlayton@meta.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/166752
Approved by: https://github.com/drisspg, https://github.com/vkuzo
2025-11-04 13:32:24 +00:00
8fff7e36b4 [xpu][test] Add UT for expandable segments (#166495)
# Motivation
This PR aims to reuse some UT to validate the expandable segment feature.

# Additional Context
Currently, the failure is related to the internal tracker `GSD-11403`; we can pick up the fix when upgrading the driver to `ci-neo-master-034630` or greater.
TODO: add conv and gemm tests to this test case when upgrading the driver.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/166495
Approved by: https://github.com/albanD, https://github.com/EikanWang, https://github.com/gujinghui
ghstack dependencies: #166299, #166292, #166424
2025-11-04 08:01:35 +00:00
82fa2aa269 DTensor: Fix trivial as_strided case, add alias support (#166867)
Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/166867
Approved by: https://github.com/albanD
ghstack dependencies: #166868
2025-11-04 07:18:32 +00:00
09e0285608 [xpu][feature][inductor] Enable decompose_mm_pass and UT on Intel GPU (#166613)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/166613
Approved by: https://github.com/hl475
2025-11-04 06:58:05 +00:00
d980d8dc79 [dynamo] Implement __sym_float__ for SymBool to fix multiplication TypeError (#165264)
Fixes #164684

### Description

Symbolic tracing fails during multiplication between a `SymBool` and a `Tensor`. This scenario is triggered when `.item()` is called on a 0-dim boolean tensor within a `torch.compile` region. In compile mode, this yields a `SymBool`, and the subsequent `SymBool * FakeTensor` operation is unsupported, leading to a `TypeError` or a data-dependent `UserError`.

### Solution

This PR addresses the issue at the type-conversion level, as suggested by reviewers.

The root cause of the TypeError is that torch.sym_float() (which is called by _maybe_convert_to_dtype during type promotion for aten.mul) lacks a conversion path for SymBool and incorrectly falls back to builtins.float(SymBool).

This fix addresses this by implementing the __sym_float__(self) method within the SymBool class (defined in torch/__init__.py).

The torch.sym_float(a) utility function is already designed to check for hasattr(a, "__sym_float__") before falling back to builtins.float(). By adding this method, SymBool instances now correctly advertise their ability to be cast to SymFloat. The new method implementation leverages self.node.sym_float() to correctly convert the symbolic boolean value to its symbolic float representation (0.0 or 1.0), resolving the TypeError at its source.

This approach is more fundamental than modifying a specific operation in builtin.py and ensures SymBool can be correctly promoted to SymFloat in any operation, while still preserving its boolean nature for control flow operations like guard_or_false (which is verified by a new test case).
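A minimal sketch of the conversion protocol relied on here, with an illustrative stand-in class rather than PyTorch's actual SymBool/SymFloat:

```python
def sym_float(a):
    # Mirrors the dispatch described above: prefer __sym_float__ when present,
    # otherwise fall back to builtins.float().
    if hasattr(a, "__sym_float__"):
        return a.__sym_float__()
    return float(a)

class FakeSymBool:
    """Illustrative stand-in for a symbolic boolean."""
    def __init__(self, value):
        self.value = value

    def __sym_float__(self):
        # A symbolic bool promotes to 0.0 / 1.0, analogous to self.node.sym_float()
        return 1.0 if self.value else 0.0

print(sym_float(FakeSymBool(True)))   # 1.0
print(sym_float(FakeSymBool(False)))  # 0.0
print(sym_float(True))                # a plain bool still goes through float()
```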

### Verification

1.  **Bug Reproduced**: The initial `UserError: Could not guard on data-dependent expression` was successfully reproduced with the script from the issue. As shown below
<img width="1369" height="945" alt="Screenshot 2025-10-13 at 10 29 05" src="https://github.com/user-attachments/assets/8daa4555-3347-4af5-906a-02150b8df9d1" />

2.  **Fix Validated**: After applying the code changes, the same script now runs to completion, printing ` eager success` and ` compile success`. As shown below
<img width="1228" height="82" alt="Screenshot 2025-10-13 at 10 29 21" src="https://github.com/user-attachments/assets/94c4f143-b898-4dda-9bff-0ad5450a30fa" />

3.  Added a new test class `DynamoOpPromotionTests` to `test/dynamo/test_misc.py` with three new test cases:
    1. `test_symbool_tensor_mul_does_not_fail`: verifies that the original bug report code (with `.item()` + `*`) no longer raises an error when compiled.
    2. `test_symbool_guard_or_false`: verifies that this fix does not cause a regression for `guard_or_false(SymBool)` (the concern raised by reviewers).
    3. `test_symbool_tensor_mul`: verifies the behavior of `Tensor(bool) * Tensor(float)` (without `.item()`) for completeness.

    All new tests were added and pass locally.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/165264
Approved by: https://github.com/laithsakka, https://github.com/Lucaskabela
2025-11-04 06:33:20 +00:00
c7d00de115 [xpu][fix] Fix XPU oneDNN memory query bug: pointer to array (#166830)
# Motivation

I believe this is a bug; here's why:
In [dnnl_common_types.h](98132c4908/include/oneapi/dnnl/dnnl_common_types.h (L116-L125)), `md_padded_dims` is defined as a pointer to an `int64_t[12]` array;
We can confirm this from the implementation in [memory_desc.cpp](98132c4908/src/common/memory_desc.cpp (L746-L748)) where the member indeed points to an internal array.

# Solution

Therefore, when accessing `md_padded_dims`, we should first dereference the pointer and then index into it; using it directly without dereferencing would corrupt memory.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/166830
Approved by: https://github.com/EikanWang
2025-11-04 06:12:40 +00:00
d3cf90ada5 Revert "[inductor] require shape in TritonCSEVariable (#162275)"
This reverts commit c21868b4359586550b12e1d9102283c792f45dff.

Reverted https://github.com/pytorch/pytorch/pull/162275 on behalf of https://github.com/izaitsevfb due to breaking test_rms_norm_bwd_float32_split_reductions_True_shape2 ([comment](https://github.com/pytorch/pytorch/pull/162275#issuecomment-3484049109))
2025-11-04 06:06:18 +00:00
0e1a88904f [Inductor][Grouped Gemm] Add Blackwell CuTeDSL Kernel (#165036)
Make sure you're on cutlass 4.2.0+

Test Plan:
Tritonbench(oss):
`clear; CUDA_VISIBLE_DEVICES=2 TRITON_PRINT_AUTOTUNING=1 TRITON_ALWAYS_COMPILE=1 TORCH_LOGS=+inductor TORCHINDUCTOR_FORCE_DISABLE_CACHES=1 TORCHINDUCTOR_MAX_AUTOTUNE_GEMM=1 python run.py --op grouped_gemm --only aten_grouped_mm,preprocessed_pt2_triton_grouped_mm --precision bf16  --num-inputs 1 --metrics tflops,accuracy`

Unit Tests(oss):
`clear; python test/inductor/test_cutedsl_grouped_mm.py`

Differential Revision: D82010227

Pull Request resolved: https://github.com/pytorch/pytorch/pull/165036
Approved by: https://github.com/alexsamardzic, https://github.com/drisspg, https://github.com/mlazos
2025-11-04 05:58:58 +00:00
3232caa078 [XPU][Fix] Register convolution_overrideable for flops count (#166839)
Fixes #166838
1. Register the `convolution_overrideable` key for the flop counter. CUDA relies on keys such as `cudnn_convolution`; for devices like `XPU`, dispatch falls back to `convolution_overrideable`. Without the correct registration, the flop counter will silently return 0 for XPU (see the sketch after this list) in line:
e1d011d6eb/torch/_inductor/analysis/profile_analysis.py (L178-L179)

2. Enable the tests when enabling the XPU on `test_analysis.py`.
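The sketch referenced in item 1, using the public flop counter (the `"xpu"` device string is an assumption and needs an XPU build; on CPU this simply shows the expected non-zero count):

```python
import torch
from torch.utils.flop_counter import FlopCounterMode

# Without a flop formula registered for the kernel a backend actually dispatches
# to (e.g. convolution_overrideable on XPU), the counter silently reports 0.
device = "cpu"  # swap for "xpu" on an XPU build to reproduce the issue above
conv = torch.nn.Conv2d(3, 8, 3).to(device)
x = torch.randn(1, 3, 32, 32, device=device)

with FlopCounterMode(display=False) as counter:
    conv(x)

print(counter.get_total_flops())  # non-zero once a formula is registered for the dispatched op
```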

Pull Request resolved: https://github.com/pytorch/pytorch/pull/166839
Approved by: https://github.com/guangyey, https://github.com/EikanWang, https://github.com/jansel
2025-11-04 05:56:29 +00:00
a6c6acea9d [11/N] Apply ruff UP035 rule (#166225)
This PR continues to apply ruff UP035 rule to inductor code. ruff UP035 rule aims to use Python 3.10 syntax and libraries.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/166225
Approved by: https://github.com/aorenste
2025-11-04 04:53:40 +00:00
55be1cc739 [dynamo, 3.14] add explicit SymFloat int conversion (#166902)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/166902
Approved by: https://github.com/malfet, https://github.com/pianpwk
ghstack dependencies: #166757, #166894, #166895
2025-11-04 04:38:03 +00:00
344cebda52 [dynamo, 3.14] disable cpython dynamo unittests if 3.14 (#166895)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/166895
Approved by: https://github.com/guilhermeleobas
ghstack dependencies: #166757, #166894
2025-11-04 04:38:03 +00:00
ba72c6b981 [dynamo, 3.14] fix dynamo error message test for 3.14 (#166894)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/166894
Approved by: https://github.com/malfet
ghstack dependencies: #166757
2025-11-04 04:38:03 +00:00
888efcc453 [dynamo, 3.14] support tracing type.__dict__[__annotations__].__get__ to trace through typing.get_type_hints (#166757)
This is covered by `test_get_type_hints` in test/dynamo/test_repros.py

Pull Request resolved: https://github.com/pytorch/pytorch/pull/166757
Approved by: https://github.com/Lucaskabela
2025-11-04 04:38:03 +00:00
24aa9a2ef7 [ROCm][CI] Add distributed testing back to trunk.yml (#166915)
Adding distributed testing back to trunk since we have been observing [reasonable queueing](https://hud.pytorch.org/queue_time_analysis?dateRange=30&startDate=2025-10-05T01%3A44%3A55.924Z&endDate=2025-11-04T01%3A44%3A55.925Z&granularity=week&chartType=bar&repos=pytorch%2Fpytorch&category=machine_type&machineTypes=linux.rocm.gpu.gfx942.1&items=linux.rocm.gpu.gfx942.1) based on current MI3xx capacity.

Partially addresses https://github.com/pytorch/pytorch/issues/166108.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/166915
Approved by: https://github.com/jeffdaily
2025-11-04 04:29:29 +00:00
f70faf2b9a [xpu][feature] Introduce PeerToPeerAccess API for XPU (#166424)
# Motivation
This PR introduces support for peer-to-peer (P2P) access between devices, including querying and enabling P2P connections between two devices.
It supports two categories of allocations:
- Regular allocations;
- Expandable segment allocations.

# Additional Context
The follow-up is that we should use this feature to optimize our copy kernel when P2P is supported.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/166424
Approved by: https://github.com/gujinghui, https://github.com/albanD
ghstack dependencies: #166299, #166292
2025-11-04 04:03:28 +00:00
167e64ba1a [xpu][feature] Support expandable segment feature for XPU (#166292)
# Motivation
This PR intends to add expandable segment feature support on XPU. This will help
- Reduce memory fragmentation;
- Gradually map physical pages into virtual address space as needed.

# Additional Context
The traditional caching allocator frequently allocates and frees device memory blocks. However, over time, with varying tensor size, the device address space becomes fragmented. Even when there's enough total free memory, a lack of contiguous space can cause large allocations to fail.
The **expandable segment** feature addresses this by dynamically extending physical memory within a reserved virtual address range, reducing fragmentation and minimizing reallocation overhead.
The potential drawbacks are
- Virtual memory overhead;
- Potential page mapping overhead;
- Increased complexity.
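For reference, the CUDA caching allocator exposes the analogous feature through an allocator config knob; the sketch below assumes the XPU allocator follows the same pattern (the exact XPU knob name is not confirmed by this PR):

```python
import os

# CUDA spelling of the knob; it must be set before the first allocation,
# so set it before importing torch in a fresh process.
os.environ.setdefault("PYTORCH_CUDA_ALLOC_CONF", "expandable_segments:True")

import torch

if torch.cuda.is_available():
    # Allocations are now served from expandable segments that grow in place.
    x = torch.randn(1024, 1024, device="cuda")
    print(torch.cuda.memory_allocated())
```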

Pull Request resolved: https://github.com/pytorch/pytorch/pull/166292
Approved by: https://github.com/albanD, https://github.com/EikanWang, https://github.com/gujinghui
ghstack dependencies: #166299
2025-11-04 04:03:28 +00:00
875b18d53c [xpu][feature] Introduce ExpandableSegment for XPU (#166299)
# Motivation
This PR intends to add the `ExpandableSegment` struct, which helps support the expandable segment feature. I split it into a separate PR to facilitate code review.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/166299
Approved by: https://github.com/EikanWang, https://github.com/albanD, https://github.com/gujinghui
2025-11-04 04:03:28 +00:00
eec3749c44 [DebugMode] .fwd_stack_trace for autograd bwd ops (#166842)
In #166440, I didn't realize you could turn on anomaly mode while disabling NaN checks for these stacks. Adding them to `debug_mode.operators[*].fwd_stack_trace`.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/166842
Approved by: https://github.com/yushangdi, https://github.com/mikaylagawarecki
2025-11-04 03:28:43 +00:00
40133fe966 Fix MSCV C++ compilation error of pycore_stackref.h header (#165686)
Wraps the header in a C file and compiles it using a C compiler, which should support designated initializers

Fix issue #160647

Pull Request resolved: https://github.com/pytorch/pytorch/pull/165686
Approved by: https://github.com/williamwen42
2025-11-04 02:51:16 +00:00
f288433d3e [dynamo] Raise on as_python_constant error on getattr (#166909)
This ensures that we graph break at the right time, leading to the right
stack trace.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/166909
Approved by: https://github.com/tugsbayasgalan
2025-11-04 02:45:59 +00:00
864633fca0 [xpu][test] Enable test_fxir_backend tests for XPU (#166493)
This PR enables `test_fxir_backend.py`'s formerly skipped XPU tests. No additional changes are needed for the features.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/166493
Approved by: https://github.com/angelayi, https://github.com/EikanWang
2025-11-04 02:14:46 +00:00
c21868b435 [inductor] require shape in TritonCSEVariable (#162275)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/162275
Approved by: https://github.com/mlazos
ghstack dependencies: #164158
2025-11-04 02:13:41 +00:00
a0a8eca01a Fixes torch.compile(nn.ModuleList()) changes bool() behavior (#159208)
Fixes #159139
## The Cause

The bug occurs because the `OptimizedModule` wrapper in `torch._dynamo.eval_frame` doesn't implement the `__len__` method. This causes Python's `bool()` check to fall back to the default object truthiness (always `True`) instead of correctly evaluating containers with `len() == 0` as `False`.
## The Fix

A very easy fix: I added the `__len__` method to the `OptimizedModule` class in `torch._dynamo.eval_frame` to delegate the call to the original module:
```python
def __len__(self):
    """
    Proxy the len() call to the original module to fix truthiness checks.
    """
    return len(self._orig_mod)
```
This successfully fixes the issue; the script now works as expected.
## Reproduction Script
```python
import torch
import torch.nn as nn

# Create an empty nn.ModuleList
original = nn.ModuleList()

# Compile it using torch.compile
compiled = torch.compile(original)

# Compare their boolean evaluations
print(f"bool(original): {bool(original)}")
print(f"bool(compiled): {bool(compiled)}")

# Trigger failure if they differ
assert bool(original) == bool(compiled), "BUG: truthiness behavior mismatch after compilation"
```
## Output

bool(original): False
bool(compiled): False

Pull Request resolved: https://github.com/pytorch/pytorch/pull/159208
Approved by: https://github.com/Lucaskabela

Co-authored-by: pushkar-hue <pushkarsharma.rtm@gmail.com>
Co-authored-by: Lucas Kabela <lucasakabela@gmail.com>
2025-11-04 02:12:10 +00:00
0958f307d9 Add _heapq polyfill (#161093)
----

* Redirect `_heapq.*` functions to the Python implementation (see the sketch after this list)
* Handle TypeError in PolyfilledFunctionVariable to raise observed exceptions
* Implement `__next__` method in IteratorVariable class
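A rough illustration of what "redirect to the Python implementation" means for one `_heapq` entry point (a standalone sketch, not the actual Dynamo polyfill machinery):

```python
import heapq

def heappush_py(heap, item):
    """Pure-Python equivalent of _heapq.heappush that a tracer can step through."""
    heap.append(item)
    pos = len(heap) - 1
    # Sift the new item toward the root to restore the heap invariant.
    while pos > 0:
        parent = (pos - 1) >> 1
        if heap[pos] < heap[parent]:
            heap[pos], heap[parent] = heap[parent], heap[pos]
            pos = parent
        else:
            break

a, b = [], []
for v in [5, 1, 4, 2, 3]:
    heapq.heappush(a, v)  # C-accelerated _heapq
    heappush_py(b, v)     # traceable Python equivalent
assert a == b
print(a)
```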

Pull Request resolved: https://github.com/pytorch/pytorch/pull/161093
Approved by: https://github.com/Lucaskabela
2025-11-04 02:11:33 +00:00
7551507c41 [BE][Typing][Dynamo] Type torch/_dynamo/variables/builtin.py (#166745)
Provides type coverage to torch/_dynamo/variables/builtin.py

### Coverage report:
`mypy torch/_dynamo/variables/builtin.py --linecount-report /tmp/coverage_log`
Compare before to after - we go from 2213 lines and 64 funcs covered to 3212 lines and 85 funcs covered

Pull Request resolved: https://github.com/pytorch/pytorch/pull/166745
Approved by: https://github.com/williamwen42
2025-11-04 01:33:10 +00:00
f92834d477 Fix unused assignments (#166791)
This PR cleans up unused assignments.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/166791
Approved by: https://github.com/xmfan
2025-11-04 01:07:19 +00:00
e1fc01bef8 Enable clang-tidy on some excluded headers (#166835)
This PR enables clang-tidy on some excluded headers.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/166835
Approved by: https://github.com/Skylion007
2025-11-04 00:37:32 +00:00
22a745737a Remove ifndef C10_MOBILE around aoti_torch_abi_version impl (#166882)
See if after the headeronly migration the mobile build would still fail.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/166882
Approved by: https://github.com/mikaylagawarecki
2025-11-04 00:37:22 +00:00
ee708ea96c fix test_type_hints (#163150)
Fixes #163149

### Summary:
Fixes mypy type checking failures in `test_type_hints` by consolidating typing imports and eliminating duplicate/conflicting import patterns that caused mypy to fail resolving type annotations.

### Impact:

- `test_type_hints` works fine now
- module: tests

Pull Request resolved: https://github.com/pytorch/pytorch/pull/163150
Approved by: https://github.com/Skylion007
2025-11-04 00:29:22 +00:00
64819e3701 [Pytorch] Improve conversion from bf16 on aarch64/NEON (#166880)
Summary:
Conversion from/to bfloat16 was not getting covered by the conversion templates, because these used `bfloat16_t` as the data type instead of the custom `c10::BFloat16`.

Conversion by casting from/to `bfloat16_t` is broken in clang-[17, 20] and fixed in clang-21. Because PyTorch does not currently have CI running binaries compiled with clang-21, we won't implement this approach for now.

We are currently only adding conversion from bfloat16, as it can be implemented by zero-extending into a 4-byte float.
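A quick demonstration of the "zero-extend into a 4-byte float" idea in plain Python (bit-level sketch only; the PR itself does this with NEON intrinsics):

```python
import struct

def bf16_bits_to_float(bits16: int) -> float:
    """Interpret a 16-bit bfloat16 pattern by placing it in the upper half of a
    32-bit float and zero-filling the low 16 mantissa bits."""
    bits32 = (bits16 & 0xFFFF) << 16
    return struct.unpack("<f", struct.pack("<I", bits32))[0]

# 0x3FC0 is the bfloat16 pattern for 1.5, 0xC000 is -2.0
print(bf16_bits_to_float(0x3FC0))  # 1.5
print(bf16_bits_to_float(0xC000))  # -2.0
```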

We've observed the following performance improvements, when compiling with clang-19 and targeting armv9a+sve2:

| Conversion | Before | After | Throughput gain |
|---|---|---|---|
| bfloat16_t -> uint8 | 423.583us | 123.783us | 342% higher |
| bfloat16_t -> int8 | 424.090us | 131.575us | 322% higher |
| bfloat16_t -> int16 | 430.817us | 136.794us | 315% higher |
| bfloat16_t -> int64 | 571.547us | 177.699us | 322% higher |
| bfloat16_t -> double | 459.089us | 165.556us | 277% higher |

Test Plan:
Correctness:

buck2 test mode/opt //caffe2/test:test_ops
buck2 test mode/opt //caffe2/test:torch

Performance:
buck2 run mode/opt //caffe2/benchmarks/operator_benchmark/fb:operator_benchmark_test

Differential Revision: D86119613

Pull Request resolved: https://github.com/pytorch/pytorch/pull/166880
Approved by: https://github.com/mcfi, https://github.com/aditew01
2025-11-04 00:19:42 +00:00
79ff2c66c8 Revert "Fix unused assignments (#166791)"
This reverts commit 5125872aeb737fa20ea2ec08338e9342cba694e7.

Reverted https://github.com/pytorch/pytorch/pull/166791 on behalf of https://github.com/cyyever due to incomplete PR ([comment](https://github.com/pytorch/pytorch/pull/166791#issuecomment-3483116247))
2025-11-04 00:13:50 +00:00
665a411351 Revert "[CUDA] Skip pynvml test on platforms that don't have complete support (#159689)"
This reverts commit 68e31e2f814f9f6a9fb87381367e6b33e17c1c2b.

Reverted https://github.com/pytorch/pytorch/pull/159689 on behalf of https://github.com/izaitsevfb due to breaking internal tests [D86127316] ([comment](https://github.com/pytorch/pytorch/pull/159689#issuecomment-3483095879))
2025-11-04 00:10:14 +00:00
5c89bdb461 [MPS] Fix smooth_l1_loss backward for fp16 (#166687)
- Enable fp16 implementation for CPU, by using `convert_to_float` primitives instead of `convert_bfloat16_float` and extending bf16 implementation to half
- Simplify OpInfo definitions for the backward

Originally PR used `AT_DISPATCH_ALL_TYPES_AND(kHalf,`, but it cause ICE with gcc-13 when compiled with SVE128:
```
/opt/rh/gcc-toolset-13/root/usr/bin/c++ -DAT_BUILD_ARM_VEC256_WITH_SLEEF -DAT_PER_OPERATOR_HEADERS -DBUILD_ONEDNN_GRAPH -DCAFFE2_BUILD_MAIN_LIB -DCAFFE2_PERF_WITH_SVE=1 -DCPUINFO_SUPPORTED_PLATFORM=1 -DENABLE_IPC_FABRIC -DFMT_HEADER_ONLY=1 -DFXDIV_USE_INLINE_ASSEMBLY=0 -DHAVE_MALLOC_USABLE_SIZE=1 -DHAVE_MMAP=1 -DHAVE_POSIX_FALLOCATE=1 -DHAVE_SHM_OPEN=1 -DHAVE_SHM_UNLINK=1 -DKINETO_NAMESPACE=libkineto -DMINIZ_DISABLE_ZIP_READER_CRC32_CHECKS -DNNP_CONVOLUTION_ONLY=0 -DNNP_INFERENCE_ONLY=0 -DONNXIFI_ENABLE_EXT=1 -DONNX_ML=1 -DONNX_NAMESPACE=onnx_torch -DUSE_C10D_GLOO -DUSE_DISTRIBUTED -DUSE_EXTERNAL_MZCRC -DUSE_MIMALLOC -DUSE_RPC -DUSE_TENSORPIPE -DXNN_LOG_LEVEL=0 -D_FILE_OFFSET_BITS=64 -Dtorch_cpu_EXPORTS -I/pytorch/build/aten/src -I/pytorch/aten/src -I/pytorch/build -I/pytorch -I/pytorch/nlohmann -I/pytorch/moodycamel -I/pytorch/third_party/mimalloc/include -I/pytorch/torch/csrc/api -I/pytorch/torch/csrc/api/include -I/pytorch/caffe2/aten/src/TH -I/pytorch/build/caffe2/aten/src/TH -I/pytorch/build/caffe2/aten/src -I/acl -I/acl/include -I/pytorch/build/caffe2/../aten/src -I/pytorch/torch/csrc -I/pytorch/torch/headeronly -I/pytorch/third_party/miniz-3.0.2 -I/pytorch/third_party/kineto/libkineto/include -I/pytorch/third_party/kineto/libkineto/src -I/pytorch/third_party/cpp-httplib -I/pytorch/aten/src/ATen/.. -I/pytorch/third_party/FXdiv/include -I/pytorch/c10/.. -I/pytorch/third_party/pthreadpool/include -I/pytorch/third_party/cpuinfo/include -I/pytorch/aten/src/ATen/native/quantized/cpu/qnnpack/include -I/pytorch/aten/src/ATen/native/quantized/cpu/qnnpack/src -I/pytorch/aten/src/ATen/native/quantized/cpu/qnnpack/deps/clog/include -I/pytorch/third_party/NNPACK/include -I/pytorch/third_party/FP16/include -I/pytorch/third_party/tensorpipe -I/pytorch/build/third_party/tensorpipe -I/pytorch/third_party/tensorpipe/third_party/libnop/include -I/pytorch/third_party/kleidiai -I/pytorch/third_party/fmt/include -I/pytorch/build/third_party/ideep/mkl-dnn/include -I/pytorch/third_party/ideep/mkl-dnn/src/../include -I/pytorch/third_party/onnx -I/pytorch/build/third_party/onnx -I/pytorch/third_party/flatbuffers/include -isystem /pytorch/build/third_party/gloo -isystem /pytorch/cmake/../third_party/gloo -isystem /pytorch/cmake/../third_party/tensorpipe/third_party/libuv/include -isystem /pytorch/third_party/protobuf/src -isystem /opt/OpenBLAS/include -isystem /pytorch/third_party/XNNPACK/include -isystem /pytorch/cmake/../third_party/eigen -isystem /pytorch/third_party/ideep/mkl-dnn/include/oneapi/dnnl -isystem /pytorch/third_party/ideep/include -isystem /pytorch/INTERFACE -isystem /pytorch/third_party/nlohmann/include -isystem /pytorch/third_party/concurrentqueue -isystem /pytorch/build/include -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -DNDEBUG -DUSE_KINETO -DLIBKINETO_NOCUPTI -DLIBKINETO_NOROCTRACER -DLIBKINETO_NOXPUPTI=ON -DUSE_PYTORCH_QNNPACK -DAT_BUILD_ARM_VEC256_WITH_SLEEF -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -O2 -fPIC -DC10_NODEPRECATED -Wall -Wextra -Werror=return-type -Werror=non-virtual-dtor -Werror=range-loop-construct -Werror=bool-operation -Wnarrowing -Wno-missing-field-initializers -Wno-unknown-pragmas -Wno-unused-parameter -Wno-strict-overflow -Wno-strict-aliasing -Wno-stringop-overflow -Wsuggest-override -Wno-psabi -Wno-error=old-style-cast -faligned-new -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Wno-dangling-reference -Wno-error=dangling-reference -Wno-stringop-overflow -DHAVE_SVE_CPU_DEFINITION -DHAVE_SVE256_CPU_DEFINITION 
-DHAVE_ARM_BF16_CPU_DEFINITION -O3 -DNDEBUG -DNDEBUG -fPIC -fdiagnostics-color=always -DTORCH_USE_LIBUV -DCAFFE2_USE_GLOO -D__NEON__ -DBLAS_HAS_SBGEMM -Wall -Wextra -Wdeprecated -Wunused -Wno-unused-parameter -Wno-missing-field-initializers -Wno-array-bounds -Wno-unknown-pragmas -Wno-strict-overflow -Wno-strict-aliasing -Wredundant-move -Wno-interference-size -Wno-maybe-uninitialized -fvisibility=hidden -pthread -fopenmp -O3  -march=armv8-a+sve+bf16 -D__ARM_FEATURE_BF16 -DCPU_CAPABILITY_SVE -msve-vector-bits=256 -DCPU_CAPABILITY=SVE256 -DCPU_CAPABILITY_SVE256 -MD -MT caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/native/cpu/PointwiseOpsKernel.cpp.SVE256.cpp.o -MF caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/native/cpu/PointwiseOpsKernel.cpp.SVE256.cpp.o.d -o caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/native/cpu/PointwiseOpsKernel.cpp.SVE256.cpp.o -c /pytorch/build/aten/src/ATen/native/cpu/PointwiseOpsKernel.cpp.SVE256.cpp
during RTL pass: expand
In file included from /pytorch/aten/src/ATen/native/cpu/PointwiseOpsKernel.cpp:6,
                 from /pytorch/build/aten/src/ATen/native/cpu/PointwiseOpsKernel.cpp.SVE256.cpp:1:
/pytorch/aten/src/ATen/native/cpu/Loops.h: In function ‘void at::native::SVE256::vectorized_loop(char**, int64_t, int64_t, func_t&&, vec_func_t&&) [with func_t = at::native::{anonymous}::smooth_l1_backward_cpu_kernel(at::TensorIterator&, const c10::Scalar&, double)::<lambda()>::<lambda()>::<lambda(scalar_t, scalar_t, scalar_t)>&; vec_func_t = at::native::{anonymous}::smooth_l1_backward_cpu_kernel(at::TensorIterator&, const c10::Scalar&, double)::<lambda()>::<lambda()>::<lambda(at::vec::SVE256::Vectorized<c10::Half>, at::vec::SVE256::Vectorized<c10::Half>, at::vec::SVE256::Vectorized<c10::Half>)>&]’:
/pytorch/aten/src/ATen/native/cpu/Loops.h:200:1: internal compiler error: in expand_insn, at optabs.cc:8185
  200 | vectorized_loop(char** C10_RESTRICT data_, int64_t n, int64_t S, func_t&& op, vec_func_t&& vop) {
      | ^~~~~~~~~~~~~~~
Please submit a full bug report, with preprocessed source.
See <http://bugzilla.redhat.com/bugzilla> for instructions.
Preprocessed source stored into /tmp/ccgYMlTo.out file, please attach this to your bugreport.
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/166687
Approved by: https://github.com/Skylion007
2025-11-03 23:54:54 +00:00
7b64ad906c [FSDP][Replicate] got rid of reshard_after_forward and updated test cases (#166469)
**Summary:** I have gotten rid of `reshard_after_forward` and `shard_placement` as inputs for replicate, as there will be no sharding. I have also updated all the necessary tests.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/166469
Approved by: https://github.com/weifengpy
ghstack dependencies: #166433, #166459
2025-11-03 23:48:18 +00:00
d944279def [FSDP][Replicate] added two replicate overload declarations and changed device_mesh to mesh (#166459)
**Summary:** Just like in fully_shard, I added two overload replicate functions. The `@overload` declarations are necessary because the `@contract` decorator uses `ParamSpec` to capture function parameters, which creates a generic `_ContractFn` protocol signature (`*args: _P.args, **kwargs: _P.kwargs`) that Pyrefly cannot properly type-check when calling the function with explicit keyword arguments. In addition, to make the api cleaner I changed device_mesh input argument to mesh to match fully_shard formatting.

**Test Cases**
1.  pytest test/distributed/_composable/test_replicate_with_fsdp.py
2. pytest test/distributed/_composable/test_replicate_training.py
3. pytest test/distributed/_composable/test_composability/test_pp_composability.py -k test_replicate_pp

Pull Request resolved: https://github.com/pytorch/pytorch/pull/166459
Approved by: https://github.com/weifengpy
ghstack dependencies: #166433
2025-11-03 23:35:21 +00:00
5048e4701d explicitly remove call_mod_node_to_replace after inlining the submodule in `const_fold._inline_module` (#166871)
Summary:
https://github.com/pytorch/pytorch/pull/166609 updated the `is_impure` check so that it now checks ops inside a subgraph to decide whether a `call_module` node is pure or not.

This change of behavior affects dead code elimination, commonly run as `gm.graph.eliminate_dead_code()`. Specifically, dead code elimination will not erase a node that has no users if that node has side effects or is impure. With the above-mentioned PR, dead code elimination no longer eliminates unused subgraphs that contain side-effectful ops.

This affects `const_fold.split_const_subgraph`, what this function does is:
1. split a graph into two submodules, one containing all const ops and one containing non-const ops
2. inline the submodule containing non-const ops back to main graph.
3. run dead code elimination to remove the unused non-const submodule.

With PR #166609, step 3 no longer erases the unused module. As an example, the exported graph
```
 graph():
    %x : [num_users=2] = placeholder[target=x]
    %_guards_fn : [num_users=0] = call_module[target=_guards_fn](args = (%x,), kwargs = {})
    %empty_permuted : [num_users=1] = call_function[target=torch.ops.aten.empty_permuted.default](args = ([5, 10], [0, 1]), kwargs = {device: cpu, pin_memory: False})
    %bernoulli : [num_users=1] = call_function[target=torch.ops.aten.bernoulli.p](args = (%empty_permuted, 0.6), kwargs = {})
    %mul : [num_users=1] = call_function[target=torch.ops.aten.mul.Tensor](args = (%x, %bernoulli), kwargs = {})
    %div : [num_users=1] = call_function[target=torch.ops.aten.div.Tensor](args = (%mul, 0.6), kwargs = {})
    return (div,)
```
After running const_fold, empty_permuted is const-folded, the rest of ops are not, and the main graph looks like
```
graph():
    %x : [num_users=3] = placeholder[target=x]
    %_fx_const_folded_attrs : [num_users=2] = get_attr[target=_FX_CONST_FOLDED_ATTRS]
    %_guards_fn : [num_users=0] = call_module[target=_guards_fn](args = (%x,), kwargs = {})
    %bernoulli_p : [num_users=1] = call_function[target=torch.ops.aten.bernoulli.p](args = (%_fx_const_folded_attrs, 0.6), kwargs = {})
    %mul_tensor : [num_users=1] = call_function[target=torch.ops.aten.mul.Tensor](args = (%x, %bernoulli_p), kwargs = {})
    %div_tensor : [num_users=1] = call_function[target=torch.ops.aten.div.Tensor](args = (%mul_tensor, 0.6), kwargs = {})
    %submod_1 : [num_users=0] = call_module[target=submod_1](args = (%x, %_fx_const_folded_attrs), kwargs = {})
    return (div_tensor,)
```
`submod_1` has already been inlined into the main graph, but the `call_module` node that references it is left dangling and unused.

## Fix
This PR updates the `const_fold._inline_module` function to explicitly remove the unused non-const submodule after its ops have been inlined into the main graph.
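A minimal sketch of this cleanup step using plain `torch.fx` APIs (not the actual const_fold code):

```python
import torch
import torch.fx as fx

class M(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.sub = torch.nn.ReLU()  # stand-in for the inlined, now-unused submodule

    def forward(self, x):
        y = x + 1
        self.sub(x)  # leaves a call_module node with no users after tracing
        return y

gm = fx.symbolic_trace(M())

# Explicitly erase the dangling call_module node instead of relying on
# dead-code elimination (which now treats impure submodules as live).
for node in list(gm.graph.nodes):
    if node.op == "call_module" and node.target == "sub" and not node.users:
        gm.graph.erase_node(node)
gm.graph.lint()
gm.recompile()
print(gm.graph)
```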

Test Plan:
Added a test in `test_fx_const_fold.py`.

The test would have failed before this PR because it yields the above example graph, leaving an unused `call_module[target=submod_1]` op.

With the PR, the module is erased from main graph correctly.

Differential Revision: D86056354

Pull Request resolved: https://github.com/pytorch/pytorch/pull/166871
Approved by: https://github.com/blaine-rister, https://github.com/mlazos
2025-11-03 23:23:10 +00:00
616314cfd5 [FSDP][Replicate] final version integrating 1D device mesh replicate into fsdp (#166433)
**Summary:** I have created a new composable replicate API that is integrated into FSDP's codebase with minimal changes. The key changes: when we use DDPMeshInfo, we use Replicate placements, prevent initial sharding of parameters, and set the world size to 1 to skip all-gathers and reduce-scatters.

**Test Cases**
1. pytest test/distributed/_composable/test_replicate_training.py
2. pytest test_pp_composability.py
3. pytest test_replicate_with_fsdp.py

Pull Request resolved: https://github.com/pytorch/pytorch/pull/166433
Approved by: https://github.com/weifengpy
2025-11-03 23:20:23 +00:00
2b7e4c3ef2 [DCP] Add option to use PrefixStore to create checkpoint background process (#166560)
Summary:
DCP checkpoint background process currently determines the port used for pg via get_free_port().

During checkpoint background process initialization, gloo pg init occasionally times out on the first call but succeeds in a subsequent call.

We hypothesized that the timeouts are related to the port being used, and the solution would be to create the pg with PrefixStore and reuse the master port.

This diff adds the option for checkpoint background process to use PrefixStore with MASTER_ADDR + MASTER_PORT.

The default behavior is unchanged. Enabling the new PrefixStore behavior requires setting "DCP_USE_PREFIX_STORE" env var to "1".
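A hedged sketch of the PrefixStore pattern described above; these are the standard `torch.distributed` store APIs, but the actual DCP wiring differs:

```python
import datetime
import os

import torch.distributed as dist

def init_background_pg(rank: int, world_size: int):
    """Create a gloo pg for the background process by reusing MASTER_ADDR/MASTER_PORT
    through a PrefixStore, instead of grabbing a fresh free port."""
    addr = os.environ["MASTER_ADDR"]
    port = int(os.environ["MASTER_PORT"])
    # This sketch assumes a fresh listener; in the real flow one may already exist.
    tcp_store = dist.TCPStore(addr, port, world_size, is_master=(rank == 0))
    # Namespacing with a prefix keeps the background pg's keys separate from
    # any other pg sharing the same underlying TCPStore.
    store = dist.PrefixStore("dcp_checkpoint_bg/", tcp_store)
    dist.init_process_group(
        backend="gloo",
        store=store,
        rank=rank,
        world_size=world_size,
        timeout=datetime.timedelta(minutes=10),
    )
```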

context:
 https://fb.workplace.com/groups/319878845696681/permalink/1516883985996155/

Differential Revision: D84928180

Pull Request resolved: https://github.com/pytorch/pytorch/pull/166560
Approved by: https://github.com/meetv18
2025-11-03 23:08:12 +00:00
6c98657239 Add some Triton related suppressions that don't show on CI (#166868)
Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/166868
Approved by: https://github.com/maggiemoss, https://github.com/zou3519
2025-11-03 22:54:50 +00:00
86b2d82e84 Revert "[Inductor] addmm with bias -> unfuse bias if there is a pointwise/reduction consumer (#166165)"
This reverts commit 94f2657c4b534136aa8958bc35d44ceac5ccd60c.

Reverted https://github.com/pytorch/pytorch/pull/166165 on behalf of https://github.com/izaitsevfb due to breaks test_LinearAndSoftmax_codegen test ([comment](https://github.com/pytorch/pytorch/pull/166165#issuecomment-3482926991))
2025-11-03 22:52:41 +00:00
eea8ff2d34 Fix torch.full with dynamic tensor fill_value in torch.compile (#166554)
Fixes #166253

## Summary
When `torch.full` is called with a 0-D tensor as `fill_value` inside a `torch.compile`'d function, the value was being incorrectly cached, causing subsequent calls with different values to return the first value.

## Root Cause
The Dynamo handler for `torch.full` was calling `aten._local_scalar_dense` to convert tensor fill_values to Python scalars at compile time, which baked the value into the compiled graph as a constant.

## Solution
Modified the Dynamo handler to decompose `torch.full(size, tensor_fill_value)` into `empty(size).fill_(tensor_fill_value)` when `fill_value` is a `TensorVariable`, keeping the fill value dynamic in the compiled graph.
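A small repro-style example of the behavior targeted by the fix (a sketch based on the description above):

```python
import torch

@torch.compile(backend="eager")
def make_full(fill):
    # fill is a 0-D tensor; with the fix its value stays dynamic in the
    # compiled graph instead of being baked in as a constant.
    return torch.full((2, 2), fill)

print(make_full(torch.tensor(1.0)))  # filled with 1.0
print(make_full(torch.tensor(5.0)))  # filled with 5.0, not a stale cached 1.0
```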

## Testing
Added test case that verifies torch.full works correctly with dynamic tensor fill_values across multiple calls and dtypes.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/166554
Approved by: https://github.com/Lucaskabela
2025-11-03 21:44:10 +00:00
11f73d78c8 [export] Downgrade captured buffers as normal constants. (#166777)
Summary:
make_fx() will register tensor constants as new buffers while tracing a shuffle graph for dynamo graph capture. This breaks the invariant that the resulting graph looks identical to the original eager model in terms of state dict.

So we need to de-register the buffers and set them as plain tensor constants.

Test Plan:
pytest test/export/test_experimental.py

Pull Request resolved: https://github.com/pytorch/pytorch/pull/166777
Approved by: https://github.com/tugsbayasgalan
ghstack dependencies: #166775, #166776
2025-11-03 21:28:42 +00:00
7d1b976146 [export] Make dict_keys_getitem tracable. (#166776)
Summary:
dict_keys_getitem can show up in the bytecode, but it uses `dict.keys()`, which is not fx traceable.

fx.wrap should make it a standalone function in the graph to be invoked later with real inputs.
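Roughly what `fx.wrap` buys here, shown on a toy helper (illustrative only; `dict_keys_getitem` itself lives in Dynamo's bytecode utilities):

```python
import torch
import torch.fx

@torch.fx.wrap
def dict_keys_getitem_demo(d, i):
    """dict.keys() is not symbolically traceable, so keep this call opaque in the graph."""
    return d[list(d.keys())[i]]

class M(torch.nn.Module):
    def forward(self, x, d):
        return x + dict_keys_getitem_demo(d, 0)

gm = torch.fx.symbolic_trace(M())
print(gm.graph)  # shows a call_function node targeting dict_keys_getitem_demo
print(gm(torch.ones(2), {"a": torch.tensor(3.0)}))  # helper runs with real inputs
```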

Test Plan:
pytest test/export/test_experimental.py

Pull Request resolved: https://github.com/pytorch/pytorch/pull/166776
Approved by: https://github.com/jamesjwu
ghstack dependencies: #166775
2025-11-03 21:28:42 +00:00
27cfdd9e77 [export] Return more information from tracing context in graph capture. (#166775)
Summary:
As titled: we should return the entire tracing_context object instead of only fake_mode, since the tracing context contains the full set of information.

Test Plan:
pytest test/export/test_experimental.py

Pull Request resolved: https://github.com/pytorch/pytorch/pull/166775
Approved by: https://github.com/tugsbayasgalan
2025-11-03 21:28:42 +00:00
01d8d8584b [MTIAGraph][Pytorch][2.1/n] Add API to destroy graph C++ instance (#166806)
I missed this API for MTIAGraph in D84457757 (https://github.com/pytorch/pytorch/pull/165963).

Differential Revision: [D86026706](https://our.internmc.facebook.com/intern/diff/D86026706/)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/166806
Approved by: https://github.com/albanD
ghstack dependencies: #166805
2025-11-03 21:11:40 +00:00
b8855e7b0b Add conv ops to operator microbenchmark (#166331)
Adding `conv` (conv1d, conv2d, conv3d) to the list of operator microbenchmarks run in the CI script (`.ci/pytorch/test.sh`), ensuring convolution operators are now benchmarked alongside existing ones.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/166331
Approved by: https://github.com/huydhn, https://github.com/jbschlosser
2025-11-03 20:54:52 +00:00
6725ee89c8 Fix cuda blas build error due to extra && (#166811)
Fixes #166810

Pull Request resolved: https://github.com/pytorch/pytorch/pull/166811
Approved by: https://github.com/slayton58, https://github.com/Skylion007, https://github.com/malfet
2025-11-03 20:35:26 +00:00
3a38ec78e1 [inductor] Expand use of generic benchmark function (#164938)
Use the more generic `Benchmarker.benchmark` function to allow benchmarking other devices that support the required functionality, for example prologue and epilogue fusion can be benchmarked for triton CPU.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/164938
Approved by: https://github.com/nmacchioni, https://github.com/eellison
2025-11-03 20:15:25 +00:00
77b9399d83 [random] Add generator arg to rand*_like APIs (#166160)
Fixes #165865

## What this PR does?

- [x] Add `generator` arg to `rand*_like` APIs (`rand_like()`, `randn_like()`, `randint_like()`).
- [x] Add unit tests for  `rand*_like` APIs
- [x] Add corresponding arg docs
- [x] Refactor `rand*_like()` code in `TensorFactories.cpp`
- [x] Add corresponding and formerly missed items in `VmapModeRegistrations.cpp`

## Example (using `rand_like()`)

```python
gen0 = torch.Generator()
gen1 = torch.Generator()
gen2 = torch.Generator()

gen0.manual_seed(42)
gen1.manual_seed(42)
gen2.manual_seed(2025)

tensor = torch.empty(10)

t0 = torch.rand_like(tensor, generator=gen0)
t1 = torch.rand_like(tensor, generator=gen1)
t2 = torch.rand_like(tensor, generator=gen2)

assert torch.equal(t0, t1)
assert not torch.equal(t2, t0)
assert not torch.equal(t2, t1)
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/166160
Approved by: https://github.com/cyyever, https://github.com/albanD
2025-11-03 19:58:45 +00:00
83cd626365 [opaque_obj_v2] make_fx support (#165005)
By wrapping the python objects with `FakeScriptObject(FakeOpaqueQueue)`, we restrict what users can do with these objects. torch.compile support can be easily enabled by the rest of [this stack](https://github.com/pytorch/pytorch/pull/163936) and the existing support for ScriptObjects.

One thing to note is that by default in functionalization we mark all ops that take in FakeScriptObjects as being effectful. Should this be the case for these custom ops that take in python objs?

Pull Request resolved: https://github.com/pytorch/pytorch/pull/165005
Approved by: https://github.com/zou3519
2025-11-03 19:48:37 +00:00
5125872aeb Fix unused assignments (#166791)
This PR cleans up unused assignments.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/166791
Approved by: https://github.com/xmfan
2025-11-03 19:45:01 +00:00
c10975d2e6 Revert "Avoid DDE in narrow with unbacked start (#166361)"
This reverts commit c76199980d09198964409919335e86cc6e3dc575.

Reverted https://github.com/pytorch/pytorch/pull/166361 on behalf of https://github.com/pytorch-auto-revert due to Reverted automatically by pytorch's autorevert, to avoid this behaviour add the tag autorevert: disable ([comment](https://github.com/pytorch/pytorch/pull/166361#issuecomment-3482194351))
2025-11-03 19:41:07 +00:00
eqy
68e31e2f81 [CUDA] Skip pynvml test on platforms that don't have complete support (#159689)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/159689
Approved by: https://github.com/msaroufim, https://github.com/Skylion007
2025-11-03 19:40:20 +00:00
ee1bc3f0d5 Manylinux ROCm docker images. use devtoolset-13 (#166764)
Update the devtoolset in Manylinux 2.28 rocm builds; devtoolset-11 is too old and does not support compiling with C++20 properly.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/166764
Approved by: https://github.com/sudharssun, https://github.com/jeffdaily
2025-11-03 19:32:33 +00:00
612ead1619 [distributed] Replace assert statements with AssertionError exceptions (#165216)
Replaces 71 assert statements across 11 files in `torch.distributed` with explicit if-checks raising AssertionError to prevent assertions from being disabled with Python -O flag.
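The transformation pattern in brief (a generic before/after sketch, not a specific hunk from this PR):

```python
import torch

# Before: the check is stripped out entirely under `python -O`
def shard(tensor, world_size):
    assert tensor.size(0) % world_size == 0, "tensor not divisible by world size"
    return tensor.chunk(world_size)

# After: the check always runs, regardless of the -O flag
def shard_checked(tensor, world_size):
    if tensor.size(0) % world_size != 0:
        raise AssertionError("tensor not divisible by world size")
    return tensor.chunk(world_size)

print(shard_checked(torch.arange(8), 4))
```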

Fixes #164878

Pull Request resolved: https://github.com/pytorch/pytorch/pull/165216
Approved by: https://github.com/albanD
2025-11-03 19:30:48 +00:00
3af1f7bbf4 [easy][MTIAGraph][Pytorch] clang-format files (#166805)
Per suggestion from the previous PR(https://github.com/pytorch/pytorch/pull/165963), separating clang-format changes.

Differential Revision: [D86031474](https://our.internmc.facebook.com/intern/diff/D86031474/)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/166805
Approved by: https://github.com/Skylion007, https://github.com/albanD
2025-11-03 19:27:09 +00:00
71a2e93547 [cuDNN][SDPA] Check-in test for #166211 (#166570)
Repros without the need for specific tensor data.
Should be passing with cuDNN frontend 1.15.0, which the current `main` has.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/166570
Approved by: https://github.com/atalman

Co-authored-by: Nikita Shulga <2453524+malfet@users.noreply.github.com>
Co-authored-by: Aaron Gokaslan <aaronGokaslan@gmail.com>
2025-11-03 19:21:14 +00:00
c76199980d Avoid DDE in narrow with unbacked start (#166361)
Slice knows how to handle an unbacked start, so we do not need to offset start before calling slice; we can leave it to slice.
The only edge case is when start < 0 and start + length == 0; in that case slice and narrow would deviate,
so for that case we shall pass dim_size instead of start + length.
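A concrete illustration of the edge case called out above, using plain eager tensors (a sketch of the semantics, not the symbolic-shapes code path):

```python
import torch

x = torch.arange(5)
dim_size = x.size(0)

# narrow wraps a negative start: start=-2, length=2 selects elements 3 and 4
print(torch.narrow(x, 0, -2, 2))    # tensor([3, 4])

# a naive slice with end = start + length deviates when start < 0 and start + length == 0
start, length = -2, 2
print(x[start:start + length])      # tensor([]) -- wrong for narrow semantics
print(x[start:dim_size])            # tensor([3, 4]) -- pass dim_size instead
```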

Pull Request resolved: https://github.com/pytorch/pytorch/pull/166361
Approved by: https://github.com/aorenste
2025-11-03 19:13:40 +00:00
e3bd7bd1f4 [FP8] Enable FP16 output support for torch scaled_mm when using CUTLASS on SM90 (#166744)
Summary: NVIDIA uses CUTLASS for row-wise scaling prior to cuBLAS version 12.9. This change enables support for FP16 data type for both bias and output when using CUTLASS.
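A rough usage sketch of a row-wise scaled matmul with an FP16 output (`torch._scaled_mm` is a private API whose signature has shifted across releases, so treat the argument names as assumptions; requires an SM90 GPU):

```python
import torch

M, K, N = 64, 128, 32
a = torch.randn(M, K, device="cuda").to(torch.float8_e4m3fn)
b = torch.randn(N, K, device="cuda").to(torch.float8_e4m3fn).t()  # column-major operand

scale_a = torch.rand(M, 1, device="cuda", dtype=torch.float32)  # per-row scales for a
scale_b = torch.rand(1, N, device="cuda", dtype=torch.float32)  # per-column scales for b

out = torch._scaled_mm(a, b, scale_a=scale_a, scale_b=scale_b, out_dtype=torch.float16)
print(out.dtype, out.shape)  # torch.float16, (64, 32)
```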

Test Plan:
pytest -svv test/test_scaled_matmul_cuda.py

Test results on cuda-12.4:
```
test/test_scaled_matmul_cuda.py::TestFP8MatmulCUDA::test_scaled_mm_vs_emulated_row_wise_bfloat16_cuda PASSED [0.0022s]
test/test_scaled_matmul_cuda.py::TestFP8MatmulCUDA::test_scaled_mm_vs_emulated_row_wise_float16_cuda PASSED [0.0023s]
test/test_scaled_matmul_cuda.py::TestFP8MatmulCUDA::test_scaled_mm_vs_emulated_row_wise_float32_cuda SKIPPED [0.0005s]
======================= 51 passed, 516 skipped in 5.26s ========================
```

Test results on cuda-12.9:
```
test/test_scaled_matmul_cuda.py::TestFP8MatmulCUDA::test_scaled_mm_vs_emulated_row_wise_bfloat16_cuda PASSED [0.0046s]
test/test_scaled_matmul_cuda.py::TestFP8MatmulCUDA::test_scaled_mm_vs_emulated_row_wise_float16_cuda PASSED [0.0040s]
test/test_scaled_matmul_cuda.py::TestFP8MatmulCUDA::test_scaled_mm_vs_emulated_row_wise_float32_cuda PASSED [0.0038s]
======================= 70 passed, 482 skipped in 5.88s ========================
```

Reviewed By: pranavsharma, RandySheriff

Differential Revision: D84169910

Pull Request resolved: https://github.com/pytorch/pytorch/pull/166744
Approved by: https://github.com/slayton58
2025-11-03 19:10:16 +00:00
aa4a8c9b92 [Inductor][Triton][FP8] Support tile-wise (1x128) scaling in Inductor (#165132)
Summary:
Support tile-wise `1x128` scaling in Inductor Triton for FP8 GEMMs, i.e. scaling values along tensors `a` and `b` represent a `1x128` slice of input.

NOTE: Block-wise `128x128` and `1x128` scaling is only supported in CUDA 12.9+; therefore, tile-wise scaling is currently unsupported in `fbcode` (CUDA 12.4). Use OSS PyTorch to run tile-wise scaling (as with deepseek-style scaling).

Test Plan:
Works out-of-the-box with TritonBench:
```
TORCHINDUCTOR_CACHE_DIR=~/personal/cache_dir_inductor CUDA_LAUNCH_BLOCKING=1 TORCH_USE_CUDA_DSA=1 TRITON_PRINT_AUTOTUNING=1 TRITON_ALWAYS_COMPILE=1 TORCH_LOGS=+inductor TORCHINDUCTOR_FORCE_DISABLE_CACHES=1 ENABLE_PERSISTENT_TMA_MATMUL=1 TORCHINDUCTOR_MAX_AUTOTUNE_GEMM=1 buck2 run mode/{opt,inplace} pytorch/tritonbench:run -- --op fp8_gemm --only torch_fp8_gemm,pt2_fp8_gemm --metrics tflops,accuracy --m 256 --n 768 --k 512 --output="/home/jananisriram/personal/random_bench.csv" --scaling-pair=BlockWise1x128,BlockWise1x128 --atol=1e-2 --rtol=0.5
```

Differential Revision: D84025878

Pull Request resolved: https://github.com/pytorch/pytorch/pull/165132
Approved by: https://github.com/eqy, https://github.com/drisspg, https://github.com/njriasan
2025-11-03 18:37:13 +00:00
fa0fd6be13 Revert "[FSDP][Replicate] final version integrating 1D device mesh replicate into fsdp (#166433)"
This reverts commit bcad4f2e68e2a93a2855c1c22f0856fbb7c729e2.

Reverted https://github.com/pytorch/pytorch/pull/166433 on behalf of https://github.com/pytorch-auto-revert due to Reverted automatically by pytorch's autorevert, to avoid this behaviour add the tag autorevert: disable ([comment](https://github.com/pytorch/pytorch/pull/166433#issuecomment-3481929476))
2025-11-03 18:31:20 +00:00
2f3f88f445 Revert "[FSDP][Replicate] added two replicate overload declarations and changed device_mesh to mesh (#166459)"
This reverts commit d67d807270e070bbb873af61ea944ed98b52b9cf.

Reverted https://github.com/pytorch/pytorch/pull/166459 on behalf of https://github.com/pytorch-auto-revert due to Reverted automatically by pytorch's autorevert, to avoid this behaviour add the tag autorevert: disable ([comment](https://github.com/pytorch/pytorch/pull/166433#issuecomment-3481929476))
2025-11-03 18:31:20 +00:00
d67d807270 [FSDP][Replicate] added two replicate overload declarations and changed device_mesh to mesh (#166459)
**Summary:** Just like in fully_shard, I added two overload replicate functions. The `@overload` declarations are necessary because the `@contract` decorator uses `ParamSpec` to capture function parameters, which creates a generic `_ContractFn` protocol signature (`*args: _P.args, **kwargs: _P.kwargs`) that Pyrefly cannot properly type-check when calling the function with explicit keyword arguments. In addition, to make the api cleaner I changed device_mesh input argument to mesh to match fully_shard formatting.

**Test Cases**
1.  pytest test/distributed/_composable/test_replicate_with_fsdp.py
2. pytest test/distributed/_composable/test_replicate_training.py
3. pytest test/distributed/_composable/test_composability/test_pp_composability.py -k test_replicate_pp

Pull Request resolved: https://github.com/pytorch/pytorch/pull/166459
Approved by: https://github.com/weifengpy
ghstack dependencies: #166433
2025-11-03 18:20:07 +00:00
bcad4f2e68 [FSDP][Replicate] final version integrating 1D device mesh replicate into fsdp (#166433)
**Summary:** I have created a new composable replicate API that is integrated into FSDP's codebase with minimal changes. The key changes: when we use DDPMeshInfo, we use Replicate placements, prevent initial sharding of parameters, and set the world size to 1 to skip all-gathers and reduce-scatters.

**Test Cases**
1. pytest test/distributed/_composable/test_replicate_training.py
2. pytest test_pp_composability.py
3. pytest test_replicate_with_fsdp.py

Pull Request resolved: https://github.com/pytorch/pytorch/pull/166433
Approved by: https://github.com/weifengpy
2025-11-03 18:09:27 +00:00
5b17ef30d0 Update docs-build to c7i (#166727)
This updates the docs-build nightly configuration to match other uses of the _linux-build.yml workflow using `runner_prefix` rather than `runner` directly. The default runner defined in _linux-build.yml is the c7i variant so this also updates the runner appropriately.

Relates to pytorch/test-infra#7175. While moving to c7i costs 5% more, CPU-intensive jobs should run roughly 15-20% faster, resulting in a cost reduction of 10-15% for those jobs.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/166727
Approved by: https://github.com/huydhn
2025-11-03 18:02:09 +00:00
7b2992685b Update test jobs in pull workflow to c7i (#165646)
Relates to pytorch/test-infra#7175. While moving to c7i costs 5% more, CPU-intensive jobs should run roughly 15-20% faster, resulting in a cost reduction of 10-15% for those jobs.

This PR updates the following test jobs, which seem to benefit from the newer hardware:

* backwards_compat
* numpy_2_x
* ONNX default
Pull Request resolved: https://github.com/pytorch/pytorch/pull/165646
Approved by: https://github.com/jordanconway, https://github.com/huydhn
2025-11-03 18:00:09 +00:00
f3fa560dec Integrate NVIDIA cuSolver backend into ATen/Linalg (initial implementation for eig/eigval) (#166715)
### Summary

Adds support for NVIDIA’s cuSolver backend to torch.linalg.eig and torch.linalg.eigvals within the ATen/Linalg framework.

### Motivation

Extending PyTorch’s Linalg backends with NVIDIA’s cuSolver enables faster execution of torch.linalg.eig and torch.linalg.eigvals, complementing existing MAGMA and CPU implementations.

The speedup observed on consumer hardware (RTX4070/Ryzen 5700x) is on the order of **2x**, with preliminary testing on HPC hardware (H100, EPYC 9454) suggesting **up to a 10x speedup**.
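
For reference, the user-facing calls are unchanged; on a qualifying CUDA build the cuSolver path is selected internally (see Details below):

```python
# Usage is unchanged by this PR; a build meeting the CUDA requirement described
# under Details picks the cuSolver path internally. Requires a CUDA device.
import torch

A = torch.randn(512, 512, device="cuda")
eigenvalues, eigenvectors = torch.linalg.eig(A)
eigenvalues_only = torch.linalg.eigvals(A)
```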

### Details

- Implements cuSolver support for linalg_eig and linalg_eigvals using the interface described in the [NVIDIA cuSolver documentation](https://docs.nvidia.com/cuda/cusolver/index.html#cusolverdnxgeev), introduced in CUDA 12.8 ([CUDA 12.8 release notes](https://docs.nvidia.com/cuda/archive/12.8.0/cuda-toolkit-release-notes/index.html))
- Follows the existing MAGMA backend design, adapting it for cuSolver’s cusolverDnXgeev API.
- Integrates with existing eig/eigvals dispatch mechanism.
- No automatic CPU↔GPU backend switching. (Happy to discuss)
- Verified via existing Linalg test coverage; no new tests introduced in this PR.
- Tested successfully against test_linalg.py, including the slow test suites.
- Tested the MAGMA fallback successfully using CUDA 12.4 (observed unrelated test failures).

### Impact

- Enables much faster solving of eigenvalue problems
- Maintains numerical consistency and test stability across backends.
- No change to public API or user-facing behavior.

Special thanks to @AlbanD for prior feedback and discussions regarding the PR and @lezcano for feedback on the related testing PR [https://github.com/pytorch/pytorch/pull/166322](https://github.com/pytorch/pytorch/pull/166322).

Happy to discuss the backend dispatch strategy; results from performance and stability testing can be seen here: [https://dev-discuss.pytorch.org/](https://dev-discuss.pytorch.org/t/cusolver-dnxgeev-faster-cuda-eigenvalue-calculations/3248/7)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/166715
Approved by: https://github.com/lezcano, https://github.com/albanD
2025-11-03 17:44:22 +00:00
984b096d10 [ROCm][CI] Change rocm.yml and inductor-rocm.yml cron schedule to run every hour (#166870)
Temporary PR to change the rocm.yml and inductor-rocm.yml workflows to run on an hourly basis rather than on every commit. This is prompted by the following:

We are observing cirrascale network timeouts as of 11/03/2025. [HUD Link](94f2657c4b/1)
[SEV](https://github.com/pytorch/pytorch/issues/166866)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/166870
Approved by: https://github.com/jeffdaily
2025-11-03 17:33:11 +00:00
104b868618 Fix build error by checking cuda version in CUDAGreenContext (#166800)
Fixes #166799
Pull Request resolved: https://github.com/pytorch/pytorch/pull/166800
Approved by: https://github.com/mlazos, https://github.com/eqy, https://github.com/malfet
2025-11-03 16:41:38 +00:00
94f2657c4b [Inductor] addmm with bias -> unfuse bias if there is a pointwise/reduction consumer (#166165)
Prefer the unfused addmm when there is at least one elementwise/reduction consumer.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/166165
Approved by: https://github.com/eellison
2025-11-03 15:50:32 +00:00
3f6538febd Remove tools from BC linter (#166858)
Signed-off-by: Edward Yang <ezyang@meta.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/166858
Approved by: https://github.com/albanD
2025-11-03 15:42:54 +00:00
f33abae695 Switch to pyrefly as only type checker (#166197)
This formally switches PyTorch over from MyPy to Pyrefly as the only type checker and should help reduce the noise in the lint runner right now. I will fast-follow with PRs silencing existing errors and will work over the weekend to ensure trunk stays in a clean state while we roll this out.

test:

`lintrunner init`
`lintrunner`

Pull Request resolved: https://github.com/pytorch/pytorch/pull/166197
Approved by: https://github.com/ezyang, https://github.com/seemethere, https://github.com/albanD
2025-11-03 15:32:56 +00:00
73da7a40b6 [MPS] Error out when BatchNorm is called for Complex (#166215)
Also error out when BatchNorm or LayerNorm is called for Long types.

Discovered while trying to enable `test_ops.py` for MPS
Pull Request resolved: https://github.com/pytorch/pytorch/pull/166215
Approved by: https://github.com/dcci, https://github.com/kulinseth, https://github.com/Skylion007
ghstack dependencies: #166214
2025-11-03 15:24:09 +00:00
cyy
335b5c7d4b Avoid std::copy_n in CopyKernel and IndexKernel (#143544)
This PR simplifies the `std::copy_n` calls in CopyKernel and IndexKernel. `std::copy_n` is used to create a data-pointer array from the input data pointers. However, closer review reveals that the destination pointers are actually aliases of the original pointers, so we can remove the pointer manipulation.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/143544
Approved by: https://github.com/albanD
2025-11-03 15:16:04 +00:00
76bb27e248 Revert "Back out "Do not decompose in functionalization/proxy tensor if autograd wouldn't have decomposed (#164939)" (#165910)" (#166812)
This reverts commit e6ba4d072510464c846f2013822f9388210eb907.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/166812
Approved by: https://github.com/SherlockNoMad
2025-11-03 15:06:11 +00:00
a2da69385a Remove nightly pth check from pyrefly (#166857)
Signed-off-by: Edward Yang <ezyang@meta.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/166857
Approved by: https://github.com/albanD
2025-11-03 14:53:49 +00:00
d177900723 [Code Clean] Clean asserts in torch/ao/quantization (root, quantizer, backend_config) (#165433)
Replace assert statements with explicit if/raise patterns in:

- torch/ao/quantization/~
- torch/ao/quantization/quantizer/
- torch/ao/quantization/backend_config/

Partially fixes #164878

Pull Request resolved: https://github.com/pytorch/pytorch/pull/165433
Approved by: https://github.com/mlazos, https://github.com/fffrog, https://github.com/cyyever
2025-11-03 14:52:37 +00:00
61bcc8d75a Revert "Fixes torch.compile(nn.ModuleList()) changes bool() behavior (#159208)"
This reverts commit 21b48f8dfa7685699df4c97c0ba373d5364230d9.

Reverted https://github.com/pytorch/pytorch/pull/159208 on behalf of https://github.com/atalman due to Broke internal tests ([comment](https://github.com/pytorch/pytorch/pull/159208#issuecomment-3480743499))
2025-11-03 14:10:01 +00:00
1656b253c5 Revert "[MPS] Fix smooth_l1_loss backward for fp16 (#166687)"
This reverts commit 4e7232c5daf753e04e8f4189229e3c33888a33e5.

Reverted https://github.com/pytorch/pytorch/pull/166687 on behalf of https://github.com/atalman due to [GH job link](https://github.com/pytorch/pytorch/actions/runs/19027214755/job/54332952760) [HUD commit link](95ab09cb54) ([comment](https://github.com/pytorch/pytorch/pull/166687#issuecomment-3480694316))
2025-11-03 14:05:25 +00:00
5d6230779d Revert "Give full Dynamo stack traces in CI (#160417)"
This reverts commit e0791fc11dc0024a828495985898b29120dcc4c1.

Reverted https://github.com/pytorch/pytorch/pull/160417 on behalf of https://github.com/atalman due to test/dynamo/test_aot_compile.py::TestAOTCompile::test_aot_compile_graph_break_error_fmt [GH job link](https://github.com/pytorch/pytorch/actions/runs/19028849833/job/54339349886) [HUD commit link](e0791fc11d) ([comment](https://github.com/pytorch/pytorch/pull/160417#issuecomment-3480680049))
2025-11-03 14:00:20 +00:00
a4077b568f Revert "[MPS] Error out when BatchNorm is called for Complex (#166215)"
This reverts commit 9261a1fb128412201ef009d30844a2417364d73b.

Reverted https://github.com/pytorch/pytorch/pull/166215 on behalf of https://github.com/atalman due to sorry need to revert https://github.com/pytorch/pytorch/pull/166687 ([comment](https://github.com/pytorch/pytorch/pull/166215#issuecomment-3480661671))
2025-11-03 13:56:32 +00:00
ae038f871b [inductor] Collectives estimations: option to use nccl estimator for fx node (#166521)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/166521
Approved by: https://github.com/eellison
2025-11-03 13:11:54 +00:00
defac66e39 [xla hash update] update the pinned xla hash (#166845)
This PR is auto-generated nightly by [this action](https://github.com/pytorch/pytorch/blob/main/.github/workflows/nightly.yml).
Update the pinned xla hash.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/166845
Approved by: https://github.com/pytorchbot
2025-11-03 11:32:14 +00:00
061fa73c97 Reapply "Back out "Do not decompose in functionalization/proxy tensor if autograd wouldn't have decomposed (#164939)" (#165910)" (#166812)
This reverts commit 5a3930abbc19eac9a179455df82e206e69765ed2.

Reverted https://github.com/pytorch/pytorch/pull/166812 on behalf of https://github.com/pytorch-auto-revert due to Reverted automatically by pytorch's autorevert, to avoid this behaviour add the tag autorevert: disable ([comment](https://github.com/pytorch/pytorch/pull/166812#issuecomment-3480004525))
2025-11-03 11:16:15 +00:00
9501405de6 [caffe2] Ignore -Wswitch-enum warnings (#166760)
Summary: Projects that use `-Wswitch-enum` will encounter issues when building and using *PyTorch* (`caffe2`). Addressing these issues enables more rigorous upstream compiler warnings/errors.

Test Plan: CI Pass

Differential Revision: D85893917

Pull Request resolved: https://github.com/pytorch/pytorch/pull/166760
Approved by: https://github.com/atalman
2025-11-03 09:37:47 +00:00
e0791fc11d Give full Dynamo stack traces in CI (#160417)
Signed-off-by: Edward Yang <ezyang@meta.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/160417
Approved by: https://github.com/SherlockNoMad
2025-11-03 08:51:21 +00:00
e1d011d6eb [2/N] Change C-style casts to static_cast or reinterpret_cast (#165891)
A follow-up of #165750 to clean up C casts.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/165891
Approved by: https://github.com/Skylion007

Co-authored-by: Aaron Gokaslan <aaronGokaslan@gmail.com>
2025-11-03 08:02:58 +00:00
3f5401020b [3/N] Add clang-tidy readability checks (#164692)
This PR adds two checks:
```
readability-static-definition-in-anonymous-namespace

Finds static function and variable definitions
in anonymous namespace.

readability-named-parameter

Find functions with unnamed arguments.

```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/164692
Approved by: https://github.com/Skylion007
2025-11-03 07:28:21 +00:00
5a3930abbc Revert "Back out "Do not decompose in functionalization/proxy tensor if autograd wouldn't have decomposed (#164939)" (#165910)" (#166812)
This reverts commit e6ba4d072510464c846f2013822f9388210eb907.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/166812
Approved by: https://github.com/SherlockNoMad
2025-11-03 07:21:20 +00:00
a5f00077fc torch.cond supports autograd now (#165908)
Signed-off-by: Edward Yang <ezyang@meta.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/165908
Approved by: https://github.com/zou3519, https://github.com/ydwu4, https://github.com/Skylion007
2025-11-03 06:16:15 +00:00
69fb3ebb5d Fix: type promotion in FakeTensor (#166522)
Fixes #166042

common_dtype was being assigned the first input's dtype even when a different value was passed in type_promotions. This adds a condition to guard against that. A minimal illustration follows below.
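
```python
# Minimal illustration (assuming default elementwise promotion): under
# FakeTensorMode, the result dtype should follow eager type promotion rather
# than simply adopting the first input's dtype.
import torch
from torch._subclasses.fake_tensor import FakeTensorMode

with FakeTensorMode():
    a = torch.empty(4, dtype=torch.int32)
    b = torch.empty(4, dtype=torch.float64)
    out = a + b

assert out.dtype == torch.float64  # matches eager promotion of int32 + float64
```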

Pull Request resolved: https://github.com/pytorch/pytorch/pull/166522
Approved by: https://github.com/Lucaskabela
2025-11-03 06:11:35 +00:00
1c4ced2eaf [2/N] Correctly use test parameters (#166783)
This PR fixes unused test parameters.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/166783
Approved by: https://github.com/mlazos
2025-11-03 05:36:52 +00:00
392acee68a [6/N] Remove unused loop variables in tests (#166785)
This PR removes unused loop variables in tests.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/166785
Approved by: https://github.com/Skylion007, https://github.com/mlazos
2025-11-03 03:52:52 +00:00
fee1ac927d [DebugMode] add stack traces (#166440)
Captures stack traces for torch_dispatch calls under `with DebugMode(record_stack_trace=True)`. Traces aren't rendered in the debug string, but are available as `.stack_trace` on each log entry. A hedged usage sketch follows below.
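
```python
# Hedged sketch: the import path and debug_string() name are assumptions about
# this internal utility; record_stack_trace is the flag added by this PR.
import torch
from torch.utils._debug_mode import DebugMode

with DebugMode(record_stack_trace=True) as dm:
    torch.randn(4).sin()

print(dm.debug_string())  # traces are not rendered here...
# ...but each recorded call keeps them on its .stack_trace attribute.
```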

Pull Request resolved: https://github.com/pytorch/pytorch/pull/166440
Approved by: https://github.com/yushangdi
2025-11-03 02:48:09 +00:00
4a7fefd7c7 [dynamo] fix: pos-only parameter names can be collected in **kwargs (#166798)
See the new test case for more details; it fails on trunk and is fixed by this PR.

```python
In [1]: def func(a, /, **kwargs):
   ...:     return a, kwargs

In [2]: func(1, a=2)
Out[2]: (1, {'a': 2})
```
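
The compiled path can be exercised with a sketch like the following (assuming a build containing the fix):

```python
# Sketch of the case fixed here: a positional-only parameter's name may also
# appear as a key inside **kwargs, and Dynamo must not conflate the two.
import torch

def func(a, /, **kwargs):
    return a, kwargs

compiled = torch.compile(func, fullgraph=True)
assert compiled(1, a=2) == (1, {"a": 2})
```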

Pull Request resolved: https://github.com/pytorch/pytorch/pull/166798
Approved by: https://github.com/guilhermeleobas
2025-11-03 02:40:34 +00:00
3b4315940d [export] Fix static_input_indices for aot_export_joint (#166761)
`static_input_indices` is used for cudagraphs to determine which input indices are static and will not have changing addresses. Since export never integrated with cudagraphs, this information was not necessary. But now we need it!
Pull Request resolved: https://github.com/pytorch/pytorch/pull/166761
Approved by: https://github.com/BoyuanFeng
2025-11-03 01:57:51 +00:00
3eddf04922 Revert "Add min/max support for barebones uint types (#166813)"
This reverts commit 9c22bbb2dce31b854e3387db77eaff501434f352.

Reverted https://github.com/pytorch/pytorch/pull/166813 on behalf of https://github.com/pytorch-auto-revert due to Reverted automatically by pytorch's autorevert, to avoid this behaviour add the tag autorevert: disable ([comment](https://github.com/pytorch/pytorch/pull/166813#issuecomment-3478450413))
2025-11-02 22:50:36 +00:00
7c203b8420 [BE] Using std::move to reduce copy constructor calls by one. (#163599)
inspired by https://github.com/pytorch/pytorch/pull/163416

Pull Request resolved: https://github.com/pytorch/pytorch/pull/163599
Approved by: https://github.com/Skylion007
2025-11-02 21:54:58 +00:00
3ca216ae17 Add claude skills for uint support and AT_DISPATCH_V2 (#166814)
Signed-off-by: Edward Z. Yang <ezyang@meta.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/166814
Approved by: https://github.com/Skylion007, https://github.com/malfet
ghstack dependencies: #166813
2025-11-02 21:36:19 +00:00
9c22bbb2dc Add min/max support for barebones uint types (#166813)
Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/166813
Approved by: https://github.com/Skylion007
2025-11-02 21:36:19 +00:00
6268883f9c [MPS] Refactor torch.cat and add fast path for contiguous inputs (#166556)
In many cases where the fast path is used, performance is similar to what it was before. However, with tensors on the order of about 1000 elements there is a modest speedup, which grows as the number of input tensors and the number of dimensions increase. A rough sketch of such a call is shown below.
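
```python
# Rough shape of a call that hits the new fast path (requires a build with MPS
# available); all inputs are contiguous and concatenated along dim 1.
import torch

xs = [torch.randn(3, 1000, 2, device="mps") for _ in range(8)]
out = torch.cat(xs, dim=1)
```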

This script was used for performance comparison: <1f04647bbf/cat/perf0.py>

Before change:

```
idx: cpu time, mps time, speedup, op, args, kwargs
-----------------------------------------
0: 0.000843 ms, 0.010431 ms, 0.08, cat, [[tensor(shape[5, 5]), tensor(shape[5, 5])]], {'dim': -1}
1: 0.000838 ms, 0.013467 ms, 0.06, cat, [[tensor(shape[5, 5]), tensor(shape[5, 5])]], {'dim': 1}
2: 0.000792 ms, 0.009457 ms, 0.08, cat, [[tensor(shape[10, 5]), tensor(shape[5, 5])]], {'dim': 0}
3: 0.000834 ms, 0.010694 ms, 0.08, cat, [[tensor(shape[1, 2, 3]), tensor(shape[1, 2, 3])]], {'dim': -2}
4: 0.000627 ms, 0.000641 ms, 0.98, cat, [[tensor(shape[0]), tensor(shape[0])]], {'dim': 0}
5: 0.001172 ms, 0.006493 ms, 0.18, cat, [[tensor(shape[0]), tensor(shape[5, 5])]], {'dim': 1}
6: 0.000812 ms, 0.006148 ms, 0.13, cat, [[tensor(shape[0, 5]), tensor(shape[5, 5])]], {'dim': 0}
7: 0.000686 ms, 0.009382 ms, 0.07, cat, [[tensor(shape[1]), tensor(shape[1])]], {}
8: 0.000738 ms, 0.006532 ms, 0.11, cat, [[tensor(shape[2, 2, 2, 2])], 1], {}
9: 0.003835 ms, 0.193963 ms, 0.02, cat, "[[tensor(shape[3, 1, 2]), tensor(shape[3, 2, 2]), tensor(shape[3, 3, 2]), tensor(shape[3, 1, 2]), te...", {'dim': 1}
10: 0.552435 ms, 0.690500 ms, 0.80, cat, "[[tensor(shape[3, 1, 2]), tensor(shape[3, 2, 2]), tensor(shape[3, 3, 2]), tensor(shape[3, 1, 2]), te...", {'dim': 1}
11: 0.488799 ms, 0.708988 ms, 0.69, cat, "[[tensor(shape[1, 3, 2]), tensor(shape[2, 3, 2]), tensor(shape[3, 3, 2]), tensor(shape[1, 3, 2]), te...", {'dim': 0}
12: 0.000799 ms, 0.005997 ms, 0.13, cat, [[tensor(shape[1000]), tensor(shape[1000])]], {'dim': 0}
13: 0.000916 ms, 0.011791 ms, 0.08, cat, [[tensor(shape[2, 2, 2, 2, 2, 2, 2, 2, 2, 2]), tensor(shape[2, 2, 2, 2, 2, 2, 2, 2, 2, 2])]], {'dim': 0}
14: 0.001028 ms, 0.012269 ms, 0.08, cat, "[[tensor(shape[1000]), tensor(shape[1000]), tensor(shape[1000]), tensor(shape[1000]), tensor(shape[1...", {'dim': 0}
15: 0.001127 ms, 0.025197 ms, 0.04, cat, "[[tensor(shape[2, 2, 2, 2, 2, 2, 2, 2, 2, 2]), tensor(shape[2, 2, 2, 2, 2, 2, 2, 2, 2, 2]), tensor(s...", {'dim': 0}
16: 0.321997 ms, 0.142815 ms, 2.25, cat, [[tensor(shape[1000000]), tensor(shape[1000000])]], {'dim': 0}
17: 1.989967 ms, 1.013615 ms, 1.96, cat, [[tensor(shape[1000000, 3, 2]), tensor(shape[1000000, 3, 2])]], {'dim': 0}
18: 3.161745 ms, 0.965378 ms, 3.28, cat, [[tensor(shape[3, 1000000, 2]), tensor(shape[3, 1000000, 2])]], {'dim': 1}
19: 3.416246 ms, 0.972278 ms, 3.51, cat, [[tensor(shape[3, 2, 1000000]), tensor(shape[3, 2, 1000000])]], {'dim': 2}
```

After change:

```
idx: cpu time, mps time, speedup, op, args, kwargs
-----------------------------------------
0: 0.000902 ms, 0.011074 ms, 0.08, cat, [[tensor(shape[5, 5]), tensor(shape[5, 5])]], {'dim': -1}
1: 0.000899 ms, 0.010453 ms, 0.09, cat, [[tensor(shape[5, 5]), tensor(shape[5, 5])]], {'dim': 1}
2: 0.000771 ms, 0.005843 ms, 0.13, cat, [[tensor(shape[10, 5]), tensor(shape[5, 5])]], {'dim': 0}
3: 0.000776 ms, 0.010449 ms, 0.07, cat, [[tensor(shape[1, 2, 3]), tensor(shape[1, 2, 3])]], {'dim': -2}
4: 0.000616 ms, 0.000600 ms, 1.03, cat, [[tensor(shape[0]), tensor(shape[0])]], {'dim': 0}
5: 0.001150 ms, 0.007624 ms, 0.15, cat, [[tensor(shape[0]), tensor(shape[5, 5])]], {'dim': 1}
6: 0.000728 ms, 0.007949 ms, 0.09, cat, [[tensor(shape[0, 5]), tensor(shape[5, 5])]], {'dim': 0}
7: 0.000671 ms, 0.005458 ms, 0.12, cat, [[tensor(shape[1]), tensor(shape[1])]], {}
8: 0.000770 ms, 0.006590 ms, 0.12, cat, [[tensor(shape[2, 2, 2, 2])], 1], {}
9: 0.003835 ms, 0.190193 ms, 0.02, cat, "[[tensor(shape[3, 1, 2]), tensor(shape[3, 2, 2]), tensor(shape[3, 3, 2]), tensor(shape[3, 1, 2]), te...", {'dim': 1}
10: 0.529047 ms, 0.734389 ms, 0.72, cat, "[[tensor(shape[3, 1, 2]), tensor(shape[3, 2, 2]), tensor(shape[3, 3, 2]), tensor(shape[3, 1, 2]), te...", {'dim': 1}
11: 0.512615 ms, 0.531172 ms, 0.97, cat, "[[tensor(shape[1, 3, 2]), tensor(shape[2, 3, 2]), tensor(shape[3, 3, 2]), tensor(shape[1, 3, 2]), te...", {'dim': 0}
12: 0.000740 ms, 0.004288 ms, 0.17, cat, [[tensor(shape[1000]), tensor(shape[1000])]], {'dim': 0}
13: 0.000955 ms, 0.004119 ms, 0.23, cat, [[tensor(shape[2, 2, 2, 2, 2, 2, 2, 2, 2, 2]), tensor(shape[2, 2, 2, 2, 2, 2, 2, 2, 2, 2])]], {'dim': 0}
14: 0.001037 ms, 0.004578 ms, 0.23, cat, "[[tensor(shape[1000]), tensor(shape[1000]), tensor(shape[1000]), tensor(shape[1000]), tensor(shape[1...", {'dim': 0}
15: 0.001115 ms, 0.004918 ms, 0.23, cat, "[[tensor(shape[2, 2, 2, 2, 2, 2, 2, 2, 2, 2]), tensor(shape[2, 2, 2, 2, 2, 2, 2, 2, 2, 2]), tensor(s...", {'dim': 0}
16: 0.334119 ms, 0.145008 ms, 2.30, cat, [[tensor(shape[1000000]), tensor(shape[1000000])]], {'dim': 0}
17: 2.419846 ms, 0.984192 ms, 2.46, cat, [[tensor(shape[1000000, 3, 2]), tensor(shape[1000000, 3, 2])]], {'dim': 0}
18: 3.117338 ms, 1.000345 ms, 3.12, cat, [[tensor(shape[3, 1000000, 2]), tensor(shape[3, 1000000, 2])]], {'dim': 1}
19: 3.047707 ms, 0.971730 ms, 3.14, cat, [[tensor(shape[3, 2, 1000000]), tensor(shape[3, 2, 1000000])]], {'dim': 2}
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/166556
Approved by: https://github.com/malfet
2025-11-02 21:27:05 +00:00
16212f0d6b [Sparse] support for exp op (#166801)
support for exp op in Sparse tensors
Pull Request resolved: https://github.com/pytorch/pytorch/pull/166801
Approved by: https://github.com/eqy
2025-11-02 21:14:43 +00:00
c8adc08b3b [Fix] Optimize max unpooling index validation using aminmax (#165394)
Replace the separate min() and max() calls with a single aminmax() call in max_unpool_out_mps_template to improve performance by reducing tensor traversals from two passes to one.

Changes:
- Use indices.aminmax() instead of separate indices.min()/max() calls
- Add required ATen/ops/aminmax.h header for AT_PER_OPERATOR_HEADERS
- Maintain identical bounds checking logic and error handling

This optimization is particularly beneficial for large indices tensors, improving cache locality and reducing computational overhead.
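
The same idea at the Python level, for reference:

```python
# Python-level illustration of the change: aminmax computes both extrema in a
# single traversal instead of two separate reductions.
import torch

indices = torch.randint(0, 1 << 20, (1_000_000,))
lo, hi = torch.aminmax(indices)
assert lo == indices.min() and hi == indices.max()
```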

Pull Request resolved: https://github.com/pytorch/pytorch/pull/165394
Approved by: https://github.com/cyyever, https://github.com/Skylion007
2025-11-02 19:42:02 +00:00
23b57a445c Remove setup-env instructions; it's confusing (#166749)
Signed-off-by: Edward Yang <ezyang@meta.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/166749
Approved by: https://github.com/mlazos
2025-11-02 19:22:53 +00:00
1297 changed files with 38749 additions and 12744 deletions

View File

@ -13,3 +13,4 @@ exclude:
- "**/benchmarks/**"
- "**/test_*.py"
- "**/*_test.py"
- "tools/**"

View File

@ -7,13 +7,13 @@ ENV LC_ALL en_US.UTF-8
ENV LANG en_US.UTF-8
ENV LANGUAGE en_US.UTF-8
ARG DEVTOOLSET_VERSION=11
ARG DEVTOOLSET_VERSION=13
RUN yum -y update
RUN yum -y install epel-release
# install glibc-langpack-en make sure en_US.UTF-8 locale is available
RUN yum -y install glibc-langpack-en
RUN yum install -y sudo wget curl perl util-linux xz bzip2 git patch which perl zlib-devel openssl-devel yum-utils autoconf automake make gcc-toolset-${DEVTOOLSET_VERSION}-toolchain
RUN yum install -y sudo wget curl perl util-linux xz bzip2 git patch which perl zlib-devel openssl-devel yum-utils autoconf automake make gcc-toolset-${DEVTOOLSET_VERSION}-gcc gcc-toolset-${DEVTOOLSET_VERSION}-gcc-c++ gcc-toolset-${DEVTOOLSET_VERSION}-gcc-gfortran gcc-toolset-${DEVTOOLSET_VERSION}-gdb
# Just add everything as a safe.directory for git since these will be used in multiple places with git
RUN git config --global --add safe.directory '*'
ENV PATH=/opt/rh/gcc-toolset-${DEVTOOLSET_VERSION}/root/usr/bin:$PATH
@ -41,6 +41,7 @@ RUN bash ./install_conda.sh && rm install_conda.sh
# Install CUDA
FROM base as cuda
ARG CUDA_VERSION=12.6
ARG DEVTOOLSET_VERSION=13
RUN rm -rf /usr/local/cuda-*
ADD ./common/install_cuda.sh install_cuda.sh
COPY ./common/install_nccl.sh install_nccl.sh
@ -50,7 +51,8 @@ ENV CUDA_HOME=/usr/local/cuda-${CUDA_VERSION}
# Preserve CUDA_VERSION for the builds
ENV CUDA_VERSION=${CUDA_VERSION}
# Make things in our path by default
ENV PATH=/usr/local/cuda-${CUDA_VERSION}/bin:$PATH
ENV PATH=/usr/local/cuda-${CUDA_VERSION}/bin:/opt/rh/gcc-toolset-${DEVTOOLSET_VERSION}/root/usr/bin:$PATH
FROM cuda as cuda12.6
RUN bash ./install_cuda.sh 12.6
@ -68,8 +70,22 @@ FROM cuda as cuda13.0
RUN bash ./install_cuda.sh 13.0
ENV DESIRED_CUDA=13.0
FROM ${ROCM_IMAGE} as rocm
FROM ${ROCM_IMAGE} as rocm_base
ARG DEVTOOLSET_VERSION=13
ENV LC_ALL en_US.UTF-8
ENV LANG en_US.UTF-8
ENV LANGUAGE en_US.UTF-8
# Install devtoolset on ROCm base image
RUN yum -y update && \
yum -y install epel-release && \
yum -y install glibc-langpack-en && \
yum install -y sudo wget curl perl util-linux xz bzip2 git patch which perl zlib-devel openssl-devel yum-utils autoconf automake make gcc-toolset-${DEVTOOLSET_VERSION}-gcc gcc-toolset-${DEVTOOLSET_VERSION}-gcc-c++ gcc-toolset-${DEVTOOLSET_VERSION}-gcc-gfortran gcc-toolset-${DEVTOOLSET_VERSION}-gdb
RUN git config --global --add safe.directory '*'
ENV PATH=/opt/rh/gcc-toolset-${DEVTOOLSET_VERSION}/root/usr/bin:$PATH
FROM rocm_base as rocm
ARG PYTORCH_ROCM_ARCH
ARG DEVTOOLSET_VERSION=13
ENV PYTORCH_ROCM_ARCH ${PYTORCH_ROCM_ARCH}
ADD ./common/install_mkl.sh install_mkl.sh
RUN bash ./install_mkl.sh && rm install_mkl.sh
@ -88,6 +104,7 @@ COPY --from=cuda13.0 /usr/local/cuda-13.0 /usr/local/cuda-13.0
# Final step
FROM ${BASE_TARGET} as final
ARG DEVTOOLSET_VERSION=13
COPY --from=openssl /opt/openssl /opt/openssl
COPY --from=patchelf /patchelf /usr/local/bin/patchelf
COPY --from=conda /opt/conda /opt/conda

View File

@ -36,11 +36,7 @@ case ${DOCKER_TAG_PREFIX} in
;;
rocm*)
BASE_TARGET=rocm
PYTORCH_ROCM_ARCH="gfx900;gfx906;gfx908;gfx90a;gfx942;gfx1030;gfx1100;gfx1101;gfx1102;gfx1200;gfx1201"
# add gfx950, gfx115x conditionally starting in ROCm 7.0
if [[ "$ROCM_VERSION" == *"7.0"* ]]; then
PYTORCH_ROCM_ARCH="${PYTORCH_ROCM_ARCH};gfx950;gfx1150;gfx1151"
fi
PYTORCH_ROCM_ARCH="gfx900;gfx906;gfx908;gfx90a;gfx942;gfx1030;gfx1100;gfx1101;gfx1102;gfx1200;gfx1201;gfx950;gfx1150;gfx1151"
EXTRA_BUILD_ARGS="${EXTRA_BUILD_ARGS} --build-arg PYTORCH_ROCM_ARCH=${PYTORCH_ROCM_ARCH}"
;;
*)
@ -63,7 +59,7 @@ docker build \
--target final \
--progress plain \
--build-arg "BASE_TARGET=${BASE_TARGET}" \
--build-arg "DEVTOOLSET_VERSION=11" \
--build-arg "DEVTOOLSET_VERSION=13" \
${EXTRA_BUILD_ARGS} \
-t ${tmp_tag} \
$@ \

View File

@ -168,6 +168,18 @@ case "$tag" in
VISION=yes
TRITON=yes
;;
pytorch-linux-jammy-py3.11-clang12)
ANACONDA_PYTHON_VERSION=3.11
CLANG_VERSION=12
VISION=no
TRITON=no
;;
pytorch-linux-jammy-py3.12-clang12)
ANACONDA_PYTHON_VERSION=3.12
CLANG_VERSION=12
VISION=no
TRITON=no
;;
pytorch-linux-jammy-rocm-n-py3 | pytorch-linux-jammy-rocm-n-py3-benchmarks | pytorch-linux-noble-rocm-n-py3)
if [[ $tag =~ "jammy" ]]; then
ANACONDA_PYTHON_VERSION=3.10
@ -195,9 +207,9 @@ case "$tag" in
NINJA_VERSION=1.9.0
TRITON=yes
;;
pytorch-linux-jammy-xpu-n-py3 | pytorch-linux-jammy-xpu-n-py3-inductor-benchmarks)
pytorch-linux-noble-xpu-n-py3 | pytorch-linux-noble-xpu-n-py3-inductor-benchmarks)
ANACONDA_PYTHON_VERSION=3.10
GCC_VERSION=11
GCC_VERSION=13
VISION=yes
XPU_VERSION=2025.2
NINJA_VERSION=1.9.0
@ -248,6 +260,12 @@ case "$tag" in
HALIDE=yes
TRITON=yes
;;
pytorch-linux-jammy-cuda12.8-py3.12-pallas)
CUDA_VERSION=12.8.1
ANACONDA_PYTHON_VERSION=3.12
GCC_VERSION=11
PALLAS=yes
;;
pytorch-linux-jammy-py3.12-triton-cpu)
CUDA_VERSION=12.6
ANACONDA_PYTHON_VERSION=3.12
@ -261,9 +279,9 @@ case "$tag" in
PYTHON_VERSION=3.10
CUDA_VERSION=12.8.1
;;
pytorch-linux-jammy-aarch64-py3.10-gcc11)
pytorch-linux-jammy-aarch64-py3.10-gcc13)
ANACONDA_PYTHON_VERSION=3.10
GCC_VERSION=11
GCC_VERSION=13
ACL=yes
VISION=yes
OPENBLAS=yes
@ -271,9 +289,19 @@ case "$tag" in
# from pytorch/llvm:9.0.1 is x86 specific
SKIP_LLVM_SRC_BUILD_INSTALL=yes
;;
pytorch-linux-jammy-aarch64-py3.10-gcc11-inductor-benchmarks)
pytorch-linux-jammy-aarch64-py3.10-clang21)
ANACONDA_PYTHON_VERSION=3.10
GCC_VERSION=11
CLANG_VERSION=21
ACL=yes
VISION=yes
OPENBLAS=yes
# snadampal: skipping llvm src build install because the current version
# from pytorch/llvm:9.0.1 is x86 specific
SKIP_LLVM_SRC_BUILD_INSTALL=yes
;;
pytorch-linux-jammy-aarch64-py3.10-gcc13-inductor-benchmarks)
ANACONDA_PYTHON_VERSION=3.10
GCC_VERSION=13
ACL=yes
VISION=yes
OPENBLAS=yes
@ -359,6 +387,7 @@ docker build \
--build-arg "INDUCTOR_BENCHMARKS=${INDUCTOR_BENCHMARKS}" \
--build-arg "EXECUTORCH=${EXECUTORCH}" \
--build-arg "HALIDE=${HALIDE}" \
--build-arg "PALLAS=${PALLAS}" \
--build-arg "XPU_VERSION=${XPU_VERSION}" \
--build-arg "UNINSTALL_DILL=${UNINSTALL_DILL}" \
--build-arg "ACL=${ACL:-}" \

View File

@ -0,0 +1 @@
0.8.0

View File

@ -1 +1 @@
7416ffcb92cdbe98d9f97e4e6f95247e46dfc9fd
bfeb066872bc1e8b2d2bc0a3b295b99dd77206e7

View File

@ -8,8 +8,8 @@ if [ -n "$CLANG_VERSION" ]; then
# work around ubuntu apt-get conflicts
sudo apt-get -y -f install
wget --no-check-certificate -O - https://apt.llvm.org/llvm-snapshot.gpg.key | sudo apt-key add -
if [[ $CLANG_VERSION == 18 ]]; then
apt-add-repository "deb http://apt.llvm.org/jammy/ llvm-toolchain-jammy-18 main"
if [[ $CLANG_VERSION -ge 18 ]]; then
apt-add-repository "deb http://apt.llvm.org/jammy/ llvm-toolchain-jammy-${CLANG_VERSION} main"
fi
fi

View File

@ -7,11 +7,11 @@ if [ -n "$GCC_VERSION" ]; then
# Need the official toolchain repo to get alternate packages
add-apt-repository ppa:ubuntu-toolchain-r/test
apt-get update
apt-get install -y g++-$GCC_VERSION
apt-get install -y g++-$GCC_VERSION gfortran-$GCC_VERSION
update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-"$GCC_VERSION" 50
update-alternatives --install /usr/bin/g++ g++ /usr/bin/g++-"$GCC_VERSION" 50
update-alternatives --install /usr/bin/gcov gcov /usr/bin/gcov-"$GCC_VERSION" 50
update-alternatives --install /usr/bin/gfortran gfortran /usr/bin/gfortran-"$GCC_VERSION" 50
# Cleanup package manager
apt-get autoclean && apt-get clean

View File

@ -0,0 +1,40 @@
#!/bin/bash
set -ex
source "$(dirname "${BASH_SOURCE[0]}")/common_utils.sh"
# Get the pinned JAX version (same for all CUDA versions)
JAX_VERSION=$(get_pinned_commit /ci_commit_pins/jax)
function install_jax_12() {
echo "Installing JAX ${JAX_VERSION} with CUDA 12 support"
pip_install "jax[cuda12]==${JAX_VERSION}" -f https://storage.googleapis.com/jax-releases/jax_cuda_releases.html
# Verify installation
python -c "import jax" # check for errors
echo "JAX ${JAX_VERSION} installation completed successfully for CUDA 12"
}
function install_jax_13() {
echo "Installing JAX ${JAX_VERSION} with CUDA 13 support"
pip_install "jax[cuda13]==${JAX_VERSION}" -f https://storage.googleapis.com/jax-releases/jax_cuda_releases.html
# Verify installation
python -c "import jax" # check for errors
echo "JAX ${JAX_VERSION} installation completed successfully for CUDA 13"
}
# idiomatic parameter and option handling in sh
while test $# -gt 0
do
case "$1" in
12.4|12.6|12.6.*|12.8|12.8.*|12.9|12.9.*) install_jax_12;
;;
13.0|13.0.*) install_jax_13;
;;
*) echo "bad argument $1"; exit 1
;;
esac
shift
done

View File

@ -0,0 +1,56 @@
#!/bin/bash
# Script used only in CD pipeline
set -ex
# install dependencies
dnf -y install gmp-devel libmpc-devel texinfo flex bison
cd /usr/local/src
# fetch source for gcc 13
git clone --depth 1 --single-branch -b releases/gcc-13.3.0 https://github.com/gcc-mirror/gcc.git gcc-13.3.0
mkdir -p gcc-13.3.0/build-gomp
cd gcc-13.3.0/build-gomp
# configure gcc build
# I got these flags by:
# 1. downloading the source rpm for gcc-11 on AlmaLinux 8 container
# dnf install -y dnf-plugins-core rpmdevtools
# dnf download --source libgomp
# 2. extracting the gcc.spec from the source.
# rpmdev-extract gcc-xx.src.rpm
# 3. extracting optflags and ld_flags from gcc.spec:
# rpm --eval '%{optflags}'
# rpm --eval '%{build_ldflags}'
#
# I had to remove the following flags because they didn't compile for this version of libgomp:
# -Werror=format-security
# -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1
# -specs=/usr/lib/rpm/redhat/redhat-annobin-cc1
#
# I added -march=armv8-a -mtune=generic to make them explicit. I don't think they're strictly needed.
OPT_FLAGS='-O2 -march=armv8-a -mtune=generic'\
' -fexceptions -g -grecord-gcc-switches -pipe -Wall'\
' -Wp,-D_FORTIFY_SOURCE=2 -Wp,-D_GLIBCXX_ASSERTIONS'\
' -fstack-protector-strong -fasynchronous-unwind-tables'\
' -fstack-clash-protection'
LDFLAGS='-Wl,-z,relro -Wl,--as-needed -Wl,-z,now'
CFLAGS="$OPT_FLAGS" \
CXXFLAGS="$OPT_FLAGS" \
LDFLAGS="$LDFLAGS" \
../configure \
--prefix=/usr \
--libdir=/usr/lib64 \
--enable-languages=c,c++ \
--disable-multilib \
--disable-bootstrap \
--enable-libgomp
# only build libgomp
make -j$(nproc) all-target-libgomp
make install-target-libgomp

View File

@ -10,6 +10,7 @@ git clone https://github.com/OpenMathLib/OpenBLAS.git -b "${OPENBLAS_VERSION}" -
OPENBLAS_CHECKOUT_DIR="OpenBLAS"
OPENBLAS_BUILD_FLAGS="
CC=gcc
NUM_THREADS=128
USE_OPENMP=1
NO_SHARED=0

View File

@ -9,7 +9,7 @@ set -xe
function install_ubuntu() {
. /etc/os-release
if [[ ! " jammy " =~ " ${VERSION_CODENAME} " ]]; then
if [[ ! " jammy noble " =~ " ${VERSION_CODENAME} " ]]; then
echo "Ubuntu version ${VERSION_CODENAME} not supported"
exit
fi
@ -35,25 +35,24 @@ function install_ubuntu() {
# The xpu-smi packages
apt-get install -y flex bison xpu-smi
if [[ "${XPU_DRIVER_TYPE,,}" == "lts" ]]; then
# Compute and Media Runtimes
# Compute and Media Runtimes
if [[ " ${VERSION_CODENAME} " =~ " noble " ]]; then
apt-get install -y \
intel-opencl-icd intel-level-zero-gpu level-zero \
intel-media-va-driver-non-free libmfx1 libmfxgen1 libvpl2 \
libegl-mesa0 libegl1-mesa libegl1-mesa-dev libgbm1 libgl1-mesa-dev libgl1-mesa-dri \
intel-opencl-icd libze-intel-gpu1 libze1 \
intel-media-va-driver-non-free libmfx-gen1 libvpl2 \
libegl-mesa0 libegl1-mesa-dev libgbm1 libgl1-mesa-dev libgl1-mesa-dri \
libglapi-mesa libgles2-mesa-dev libglx-mesa0 libigdgmm12 libxatracker2 mesa-va-drivers \
mesa-vdpau-drivers mesa-vulkan-drivers va-driver-all vainfo hwinfo clinfo
# Development Packages
apt-get install -y libigc-dev intel-igc-cm libigdfcl-dev libigfxcmrt-dev level-zero-dev
else # rolling driver
mesa-vdpau-drivers mesa-vulkan-drivers va-driver-all vainfo hwinfo clinfo intel-ocloc
else # jammy
apt-get install -y \
intel-opencl-icd libze-intel-gpu1 libze1 \
intel-media-va-driver-non-free libmfx-gen1 libvpl2 \
libegl-mesa0 libegl1-mesa libegl1-mesa-dev libgbm1 libgl1-mesa-dev libgl1-mesa-dri \
libglapi-mesa libglx-mesa0 libigdgmm12 libxatracker2 mesa-va-drivers \
mesa-vdpau-drivers mesa-vulkan-drivers va-driver-all vainfo hwinfo clinfo intel-ocloc
apt-get install -y libigc-dev intel-igc-cm libigdfcl-dev libigfxcmrt-dev libze-dev
fi
# Development Packages
apt-get install -y libigc-dev intel-igc-cm libigdfcl-dev libigfxcmrt-dev libze-dev
# Install Intel Support Packages
apt-get install -y ${XPU_PACKAGES}
@ -66,7 +65,7 @@ function install_ubuntu() {
function install_rhel() {
. /etc/os-release
if [[ "${ID}" == "rhel" ]]; then
if [[ ! " 8.8 8.9 9.0 9.2 9.3 " =~ " ${VERSION_ID} " ]]; then
if [[ ! " 8.8 8.10 9.0 9.2 9.3 " =~ " ${VERSION_ID} " ]]; then
echo "RHEL version ${VERSION_ID} not supported"
exit
fi
@ -147,7 +146,7 @@ function install_sles() {
XPU_DRIVER_VERSION=""
if [[ "${XPU_DRIVER_TYPE,,}" == "lts" ]]; then
# Use GPU driver LTS releases
XPU_DRIVER_VERSION="/lts/2350"
XPU_DRIVER_VERSION="/lts/2523"
fi
# Default use Intel® oneAPI Deep Learning Essentials 2025.1

View File

@ -49,11 +49,7 @@ case ${DOCKER_TAG_PREFIX} in
fi
BASE_TARGET=rocm
GPU_IMAGE=rocm/dev-ubuntu-22.04:${GPU_ARCH_VERSION}-complete
PYTORCH_ROCM_ARCH="gfx900;gfx906;gfx908;gfx90a;gfx942;gfx1030;gfx1100;gfx1101;gfx1102;gfx1200;gfx1201"
# add gfx950, gfx115x conditionally starting in ROCm 7.0
if [[ "$GPU_ARCH_VERSION" == *"7.0"* ]]; then
PYTORCH_ROCM_ARCH="${PYTORCH_ROCM_ARCH};gfx950;gfx1150;gfx1151"
fi
PYTORCH_ROCM_ARCH="gfx900;gfx906;gfx908;gfx90a;gfx942;gfx1030;gfx1100;gfx1101;gfx1102;gfx1200;gfx1201;gfx950;gfx1150;gfx1151"
DOCKER_GPU_BUILD_ARG="--build-arg PYTORCH_ROCM_ARCH=${PYTORCH_ROCM_ARCH} --build-arg ROCM_VERSION=${GPU_ARCH_VERSION}"
;;
*)

View File

@ -149,7 +149,7 @@ FROM cpu_final as rocm_final
ARG ROCM_VERSION=6.0
ARG PYTORCH_ROCM_ARCH
ENV PYTORCH_ROCM_ARCH ${PYTORCH_ROCM_ARCH}
ARG DEVTOOLSET_VERSION=11
ARG DEVTOOLSET_VERSION=13
ENV LDFLAGS="-Wl,-rpath=/opt/rh/gcc-toolset-${DEVTOOLSET_VERSION}/root/usr/lib64 -Wl,-rpath=/opt/rh/gcc-toolset-${DEVTOOLSET_VERSION}/root/usr/lib"
# Somewhere in ROCm stack, we still use non-existing /opt/rocm/hip path,
# below workaround helps avoid error

View File

@ -50,6 +50,10 @@ RUN rm install_ninja.sh
ENV PATH=/opt/rh/gcc-toolset-${GCCTOOLSET_VERSION}/root/usr/bin:$PATH
ENV LD_LIBRARY_PATH=/opt/rh/gcc-toolset-${GCCTOOLSET_VERSION}/root/usr/lib64:/opt/rh/gcc-toolset-${GCCTOOLSET_VERSION}/root/usr/lib:$LD_LIBRARY_PATH
# Build a newer version of libgomp than that supported in AlmaLinux 8.
COPY ./common/install_libgomp.sh install_libgomp.sh
RUN bash ./install_libgomp.sh && rm install_libgomp.sh
# git236+ would refuse to run git commands in repos owned by other users
# Which causes version check to fail, as pytorch repo is bind-mounted into the image
# Override this behaviour by treating every folder as safe

View File

@ -87,11 +87,7 @@ case ${image} in
MANY_LINUX_VERSION="2_28"
DEVTOOLSET_VERSION="11"
GPU_IMAGE=rocm/dev-almalinux-8:${GPU_ARCH_VERSION}-complete
PYTORCH_ROCM_ARCH="gfx900;gfx906;gfx908;gfx90a;gfx942;gfx1030;gfx1100;gfx1101;gfx1102;gfx1200;gfx1201"
# add gfx950, gfx115x conditionally starting in ROCm 7.0
if [[ "$GPU_ARCH_VERSION" == *"7.0"* ]]; then
PYTORCH_ROCM_ARCH="${PYTORCH_ROCM_ARCH};gfx950;gfx1150;gfx1151"
fi
PYTORCH_ROCM_ARCH="gfx900;gfx906;gfx908;gfx90a;gfx942;gfx1030;gfx1100;gfx1101;gfx1102;gfx1200;gfx1201;gfx950;gfx1150;gfx1151"
DOCKER_GPU_BUILD_ARG="--build-arg ROCM_VERSION=${GPU_ARCH_VERSION} --build-arg PYTORCH_ROCM_ARCH=${PYTORCH_ROCM_ARCH} --build-arg DEVTOOLSET_VERSION=${DEVTOOLSET_VERSION}"
;;
manylinux2_28-builder:xpu)

View File

@ -1,15 +1,11 @@
sphinx==5.3.0
sphinx==7.2.6
#Description: This is used to generate PyTorch docs
#Pinned versions: 5.3.0
#Pinned versions: 7.2.6
standard-imghdr==3.13.0; python_version >= "3.13"
#Description: This is needed by Sphinx, so it needs to be added here.
# The reasons are as follows:
# 1) This module has been removed from the Python standard library since Python 3.13(https://peps.python.org/pep-0594/#imghdr);
# 2) The current version of Sphinx (5.3.0) is not compatible with Python 3.13.
# Once Sphinx is upgraded to a version compatible with Python 3.13 or later, we can remove this dependency.
pytorch_sphinx_theme2==0.2.0
#Description: This is needed to generate PyTorch docs
#Pinned versions: 0.2.0
-e git+https://github.com/pytorch/pytorch_sphinx_theme.git@71e55749be14ceb56e7f8211a9fb649866b87ad4#egg=pytorch_sphinx_theme2
# TODO: sphinxcontrib.katex 0.9.0 adds a local KaTeX server to speed up pre-rendering
# but it doesn't seem to work and hangs around idly. The initial thought that it is probably
# something related to Docker setup. We can investigate this later.
@ -36,17 +32,17 @@ tensorboard==2.18.0 ; python_version >= "3.13"
#Description: This is used to generate PyTorch docs
#Pinned versions: 2.13.0
breathe==4.34.0
breathe==4.36.0
#Description: This is used to generate PyTorch C++ docs
#Pinned versions: 4.34.0
#Pinned versions: 4.36.0
exhale==0.2.3
exhale==0.3.7
#Description: This is used to generate PyTorch C++ docs
#Pinned versions: 0.2.3
#Pinned versions: 0.3.7
docutils==0.16
docutils==0.20
#Description: This is used to generate PyTorch C++ docs
#Pinned versions: 0.16
#Pinned versions: 0.20
bs4==0.0.1
#Description: This is used to generate PyTorch C++ docs
@ -56,13 +52,13 @@ IPython==8.12.0
#Description: This is used to generate PyTorch functorch docs
#Pinned versions: 8.12.0
myst-nb==0.17.2
myst-nb==1.3.0
#Description: This is used to generate PyTorch functorch and torch.compile docs.
#Pinned versions: 0.17.2
#Pinned versions: 1.3.0
# The following are required to build torch.distributed.elastic.rendezvous.etcd* docs
python-etcd==0.4.5
sphinx-copybutton==0.5.0
sphinx-design==0.4.0
sphinx-design==0.6.1
sphinxcontrib-mermaid==1.0.0
myst-parser==0.18.1
myst-parser==4.0.1

View File

@ -1 +1 @@
3.5.0
3.5.1

View File

@ -143,6 +143,15 @@ COPY ci_commit_pins/halide.txt halide.txt
RUN if [ -n "${HALIDE}" ]; then bash ./install_halide.sh; fi
RUN rm install_halide.sh common_utils.sh halide.txt
ARG PALLAS
ARG CUDA_VERSION
# Install JAX with CUDA support (for Pallas)
COPY ./common/install_jax.sh install_jax.sh
COPY ./common/common_utils.sh common_utils.sh
COPY ./ci_commit_pins/jax.txt /ci_commit_pins/jax.txt
RUN if [ -n "${PALLAS}" ]; then bash ./install_jax.sh ${CUDA_VERSION}; fi
RUN rm -f install_jax.sh common_utils.sh /ci_commit_pins/jax.txt
ARG ONNX
# Install ONNX dependencies
COPY ./common/install_onnx.sh ./common/common_utils.sh ./

View File

@ -8,9 +8,11 @@ from abc import ABC, abstractmethod
try:
from typing import Any, Callable, Required, TypedDict # Python 3.11+
from collections.abc import Callable # Python 3.11+
from typing import Any, Required, TypedDict
except ImportError:
from typing import Any, Callable, TypedDict
from collections.abc import Callable
from typing import Any, TypedDict
from typing_extensions import Required # Fallback for Python <3.11

View File

@ -30,7 +30,6 @@ into a tarball, with the following structure:
More specifically, `build_magma.sh` copies over the relevant files from the `package_files` directory depending on the ROCm version.
Outputted binaries should be in the `output` folder.
## Pushing
Packages can be uploaded to an S3 bucket using:

View File

@ -6,8 +6,8 @@ set -eou pipefail
# The script expects DESIRED_CUDA and PACKAGE_NAME to be set
ROOT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")/.." && pwd)"
# post merge of https://github.com/icl-utk-edu/magma/pull/65
MAGMA_VERSION=c0792ae825fb36872784892ea643dd6f3456bc5f
# https://github.com/icl-utk-edu/magma/pull/65
MAGMA_VERSION=d6e4117bc88e73f06d26c6c2e14f064e8fc3d1ec
# Folders for the build
PACKAGE_FILES=${ROOT_DIR}/magma-rocm/package_files # metadata
@ -20,7 +20,7 @@ mkdir -p ${PACKAGE_DIR} ${PACKAGE_OUTPUT}/linux-64 ${PACKAGE_BUILD} ${PACKAGE_RE
# Fetch magma sources and verify checksum
pushd ${PACKAGE_DIR}
git clone https://github.com/icl-utk-edu/magma
git clone https://github.com/jeffdaily/magma
pushd magma
git checkout ${MAGMA_VERSION}
popd

View File

@ -168,14 +168,16 @@ if [[ "$BUILD_ENVIRONMENT" == *xpu* ]]; then
# shellcheck disable=SC1091
source /opt/intel/oneapi/compiler/latest/env/vars.sh
# shellcheck disable=SC1091
source /opt/intel/oneapi/umf/latest/env/vars.sh
# shellcheck disable=SC1091
source /opt/intel/oneapi/ccl/latest/env/vars.sh
# shellcheck disable=SC1091
source /opt/intel/oneapi/mpi/latest/env/vars.sh
# shellcheck disable=SC1091
source /opt/intel/oneapi/pti/latest/env/vars.sh
# Enable XCCL build
export USE_XCCL=1
export USE_MPI=0
# XPU kineto feature dependencies are not fully ready, disable kineto build as temp WA
export USE_KINETO=0
export TORCH_XPU_ARCH_LIST=pvc
fi

View File

@ -89,23 +89,41 @@ if [ "$is_main_doc" = true ]; then
make coverage
# Now we have the coverage report, we need to make sure it is empty.
# Count the number of lines in the file and turn that number into a variable
# $lines. The `cut -f1 ...` is to only parse the number, not the filename
# Skip the report header by subtracting 2: the header will be output even if
# there are no undocumented items.
# Sphinx 7.2.6+ format: python.txt contains a statistics table with a TOTAL row
# showing the undocumented count in the third column.
# Example: | TOTAL | 99.83% | 2 |
#
# Also: see docs/source/conf.py for "coverage_ignore*" items, which should
# be documented then removed from there.
lines=$(wc -l build/coverage/python.txt 2>/dev/null |cut -f1 -d' ')
undocumented=$((lines - 2))
if [ $undocumented -lt 0 ]; then
# Extract undocumented count from TOTAL row in Sphinx 7.2.6 statistics table
# The table format is: | Module | Coverage | Undocumented |
# Extract the third column (undocumented count) from the TOTAL row
undocumented=$(grep "| TOTAL" build/coverage/python.txt | awk -F'|' '{print $4}' | tr -d ' ')
if [ -z "$undocumented" ] || ! [[ "$undocumented" =~ ^[0-9]+$ ]]; then
echo coverage output not found
exit 1
elif [ $undocumented -gt 0 ]; then
echo undocumented objects found:
cat build/coverage/python.txt
elif [ "$undocumented" -gt 0 ]; then
set +x # Disable command echoing for cleaner output
echo ""
echo "====================="
echo "UNDOCUMENTED OBJECTS:"
echo "====================="
echo ""
# Find the line number of the TOTAL row and print only what comes after it
total_line=$(grep -n "| TOTAL" build/coverage/python.txt | cut -d: -f1)
if [ -n "$total_line" ]; then
# Print only the detailed list (skip the statistics table)
tail -n +$((total_line + 2)) build/coverage/python.txt
else
# Fallback to showing entire file if TOTAL line not found
cat build/coverage/python.txt
fi
echo ""
echo "Make sure you've updated relevant .rsts in docs/source!"
echo "You can reproduce locally by running 'cd docs && make coverage && cat build/coverage/python.txt'"
echo "You can reproduce locally by running 'cd docs && make coverage && tail -n +\$((grep -n \"| TOTAL\" build/coverage/python.txt | cut -d: -f1) + 2)) build/coverage/python.txt'"
set -x # Re-enable command echoing
exit 1
fi
else

View File

@ -208,6 +208,8 @@ if [[ "$BUILD_ENVIRONMENT" == *xpu* ]]; then
source /opt/intel/oneapi/ccl/latest/env/vars.sh
# shellcheck disable=SC1091
source /opt/intel/oneapi/mpi/latest/env/vars.sh
# shellcheck disable=SC1091
source /opt/intel/oneapi/pti/latest/env/vars.sh
# Check XPU status before testing
timeout 30 xpu-smi discovery || true
fi
@ -337,7 +339,7 @@ test_python() {
test_python_smoke() {
# Smoke tests for H100/B200
time python test/run_test.py --include test_matmul_cuda test_scaled_matmul_cuda inductor/test_fp8 inductor/test_max_autotune $PYTHON_TEST_EXTRA_OPTION --upload-artifacts-while-running
time python test/run_test.py --include test_matmul_cuda test_scaled_matmul_cuda inductor/test_fp8 inductor/test_max_autotune inductor/test_cutedsl_grouped_mm $PYTHON_TEST_EXTRA_OPTION --upload-artifacts-while-running
assert_git_not_dirty
}
@ -824,6 +826,11 @@ test_inductor_halide() {
assert_git_not_dirty
}
test_inductor_pallas() {
python test/run_test.py --include inductor/test_pallas.py --verbose
assert_git_not_dirty
}
test_inductor_triton_cpu() {
python test/run_test.py --include inductor/test_triton_cpu_backend.py inductor/test_torchinductor_strided_blocks.py --verbose
assert_git_not_dirty
@ -1653,7 +1660,7 @@ test_operator_microbenchmark() {
cd "${TEST_DIR}"/benchmarks/operator_benchmark
for OP_BENCHMARK_TESTS in matmul mm addmm bmm; do
for OP_BENCHMARK_TESTS in matmul mm addmm bmm conv; do
$TASKSET python -m pt.${OP_BENCHMARK_TESTS}_test --tag-filter long \
--output-json-for-dashboard "${TEST_REPORTS_DIR}/operator_microbenchmark_${OP_BENCHMARK_TESTS}_compile.json" \
--benchmark-name "PyTorch operator microbenchmark" --use-compile
@ -1724,6 +1731,8 @@ elif [[ "${TEST_CONFIG}" == *inductor_distributed* ]]; then
test_inductor_distributed
elif [[ "${TEST_CONFIG}" == *inductor-halide* ]]; then
test_inductor_halide
elif [[ "${TEST_CONFIG}" == *inductor-pallas* ]]; then
test_inductor_pallas
elif [[ "${TEST_CONFIG}" == *inductor-triton-cpu* ]]; then
test_inductor_triton_cpu
elif [[ "${TEST_CONFIG}" == *inductor-micro-benchmark* ]]; then

View File

@ -70,7 +70,7 @@ sccache --zero-stats
sccache --show-stats
# Build the wheel
python -m build --wheel --no-build-isolation
python -m build --wheel --no-isolation
if ($LASTEXITCODE -ne 0) { exit 1 }
# Install the wheel locally

View File

@ -60,9 +60,11 @@ performance-*,
readability-container-size-empty,
readability-delete-null-pointer,
readability-duplicate-include,
readability-named-parameter,
readability-misplaced-array-index,
readability-redundant*,
readability-simplify-subscript-expr,
readability-static-definition-in-anonymous-namespace
readability-string-compare,
-readability-redundant-access-specifiers,
-readability-redundant-control-flow,

View File

@ -0,0 +1,319 @@
---
name: add-uint-support
description: Add unsigned integer (uint) type support to PyTorch operators by updating AT_DISPATCH macros. Use when adding support for uint16, uint32, uint64 types to operators, kernels, or when user mentions enabling unsigned types, barebones unsigned types, or uint support.
---
# Add Unsigned Integer (uint) Support to Operators
This skill helps add support for unsigned integer types (uint16, uint32, uint64) to PyTorch operators by updating their AT_DISPATCH macros.
## When to use this skill
Use this skill when:
- Adding uint16, uint32, or uint64 support to an operator
- User mentions "unsigned types", "uint support", "barebones unsigned types"
- Enabling support for kUInt16, kUInt32, kUInt64 in kernels
- Working with operator implementations that need expanded type coverage
## Quick reference
**Add unsigned types to existing dispatch:**
```cpp
// Before
AT_DISPATCH_V2(dtype, "op", AT_WRAP([&]() {
kernel<scalar_t>();
}), AT_EXPAND(AT_ALL_TYPES));
// After (method 1: add unsigned types explicitly)
AT_DISPATCH_V2(dtype, "op", AT_WRAP([&]() {
kernel<scalar_t>();
}), AT_EXPAND(AT_ALL_TYPES), AT_EXPAND(AT_BAREBONES_UNSIGNED_TYPES));
// After (method 2: use V2 integral types if AT_INTEGRAL_TYPES present)
AT_DISPATCH_V2(dtype, "op", AT_WRAP([&]() {
kernel<scalar_t>();
}), AT_EXPAND(AT_INTEGRAL_TYPES_V2), AT_EXPAND(AT_FLOATING_TYPES));
```
## Type group reference
**Unsigned type groups:**
- `AT_BAREBONES_UNSIGNED_TYPES`: kUInt16, kUInt32, kUInt64
- `AT_INTEGRAL_TYPES_V2`: AT_INTEGRAL_TYPES + AT_BAREBONES_UNSIGNED_TYPES
**Relationship:**
```cpp
AT_INTEGRAL_TYPES // kByte, kChar, kInt, kLong, kShort
AT_BAREBONES_UNSIGNED_TYPES // kUInt16, kUInt32, kUInt64
AT_INTEGRAL_TYPES_V2 // INTEGRAL_TYPES + BAREBONES_UNSIGNED_TYPES
```
## Instructions
### Step 1: Determine if conversion to V2 is needed
Check if the file uses AT_DISPATCH_V2:
**If using old AT_DISPATCH:**
- First convert to AT_DISPATCH_V2 using the at-dispatch-v2 skill
- Then proceed with adding uint support
**If already using AT_DISPATCH_V2:**
- Proceed directly to Step 2
### Step 2: Analyze the current dispatch macro
Identify what type groups are currently in use:
```cpp
AT_DISPATCH_V2(dtype, "op", AT_WRAP([&]() {
// body
}), AT_EXPAND(AT_ALL_TYPES), kHalf, kBFloat16);
^^^^^^^^^^^^^^^^^^^^^^^^^
Current type coverage
```
Common patterns:
- `AT_EXPAND(AT_ALL_TYPES)` → includes AT_INTEGRAL_TYPES + AT_FLOATING_TYPES
- `AT_EXPAND(AT_INTEGRAL_TYPES)` → signed integers only
- `AT_EXPAND(AT_FLOATING_TYPES)` → floating point types
### Step 3: Choose the uint addition method
Two approaches:
**Method 1: Add AT_BAREBONES_UNSIGNED_TYPES explicitly**
- Use when: You want to be explicit about adding uint support
- Add `AT_EXPAND(AT_BAREBONES_UNSIGNED_TYPES)` to the type list
**Method 2: Substitute AT_INTEGRAL_TYPES with AT_INTEGRAL_TYPES_V2**
- Use when: The dispatch already uses `AT_EXPAND(AT_INTEGRAL_TYPES)`
- More concise: replaces one type group with its superset
- Only applicable if AT_INTEGRAL_TYPES is present
### Step 4: Apply the transformation
**Method 1 example:**
```cpp
// Before
AT_DISPATCH_V2(
dtype,
"min_values_cuda",
AT_WRAP([&]() {
kernel_impl<scalar_t>(iter);
}),
AT_EXPAND(AT_ALL_TYPES),
kBFloat16, kHalf, kBool
);
// After (add unsigned types)
AT_DISPATCH_V2(
dtype,
"min_values_cuda",
AT_WRAP([&]() {
kernel_impl<scalar_t>(iter);
}),
AT_EXPAND(AT_ALL_TYPES),
AT_EXPAND(AT_BAREBONES_UNSIGNED_TYPES),
kBFloat16, kHalf, kBool
);
```
**Method 2 example:**
```cpp
// Before
AT_DISPATCH_V2(
dtype,
"integral_op",
AT_WRAP([&]() {
kernel<scalar_t>();
}),
AT_EXPAND(AT_INTEGRAL_TYPES)
);
// After (substitute with V2)
AT_DISPATCH_V2(
dtype,
"integral_op",
AT_WRAP([&]() {
kernel<scalar_t>();
}),
AT_EXPAND(AT_INTEGRAL_TYPES_V2)
);
```
### Step 5: Handle AT_ALL_TYPES vs individual type groups
If the dispatch uses `AT_EXPAND(AT_ALL_TYPES)`:
- `AT_ALL_TYPES` = `AT_INTEGRAL_TYPES` + `AT_FLOATING_TYPES`
- To add uint: add `AT_EXPAND(AT_BAREBONES_UNSIGNED_TYPES)` to the list
If the dispatch separately lists INTEGRAL and FLOATING:
```cpp
// Before
AT_EXPAND(AT_INTEGRAL_TYPES), AT_EXPAND(AT_FLOATING_TYPES)
// After (Method 2 preferred)
AT_EXPAND(AT_INTEGRAL_TYPES_V2), AT_EXPAND(AT_FLOATING_TYPES)
```
### Step 6: Verify all dispatch sites
Check the file for ALL dispatch macros that need uint support:
- Some operators have multiple dispatch sites (CPU, CUDA, different functions)
- Apply the transformation consistently across all sites
- Ensure each gets the same type coverage updates
### Step 7: Validate the changes
Check that:
- [ ] AT_DISPATCH_V2 format is used (not old AT_DISPATCH)
- [ ] Unsigned types are added via one of the two methods
- [ ] All relevant dispatch sites in the file are updated
- [ ] Type groups use `AT_EXPAND()`
- [ ] Arguments are properly formatted and comma-separated
## Common patterns
### Pattern 1: AT_ALL_TYPES + extras
```cpp
// Before
AT_DISPATCH_V2(dtype, "op", AT_WRAP([&]() {
kernel<scalar_t>();
}), AT_EXPAND(AT_ALL_TYPES), kHalf, kBFloat16);
// After
AT_DISPATCH_V2(dtype, "op", AT_WRAP([&]() {
kernel<scalar_t>();
}), AT_EXPAND(AT_ALL_TYPES), AT_EXPAND(AT_BAREBONES_UNSIGNED_TYPES), kHalf, kBFloat16);
```
### Pattern 2: Separate INTEGRAL + FLOATING
```cpp
// Before
AT_DISPATCH_V2(dtype, "op", AT_WRAP([&]() {
kernel<scalar_t>();
}), AT_EXPAND(AT_INTEGRAL_TYPES), AT_EXPAND(AT_FLOATING_TYPES));
// After
AT_DISPATCH_V2(dtype, "op", AT_WRAP([&]() {
kernel<scalar_t>();
}), AT_EXPAND(AT_INTEGRAL_TYPES_V2), AT_EXPAND(AT_FLOATING_TYPES));
```
### Pattern 3: Old dispatch needs conversion first
```cpp
// Before (needs v2 conversion first)
AT_DISPATCH_ALL_TYPES_AND2(kHalf, kBFloat16, dtype, "op", [&]() {
kernel<scalar_t>();
});
// After v2 conversion
AT_DISPATCH_V2(dtype, "op", AT_WRAP([&]() {
kernel<scalar_t>();
}), AT_EXPAND(AT_ALL_TYPES), kHalf, kBFloat16);
// After adding uint support
AT_DISPATCH_V2(dtype, "op", AT_WRAP([&]() {
kernel<scalar_t>();
}), AT_EXPAND(AT_ALL_TYPES), AT_EXPAND(AT_BAREBONES_UNSIGNED_TYPES), kHalf, kBFloat16);
```
## Multiple dispatch sites example
For a file with multiple functions:
```cpp
void min_values_kernel_cuda(TensorIterator& iter) {
AT_DISPATCH_V2(iter.dtype(), "min_values_cuda", AT_WRAP([&]() {
impl<scalar_t>(iter);
}), AT_EXPAND(AT_ALL_TYPES), AT_EXPAND(AT_BAREBONES_UNSIGNED_TYPES), kBFloat16, kHalf);
// ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
// Added uint support
}
void min_launch_kernel(TensorIterator &iter) {
AT_DISPATCH_V2(iter.input_dtype(), "min_cuda", AT_WRAP([&]() {
gpu_reduce_kernel<scalar_t>(iter);
}), AT_EXPAND(AT_ALL_TYPES), AT_EXPAND(AT_BAREBONES_UNSIGNED_TYPES), kBFloat16, kHalf);
// ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
// Added uint support here too
}
```
## Decision tree
Use this decision tree to determine the approach:
```
Is the file using AT_DISPATCH_V2?
├─ No → Use at-dispatch-v2 skill first, then continue
└─ Yes
└─ Does it use AT_EXPAND(AT_INTEGRAL_TYPES)?
├─ Yes → Replace with AT_EXPAND(AT_INTEGRAL_TYPES_V2)
└─ No → Add AT_EXPAND(AT_BAREBONES_UNSIGNED_TYPES) to type list
```
## Edge cases
### Case 1: Dispatch with only floating types
If the operator only supports floating point types, don't add uint support:
```cpp
// Leave as-is - floating point only operator
AT_DISPATCH_V2(dtype, "float_op", AT_WRAP([&]() {
kernel<scalar_t>();
}), AT_EXPAND(AT_FLOATING_TYPES), kHalf);
```
### Case 2: Complex types present
Unsigned types work alongside complex types:
```cpp
AT_DISPATCH_V2(dtype, "op", AT_WRAP([&]() {
kernel<scalar_t>();
}), AT_EXPAND(AT_ALL_TYPES),
AT_EXPAND(AT_BAREBONES_UNSIGNED_TYPES),
AT_EXPAND(AT_COMPLEX_TYPES),
kHalf, kBFloat16);
```
### Case 3: Already has uint support
Check if uint types are already present:
- If `AT_INTEGRAL_TYPES_V2` is used → already has uint support
- If `AT_BAREBONES_UNSIGNED_TYPES` is already in list → already has uint support
- Skip the file if uint support is already present
## Workflow
When asked to add uint support:
1. Read the target file
2. Check if using AT_DISPATCH_V2:
- If not → use at-dispatch-v2 skill first
3. Identify all dispatch macro sites
4. For each dispatch:
- Analyze current type groups
- Choose method (add BAREBONES_UNSIGNED or upgrade to V2)
- Apply transformation with Edit tool
5. Show the user the changes
6. Explain what was modified
## Important notes
- Always check if v2 conversion is needed first
- Apply changes consistently across all dispatch sites in the file
- Method 2 (AT_INTEGRAL_TYPES_V2) is cleaner when applicable
- Method 1 (explicit AT_BAREBONES_UNSIGNED_TYPES) is more explicit
- Unsigned types are: kUInt16, kUInt32, kUInt64 (not kByte which is uint8)
- Some operators may not semantically support unsigned types - use judgment
## Testing
After adding uint support, the operator should accept uint16, uint32, and uint64 tensors. The user is responsible for functional testing.

View File

@ -0,0 +1,305 @@
---
name: at-dispatch-v2
description: Convert PyTorch AT_DISPATCH macros to AT_DISPATCH_V2 format in ATen C++ code. Use when porting AT_DISPATCH_ALL_TYPES_AND*, AT_DISPATCH_FLOATING_TYPES*, or other dispatch macros to the new v2 API. For ATen kernel files, CUDA kernels, and native operator implementations.
---
# AT_DISPATCH to AT_DISPATCH_V2 Converter
This skill helps convert PyTorch's legacy AT_DISPATCH macros to the new AT_DISPATCH_V2 format, as defined in `aten/src/ATen/Dispatch_v2.h`.
## When to use this skill
Use this skill when:
- Converting AT_DISPATCH_* macros to AT_DISPATCH_V2
- Porting ATen kernels to use the new dispatch API
- Working with files in `aten/src/ATen/native/` that use dispatch macros
- User mentions "AT_DISPATCH", "dispatch v2", "Dispatch_v2.h", or macro conversion
## Quick reference
**Old format:**
```cpp
AT_DISPATCH_ALL_TYPES_AND3(kBFloat16, kHalf, kBool, dtype, "kernel_name", [&]() {
// lambda body
});
```
**New format:**
```cpp
AT_DISPATCH_V2(dtype, "kernel_name", AT_WRAP([&]() {
// lambda body
}), AT_EXPAND(AT_ALL_TYPES), kBFloat16, kHalf, kBool);
```
## Key transformations
1. **Reorder arguments**: `scalar_type` and `name` come first, then lambda, then types
2. **Wrap the lambda**: Use `AT_WRAP(lambda)` to handle internal commas
3. **Expand type groups**: Use `AT_EXPAND(AT_ALL_TYPES)` instead of implicit expansion
4. **List individual types**: Add extra types (kHalf, kBFloat16, etc.) after expanded groups
5. **Add include**: `#include <ATen/Dispatch_v2.h>` near other Dispatch includes
## Instructions
### Step 1: Add the Dispatch_v2.h include
Add the v2 header near the existing `#include <ATen/Dispatch.h>`:
```cpp
#include <ATen/Dispatch.h>
#include <ATen/Dispatch_v2.h>
```
Keep the old Dispatch.h include for now (other code may still need it).
### Step 2: Identify the old dispatch pattern
Common patterns to convert:
- `AT_DISPATCH_ALL_TYPES_AND{2,3,4}(type1, type2, ..., scalar_type, name, lambda)`
- `AT_DISPATCH_FLOATING_TYPES_AND{2,3}(type1, type2, ..., scalar_type, name, lambda)`
- `AT_DISPATCH_ALL_TYPES_AND_COMPLEX_AND{2,3}(type1, ..., scalar_type, name, lambda)`
- `AT_DISPATCH_FLOATING_AND_COMPLEX_TYPES_AND{2,3}(type1, ..., scalar_type, name, lambda)`
### Step 3: Map the old macro to type groups
Identify which type group macro corresponds to the base types:
| Old macro base | AT_DISPATCH_V2 type group |
|----------------|---------------------------|
| `ALL_TYPES` | `AT_EXPAND(AT_ALL_TYPES)` |
| `FLOATING_TYPES` | `AT_EXPAND(AT_FLOATING_TYPES)` |
| `INTEGRAL_TYPES` | `AT_EXPAND(AT_INTEGRAL_TYPES)` |
| `COMPLEX_TYPES` | `AT_EXPAND(AT_COMPLEX_TYPES)` |
| `ALL_TYPES_AND_COMPLEX` | `AT_EXPAND(AT_ALL_TYPES_AND_COMPLEX)` |
For combined patterns, use multiple `AT_EXPAND()` entries:
```cpp
// Old: AT_DISPATCH_ALL_TYPES_AND_COMPLEX_AND2(...)
// New: AT_EXPAND(AT_ALL_TYPES), AT_EXPAND(AT_COMPLEX_TYPES), type1, type2
```
### Step 4: Extract the individual types
From `AT_DISPATCH_*_AND2(type1, type2, ...)` or `AT_DISPATCH_*_AND3(type1, type2, type3, ...)`, extract the individual types (type1, type2, etc.).
These become the trailing arguments after the type group:
```cpp
AT_DISPATCH_V2(..., AT_EXPAND(AT_ALL_TYPES), kBFloat16, kHalf, kBool)
^^^^^^^^^^^^^^^^^^^^^^^^
Individual types from AND3
```
### Step 5: Transform to AT_DISPATCH_V2
Apply the transformation:
**Pattern:**
```cpp
AT_DISPATCH_V2(
scalar_type, // 1st: The dtype expression
"name", // 2nd: The debug string
AT_WRAP(lambda), // 3rd: The lambda wrapped in AT_WRAP
type_groups, // 4th+: Type groups with AT_EXPAND()
individual_types // Last: Individual types
)
```
**Example transformation:**
```cpp
// BEFORE
AT_DISPATCH_ALL_TYPES_AND3(
kBFloat16, kHalf, kBool,
iter.dtype(),
"min_values_cuda",
[&]() {
min_values_kernel_cuda_impl<scalar_t>(iter);
}
);
// AFTER
AT_DISPATCH_V2(
iter.dtype(),
"min_values_cuda",
AT_WRAP([&]() {
min_values_kernel_cuda_impl<scalar_t>(iter);
}),
AT_EXPAND(AT_ALL_TYPES),
kBFloat16, kHalf, kBool
);
```
### Step 6: Handle multi-line lambdas
For lambdas with internal commas or complex expressions, AT_WRAP is essential:
```cpp
AT_DISPATCH_V2(
dtype,
"complex_kernel",
AT_WRAP([&]() {
gpu_reduce_kernel<scalar_t, scalar_t>(
iter,
MinOps<scalar_t>{},
thrust::pair<scalar_t, int64_t>(upper_bound(), 0) // Commas inside!
);
}),
AT_EXPAND(AT_ALL_TYPES)
);
```
### Step 7: Verify the conversion
Check that:
- [ ] `AT_WRAP()` wraps the entire lambda
- [ ] Type groups use `AT_EXPAND()`
- [ ] Individual types don't have `AT_EXPAND()` (just `kBFloat16`, not `AT_EXPAND(kBFloat16)`) - see the sketch after this checklist
- [ ] Argument order is: scalar_type, name, lambda, types
- [ ] Include added: `#include <ATen/Dispatch_v2.h>`
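A quick sketch against the checklist (kernel name and lambda body are illustrative): one correct call, followed by common mistakes shown as comments.
```cpp
// Correct: type groups use AT_EXPAND(), individual types do not, and the lambda is wrapped.
AT_DISPATCH_V2(iter.dtype(), "example_kernel", AT_WRAP([&]() {
  example_impl<scalar_t>(iter);
}), AT_EXPAND(AT_ALL_TYPES), kHalf, kBFloat16);

// Common mistakes (these will not work as intended):
//   ..., AT_ALL_TYPES, kHalf);                        // type group missing AT_EXPAND()
//   ..., AT_EXPAND(AT_ALL_TYPES), AT_EXPAND(kHalf));  // AT_EXPAND() on an individual type
//   AT_DISPATCH_V2(AT_WRAP([&]{ ... }), iter.dtype(), "example_kernel", ...);  // wrong argument order
```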
## Type group reference
Available type group macros (use with `AT_EXPAND()`):
```cpp
AT_INTEGRAL_TYPES // kByte, kChar, kInt, kLong, kShort
AT_FLOATING_TYPES // kDouble, kFloat
AT_COMPLEX_TYPES // kComplexDouble, kComplexFloat
AT_QINT_TYPES // kQInt8, kQUInt8, kQInt32
AT_ALL_TYPES // INTEGRAL_TYPES + FLOATING_TYPES
AT_ALL_TYPES_AND_COMPLEX // ALL_TYPES + COMPLEX_TYPES
AT_INTEGRAL_TYPES_V2 // INTEGRAL_TYPES + unsigned types
AT_BAREBONES_UNSIGNED_TYPES // kUInt16, kUInt32, kUInt64
AT_FLOAT8_TYPES // Float8 variants
```
## Common patterns
### Pattern: AT_DISPATCH_ALL_TYPES_AND2
```cpp
// Before
AT_DISPATCH_ALL_TYPES_AND2(kHalf, kBFloat16, dtype, "op", [&]() {
kernel<scalar_t>(data);
});
// After
AT_DISPATCH_V2(dtype, "op", AT_WRAP([&]() {
kernel<scalar_t>(data);
}), AT_EXPAND(AT_ALL_TYPES), kHalf, kBFloat16);
```
### Pattern: AT_DISPATCH_FLOATING_TYPES_AND3
```cpp
// Before
AT_DISPATCH_FLOATING_TYPES_AND3(kHalf, kBFloat16, kFloat8_e4m3fn,
tensor.scalar_type(), "float_op", [&] {
process<scalar_t>(tensor);
});
// After
AT_DISPATCH_V2(tensor.scalar_type(), "float_op", AT_WRAP([&] {
process<scalar_t>(tensor);
}), AT_EXPAND(AT_FLOATING_TYPES), kHalf, kBFloat16, kFloat8_e4m3fn);
```
### Pattern: AT_DISPATCH_ALL_TYPES_AND_COMPLEX_AND2
```cpp
// Before
AT_DISPATCH_ALL_TYPES_AND_COMPLEX_AND2(
kComplexHalf, kHalf,
self.scalar_type(),
"complex_op",
[&] {
result = compute<scalar_t>(self);
}
);
// After
AT_DISPATCH_V2(
self.scalar_type(),
"complex_op",
AT_WRAP([&] {
result = compute<scalar_t>(self);
}),
AT_EXPAND(AT_ALL_TYPES),
AT_EXPAND(AT_COMPLEX_TYPES),
kComplexHalf,
kHalf
);
```
## Edge cases
### Case 1: No extra types (rare)
```cpp
// Before
AT_DISPATCH_ALL_TYPES(dtype, "op", [&]() { kernel<scalar_t>(); });
// After
AT_DISPATCH_V2(dtype, "op", AT_WRAP([&]() {
kernel<scalar_t>();
}), AT_EXPAND(AT_ALL_TYPES));
```
### Case 2: Many individual types (AND4, AND5, etc.)
```cpp
// Before
AT_DISPATCH_FLOATING_TYPES_AND4(kHalf, kBFloat16, kFloat8_e4m3fn, kFloat8_e5m2,
dtype, "float8_op", [&]() { kernel<scalar_t>(); });
// After
AT_DISPATCH_V2(dtype, "float8_op", AT_WRAP([&]() {
kernel<scalar_t>();
}), AT_EXPAND(AT_FLOATING_TYPES), kHalf, kBFloat16, kFloat8_e4m3fn, kFloat8_e5m2);
```
### Case 3: Lambda with no captures
```cpp
// Before
AT_DISPATCH_ALL_TYPES_AND2(kHalf, kBool, dtype, "op", []() {
static_kernel<scalar_t>();
});
// After
AT_DISPATCH_V2(dtype, "op", AT_WRAP([]() {
static_kernel<scalar_t>();
}), AT_EXPAND(AT_ALL_TYPES), kHalf, kBool);
```
## Benefits of AT_DISPATCH_V2
1. **No arity in macro name**: Don't need different macros for AND2, AND3, AND4
2. **Composable type sets**: Mix and match type groups with `AT_EXPAND()`
3. **Extensible**: Easy to add more types without hitting macro limits
4. **Clearer**: Type groups are explicit, not implicit in macro name
## Important notes
- Keep `#include <ATen/Dispatch.h>` - other code may need it
- The `AT_WRAP()` is mandatory - prevents comma parsing issues in the lambda
- Type groups need `AT_EXPAND()`, individual types don't
- The v2 API is in `aten/src/ATen/Dispatch_v2.h` - refer to it for full docs
- See the header file for the Python script to regenerate the macro implementation
## Workflow
When asked to convert AT_DISPATCH macros:
1. Read the file to identify all AT_DISPATCH uses
2. Add `#include <ATen/Dispatch_v2.h>` if not present
3. For each dispatch macro:
- Identify the pattern and extract components
- Map the base type group
- Extract individual types
- Construct the AT_DISPATCH_V2 call
- Apply with Edit tool
4. Show the user the complete converted file
5. Explain what was changed
Do NOT compile or test the code - focus on accurate conversion only.

View File

@ -1,11 +1,11 @@
name: 🚀 Release highlight for proposed Feature
name: 🚀 New Feature for Release
description: Submit a Release highlight for proposed Feature
labels: ["release-feature-request"]
body:
- type: textarea
attributes:
label: Release highlight for proposed Feature
label: New Feature for Release
description: >
Example: “A torch.special module, analogous to SciPy's special module.”
- type: input

View File

@ -38,9 +38,9 @@ runs:
run: |
python3 .github/scripts/pytest_cache.py \
--download \
--cache_dir $GITHUB_WORKSPACE/$CACHE_DIR \
--pr_identifier $GITHUB_REF \
--job_identifier $JOB_IDENTIFIER \
--temp_dir $RUNNER_TEMP \
--repo $REPO \
--bucket $BUCKET \
--cache_dir "$GITHUB_WORKSPACE/$CACHE_DIR" \
--pr_identifier "$GITHUB_REF" \
--job_identifier "$JOB_IDENTIFIER" \
--temp_dir "$RUNNER_TEMP" \
--repo "$REPO" \
--bucket "$BUCKET" \

View File

@ -47,11 +47,11 @@ runs:
run: |
python3 .github/scripts/pytest_cache.py \
--upload \
--cache_dir $GITHUB_WORKSPACE/$CACHE_DIR \
--pr_identifier $GITHUB_REF \
--job_identifier $JOB_IDENTIFIER \
--sha $SHA \
--test_config $TEST_CONFIG \
--shard $SHARD \
--repo $REPO \
--temp_dir $RUNNER_TEMP \
--cache_dir "$GITHUB_WORKSPACE/$CACHE_DIR" \
--pr_identifier "$GITHUB_REF" \
--job_identifier "$JOB_IDENTIFIER" \
--sha "$SHA" \
--test_config "$TEST_CONFIG" \
--shard "$SHARD" \
--repo "$REPO" \
--temp_dir "$RUNNER_TEMP" \

View File

@ -1 +1 @@
3b0e7a6f192ca2715e7e6cbe5db007aea7165fe2
ad5816f0eee1c873df1b7d371c69f1f811a89387

View File

@ -1 +1 @@
cfbc5c2f1c798991715a6b06bb3ce46478c4487c
ccb801b88af136454798b945175c4c87e636ac33

View File

@ -1 +1 @@
df6798dfb931ce7c7fe5bed2447cd1092a5981af
e4d25697f9dc5eedaf8f0a5bf085c62c5455a53a

125
.github/copilot-instructions.md vendored Normal file
View File

@ -0,0 +1,125 @@
# PyTorch Copilot Instructions
This is the PyTorch machine learning framework codebase. These instructions help AI agents navigate and contribute effectively.
## Architecture Overview
### Core Components
- **c10/** - Core library (C++-10 compatible) for essential, binary-size-conscious functionality
- **aten/** - ATen tensor library (C++), PyTorch's foundation without autograd
- `aten/src/ATen/native/` - Modern operator implementations (CPU/CUDA/MPS/sparse)
- `aten/src/ATen/native/native_functions.yaml` - **Critical**: Declarative operator registry
- **torch/** - Python bindings and public API
- `torch/csrc/` - C++ Python bindings (hand-written and generated)
- `torch/csrc/autograd/` - Reverse-mode automatic differentiation
- `torch/csrc/jit/` - TorchScript JIT compiler
- **torchgen/** - Code generation tooling that reads `native_functions.yaml`
- **tools/** - Build scripts, autograd derivatives, code generation
### The Code Generation Workflow
**Most operator changes require editing `native_functions.yaml`**, not direct C++ files. This YAML file:
1. Declares operator signatures, variants (function/method), and dispatch behavior
2. Gets processed by `torchgen/` to generate C++/Python bindings
3. Produces headers in `build/aten/src/ATen/` during compilation
Example entry structure:
```yaml
- func: my_op(Tensor self, Scalar alpha=1) -> Tensor
variants: function, method
dispatch:
CPU: my_op_cpu
CUDA: my_op_cuda
```
After editing `native_functions.yaml`, implement kernels in `aten/src/ATen/native/` (see `aten/src/ATen/native/README.md`).
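As a rough, hypothetical sketch (file path, function name, and body are placeholders tied to the example entry above, not an existing operator), the corresponding CPU kernel could look like:
```cpp
// aten/src/ATen/native/MyOp.cpp (illustrative path)
#include <ATen/ATen.h>

namespace at::native {

// Matches the hypothetical `my_op` entry above (dispatch: CPU -> my_op_cpu).
Tensor my_op_cpu(const Tensor& self, const Scalar& alpha) {
  // Placeholder behavior only; real kernels typically dispatch on dtype
  // and run an optimized loop (often via TensorIterator).
  return self.mul(alpha);
}

} // namespace at::native
```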
## Development Workflows
### Building from Source
**Never run `setup.py` directly** - use pip with editable install:
```bash
python -m pip install --no-build-isolation -v -e .
```
Speed up builds:
- `DEBUG=1` - Debug symbols with `-g -O0`
- `USE_CUDA=0` - Skip CUDA compilation
- `BUILD_TEST=0` - Skip C++ test binaries
- Install `ninja` (`pip install ninja`) for faster builds
- Use `ccache` for incremental compilation caching
Rebuild specific targets: `(cd build && ninja <target>)`
### Testing
**Critical**: DO NOT run entire test suites. Run specific tests only:
```bash
python test/test_torch.py TestTorch.test_specific_case
```
**Test structure**: All tests use `torch.testing._internal.common_utils`:
```python
from torch.testing._internal.common_utils import run_tests, TestCase
class TestFeature(TestCase):
def test_something(self):
# Use self.assertEqual for tensor comparisons
pass
if __name__ == "__main__":
run_tests()
```
**For bug fixes**: Create a standalone reproduction script first, verify it fails, then fix and add to appropriate test file.
### Linting
Run linter (not pre-commit): `lintrunner -a` (auto-applies fixes)
## Project-Specific Conventions
### Memory and Storage
- **Storage is never nullptr** (but `StorageImpl.data` may be nullptr for unallocated outputs)
- CUDA device info lives in storage objects
### Python-C++ Integration (`torch/csrc/`)
- Always include `Python.h` **first** to avoid `_XOPEN_SOURCE` redefinition errors
- Use `pybind11::gil_scoped_acquire` before calling Python API or using `THPObjectPtr`
- Wrap entry points with `HANDLE_TH_ERRORS` / `END_HANDLE_TH_ERRORS` for exception conversion (a minimal sketch follows)
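A minimal sketch of that pattern (the binding name is hypothetical; the macros provide the exception translation):
```cpp
#include <Python.h>                 // must come first
#include <torch/csrc/Exceptions.h>  // HANDLE_TH_ERRORS / END_HANDLE_TH_ERRORS

// Hypothetical hand-written binding: C++ exceptions thrown in the body are
// converted to Python exceptions by the macros; nullptr is returned on error.
static PyObject* THPModule_exampleBinding(PyObject* /*self*/, PyObject* /*noargs*/) {
  HANDLE_TH_ERRORS
  // ... call into ATen / autograd here ...
  Py_RETURN_NONE;
  END_HANDLE_TH_ERRORS
}
```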
### Dispatch System
- PyTorch uses operator dispatch to route calls to backend-specific kernels
- Prefer `CompositeExplicitAutograd` dispatch when writing device-agnostic compound ops
- See `aten/src/ATen/native/README.md` for dispatch keyword guidance
## Git Workflow (AI Agent Specific)
When preparing PRs from this environment:
```bash
git stash -u
git reset --hard $(cat /tmp/orig_work.txt) # Reset to LOCAL branch
git stash pop
# Resolve conflicts if necessary
```
## Common Gotchas
1. **Editing generated files** - If it's in `build/`, don't edit it. Edit the source template or `native_functions.yaml`
2. **NVCC template compilation** - NVCC is stricter about C++ than gcc/clang; code working on Linux may fail Windows CI
3. **Windows symbol visibility** - Use `TORCH_API` macros for exported symbols (required on Windows, optional on Linux)
4. **No internet access** - DO NOT attempt to install dependencies during development
## Key Files Reference
- `AGENTS.md` - Instructions specific to AI coding agents
- `CONTRIBUTING.md` - Comprehensive human contributor guide
- `GLOSSARY.md` - Terminology (ATen, kernels, operations, JIT, TorchScript)
- `aten/src/ATen/native/README.md` - Operator implementation guide
- `tools/autograd/derivatives.yaml` - Gradient definitions for autograd
## Performance Debugging
Use `TORCH_SHOW_CPP_STACKTRACES=1` for C++ traces in Python errors. For profiling, prefer `py-spy` over manual instrumentation.

22
.github/labeler.yml vendored
View File

@ -138,7 +138,8 @@
- test/test_matmul_cuda.py
- test/test_scaled_matmul_cuda.py
- test/inductor/test_fp8.py
- aten/src/ATen/native/cuda/Blas.cpp
- aten/src/ATen/native/cuda/*Blas.cpp
- aten/src/ATen/cuda/CUDA*Blas.*
- torch/**/*cublas*
- torch/_inductor/kernel/mm.py
- test/inductor/test_max_autotune.py
@ -148,7 +149,8 @@
- test/test_matmul_cuda.py
- test/test_scaled_matmul_cuda.py
- test/inductor/test_fp8.py
- aten/src/ATen/native/cuda/Blas.cpp
- aten/src/ATen/native/cuda/*Blas.cpp
- aten/src/ATen/cuda/CUDA*Blas.*
- torch/**/*cublas*
- torch/_inductor/kernel/mm.py
- test/inductor/test_max_autotune.py
@ -158,7 +160,21 @@
- test/test_matmul_cuda.py
- test/test_scaled_matmul_cuda.py
- test/inductor/test_fp8.py
- aten/src/ATen/native/cuda/Blas.cpp
- aten/src/ATen/native/cuda/*Blas.cpp
- aten/src/ATen/cuda/CUDA*Blas.*
- torch/_inductor/kernel/mm.py
- test/inductor/test_max_autotune.py
- third_party/fbgemm
"ciflow/mps":
- aten/src/ATen/mps/**
- aten/src/ATen/native/mps/**
- torch/_inductor/codegen/mps.py
- test/test_mps.py
- test/inductor/test_mps_basic.py
"ciflow/h100-symm-mem":
- torch/csrc/distributed/c10d/symm_mem/**
- torch/distributed/_symmetric_memory/**
- test/distributed/**/*mem*
- test/distributed/**/*mem*/**

View File

@ -10,3 +10,4 @@
pathFilter:
- 'torch/csrc/inductor/aoti_torch/c/*'
- 'torch/csrc/inductor/aoti_torch/generated/*'
- 'torch/csrc/stable/c/*'

View File

@ -2,8 +2,8 @@ tracking_issue: 24422
ciflow_tracking_issue: 64124
ciflow_push_tags:
- ciflow/b200
- ciflow/b200-symm-mem
- ciflow/b200-distributed
- ciflow/b200-symm-mem
- ciflow/binaries
- ciflow/binaries_libtorch
- ciflow/binaries_wheel
@ -22,6 +22,8 @@ ciflow_push_tags:
- ciflow/inductor-perf-test-nightly-xpu
- ciflow/inductor-periodic
- ciflow/inductor-rocm
- ciflow/inductor-rocm-mi200
- ciflow/inductor-rocm-mi300
- ciflow/linux-aarch64
- ciflow/mps
- ciflow/nightly
@ -33,11 +35,13 @@ ciflow_push_tags:
- ciflow/quantization-periodic
- ciflow/riscv64
- ciflow/rocm
- ciflow/rocm-mi200
- ciflow/rocm-mi300
- ciflow/rocm-mi355
- ciflow/rocm-navi31
- ciflow/s390
- ciflow/slow
- ciflow/slow-rocm-mi200
- ciflow/torchbench
- ciflow/triton_binaries
- ciflow/trunk

View File

@ -1,10 +1,11 @@
# Delete old branches
import os
import re
from collections.abc import Callable
from datetime import datetime
from functools import lru_cache
from pathlib import Path
from typing import Any, Callable
from typing import Any
from github_utils import gh_fetch_json_dict, gh_graphql
from gitutils import GitRepo

View File

@ -8,10 +8,11 @@ import re
import subprocess
import sys
import warnings
from collections.abc import Callable
from enum import Enum
from functools import cache
from logging import info
from typing import Any, Callable, Optional
from typing import Any, Optional
from urllib.request import Request, urlopen
import yaml

View File

@ -28,7 +28,7 @@ CUDA_ARCHES_FULL_VERSION = {
"12.6": "12.6.3",
"12.8": "12.8.1",
"12.9": "12.9.1",
"13.0": "13.0.2",
"13.0": "13.0.0",
}
CUDA_ARCHES_CUDNN_VERSION = {
"12.6": "9",

View File

@ -11,7 +11,8 @@ import sys
import time
import urllib
import urllib.parse
from typing import Any, Callable, Optional
from collections.abc import Callable
from typing import Any, Optional
from urllib.request import Request, urlopen

View File

@ -3,8 +3,9 @@
import json
import os
import warnings
from collections.abc import Callable
from dataclasses import dataclass
from typing import Any, Callable, cast, Optional, Union
from typing import Any, cast, Optional, Union
from urllib.error import HTTPError
from urllib.parse import quote
from urllib.request import Request, urlopen

View File

@ -4,10 +4,10 @@ import os
import re
import tempfile
from collections import defaultdict
from collections.abc import Iterator
from collections.abc import Callable, Iterator
from datetime import datetime
from functools import wraps
from typing import Any, Callable, cast, Optional, TypeVar, Union
from typing import Any, cast, Optional, TypeVar, Union
T = TypeVar("T")

View File

@ -34,6 +34,9 @@ python3 torch/utils/data/datapipes/gen_pyi.py
# Also check generated pyi files
find torch -name '*.pyi' -exec git add --force -- "{}" +
# Print current environment
python3 -m pip freeze
RC=0
# Run lintrunner on all files
if ! lintrunner --force-color --tee-json=lint.json ${ADDITIONAL_LINTRUNNER_ARGS} 2> /dev/null; then

View File

@ -17,12 +17,12 @@ import re
import time
import urllib.parse
from collections import defaultdict
from collections.abc import Iterable
from collections.abc import Callable, Iterable
from dataclasses import dataclass
from functools import cache
from pathlib import Path
from re import Pattern
from typing import Any, Callable, cast, NamedTuple, Optional
from typing import Any, cast, NamedTuple, Optional
from warnings import warn
import yaml

View File

@ -97,8 +97,8 @@ jobs:
shell: bash
run: |
ngpu=$(rocminfo | grep -c -E 'Name:.*\sgfx')
if [[ $ngpu -lt 4 ]]; then
echo "Error: only $ngpu GPU(s) detected, at least 4 GPUs are needed for distributed jobs"
if [[ $ngpu -lt 2 ]]; then #We are temporarily reducing this down to 2 from 4 so that we can run tests on nodes with less gpus.
echo "Error: only $ngpu GPU(s) detected, at least 2 GPUs are needed for distributed jobs"
exit 1
fi

View File

@ -344,5 +344,21 @@ jobs:
if-no-files-found: ignore
path: ./**/core.[1-9]*
- name: Authenticate with AWS
uses: aws-actions/configure-aws-credentials@ececac1a45f3b08a01d2dd070d28d111c5fe6722 # v4.1.0
with:
role-to-assume: arn:aws:iam::308535385114:role/gha_workflow_upload-benchmark-results
# The max duration enforced by the server side
role-duration-seconds: 18000
aws-region: us-east-1
- name: Upload the benchmark results
uses: pytorch/test-infra/.github/actions/upload-benchmark-results@main
with:
benchmark-results-dir: test/test-reports
dry-run: false
schema-version: v3
github-token: ${{ secrets.GITHUB_TOKEN }}
- name: Teardown XPU
uses: ./.github/actions/teardown-xpu

View File

@ -37,7 +37,6 @@ jobs:
needs: get-label-type
with:
runner_prefix: "${{ needs.get-label-type.outputs.label-type }}"
runner: linux.12xlarge.memory
build-environment: linux-jammy-cuda12.8-py3.10-gcc11-distributed-b200
docker-image-name: ci-image:pytorch-linux-jammy-cuda12.8-cudnn9-py3-gcc11
cuda-arch-list: '10.0'

View File

@ -37,7 +37,6 @@ jobs:
needs: get-label-type
with:
runner_prefix: "${{ needs.get-label-type.outputs.label-type }}"
runner: linux.12xlarge.memory
build-environment: linux-jammy-cuda12.8-py3.10-gcc11-sm100-symm
docker-image-name: ci-image:pytorch-linux-jammy-cuda12.8-cudnn9-py3-gcc11
cuda-arch-list: '10.0'

View File

@ -56,6 +56,8 @@ jobs:
pytorch-linux-jammy-cuda12.8-cudnn9-py3-gcc9,
pytorch-linux-jammy-cuda12.4-cudnn9-py3-gcc11,
pytorch-linux-jammy-py3.10-clang12,
pytorch-linux-jammy-py3.11-clang12,
pytorch-linux-jammy-py3.12-clang12,
pytorch-linux-jammy-py3.13-clang12,
pytorch-linux-jammy-py3.14-clang12,
pytorch-linux-jammy-rocm-n-py3,
@ -65,9 +67,10 @@ jobs:
pytorch-linux-jammy-py3.10-gcc11,
pytorch-linux-jammy-py3-gcc11-inductor-benchmarks,
pytorch-linux-jammy-py3.12-halide,
pytorch-linux-jammy-cuda12.8-py3.12-pallas,
pytorch-linux-jammy-xpu-n-1-py3,
pytorch-linux-jammy-xpu-n-py3,
pytorch-linux-jammy-xpu-n-py3-inductor-benchmarks,
pytorch-linux-noble-xpu-n-py3,
pytorch-linux-noble-xpu-n-py3-inductor-benchmarks,
pytorch-linux-jammy-py3-clang18-asan,
pytorch-linux-jammy-py3-clang12-onnx,
pytorch-linux-jammy-linter,
@ -77,9 +80,11 @@ jobs:
pytorch-linux-noble-riscv64-py3.12-gcc14
]
include:
- docker-image-name: pytorch-linux-jammy-aarch64-py3.10-gcc11
- docker-image-name: pytorch-linux-jammy-aarch64-py3.10-gcc13
runner: linux.arm64.m7g.4xlarge
- docker-image-name: pytorch-linux-jammy-aarch64-py3.10-gcc11-inductor-benchmarks
- docker-image-name: pytorch-linux-jammy-aarch64-py3.10-clang21
runner: linux.arm64.m7g.4xlarge
- docker-image-name: pytorch-linux-jammy-aarch64-py3.10-gcc13-inductor-benchmarks
runner: linux.arm64.m7g.4xlarge
timeout-minutes: 600
# Docker uploads fail from LF runners, see https://github.com/pytorch/pytorch/pull/137358

View File

@ -8,6 +8,7 @@ on:
- docker.Makefile
- .github/workflows/docker-release.yml
- .github/scripts/generate_docker_release_matrix.py
- .github/scripts/generate_binary_build_matrix.py
push:
branches:
- nightly

View File

@ -37,7 +37,6 @@ jobs:
needs: get-label-type
with:
runner_prefix: "${{ needs.get-label-type.outputs.label-type }}"
runner: "linux.c7i.12xlarge"
build-environment: linux-jammy-cuda12.8-py3.10-gcc11-sm90-dist
docker-image-name: ci-image:pytorch-linux-jammy-cuda12.8-cudnn9-py3-gcc11
cuda-arch-list: '9.0'

View File

@ -72,7 +72,7 @@ jobs:
runner_prefix: "${{ needs.get-label-type.outputs.label-type }}"
runner: linux.arm64.m7g.4xlarge
build-environment: linux-jammy-aarch64-py3.10
docker-image-name: ci-image:pytorch-linux-jammy-aarch64-py3.10-gcc11-inductor-benchmarks
docker-image-name: ci-image:pytorch-linux-jammy-aarch64-py3.10-gcc13-inductor-benchmarks
test-matrix: |
{ include: [
{ config: "inductor_huggingface_perf_cpu_aarch64", shard: 1, num_shards: 9, runner: "linux.arm64.m7g.metal" },

View File

@ -83,8 +83,8 @@ jobs:
needs: get-label-type
with:
runner_prefix: "${{ needs.get-label-type.outputs.label-type }}"
build-environment: linux-jammy-xpu-n-py3.10
docker-image-name: ci-image:pytorch-linux-jammy-xpu-n-py3-inductor-benchmarks
build-environment: linux-noble-xpu-n-py3.10
docker-image-name: ci-image:pytorch-linux-noble-xpu-n-py3-inductor-benchmarks
runner: linux.c7i.12xlarge
test-matrix: |
{ include: [
@ -117,7 +117,7 @@ jobs:
uses: ./.github/workflows/_xpu-test.yml
needs: xpu-n-py3_10-inductor-benchmark-build
with:
build-environment: linux-jammy-xpu-n-py3.10
build-environment: linux-noble-xpu-n-py3.10
dashboard-tag: training-true-inference-true-default-true-dynamic-true-cudagraphs-false-cppwrapper-true-aotinductor-true-freezing_cudagraphs-false-cudagraphs_low_precision-false
docker-image: ${{ needs.xpu-n-py3_10-inductor-benchmark-build.outputs.docker-image }}
test-matrix: ${{ needs.xpu-n-py3_10-inductor-benchmark-build.outputs.test-matrix }}
@ -137,7 +137,7 @@ jobs:
uses: ./.github/workflows/_xpu-test.yml
needs: xpu-n-py3_10-inductor-benchmark-build
with:
build-environment: linux-jammy-xpu-n-py3.10
build-environment: linux-noble-xpu-n-py3.10
dashboard-tag: training-${{ inputs.training }}-inference-${{ inputs.inference }}-default-${{ inputs.default }}-dynamic-${{ inputs.dynamic }}-cudagraphs-${{ inputs.cudagraphs }}-cppwrapper-${{ inputs.cppwrapper }}-aotinductor-${{ inputs.aotinductor }}-maxautotune-${{ inputs.maxautotune }}-freezing_cudagraphs-${{ inputs.freezing_cudagraphs }}-cudagraphs_low_precision-${{ inputs.cudagraphs }}
docker-image: ${{ needs.xpu-n-py3_10-inductor-benchmark-build.outputs.docker-image }}
test-matrix: ${{ needs.xpu-n-py3_10-inductor-benchmark-build.outputs.test-matrix }}

View File

@ -1,12 +1,13 @@
name: inductor-rocm
name: inductor-rocm-mi200
on:
schedule:
- cron: 0 */3 * * *
push:
branches:
- main
- release/*
tags:
- ciflow/inductor-rocm/*
- ciflow/inductor-rocm-mi200/*
workflow_dispatch:
concurrency:

View File

@ -7,6 +7,7 @@ on:
- release/*
tags:
- ciflow/inductor-rocm/*
- ciflow/inductor-rocm-mi300/*
workflow_dispatch:
concurrency:

View File

@ -81,6 +81,32 @@ jobs:
test-matrix: ${{ needs.inductor-halide-build.outputs.test-matrix }}
secrets: inherit
inductor-pallas-build:
name: inductor-pallas-build
uses: ./.github/workflows/_linux-build.yml
needs: get-label-type
with:
build-environment: linux-jammy-cuda12.8-py3.12-gcc11
docker-image-name: ci-image:pytorch-linux-jammy-cuda12.8-py3.12-pallas
cuda-arch-list: '8.9'
runner: linux.8xlarge.memory
runner_prefix: "${{ needs.get-label-type.outputs.label-type }}"
test-matrix: |
{ include: [
{ config: "inductor-pallas", shard: 1, num_shards: 1, runner: "${{ needs.get-label-type.outputs.label-type }}linux.g5.12xlarge.nvidia.gpu" },
]}
secrets: inherit
inductor-pallas-test:
name: inductor-pallas-test
uses: ./.github/workflows/_linux-test.yml
needs: inductor-pallas-build
with:
build-environment: linux-jammy-py3.12-gcc11
docker-image: ${{ needs.inductor-pallas-build.outputs.docker-image }}
test-matrix: ${{ needs.inductor-pallas-build.outputs.test-matrix }}
secrets: inherit
inductor-triton-cpu-build:
name: inductor-triton-cpu-build
uses: ./.github/workflows/_linux-build.yml
@ -115,10 +141,10 @@ jobs:
runner_prefix: "${{ needs.get-label-type.outputs.label-type }}"
test-matrix: |
{ include: [
{ config: "inductor_amx", shard: 1, num_shards: 2, runner: "${{ needs.get-label-type.outputs.label-type }}linux.8xlarge.amx" },
{ config: "inductor_amx", shard: 2, num_shards: 2, runner: "${{ needs.get-label-type.outputs.label-type }}linux.8xlarge.amx" },
{ config: "inductor_avx2", shard: 1, num_shards: 2, runner: "${{ needs.get-label-type.outputs.label-type }}linux.10xlarge.avx2" },
{ config: "inductor_avx2", shard: 2, num_shards: 2, runner: "${{ needs.get-label-type.outputs.label-type }}linux.10xlarge.avx2" },
{ config: "inductor_amx", shard: 1, num_shards: 2, runner: "${{ needs.get-label-type.outputs.label-type }}linux.2xlarge.amx" },
{ config: "inductor_amx", shard: 2, num_shards: 2, runner: "${{ needs.get-label-type.outputs.label-type }}linux.2xlarge.amx" },
{ config: "inductor_avx2", shard: 1, num_shards: 2, runner: "${{ needs.get-label-type.outputs.label-type }}linux.2xlarge.avx2" },
{ config: "inductor_avx2", shard: 2, num_shards: 2, runner: "${{ needs.get-label-type.outputs.label-type }}linux.2xlarge.avx2" },
]}
secrets: inherit

View File

@ -84,13 +84,13 @@ jobs:
runner_prefix: "${{ needs.get-label-type.outputs.label-type }}"
test-matrix: |
{ include: [
{ config: "cpu_inductor_torchbench", shard: 1, num_shards: 2, runner: "${{ needs.get-label-type.outputs.label-type }}linux.8xlarge.amx" },
{ config: "cpu_inductor_torchbench", shard: 2, num_shards: 2, runner: "${{ needs.get-label-type.outputs.label-type }}linux.8xlarge.amx" },
{ config: "dynamic_cpu_inductor_huggingface", shard: 1, num_shards: 1, runner: "${{ needs.get-label-type.outputs.label-type }}linux.8xlarge.amx" },
{ config: "dynamic_cpu_inductor_timm", shard: 1, num_shards: 2, runner: "${{ needs.get-label-type.outputs.label-type }}linux.8xlarge.amx" },
{ config: "dynamic_cpu_inductor_timm", shard: 2, num_shards: 2, runner: "${{ needs.get-label-type.outputs.label-type }}linux.8xlarge.amx" },
{ config: "dynamic_cpu_inductor_torchbench", shard: 1, num_shards: 2, runner: "${{ needs.get-label-type.outputs.label-type }}linux.8xlarge.amx" },
{ config: "dynamic_cpu_inductor_torchbench", shard: 2, num_shards: 2, runner: "${{ needs.get-label-type.outputs.label-type }}linux.8xlarge.amx" },
{ config: "cpu_inductor_torchbench", shard: 1, num_shards: 2, runner: "${{ needs.get-label-type.outputs.label-type }}linux.2xlarge.amx" },
{ config: "cpu_inductor_torchbench", shard: 2, num_shards: 2, runner: "${{ needs.get-label-type.outputs.label-type }}linux.2xlarge.amx" },
{ config: "dynamic_cpu_inductor_huggingface", shard: 1, num_shards: 1, runner: "${{ needs.get-label-type.outputs.label-type }}linux.2xlarge.amx" },
{ config: "dynamic_cpu_inductor_timm", shard: 1, num_shards: 2, runner: "${{ needs.get-label-type.outputs.label-type }}linux.2xlarge.amx" },
{ config: "dynamic_cpu_inductor_timm", shard: 2, num_shards: 2, runner: "${{ needs.get-label-type.outputs.label-type }}linux.2xlarge.amx" },
{ config: "dynamic_cpu_inductor_torchbench", shard: 1, num_shards: 2, runner: "${{ needs.get-label-type.outputs.label-type }}linux.2xlarge.amx" },
{ config: "dynamic_cpu_inductor_torchbench", shard: 2, num_shards: 2, runner: "${{ needs.get-label-type.outputs.label-type }}linux.2xlarge.amx" },
{ config: "inductor_torchbench_cpu_smoketest_perf", shard: 1, num_shards: 1, runner: "${{ needs.get-label-type.outputs.label-type }}linux.24xl.spr-metal" },
]}
build-additional-packages: "vision audio torchao"

View File

@ -76,11 +76,12 @@ jobs:
# NOTE: mypy needs its own job because it depends on --all-files, without assessing all files it sometimes
# fails to find types when it should
lintrunner-mypy:
# NOTE: We should be able to disable this and consolidate with Pyrefly
lintrunner-pyrefly:
uses: pytorch/test-infra/.github/workflows/linux_job_v2.yml@main
name: lintrunner-mypy-${{ needs.get-changed-files.outputs.changed-files == '*' && 'all' || 'partial' }}
name: lintrunner-pyrefly-${{ needs.get-changed-files.outputs.changed-files == '*' && 'all' || 'partial' }}
needs: [get-label-type, get-changed-files]
# Only run if there are changed files relevant to mypy
# Only run if there are changed files relevant to pyrefly
if: |
github.repository_owner == 'pytorch' && (
needs.get-changed-files.outputs.changed-files == '*' ||
@ -98,8 +99,8 @@ jobs:
ref: ${{ github.event_name == 'pull_request' && github.event.pull_request.head.sha || github.sha }}
script: |
CHANGED_FILES="${{ needs.get-changed-files.outputs.changed-files }}"
echo "Running mypy"
ADDITIONAL_LINTRUNNER_ARGS="--take MYPY,MYPYSTRICT --all-files" .github/scripts/lintrunner.sh
echo "Running pyrefly"
ADDITIONAL_LINTRUNNER_ARGS="--take PYREFLY --all-files" .github/scripts/lintrunner.sh
lintrunner-noclang:
uses: pytorch/test-infra/.github/workflows/linux_job_v2.yml@main
@ -118,9 +119,9 @@ jobs:
CHANGED_FILES="${{ needs.get-changed-files.outputs.changed-files }}"
echo "Running all other linters"
if [ "$CHANGED_FILES" = '*' ]; then
ADDITIONAL_LINTRUNNER_ARGS="--skip CLANGTIDY,CLANGFORMAT,MYPY,MYPYSTRICT,PYREFLY --all-files" .github/scripts/lintrunner.sh
ADDITIONAL_LINTRUNNER_ARGS="--skip CLANGTIDY,CLANGFORMAT,PYREFLY --all-files" .github/scripts/lintrunner.sh
else
ADDITIONAL_LINTRUNNER_ARGS="--skip CLANGTIDY,CLANGFORMAT,MYPY,MYPYSTRICT,PYREFLY ${CHANGED_FILES}" .github/scripts/lintrunner.sh
ADDITIONAL_LINTRUNNER_ARGS="--skip CLANGTIDY,CLANGFORMAT,PYREFLY ${CHANGED_FILES}" .github/scripts/lintrunner.sh
fi
quick-checks:

View File

@ -33,7 +33,7 @@ jobs:
with:
runner_prefix: ${{ needs.get-label-type.outputs.label-type }}
build-environment: linux-jammy-aarch64-py3.10
docker-image-name: ci-image:pytorch-linux-jammy-aarch64-py3.10-gcc11
docker-image-name: ci-image:pytorch-linux-jammy-aarch64-py3.10-gcc13
runner: linux.arm64.m7g.4xlarge
test-matrix: |
{ include: [

View File

@ -5,9 +5,11 @@ on:
- cron: 0 0 * * *
push:
tags:
# NOTE: Doc build pipelines should only get triggered on release candidate builds
# Release candidate tags look like: v1.11.0-rc1
- v[0-9]+.[0-9]+.[0-9]+-rc[0-9]+
# NOTE: Doc build pipelines should only get triggered on:
# Major or minor release candidates builds
- v[0-9]+.[0-9]+.0+-rc[0-9]+
# Final RC for major, minor and patch releases
- v[0-9]+.[0-9]+.[0-9]+
- ciflow/nightly/*
workflow_dispatch:
@ -41,7 +43,7 @@ jobs:
uses: ./.github/workflows/_linux-build.yml
needs: get-label-type
with:
runner: "${{ needs.get-label-type.outputs.label-type }}linux.2xlarge"
runner_prefix: "${{ needs.get-label-type.outputs.label-type }}"
build-environment: linux-jammy-py3.10-gcc11
docker-image-name: ci-image:pytorch-linux-jammy-py3.10-gcc11
secrets: inherit

View File

@ -60,7 +60,7 @@ jobs:
with:
build-environment: linux-jammy-aarch64-py3.10
runner: linux.arm64.m7g.4xlarge
docker-image-name: ci-image:pytorch-linux-jammy-aarch64-py3.10-gcc11
docker-image-name: ci-image:pytorch-linux-jammy-aarch64-py3.10-gcc13
test-matrix: |
{ include: [
{ config: "cpu_operator_benchmark_short", shard: 1, num_shards: 1, runner: "linux.arm64.m8g.4xlarge" },

View File

@ -11,7 +11,6 @@ on:
- cron: 29 8 * * * # about 1:29am PDT, for mem leak check and rerun disabled tests
push:
tags:
- ciflow/periodic/*
- ciflow/periodic-rocm-mi200/*
branches:
- release/*

View File

@ -11,6 +11,7 @@ on:
- cron: 29 8 * * * # about 1:29am PDT, for mem leak check and rerun disabled tests
push:
tags:
- ciflow/periodic/*
- ciflow/periodic-rocm-mi300/*
branches:
- release/*

View File

@ -66,10 +66,10 @@ jobs:
{ config: "default", shard: 5, num_shards: 5, runner: "${{ needs.get-label-type.outputs.label-type }}linux.2xlarge" },
{ config: "docs_test", shard: 1, num_shards: 1, runner: "${{ needs.get-label-type.outputs.label-type }}linux.2xlarge" },
{ config: "jit_legacy", shard: 1, num_shards: 1, runner: "${{ needs.get-label-type.outputs.label-type }}linux.2xlarge" },
{ config: "backwards_compat", shard: 1, num_shards: 1, runner: "${{ needs.get-label-type.outputs.label-type }}linux.2xlarge" },
{ config: "backwards_compat", shard: 1, num_shards: 1, runner: "${{ needs.get-label-type.outputs.label-type }}linux.c7i.2xlarge" },
{ config: "distributed", shard: 1, num_shards: 2, runner: "${{ needs.get-label-type.outputs.label-type }}linux.2xlarge" },
{ config: "distributed", shard: 2, num_shards: 2, runner: "${{ needs.get-label-type.outputs.label-type }}linux.2xlarge" },
{ config: "numpy_2_x", shard: 1, num_shards: 1, runner: "${{ needs.get-label-type.outputs.label-type }}linux.2xlarge" },
{ config: "numpy_2_x", shard: 1, num_shards: 1, runner: "${{ needs.get-label-type.outputs.label-type }}linux.c7i.2xlarge" },
]}
secrets: inherit
@ -167,8 +167,8 @@ jobs:
docker-image-name: ci-image:pytorch-linux-jammy-py3-clang12-onnx
test-matrix: |
{ include: [
{ config: "default", shard: 1, num_shards: 2, runner: "${{ needs.get-label-type.outputs.label-type }}linux.2xlarge" },
{ config: "default", shard: 2, num_shards: 2, runner: "${{ needs.get-label-type.outputs.label-type }}linux.2xlarge" },
{ config: "default", shard: 1, num_shards: 2, runner: "${{ needs.get-label-type.outputs.label-type }}linux.c7i.2xlarge" },
{ config: "default", shard: 2, num_shards: 2, runner: "${{ needs.get-label-type.outputs.label-type }}linux.c7i.2xlarge" },
]}
secrets: inherit
@ -342,16 +342,16 @@ jobs:
test-matrix: ${{ needs.linux-jammy-cuda12_8-py3_10-gcc9-inductor-build.outputs.test-matrix }}
secrets: inherit
linux-jammy-xpu-n-py3_10-build:
name: linux-jammy-xpu-n-py3.10
linux-noble-xpu-n-py3_10-build:
name: linux-noble-xpu-n-py3.10
uses: ./.github/workflows/_linux-build.yml
needs: get-label-type
with:
# This should sync with the build in xpu.yml but xpu uses a larger runner
# sync-tag: linux-xpu-n-build
runner_prefix: ${{ needs.get-label-type.outputs.label-type }}
build-environment: linux-jammy-xpu-n-py3.10
docker-image-name: ci-image:pytorch-linux-jammy-xpu-n-py3
build-environment: linux-noble-xpu-n-py3.10
docker-image-name: ci-image:pytorch-linux-noble-xpu-n-py3
test-matrix: |
{ include: [
{ config: "default", shard: 1, num_shards: 4, runner: "linux.idc.xpu" },

View File

@ -1,15 +1,16 @@
name: rocm
name: rocm-mi200
on:
push:
branches:
- main
- release/*
tags:
- ciflow/rocm/*
- ciflow/rocm-mi200/*
workflow_dispatch:
schedule:
- cron: 29 8 * * * # about 1:29am PDT
- cron: 0 */3 * * *
concurrency:
group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref_name }}-${{ github.ref_type == 'branch' && github.sha }}-${{ github.event_name == 'workflow_dispatch' }}-${{ github.event_name == 'schedule' }}

View File

@ -6,6 +6,7 @@ on:
- main
- release/*
tags:
- ciflow/rocm/*
- ciflow/rocm-mi300/*
workflow_dispatch:
schedule:

81
.github/workflows/slow-rocm-mi200.yml vendored Normal file
View File

@ -0,0 +1,81 @@
# This workflow is dedicated to host slow jobs that are run only periodically because
# they are too slow to run in every commit. The list of slow tests can be found in
# https://github.com/pytorch/test-infra/blob/generated-stats/stats/slow-tests.json
name: slow-rocm-mi200
on:
push:
branches:
- release/*
tags:
- ciflow/slow/*
- ciflow/slow-rocm-mi200/*
schedule:
- cron: 0 */3 * * *
workflow_dispatch:
concurrency:
group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref_name }}-${{ github.ref_type == 'branch' && github.sha }}-${{ github.event_name == 'workflow_dispatch' }}-${{ github.event_name == 'schedule' }}-${{ github.event.schedule }}
cancel-in-progress: true
permissions:
id-token: write
contents: read
jobs:
llm-td:
if: github.repository_owner == 'pytorch'
name: before-test
uses: ./.github/workflows/llm_td_retrieval.yml
permissions:
id-token: write
contents: read
target-determination:
name: before-test
uses: ./.github/workflows/target_determination.yml
needs: llm-td
permissions:
id-token: write
contents: read
get-label-type:
name: get-label-type
uses: pytorch/pytorch/.github/workflows/_runner-determinator.yml@main
if: ${{ (github.event_name != 'schedule' || github.repository == 'pytorch/pytorch') && github.repository_owner == 'pytorch' }}
with:
triggering_actor: ${{ github.triggering_actor }}
issue_owner: ${{ github.event.pull_request.user.login || github.event.issue.user.login }}
curr_branch: ${{ github.head_ref || github.ref_name }}
curr_ref_type: ${{ github.ref_type }}
linux-jammy-rocm-py3_10-build:
name: linux-jammy-rocm-py3.10
uses: ./.github/workflows/_linux-build.yml
needs: get-label-type
with:
runner_prefix: "${{ needs.get-label-type.outputs.label-type }}"
build-environment: linux-jammy-rocm-py3.10
docker-image-name: ci-image:pytorch-linux-jammy-rocm-n-py3
sync-tag: rocm-build
test-matrix: |
{ include: [
{ config: "slow", shard: 1, num_shards: 2, runner: "linux.rocm.gpu.2", owners: ["module:rocm"] },
{ config: "slow", shard: 2, num_shards: 2, runner: "linux.rocm.gpu.2", owners: ["module:rocm"] },
]}
secrets: inherit
linux-jammy-rocm-py3_10-test:
permissions:
id-token: write
contents: read
name: linux-jammy-rocm-py3.10
uses: ./.github/workflows/_rocm-test.yml
needs:
- linux-jammy-rocm-py3_10-build
- target-determination
with:
build-environment: linux-jammy-rocm-py3.10
docker-image: ${{ needs.linux-jammy-rocm-py3_10-build.outputs.docker-image }}
test-matrix: ${{ needs.linux-jammy-rocm-py3_10-build.outputs.test-matrix }}
secrets: inherit

View File

@ -105,36 +105,6 @@ jobs:
test-matrix: ${{ needs.linux-jammy-py3_10-clang12-build.outputs.test-matrix }}
secrets: inherit
linux-jammy-rocm-py3_10-build:
name: linux-jammy-rocm-py3.10
uses: ./.github/workflows/_linux-build.yml
needs: get-label-type
with:
runner_prefix: "${{ needs.get-label-type.outputs.label-type }}"
build-environment: linux-jammy-rocm-py3.10
docker-image-name: ci-image:pytorch-linux-jammy-rocm-n-py3
test-matrix: |
{ include: [
{ config: "slow", shard: 1, num_shards: 2, runner: "linux.rocm.gpu.2", owners: ["module:rocm"] },
{ config: "slow", shard: 2, num_shards: 2, runner: "linux.rocm.gpu.2", owners: ["module:rocm"] },
]}
secrets: inherit
linux-jammy-rocm-py3_10-test:
permissions:
id-token: write
contents: read
name: linux-jammy-rocm-py3.10
uses: ./.github/workflows/_rocm-test.yml
needs:
- linux-jammy-rocm-py3_10-build
- target-determination
with:
build-environment: linux-jammy-rocm-py3.10
docker-image: ${{ needs.linux-jammy-rocm-py3_10-build.outputs.docker-image }}
test-matrix: ${{ needs.linux-jammy-rocm-py3_10-build.outputs.test-matrix }}
secrets: inherit
linux-jammy-py3_10-clang18-asan-build:
name: linux-jammy-py3.10-clang18-asan
uses: ./.github/workflows/_linux-build.yml

View File

@ -52,7 +52,6 @@ jobs:
needs: get-label-type
with:
runner_prefix: "${{ needs.get-label-type.outputs.label-type }}"
runner: linux.12xlarge.memory
build-environment: linux-jammy-cuda12.8-py3.10-gcc11-sm100
docker-image-name: ci-image:pytorch-linux-jammy-cuda12.8-cudnn9-py3-gcc11
cuda-arch-list: '10.0'
@ -73,4 +72,4 @@ jobs:
docker-image: ${{ needs.linux-jammy-cuda12_8-py3_10-gcc11-sm100-build.outputs.docker-image }}
test-matrix: ${{ needs.linux-jammy-cuda12_8-py3_10-gcc11-sm100-build.outputs.test-matrix }}
aws-role-to-assume: arn:aws:iam::308535385114:role/gha_workflow_s3_and_ecr_read_only
secrets: inherit
secrets: inherit

View File

@ -41,7 +41,6 @@ jobs:
needs: get-label-type
with:
runner_prefix: "${{ needs.get-label-type.outputs.label-type }}"
runner: linux.12xlarge.memory
build-environment: linux-jammy-cuda12.8-py3.10-gcc11-sm90
docker-image-name: ci-image:pytorch-linux-jammy-cuda12.8-cudnn9-py3-gcc11
cuda-arch-list: '9.0'

View File

@ -204,6 +204,7 @@ jobs:
{ include: [
{ config: "default", shard: 1, num_shards: 2, runner: "linux.rocm.gpu.gfx942.1" },
{ config: "default", shard: 2, num_shards: 2, runner: "linux.rocm.gpu.gfx942.1" },
{ config: "distributed", shard: 1, num_shards: 1, runner: "linux.rocm.gpu.gfx942.4" },
]}
secrets: inherit
@ -221,7 +222,7 @@ jobs:
build-environment: linux-jammy-rocm-py3.10
docker-image: ${{ needs.linux-jammy-rocm-py3_10-build.outputs.docker-image }}
test-matrix: ${{ needs.linux-jammy-rocm-py3_10-build.outputs.test-matrix }}
tests-to-include: "test_nn test_torch test_cuda test_ops test_unary_ufuncs test_binary_ufuncs test_autograd inductor/test_torchinductor"
tests-to-include: "test_nn test_torch test_cuda test_ops test_unary_ufuncs test_binary_ufuncs test_autograd inductor/test_torchinductor distributed/test_c10d_common distributed/test_c10d_nccl"
secrets: inherit
inductor-build:

View File

@ -11,15 +11,16 @@ on:
- inductor
- unstable
- slow
- slow-rocm-mi200
- unstable-periodic
- inductor-periodic
- rocm
- rocm-mi200
- rocm-mi300
- rocm-mi355
- inductor-micro-benchmark
- inductor-micro-benchmark-x86
- inductor-cu124
- inductor-rocm
- inductor-rocm-mi200
- inductor-rocm-mi300
- mac-mps
- linux-aarch64

View File

@ -47,15 +47,15 @@ jobs:
]}
secrets: inherit
linux-jammy-xpu-n-py3_10-build:
name: linux-jammy-xpu-n-py3.10
linux-noble-xpu-n-py3_10-build:
name: linux-noble-xpu-n-py3.10
uses: ./.github/workflows/_linux-build.yml
needs: get-label-type
with:
sync-tag: linux-xpu-n-build
runner_prefix: ${{ needs.get-label-type.outputs.label-type }}
build-environment: linux-jammy-xpu-n-py3.10
docker-image-name: ci-image:pytorch-linux-jammy-xpu-n-py3
build-environment: linux-noble-xpu-n-py3.10
docker-image-name: ci-image:pytorch-linux-noble-xpu-n-py3
runner: linux.c7i.12xlarge
test-matrix: |
{ include: [
@ -74,17 +74,17 @@ jobs:
]}
secrets: inherit
linux-jammy-xpu-n-py3_10-test:
name: linux-jammy-xpu-n-py3.10
linux-noble-xpu-n-py3_10-test:
name: linux-noble-xpu-n-py3.10
uses: ./.github/workflows/_xpu-test.yml
needs: linux-jammy-xpu-n-py3_10-build
needs: linux-noble-xpu-n-py3_10-build
permissions:
id-token: write
contents: read
with:
build-environment: linux-jammy-xpu-n-py3.10
docker-image: ${{ needs.linux-jammy-xpu-n-py3_10-build.outputs.docker-image }}
test-matrix: ${{ needs.linux-jammy-xpu-n-py3_10-build.outputs.test-matrix }}
build-environment: linux-noble-xpu-n-py3.10
docker-image: ${{ needs.linux-noble-xpu-n-py3_10-build.outputs.docker-image }}
test-matrix: ${{ needs.linux-noble-xpu-n-py3_10-build.outputs.test-matrix }}
secrets: inherit
windows-xpu-n-1-build:

2
.gitignore vendored
View File

@ -127,6 +127,7 @@ torch/test/
torch/utils/benchmark/utils/valgrind_wrapper/callgrind.h
torch/utils/benchmark/utils/valgrind_wrapper/valgrind.h
torch/version.py
torch/_inductor/kernel/vendored_templates/*
minifier_launcher.py
aten/src/ATen/native/transformers/hip/flash_attn/ck/fmha_fwd_d*
aten/src/ATen/native/transformers/hip/flash_attn/ck/fmha_bwd_d*
@ -398,3 +399,4 @@ CLAUDE.local.md
/test_*.py
/debug_*.py
CLAUDE_CONTEXT/
/.claude/settings.local.json

View File

@ -121,94 +121,6 @@ command = [
]
is_formatter = true
[[linter]]
code = 'MYPY'
include_patterns = [
'setup.py',
'functorch/dim/**/*.py',
'torch/**/*.py',
'torch/**/*.pyi',
'caffe2/**/*.py',
'caffe2/**/*.pyi',
'test/test_bundled_images.py',
'test/test_bundled_inputs.py',
'test/test_complex.py',
'test/test_datapipe.py',
'test/test_futures.py',
'test/test_numpy_interop.py',
'test/test_torch.py',
'test/test_type_hints.py',
'test/test_type_info.py',
'test/test_utils.py',
]
exclude_patterns = [
'**/fb/**',
]
command = [
'python3',
'tools/linter/adapters/mypy_linter.py',
'--config=mypy.ini',
'--',
'@{{PATHSFILE}}'
]
init_command = [
'python3',
'tools/linter/adapters/pip_init.py',
'--dry-run={{DRYRUN}}',
'numpy==1.26.4 ; python_version >= "3.10" and python_version <= "3.11"',
'numpy==2.1.0 ; python_version >= "3.12"',
'expecttest==0.3.0',
'mypy==1.16.0',
'sympy==1.13.3',
'types-requests==2.27.25',
'types-pyyaml==6.0.2',
'types-tabulate==0.8.8',
'types-protobuf==5.29.1.20250403',
'types-setuptools==79.0.0.20250422',
'types-jinja2==2.11.9',
'types-colorama==0.4.6',
'filelock==3.18.0',
'junitparser==2.1.1',
'rich==14.1.0',
'pyyaml==6.0.2',
'optree==0.13.0',
'dataclasses-json==0.6.7',
'pandas==2.2.3',
]
[[linter]]
code = 'MYPYSTRICT'
include_patterns = [
'.github/**/*.py',
'benchmarks/instruction_counts/**/*.py',
'tools/**/*.py',
'torchgen/**/*.py',
'torch/utils/_pytree.py',
'torch/utils/_cxx_pytree.py',
'torch/utils/benchmark/utils/common.py',
'torch/utils/benchmark/utils/timer.py',
'torch/utils/benchmark/utils/valgrind_wrapper/**/*.py',
]
exclude_patterns = [
# (linbinyu) copied from internal repo
'**/fb/**',
'tools/code_analyzer/gen_operators_yaml.py',
'tools/dynamo/verify_dynamo.py',
'tools/gen_vulkan_spv.py',
'tools/test/gen_operators_yaml_test.py',
'tools/test/gen_oplist_test.py',
'tools/test/test_selective_build.py',
'tools/experimental/torchfuzz/**',
]
command = [
'python3',
'tools/linter/adapters/mypy_linter.py',
'--config=mypy-strict.ini',
'--code=MYPYSTRICT',
'--',
'@{{PATHSFILE}}'
]
[[linter]]
code = 'PYREFLY'
@ -230,7 +142,9 @@ init_command = [
'python3',
'tools/linter/adapters/pip_init.py',
'--dry-run={{DRYRUN}}',
'numpy==2.1.0 ; python_version >= "3.12"',
'numpy==1.26.4 ; python_version >= "3.10" and python_version <= "3.11"',
'numpy==2.1.0 ; python_version >= "3.12" and python_version <= "3.13"',
'numpy==2.3.4 ; python_version >= "3.14"',
'expecttest==0.3.0',
'pyrefly==0.36.2',
'sympy==1.13.3',
@ -272,6 +186,8 @@ include_patterns = [
'aten/src/ATen/native/nested/cuda/*.h',
'aten/src/ATen/native/nested/*.cpp',
'aten/src/ATen/native/nested/*.h',
'aten/src/ATen/xpu/**/*.h',
'aten/src/ATen/xpu/**/*.cpp',
'c10/**/*.cpp',
'c10/**/*.h',
'torch/*.h',
@ -298,7 +214,6 @@ exclude_patterns = [
'**/*pb.h',
'**/*inl.h',
'aten/src/ATen/cpu/FlushDenormal.cpp',
'aten/src/ATen/cpu/Utils.cpp',
'aten/src/ATen/cpu/vml.h',
'aten/src/ATen/CPUFixedAllocator.h',
'aten/src/ATen/Parallel*.h',
@ -317,8 +232,6 @@ exclude_patterns = [
'c10/util/win32-headers.h',
'c10/test/**/*.h',
'third_party/**/*',
'torch/csrc/api/include/torch/nn/modules/common.h',
'torch/csrc/api/include/torch/linalg.h',
'torch/csrc/autograd/generated/**',
'torch/csrc/distributed/**/*.cu',
'torch/csrc/distributed/c10d/WinSockUtils.hpp',
@ -330,7 +243,6 @@ exclude_patterns = [
'torch/csrc/utils/generated_serialization_types.h',
'torch/csrc/utils/pythoncapi_compat.h',
'torch/csrc/inductor/aoti_runtime/sycl_runtime_wrappers.h',
'aten/src/ATen/ExpandBase.h',
]
init_command = [
'python3',
@ -1492,7 +1404,7 @@ init_command = [
'--dry-run={{DRYRUN}}',
'usort==1.0.8.post1',
'isort==6.0.1',
'ruff==0.13.1', # sync with RUFF
'ruff==0.14.4', # sync with RUFF
]
is_formatter = true
@ -1627,7 +1539,7 @@ init_command = [
'python3',
'tools/linter/adapters/pip_init.py',
'--dry-run={{DRYRUN}}',
'ruff==0.13.1', # sync with PYFMT
'ruff==0.14.4', # sync with PYFMT
]
is_formatter = true

View File

@ -234,7 +234,17 @@ option(USE_COLORIZE_OUTPUT "Colorize output during compilation" ON)
option(USE_ASAN "Use Address+Undefined Sanitizers" OFF)
option(USE_LSAN "Use Leak Sanitizer" OFF)
option(USE_TSAN "Use Thread Sanitizer" OFF)
# Track whether USE_CUDA was explicitly set by the user (before option() is called)
# If USE_CUDA is already defined in cache, it means user explicitly set it
if(DEFINED CACHE{USE_CUDA})
set(_USE_CUDA_EXPLICITLY_SET TRUE)
else()
set(_USE_CUDA_EXPLICITLY_SET FALSE)
endif()
option(USE_CUDA "Use CUDA" ON)
option(USE_XPU "Use XPU" ON)
cmake_dependent_option(
BUILD_LAZY_CUDA_LINALG "Build cuda linalg ops as separate library" ON
@ -726,6 +736,44 @@ if(NOT DEFINED USE_BLAS)
set(USE_BLAS ON)
endif()
# Prioritized Text Linker Optimization
if(USE_PRIORITIZED_TEXT_FOR_LD)
set(LINKER_SCRIPT_FILE_IN "${CMAKE_SOURCE_DIR}/cmake/prioritized_text.txt")
set(LINKER_SCRIPT_FILE_OUT "${CMAKE_SOURCE_DIR}/cmake/linker_script.ld")
execute_process(
COMMAND ${Python_EXECUTABLE}
${CMAKE_SOURCE_DIR}/tools/setup_helpers/generate_linker_script.py
--filein "${LINKER_SCRIPT_FILE_IN}"
--fout "${LINKER_SCRIPT_FILE_OUT}"
RESULT_VARIABLE _gen_result
OUTPUT_VARIABLE _gen_output
ERROR_VARIABLE _gen_error
)
if(NOT _gen_result EQUAL 0)
message(FATAL_ERROR
"Failed to generate linker script:\n${_gen_output}\n${_gen_error}")
endif()
append_cxx_flag_if_supported("-ffunction-sections" CMAKE_CXX_FLAGS)
append_cxx_flag_if_supported("-fdata-sections" CMAKE_CXX_FLAGS)
append_c_flag_if_supported("-ffunction-sections" CMAKE_C_FLAGS)
append_c_flag_if_supported("-fdata-sections" CMAKE_C_FLAGS)
set(CMAKE_SHARED_LINKER_FLAGS "${CMAKE_SHARED_LINKER_FLAGS} -T${LINKER_SCRIPT_FILE_OUT}")
set(CMAKE_MODULE_LINKER_FLAGS "${CMAKE_MODULE_LINKER_FLAGS} -T${LINKER_SCRIPT_FILE_OUT}")
else()
if(LINUX AND CPU_AARCH64)
message(WARNING [[
It is strongly recommend to enable linker script optimization for all AArch64 Linux builds.
To do so please export USE_PRIORITIZED_TEXT_FOR_LD=1
]])
endif()
endif()
# Build libtorch mobile library, which contains ATen/TH ops and native support
# for TorchScript model, but doesn't contain not-yet-unified caffe2 ops;
if(INTERN_BUILD_MOBILE)
@ -1392,9 +1440,6 @@ if(BUILD_JNI)
add_subdirectory(android/pytorch_android)
endif()
include(cmake/Summary.cmake)
caffe2_print_configuration_summary()
# Parse custom debug info
if(DEFINED USE_CUSTOM_DEBINFO)
string(REPLACE ";" " " SOURCE_FILES "${USE_CUSTOM_DEBINFO}")
@ -1434,56 +1479,5 @@ if(BUILD_BUNDLE_PTXAS AND USE_CUDA)
DESTINATION "${CMAKE_INSTALL_BINDIR}")
endif()
if(USE_PRIORITIZED_TEXT_FOR_LD)
add_compile_options(
$<$<COMPILE_LANGUAGE:C,CXX>:-ffunction-sections>
$<$<COMPILE_LANGUAGE:C,CXX>:-fdata-sections>
)
set(LINKER_SCRIPT_FILE_OUT "${CMAKE_SOURCE_DIR}/cmake/linker_script.ld")
set(LINKER_SCRIPT_FILE_IN "${CMAKE_SOURCE_DIR}/cmake/prioritized_text.txt")
add_custom_command(
OUTPUT "${LINKER_SCRIPT_FILE_OUT}"
COMMAND ${Python_EXECUTABLE} ${CMAKE_SOURCE_DIR}/tools/setup_helpers/generate_linker_script.py --filein "${LINKER_SCRIPT_FILE_IN}" --fout "${LINKER_SCRIPT_FILE_OUT}"
DEPENDS ${CMAKE_SOURCE_DIR}/tools/setup_helpers/generate_linker_script.py "${LINKER_SCRIPT_FILE_IN}"
COMMENT "Generating prioritized text linker files"
VERBATIM
)
add_custom_target(generate_linker_script DEPENDS "${LINKER_SCRIPT_FILE_OUT}")
if(BUILD_PYTHON)
set(LINKER_OPT_TARGETS torch_python)
endif()
if(NOT BUILD_LIBTORCHLESS)
list(APPEND LINKER_OPT_TARGETS torch_cpu c10)
if(USE_CUDA)
list(APPEND LINKER_OPT_TARGETS torch_cuda c10_cuda)
endif()
if(USE_XPU)
list(APPEND LINKER_OPT_TARGETS torch_xpu c10_xpu)
endif()
if(USE_ROCM)
list(APPEND LINKER_OPT_TARGETS torch_hip c10_hip)
endif()
endif()
foreach(tgt IN LISTS LINKER_OPT_TARGETS)
if(TARGET ${tgt})
add_dependencies("${tgt}" generate_linker_script)
target_link_options_if_supported(${tgt} "-T,${LINKER_SCRIPT_FILE_OUT}")
set_property(TARGET ${tgt} APPEND PROPERTY LINK_DEPENDS "${LINKER_SCRIPT_FILE_OUT}")
else()
message(WARNING "Requested target '${tgt}' for linker script optimization was not found.")
endif()
endforeach()
else()
if(LINUX AND CPU_AARCH64)
message(WARNING [[
It is strongly recommend to enable linker script optimization for all AArch64 Linux builds.
To do so please export USE_PRIORITIZED_TEXT_FOR_LD=1
]])
endif()
endif()
include(cmake/Summary.cmake)
caffe2_print_configuration_summary()

View File

@ -210,8 +210,12 @@ torch/backends/cudnn/ @eqy @syed-ahmed @Aidyn-A
/test/inductor/test_flex_attention.py @drisspg
/test/inductor/test_flex_decoding.py @drisspg
# Low Precision GEMMs
# Low Precision & Grouped GEMMs
/aten/src/ATen/native/cuda/Blas.cpp @drisspg @slayton58
/aten/src/ATen/native/cuda/GroupedBlas.cpp @drisspg @slayton58
/aten/src/ATen/native/cuda/ScaledBlas.cpp @drisspg @slayton58
/aten/src/ATen/cuda/CUDABlas.cpp @drisspg @slayton58
/aten/src/ATen/cuda/CUDABlas.h @drisspg @slayton58
/aten/src/ATen/cuda/CUDAScaledBlas.cpp @drisspg @slayton58
/aten/src/ATen/cuda/CUDAScaledBlas.h @drisspg @slayton58
/test/test_scaled_matmul_cuda.py @drisspg @slayton58

View File

@ -11,7 +11,6 @@ aspects of contributing to PyTorch.
<!-- toc -->
- [Developing PyTorch](#developing-pytorch)
- [Setup the development environment](#setup-the-development-environment)
- [Tips and Debugging](#tips-and-debugging)
- [Nightly Checkout & Pull](#nightly-checkout--pull)
- [Codebase structure](#codebase-structure)
@ -19,7 +18,7 @@ aspects of contributing to PyTorch.
- [Python Unit Testing](#python-unit-testing)
- [Better local unit tests with `pytest`](#better-local-unit-tests-with-pytest)
- [Local linting](#local-linting)
- [Running `mypy`](#running-mypy)
- [Running `pyrefly`](#running-pyrefly)
- [C++ Unit Testing](#c-unit-testing)
- [Run Specific CI Jobs](#run-specific-ci-jobs)
- [Merging your Change](#merging-your-change)
@ -67,23 +66,6 @@ aspects of contributing to PyTorch.
Follow the instructions for [installing PyTorch from source](https://github.com/pytorch/pytorch#from-source). If you get stuck when developing PyTorch on your machine, check out the [tips and debugging](#tips-and-debugging) section below for common solutions.
### Setup the development environment
First, you need to [fork the PyTorch project on GitHub](https://github.com/pytorch/pytorch/fork) and follow the instructions at [Connecting to GitHub with SSH](https://docs.github.com/en/authentication/connecting-to-github-with-ssh) to setup your SSH authentication credentials.
Then clone the PyTorch project and setup the development environment:
```bash
git clone git@github.com:<USERNAME>/pytorch.git
cd pytorch
git remote add upstream git@github.com:pytorch/pytorch.git
make setup-env
# Or run `make setup-env-cuda` for pre-built CUDA binaries
# Or run `make setup-env-rocm` for pre-built ROCm binaries
source venv/bin/activate # or `. .\venv\Scripts\activate` on Windows
```
### Tips and Debugging
* If you want to have no-op incremental rebuilds (which are fast), see [Make no-op build fast](#make-no-op-build-fast) below.
@ -299,7 +281,7 @@ dependencies as well as the nightly binaries into the repo directory.
**Prerequisites**:
The following packages should be installed with `pip`:
- `expecttest` and `hypothesis` - required to run tests
- `mypy` - recommended for linting
- `pyrefly` - recommended for type checking. [Pyrefly](https://pyrefly.org/)
- `pytest` - recommended to run tests more selectively
Running
```
@ -368,15 +350,32 @@ make lint
Learn more about the linter on the [lintrunner wiki page](https://github.com/pytorch/pytorch/wiki/lintrunner)
#### Running `mypy`
#### Running `pyrefly`
`mypy` is an optional static type checker for Python. We have multiple `mypy`
configs for the PyTorch codebase that are automatically validated against whenever the linter is run.
[Pyrefly](https://pyrefly.org/) is a high-performance static type checker for Python. It provides fast type checking along with IDE features like autocomplete and instant error feedback.
PyTorch uses Pyrefly for type checking across the codebase. The configuration is managed in `pyrefly.toml` at the root of the repository.
**Getting Started with Pyrefly:**
To run type checking on the PyTorch codebase:
```bash
pyrefly check
```
For more detailed error information with summaries:
```bash
pyrefly check --summarize-errors
```
**Learn More:**
- [Pyrefly Configuration](https://pyrefly.org/en/docs/configuration/) - Detailed configuration options
- [Pyrefly IDE Features](https://pyrefly.org/en/docs/IDE-features/) - Set up Pyrefly in your editor for real-time type checking
- [Python Typing Tutorial](https://pyrefly.org/en/docs/typing-for-python-developers/) - Learn about Python type annotations
See [Guide for adding type annotations to
PyTorch](https://github.com/pytorch/pytorch/wiki/Guide-for-adding-type-annotations-to-PyTorch)
for more information on how to set up `mypy` and tackle type annotation
tasks.
for PyTorch-specific guidance on how to set up `pyrefly` and tackle type annotation tasks in this codebase.
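As a concrete illustration of the kind of annotation work pyrefly verifies, here is a small, generic snippet (not taken from the PyTorch codebase) whose type hints `pyrefly check` can validate:
```python
from typing import Optional

def clamp(x: float, low: float, high: Optional[float] = None) -> float:
    """Clamp x to [low, high]; omitting high applies only the lower bound."""
    if high is not None and x > high:
        return high
    return max(x, low)

# pyrefly checks that call sites match the annotations above; for example,
# it would flag clamp("3.5", 0.0) for passing a str where a float is expected.
print(clamp(3.5, 0.0, 1.0))  # prints 1.0
```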
### C++ Unit Testing


@ -37,7 +37,7 @@ Copyright (c) 2024 Tri Dao.
All rights reserved.
All contributions by Arm:
Copyright (c) 2021, 2023-2024 Arm Limited and/or its affiliates
Copyright (c) 2021, 2023-2025 Arm Limited and/or its affiliates
All contributions from Caffe:
Copyright(c) 2013, 2014, 2015, the respective contributors


@ -1,7 +1,7 @@
# Security Policy
- [**Reporting a Vulnerability**](#reporting-a-vulnerability)
- [**Using Pytorch Securely**](#using-pytorch-securely)
- [**Using PyTorch Securely**](#using-pytorch-securely)
- [Untrusted models](#untrusted-models)
- [TorchScript models](#torchscript-models)
- [Untrusted inputs](#untrusted-inputs)
@ -10,28 +10,30 @@
- [**CI/CD security principles**](#cicd-security-principles)
## Reporting Security Issues
Beware that none of the topics under [Using Pytorch Securely](#using-pytorch-securely) are considered vulnerabilities of Pytorch.
Beware that none of the topics under [Using PyTorch Securely](#using-pytorch-securely) are considered vulnerabilities of PyTorch.
However, if you believe you have found a security vulnerability in PyTorch, we encourage you to let us know right away. We will investigate all legitimate reports and do our best to quickly fix the problem.
Please report security issues using https://github.com/pytorch/pytorch/security/advisories/new
All reports submitted thru the security advisories mechanism would **either be made public or dismissed by the team within 90 days of the submission**. If advisory has been closed on the grounds that it is not a security issue, please do not hesitate to create an [new issue](https://github.com/pytorch/pytorch/issues/new?template=bug-report.yml) as it is still likely a valid issue within the framework.
All reports submitted through the security advisories mechanism will **either be made public or dismissed by the team within 90 days of the submission**. If an advisory has been closed on the grounds that it is not a security issue, please do not hesitate to create a [new issue](https://github.com/pytorch/pytorch/issues/new?template=bug-report.yml) as it is still likely a valid issue within the framework.
**Note on crashes and out of bounds access**: PyTorch is a computational framework that performs operations on behalf of the caller. Like many low-level libraries, PyTorch generally does not validate all inputs to every function—the responsibility for providing valid arguments lies with the calling code. While crashes and out of bounds memory access should be reported as bugs, they are generally not considered security vulnerabilities in PyTorch's threat model.
Please refer to the following page for our responsible disclosure policy, reward guidelines, and those things that should not be reported:
https://www.facebook.com/whitehat
## Using Pytorch Securely
**Pytorch models are programs**, so treat its security seriously -- running untrusted models is equivalent to running untrusted code. In general we recommend that model weights and the python code for the model are distributed independently. That said, be careful about where you get the python code from and who wrote it (preferentially check for a provenance or checksums, do not run any pip installed package).
## Using PyTorch Securely
**PyTorch models are programs**, so treat their security seriously -- running untrusted models is equivalent to running untrusted code. In general we recommend that model weights and the Python code for the model are distributed independently. That said, be careful about where you get the Python code from and who wrote it (preferably check for provenance or checksums, and do not run arbitrary pip-installed packages).
### Untrusted models
Be careful when running untrusted models. This classification includes models created by unknown developers or utilizing data obtained from unknown sources[^data-poisoning-sources].
**Prefer to execute untrusted models within a secure, isolated environment such as a sandbox** (e.g., containers, virtual machines). This helps protect your system from potentially malicious code. You can find further details and instructions on [this page](https://developers.google.com/code-sandboxing).
**Be mindful of risky model formats**. Give preference to share and load weights with the appropriate format for your use case. [safetensors](https://huggingface.co/docs/safetensors/en/index) gives the most safety but is the most restricted in what it supports. [`torch.load`](https://pytorch.org/docs/stable/generated/torch.load.html#torch.load) has a significantly larger surface of attack but is more flexible in what it can serialize. See the documentation for more details.
**Be mindful of risky model formats**. Prefer to share and load weights in the appropriate format for your use case. [Safetensors](https://huggingface.co/docs/safetensors/en/index) gives the most safety but is the most restricted in what it supports. [`torch.load`](https://pytorch.org/docs/stable/generated/torch.load.html#torch.load) has a significantly larger attack surface but is more flexible in what it can serialize. See the documentation for more details.
Even for more secure serialization formats, unexpected inputs to the downstream system can cause diverse security threats (e.g. denial of service, out-of-bounds reads/writes) and thus we recommend extensive validation of any untrusted inputs.
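As a rough sketch of the two loading paths discussed above (the file paths and model class are illustrative placeholders, not part of any real project):
```python
import torch
import torch.nn as nn
from safetensors.torch import load_file  # provided by the `safetensors` pip package

class TinyModel(nn.Module):  # hypothetical stand-in for a model definition you trust
    def __init__(self) -> None:
        super().__init__()
        self.linear = nn.Linear(4, 2)

model = TinyModel()

# safetensors stores raw tensors only, so loading does not execute arbitrary code.
state_dict = load_file("weights.safetensors")

# torch.load is more flexible but has a larger attack surface; weights_only=True
# restricts unpickling to tensors and other allow-listed types.
state_dict = torch.load("weights.pt", weights_only=True)

model.load_state_dict(state_dict)
```
Either way, loading files from untrusted sources is best done inside the sandboxed environment described above.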
@ -43,7 +45,7 @@ Important Note: The trustworthiness of a model is not binary. You must always de
### TorchScript models
TorchScript models should treated the same way as locally executable code from an unknown source. Only run TorchScript models if you trust the provider. Please note, that tools for introspecting TorchScript models (such as `torch.utils.model_dump`) may also execute partial or full code stored in those models, therefore they should be used only if you trust the provider of the binary you are about to load.
TorchScript models should be treated the same way as locally executable code from an unknown source. Only run TorchScript models if you trust the provider. Please note that tools for introspecting TorchScript models (such as `torch.utils.model_dump`) may also execute partial or full code stored in those models; therefore, they should be used only if you trust the provider of the binary you are about to load.
### Untrusted inputs during training and prediction
@ -59,9 +61,9 @@ If applicable, prepare your model against bad inputs and prompt injections. Some
### Data privacy
**Take special security measures if your model if you train models with sensitive data**. Prioritize [sandboxing](https://developers.google.com/code-sandboxing) your models and:
- Do not feed sensitive data to untrusted model (even if runs in a sandboxed environment)
- If you consider publishing a model that was partially trained with sensitive data, be aware that data can potentially be recovered from the trained weights (especially if model overfits).
**Take special security measures if you train your models with sensitive data**. Prioritize [sandboxing](https://developers.google.com/code-sandboxing) your models and:
- Do not feed sensitive data to an untrusted model (even if it runs in a sandboxed environment)
- If you consider publishing a model that was partially trained with sensitive data, be aware that data can potentially be recovered from the trained weights (especially if the model overfits).
### Using distributed features


@ -260,7 +260,7 @@ IF(USE_FBGEMM_GENAI)
if(USE_CUDA)
# To avoid increasing the build time/binary size unnecessarily, use an allow-list of kernels to build.
# If you want to integrate a kernel from FBGEMM into torch, you have to add it here.
set(FBGEMM_CUTLASS_KERNELS_REGEX ".*(mx8mx8bf16_grouped|f4f4bf16_grouped).*")
set(FBGEMM_CUTLASS_KERNELS_REGEX ".*(mx8mx8bf16_grouped|f4f4bf16_grouped|f4f4bf16).*")
file(GLOB_RECURSE fbgemm_genai_native_cuda_cu
"${FBGEMM_GENAI_SRCS}/cutlass_extensions/*.cu"
"${FBGEMM_GENAI_SRCS}/cutlass_extensions/**/*.cu")


@ -181,7 +181,7 @@ c10::intrusive_ptr<c10::TensorImpl> CPUGeneratorImpl::get_state() const {
static const size_t size = sizeof(CPUGeneratorImplState);
static_assert(std::is_standard_layout_v<CPUGeneratorImplState>, "CPUGeneratorImplState is not a PODType");
auto state_tensor = at::detail::empty_cpu({(int64_t)size}, ScalarType::Byte, std::nullopt, std::nullopt, std::nullopt, std::nullopt);
auto state_tensor = at::detail::empty_cpu({static_cast<int64_t>(size)}, ScalarType::Byte, std::nullopt, std::nullopt, std::nullopt, std::nullopt);
auto rng_state = state_tensor.data_ptr();
// accumulate generator data to be copied into byte tensor


@ -23,8 +23,6 @@ C10_DIAGNOSTIC_POP()
#endif
namespace at {
namespace {
/*
These const variables defined the fp32 precisions for different backend
We have "generic", "cuda", "mkldnn" backend now and we can choose fp32
@ -41,16 +39,6 @@ namespace {
->rnn
*/
C10_ALWAYS_INLINE void warn_deprecated_fp32_precision_api(){
TORCH_WARN_ONCE(
"Please use the new API settings to control TF32 behavior, such as torch.backends.cudnn.conv.fp32_precision = 'tf32' "
"or torch.backends.cuda.matmul.fp32_precision = 'ieee'. Old settings, e.g, torch.backends.cuda.matmul.allow_tf32 = True, "
"torch.backends.cudnn.allow_tf32 = True, allowTF32CuDNN() and allowTF32CuBLAS() will be deprecated after Pytorch 2.9. Please see "
"https://pytorch.org/docs/main/notes/cuda.html#tensorfloat-32-tf32-on-ampere-and-later-devices"
);
}
} // namespace
Float32Backend str2backend(const std::string& name) {
if (name == "generic")
return Float32Backend::GENERIC;
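For reference, the new-style settings named in the removed warning above map to Python along these lines (a minimal sketch using only the names quoted in the warning text; it assumes a PyTorch build recent enough to expose the `fp32_precision` knobs):
```python
import torch

# Replaces the legacy torch.backends.cuda.matmul.allow_tf32 /
# torch.backends.cudnn.allow_tf32 flags referenced in the deleted warning.
torch.backends.cuda.matmul.fp32_precision = "ieee"  # keep full-precision fp32 matmuls
torch.backends.cudnn.conv.fp32_precision = "tf32"   # allow TF32 for cuDNN convolutions
```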
@ -206,7 +194,6 @@ bool Context::allowTF32CuDNN(std::optional<Float32Op> op) const {
} else {
return float32Precision(Float32Backend::CUDA, op.value()) == Float32Precision::TF32;
}
warn_deprecated_fp32_precision_api();
return allow_tf32_cudnn;
}
@ -214,7 +201,6 @@ void Context::setAllowTF32CuDNN(bool b) {
setFloat32Precision(Float32Backend::CUDA, Float32Op::RNN, b ? Float32Precision::TF32 : Float32Precision::NONE);
setFloat32Precision(Float32Backend::CUDA, Float32Op::CONV, b ? Float32Precision::TF32 : Float32Precision::NONE);
allow_tf32_cudnn = b;
warn_deprecated_fp32_precision_api();
}
void Context::setSDPPriorityOrder(const std::vector<int64_t>& order) {
@ -223,7 +209,7 @@ void Context::setSDPPriorityOrder(const std::vector<int64_t>& order) {
"setSDPPriority order expected ", sdp_priority_order.size() - 1, " but got ",
at::num_sdp_backends, " unique backends specified in priority order.");
for (uint32_t i = 0; i < order.size(); i++) {
sdp_priority_order[i] = (at::SDPBackend) order[i];
sdp_priority_order[i] = static_cast<at::SDPBackend>(order[i]);
}
}
@ -325,7 +311,6 @@ bool Context::allowTF32CuBLAS() const {
"Current status indicate that you have used mix of the legacy and new APIs to set the TF32 status for cublas matmul. ",
"We suggest only using the new API to set the TF32 flag. See also: ",
"https://pytorch.org/docs/main/notes/cuda.html#tensorfloat-32-tf32-on-ampere-and-later-devices");
warn_deprecated_fp32_precision_api();
return allow_tf32_new;
}
@ -349,7 +334,6 @@ Float32MatmulPrecision Context::float32MatmulPrecision() const {
"Current status indicate that you have used mix of the legacy and new APIs to set the matmul precision. ",
"We suggest only using the new API for matmul precision. See also: ",
"https://pytorch.org/docs/main/notes/cuda.html#tensorfloat-32-tf32-on-ampere-and-later-devices");
warn_deprecated_fp32_precision_api();
return float32_matmul_precision;
}
@ -377,7 +361,6 @@ Float32Precision Context::float32Precision(Float32Backend backend, Float32Op op)
void Context::setFloat32MatmulPrecision(const std::string &s) {
auto match = [this](const std::string & s_) {
warn_deprecated_fp32_precision_api();
// TODO: consider if CuDNN field needs to also be set for potential future CuDNN ops like multi-headed attention
if (s_ == "highest") {
float32_matmul_precision = at::Float32MatmulPrecision::HIGHEST;


@ -174,6 +174,12 @@ class TORCH_API Context {
static long versionCuDNN() {
return detail::getCUDAHooks().versionCuDNN();
}
static long versionRuntimeCuDNN() {
return detail::getCUDAHooks().versionRuntimeCuDNN();
}
static long versionCuDNNFrontend() {
return detail::getCUDAHooks().versionCuDNNFrontend();
}
static bool hasCuSOLVER() {
return detail::getCUDAHooks().hasCuSOLVER();
}


@ -94,6 +94,11 @@ TORCH_API inline void resetPeakStats(c10::DeviceIndex device_index) {
at::getDeviceAllocator(device_type)->resetPeakStats(device_index);
}
TORCH_API inline std::pair<size_t, size_t> getMemoryInfo(
c10::DeviceIndex device_index) {
const auto device_type = getAccelerator(true).value();
return at::getDeviceAllocator(device_type)->getMemoryInfo(device_index);
}
} // namespace at::accelerator
namespace at {


@ -6,6 +6,7 @@
#include <c10/util/Half.h>
#include <c10/util/Metaprogramming.h>
#include <c10/util/complex.h>
#include <torch/headeronly/core/Dispatch.h>
#ifdef __CUDACC__
#include <cuda.h> // For CUDA_VERSION
@ -61,12 +62,9 @@ TORCH_API void record_kernel_function_dtype(std::string name);
} \
} while (0)
#define AT_PRIVATE_CASE_TYPE_USING_HINT(enum_type, HINT, ...) \
case enum_type: { \
AT_PRIVATE_CHECK_SELECTIVE_BUILD(enum_type); \
using HINT [[maybe_unused]] = c10::impl::ScalarTypeToCPPTypeT<enum_type>; \
return __VA_ARGS__(); \
}
#define AT_PRIVATE_CASE_TYPE_USING_HINT(enum_type, HINT, ...) \
THO_PRIVATE_CASE_TYPE_USING_HINT_TMPL( \
AT_PRIVATE_CHECK_SELECTIVE_BUILD, enum_type, HINT, __VA_ARGS__)
#define AT_DISPATCH_CASE(enum_type, ...) \
AT_PRIVATE_CASE_TYPE_USING_HINT(enum_type, scalar_t, __VA_ARGS__)
@ -95,14 +93,6 @@ TORCH_API void record_kernel_function_dtype(std::string name);
return __VA_ARGS__(); \
}
namespace detail {
inline at::ScalarType scalar_type(at::ScalarType s) {
return s;
}
} // namespace detail
// The AT_DISPATCH_* family of macros provides the ability to
// conveniently generate specializations of a kernel over all of the
// dtypes we care about in PyTorch. We call it "dispatch" because
@ -190,25 +180,13 @@ inline at::ScalarType scalar_type(at::ScalarType s) {
// but we're just being safe (and it doesn't hurt.) Note we must
// use it to shut up warnings about unused store.
#define AT_DISPATCH_SWITCH(TYPE, NAME, ...) \
[&] { \
const auto& the_type = TYPE; \
constexpr const char* at_dispatch_name = NAME; \
/* don't use TYPE again in case it is an expensive or side-effect op */ \
at::ScalarType _st = ::detail::scalar_type(the_type); \
RECORD_KERNEL_FUNCTION_DTYPE(at_dispatch_name, _st); \
switch (_st) { \
__VA_ARGS__ \
default: \
TORCH_CHECK_NOT_IMPLEMENTED( \
false, \
'"', \
at_dispatch_name, \
"\" not implemented for '", \
toString(_st), \
"'"); \
} \
}()
#define AT_DISPATCH_SWITCH(TYPE, NAME, ...) \
THO_DISPATCH_SWITCH_TMPL( \
RECORD_KERNEL_FUNCTION_DTYPE, \
TORCH_CHECK_NOT_IMPLEMENTED, \
TYPE, \
NAME, \
__VA_ARGS__)
#define AT_DISPATCH_CASE_FLOATING_TYPES(...) \
AT_DISPATCH_CASE(at::ScalarType::Double, __VA_ARGS__) \


@ -1,3 +1,8 @@
#pragma once
#include <torch/headeronly/core/Dispatch_v2.h>
// Get AT_DISPATCH_SWITCH and AT_DISPATCH_CASE:
#include <ATen/Dispatch.h>
// This is a new implementation of the AT_DISPATCH macro family from
@ -74,41 +79,19 @@
// macro expansion occurs, mediated with AT_EXPAND and AT_GUARD. I mostly
// relied on GPT4 to help me get it right.
// Public API macros
// See documentation above
#define AT_DISPATCH_V2(TYPE, NAME, BODY, ...) \
AT_DISPATCH_SWITCH(TYPE, NAME, AT_AP_VAR(AT_WRAP(BODY), TYPE, __VA_ARGS__))
// This macro lets you pass an arbitrary expression that may contain internal
// commas to another macro without having the commas causing the expression
// to be interpreted as being multiple arguments
#define AT_WRAP(...) __VA_ARGS__
#define AT_FLOAT8_TYPES \
c10::kFloat8_e5m2, c10::kFloat8_e5m2fnuz, c10::kFloat8_e4m3fn, \
c10::kFloat8_e4m3fnuz, c10::kFloat8_e8m0fnu
#define AT_INTEGRAL_TYPES \
c10::kByte, c10::kChar, c10::kInt, c10::kLong, c10::kShort
#define AT_FLOATING_TYPES c10::kDouble, c10::kFloat
#define AT_BAREBONES_UNSIGNED_TYPES c10::kUInt16, c10::kUInt32, c10::kUInt64
#define AT_INTEGRAL_TYPES_V2 \
AT_EXPAND(AT_INTEGRAL_TYPES), AT_EXPAND(AT_BAREBONES_UNSIGNED_TYPES)
#define AT_COMPLEX_TYPES c10::kComplexDouble, c10::kComplexFloat
#define AT_QINT_TYPES c10::kQInt8, c10::kQUInt8, c10::kQInt32
// NB: not *actually* all types
#define AT_ALL_TYPES AT_EXPAND(AT_INTEGRAL_TYPES), AT_EXPAND(AT_FLOATING_TYPES)
#define AT_ALL_TYPES_AND_COMPLEX \
AT_EXPAND(AT_ALL_TYPES), AT_EXPAND(AT_COMPLEX_TYPES)
// Helper macros
THO_DISPATCH_V2_TMPL( \
AT_DISPATCH_SWITCH, \
AT_DISPATCH_CASE, \
TYPE, \
NAME, \
AT_WRAP(BODY), \
__VA_ARGS__)
// Unused helper macros, kept for BC:
#define AT_AP_VAR(N, T, ...) \
AT_EXPAND(AT_CONCAT(AT_AP, AT_NUM_ARGS(__VA_ARGS__))(AT_WRAP(N), __VA_ARGS__))
#define AT_CONCAT(a, b) AT_CONCAT_AUX(a, b)
#define AT_CONCAT_AUX(a, b) a##b
#define AT_EXPAND(X) X
// Ensure we never have too many scalar types for the expansion here to
// support. To bump this, you must regenerate the macros below.
@ -119,12 +102,6 @@ static_assert(static_cast<int>(c10::ScalarType::NumOptions) < 60);
num_args = 60
nums = ', '.join(str(i) for i in reversed(range(num_args+1)))
args = ', '.join(f'_{i}' for i in range(1, num_args+1))
print(f'#define AT_NUM_ARGS(...) AT_EXPAND(AT_NUM_ARGS_AUX(__VA_ARGS__, {nums}))')
print(f'#define AT_NUM_ARGS_AUX({args}, N, ...) N')
for i in range(1, num_args+1):
args = ', '.join(f'_{i}' for i in range(1, i+1))
cases = ' '.join([f'AT_DISPATCH_CASE(_{j}, N)' for j in range(1, i+1)])
@ -135,8 +112,6 @@ for i in range(1, num_args+1):
// Begin generated code
// clang-format off
#define AT_NUM_ARGS(...) AT_EXPAND(AT_NUM_ARGS_AUX(__VA_ARGS__, 60, 59, 58, 57, 56, 55, 54, 53, 52, 51, 50, 49, 48, 47, 46, 45, 44, 43, 42, 41, 40, 39, 38, 37, 36, 35, 34, 33, 32, 31, 30, 29, 28, 27, 26, 25, 24, 23, 22, 21, 20, 19, 18, 17, 16, 15, 14, 13, 12, 11, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1, 0))
#define AT_NUM_ARGS_AUX(_1, _2, _3, _4, _5, _6, _7, _8, _9, _10, _11, _12, _13, _14, _15, _16, _17, _18, _19, _20, _21, _22, _23, _24, _25, _26, _27, _28, _29, _30, _31, _32, _33, _34, _35, _36, _37, _38, _39, _40, _41, _42, _43, _44, _45, _46, _47, _48, _49, _50, _51, _52, _53, _54, _55, _56, _57, _58, _59, _60, N, ...) N
#define AT_AP1(N, _1) AT_DISPATCH_CASE(_1, N)
#define AT_AP2(N, _1, _2) AT_DISPATCH_CASE(_1, N) AT_DISPATCH_CASE(_2, N)
#define AT_AP3(N, _1, _2, _3) AT_DISPATCH_CASE(_1, N) AT_DISPATCH_CASE(_2, N) AT_DISPATCH_CASE(_3, N)


@ -252,13 +252,13 @@ MapAllocator::MapAllocator(WithFd /*unused*/, std::string_view filename, int fd,
if (!(flags_ & ALLOCATOR_MAPPED_FROMFD)) {
if (flags_ & ALLOCATOR_MAPPED_SHARED) {
// NOLINTNEXTLINE(bugprone-assignment-in-if-condition)
if ((fd = open(filename_.c_str(), flags, (mode_t)0600)) == -1) {
if ((fd = open(filename_.c_str(), flags, static_cast<mode_t>(0600))) == -1) {
TORCH_CHECK(false, "unable to open file <", filename_, "> in read-write mode: ", c10::utils::str_error(errno), " (", errno, ")");
}
} else if (flags_ & ALLOCATOR_MAPPED_SHAREDMEM) {
#ifdef HAVE_SHM_OPEN
// NOLINTNEXTLINE(bugprone-assignment-in-if-condition)
if((fd = shm_open(filename_.c_str(), flags, (mode_t)0600)) == -1) {
if((fd = shm_open(filename_.c_str(), flags, static_cast<mode_t>(0600))) == -1) {
TORCH_CHECK(false, "unable to open shared memory object <", filename_, "> in read-write mode: ", c10::utils::str_error(errno), " (", errno, ")");
}
#else
@ -503,7 +503,7 @@ RefcountedMapAllocator::RefcountedMapAllocator(WithFd /*unused*/, const char *fi
void RefcountedMapAllocator::initializeAlloc() {
TORCH_CHECK(base_ptr_, "base_ptr_ is null");
MapInfo *map_info = (MapInfo*)base_ptr_;
MapInfo *map_info = static_cast<MapInfo*>(base_ptr_);
#ifdef _WIN32
ReleaseContext* r_ctx = new ReleaseContext;
@ -539,7 +539,7 @@ void RefcountedMapAllocator::close() {
}
#else /* _WIN32 */
MapInfo *info = (MapInfo*)(data);
MapInfo *info = static_cast<MapInfo*>(data);
if (--info->refcount == 0) {
#ifdef HAVE_SHM_UNLINK
if (shm_unlink(filename_.c_str()) == -1) {


@ -862,7 +862,7 @@ void TensorIteratorBase::narrow(int dim, int64_t start, int64_t size) {
shape_[dim] = size;
view_offsets_[dim] += start;
for (auto& op : operands_) {
op.data = ((char*)op.data) + op.stride_bytes[dim] * start;
op.data = (static_cast<char*>(op.data)) + op.stride_bytes[dim] * start;
}
if (size == 1 && !is_reduction_) {
coalesce_dimensions();
@ -873,7 +873,7 @@ void TensorIteratorBase::select_all_keeping_dim(int start_dim, IntArrayRef indic
TORCH_INTERNAL_ASSERT(start_dim <= ndim());
for (const auto i : c10::irange(start_dim, ndim())) {
for (auto& op : operands_) {
op.data = ((char*)op.data) + op.stride_bytes[i] * indices[i - start_dim];
op.data = (static_cast<char*>(op.data)) + op.stride_bytes[i] * indices[i - start_dim];
}
shape_[i] = 1;
}


@ -41,7 +41,7 @@ inline void serial_for_each(
IntArrayRef strides,
char** base_ptrs,
size_t ntensors,
typename TensorIteratorBase::loop2d_t loop,
TensorIteratorBase::loop2d_t loop,
Range range) {
const auto ndim = shape.size();
TORCH_INTERNAL_ASSERT_DEBUG_ONLY(


@ -72,10 +72,16 @@ TORCH_LIBRARY_IMPL(aten, VmapMode, m) {
m.impl("random_", unsupportedRandomOp_<Tensor&, std::optional<Generator>>);
m.impl("rand_like", unsupportedRandomOp<const Tensor&, TENSOROPTIONS, std::optional<MemoryFormat>>);
m.impl("rand_like.generator", unsupportedRandomOp<const Tensor&, std::optional<Generator>, TENSOROPTIONS, std::optional<MemoryFormat>>);
m.impl("randn_like", unsupportedRandomOp<const Tensor&, TENSOROPTIONS, std::optional<MemoryFormat>>);
m.impl("randn_like.generator", unsupportedRandomOp<const Tensor&, std::optional<Generator>, TENSOROPTIONS, std::optional<MemoryFormat>>);
m.impl("randint_like", unsupportedRandomOp<const Tensor&, int64_t, TENSOROPTIONS, std::optional<MemoryFormat>>);
m.impl("randint_like.Tensor", unsupportedRandomOp<const Tensor&, const Tensor&, TENSOROPTIONS, std::optional<MemoryFormat>>);
m.impl("randint_like.low_dtype", unsupportedRandomOp<const Tensor&, int64_t, int64_t, TENSOROPTIONS, std::optional<MemoryFormat>>);
m.impl("randint_like.generator", unsupportedRandomOp<const Tensor&, int64_t, std::optional<Generator>, TENSOROPTIONS, std::optional<MemoryFormat>>);
m.impl("randint_like.Tensor_generator", unsupportedRandomOp<const Tensor&, const Tensor&, std::optional<Generator>, TENSOROPTIONS, std::optional<MemoryFormat>>);
m.impl("randint_like.low_generator_dtype", unsupportedRandomOp<const Tensor&, int64_t, int64_t, std::optional<Generator>, TENSOROPTIONS, std::optional<MemoryFormat>>);
m.impl("rand", unsupportedRandomOp<IntArrayRef, TENSOROPTIONS>);
m.impl("rand.generator", unsupportedRandomOp<IntArrayRef, std::optional<Generator>, TENSOROPTIONS>);


@ -226,8 +226,8 @@ template <
typename B = HostBlock<S>>
struct CachingHostAllocatorImpl {
virtual ~CachingHostAllocatorImpl() {
active_ = false;
if (pinned_use_background_threads()) {
if (active_) {
active_ = false;
getBackgroundThreadPool()->waitWorkComplete();
}
}
@ -260,6 +260,7 @@ struct CachingHostAllocatorImpl {
if (pinned_use_background_threads()) {
// Launch the background thread and process events in a loop.
static bool background_thread_flag [[maybe_unused]] = [this] {
active_ = true;
getBackgroundThreadPool()->run([&]() {
while (active_) {
process_events();
@ -683,9 +684,9 @@ struct CachingHostAllocatorImpl {
alignas(hardware_destructive_interference_size) std::mutex events_mutex_;
std::deque<std::pair<E, B*>> events_; // event queue paired with block
// Indicates whether the object is active.
// Indicates whether the event-processing thread pool is active.
// Set to false in the destructor to signal background threads to stop.
std::atomic<bool> active_{true};
std::atomic<bool> active_{false};
protected:
alignas(hardware_destructive_interference_size) HostStatsStaged stats_;
};


@ -190,12 +190,14 @@ class IListRef;
* it to a function (e.g. `ImplT::<dispatch-function>(this_)`).
*/
#define TORCH_ILISTREF_UNWRAP(TAG, BODY) \
C10_DIAGNOSTIC_PUSH_AND_IGNORED_IF_DEFINED("-Wswitch-enum") \
switch (TAG) { \
TORCH_ILISTREF_FORALL_TAGS(TORCH_ILISTREF_UNWRAP_CASE, BODY) \
break; \
default: \
TORCH_INTERNAL_ASSERT(false, "invalid IListRef tag."); \
}
} \
C10_DIAGNOSTIC_POP()
enum class IListRefTag {
#define DEFINE_TAG(tag, ...) tag,


@ -56,7 +56,7 @@ C10_HOST_DEVICE inline T uniform_int_full_range(V val) {
* in this overloaded version
*/
template <typename T, typename V>
C10_HOST_DEVICE inline std::enable_if_t<!(std::is_floating_point_v<T>), T>uniform_int(V val) {
C10_HOST_DEVICE inline std::enable_if_t<!std::is_floating_point_v<T>, T>uniform_int(V val) {
if constexpr (std::is_same_v<T, bool>) {
return static_cast<bool>(val & 1);
} else if constexpr (std::is_same_v<T, int64_t>) {


@ -114,25 +114,25 @@ inline typename remove_symint<T>::type unpackSymInt(T x) {
}
template <>
inline typename remove_symint<c10::SymInt>::type unpackSymInt(c10::SymInt x) {
inline remove_symint<c10::SymInt>::type unpackSymInt(c10::SymInt x) {
return x.guard_int(__FILE__, __LINE__);
}
template <>
inline typename remove_symint<c10::SymIntArrayRef>::type unpackSymInt(
inline remove_symint<c10::SymIntArrayRef>::type unpackSymInt(
c10::SymIntArrayRef x) {
return C10_AS_INTARRAYREF_SLOW(x);
}
template <>
inline typename remove_symint<std::optional<c10::SymInt>>::type unpackSymInt(
inline remove_symint<std::optional<c10::SymInt>>::type unpackSymInt(
std::optional<c10::SymInt> x) {
return x.has_value() ? std::make_optional(x->guard_int(__FILE__, __LINE__))
: std::nullopt;
}
template <>
inline typename remove_symint<at::OptionalSymIntArrayRef>::type unpackSymInt(
inline remove_symint<at::OptionalSymIntArrayRef>::type unpackSymInt(
at::OptionalSymIntArrayRef x) {
return x.has_value() ? std::make_optional(C10_AS_INTARRAYREF_SLOW(*x))
: std::nullopt;


@ -631,8 +631,8 @@ call_functor_with_args_from_stack_(
Stack* stack,
std::index_sequence<ivalue_arg_indices...> /*unused*/,
guts::typelist::typelist<ArgTypes...>* /*unused*/) {
(void)(stack); // when sizeof...(ivalue_arg_indices) == 0, this argument would
// be unused and we have to silence the compiler warning.
(void)stack; // when sizeof...(ivalue_arg_indices) == 0, this argument would
// be unused and we have to silence the compiler warning.
// We're explicitly filtering out DispatchKeySet from the argument list.
// Some kernels take a DispatchKeySet as their first argument in order to


@ -18,6 +18,7 @@ struct TORCH_API EnumType : public NamedType {
TypePtr value,
std::vector<EnumNameValue> enum_names_values,
std::weak_ptr<::torch::jit::CompilationUnit> cu) {
C10_DIAGNOSTIC_PUSH_AND_IGNORED_IF_DEFINED("-Wswitch-enum")
switch (value->kind()) {
case TypeKind::IntType:
case TypeKind::FloatType:
@ -34,6 +35,7 @@ struct TORCH_API EnumType : public NamedType {
value->str(),
"', only int, float and string are supported");
}
C10_DIAGNOSTIC_POP()
}
std::string str() const override {

Some files were not shown because too many files have changed in this diff.