Commit Graph

31 Commits

Author SHA1 Message Date
2e0e08588e [BE][PYFMT] migrate PYFMT for torch/[e-n]*/ to ruff format (#144553)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/144553
Approved by: https://github.com/ezyang
ghstack dependencies: #144551
2025-06-17 08:18:47 +00:00
12a95390ae [Minimizer] allow overriding of ShapeProp logic by subclasses of _MinimizerBase (#148784)
Summary:
The changes contained in this diff:
- allow subclass Minimizer implementations to override the default shape propagation logic with custom logic (see the sketch after this list)
- copy over the meta attribute on get_attr graph nodes during the graph splitting step
- for both changes, the behavior of existing classes does not change
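For context, a minimal sketch of what the default shape-propagation step records, using the public `ShapeProp` pass; this illustrates the step a subclass could replace, not the `_MinimizerBase` override API itself:

```python
import torch
import torch.fx as fx
from torch.fx.passes.shape_prop import ShapeProp

# Shape propagation runs the module on sample inputs and records
# meta["tensor_meta"] (shape, dtype, ...) on every node. A Minimizer
# subclass could substitute its own version of this step.
gm = fx.symbolic_trace(torch.nn.Linear(4, 2))
ShapeProp(gm).propagate(torch.randn(3, 4))
for node in gm.graph.nodes:
    print(node.name, node.meta.get("tensor_meta"))
```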

Test Plan: CI

Differential Revision: D70799942

Pull Request resolved: https://github.com/pytorch/pytorch/pull/148784
Approved by: https://github.com/blaine-rister
2025-03-10 22:22:16 +00:00
d5a2e4c754 [oncall] Change error message to be more readable (#146934)
Summary:
During oncall, I got a debug task where the error message was ambiguous due to multiple colons and the full line being cut off:
```
AssertionError: Expected order: 1 for the component: remote_request_only to be >= 2, the max order for all its
```

Updated the error message to something like:
```
AssertionError: Component remote_request_only order must be >= max order of its upstream components, got component order=1 and max=2
```
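A minimal sketch of how such an assertion might be phrased (the function and parameter names are illustrative, not the actual pass code):

```python
def check_component_order(name: str, order: int, max_upstream_order: int) -> None:
    # A single clause with no extra colons stays readable even when the
    # log line gets truncated.
    assert order >= max_upstream_order, (
        f"Component {name} order must be >= max order of its upstream components, "
        f"got component order={order} and max={max_upstream_order}"
    )
```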

Test Plan: CI

Differential Revision: D69482789

Pull Request resolved: https://github.com/pytorch/pytorch/pull/146934
Approved by: https://github.com/ColinPeppler
2025-02-12 23:33:09 +00:00
0b2a3687b9 PEP585 update - torch/fx (#145166)
See #145101 for details.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/145166
Approved by: https://github.com/bobrenjc93
2025-01-20 18:11:54 +00:00
729b7c0a84 [TGIF][Easy] Slightly improve the logging for tgif split pass (#143771)
Summary:
1. Added more details for some of the assert statements.
2. Moved assert statements to use `tgif_assert`.

Test Plan: all unit tests should pass

Reviewed By: jingsh

Differential Revision: D67608251

Pull Request resolved: https://github.com/pytorch/pytorch/pull/143771
Approved by: https://github.com/jingsh
2025-01-06 21:00:15 +00:00
abbd71d29d [BE][Easy] enable PYFMT for torch.fx (#138443)
Reproduce command:

```bash
ghstack checkout https://github.com/pytorch/pytorch/pull/138443
git checkout HEAD~1 torch/
lintrunner -a --take "PYFMT" --all-files
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/138443
Approved by: https://github.com/ezyang
2024-10-21 19:15:49 +00:00
ed86ac2f25 [BE] typing for decorators - fx/_compatibility (#134054)
Summary: See #131429

Test Plan: unit tests pass

Differential Revision: D61493706

Pull Request resolved: https://github.com/pytorch/pytorch/pull/134054
Approved by: https://github.com/oulgen
2024-08-26 04:00:27 +00:00
72d2dba992 Add None return type to init (#132335)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/132335
Approved by: https://github.com/albanD
2024-08-01 15:26:45 +00:00
945bf78894 Revert "[BE] typing for decorators - fx/_compatibility (#131568)"
This reverts commit 193f62fde91ee20deb5ddcd9ff4593cd78d74c64.

Reverted https://github.com/pytorch/pytorch/pull/131568 on behalf of https://github.com/clee2000 due to same as https://github.com/pytorch/pytorch/pull/131572#issuecomment-2254328359 but I clicked the wrong link by accident.  This is where it actually starts ([comment](https://github.com/pytorch/pytorch/pull/131568#issuecomment-2254330781))
2024-07-28 03:43:39 +00:00
193f62fde9 [BE] typing for decorators - fx/_compatibility (#131568)
See #131429

Pull Request resolved: https://github.com/pytorch/pytorch/pull/131568
Approved by: https://github.com/justinchuby, https://github.com/oulgen, https://github.com/zou3519
2024-07-25 22:24:19 +00:00
5a0068cc69 [BE] mypy: disallow untyped decorators (#131428)
Untyped decorators strip the types from their decorated function, so even if the underlying function is fully typed, callers get no benefit from its type annotations.
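A minimal sketch (not PyTorch's `compatibility` decorator) of a typed decorator that preserves the wrapped function's signature with `ParamSpec`, which is exactly the information an untyped decorator erases:

```python
import functools
from typing import Callable, ParamSpec, TypeVar

P = ParamSpec("P")
R = TypeVar("R")

def logged(fn: Callable[P, R]) -> Callable[P, R]:
    # Returning Callable[P, R] keeps the original parameter and return types
    # visible to callers; an untyped decorator would collapse them to Any.
    @functools.wraps(fn)
    def wrapper(*args: P.args, **kwargs: P.kwargs) -> R:
        print(f"calling {fn.__name__}")
        return fn(*args, **kwargs)
    return wrapper

@logged
def add(x: int, y: int) -> int:
    return x + y
```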

Step 1 - Enable the error and override in all the offending files.

#131429

Pull Request resolved: https://github.com/pytorch/pytorch/pull/131428
Approved by: https://github.com/justinchuby, https://github.com/oulgen
2024-07-23 21:50:55 +00:00
038b927590 Flip default value for mypy disallow_untyped_defs [7/11] (#127844)
See #127836 for details.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/127844
Approved by: https://github.com/oulgen
ghstack dependencies: #127842, #127843
2024-06-08 18:49:45 +00:00
15add24bf2 fix: set codegen in _SplitterBase partitioner (#120361)
For graphs with a single output, a torch.export / torch.compile graph module is expected to return a single torch.Tensor rather than a tuple.
However, after running the `_SplitterBase` partitioner on such a graph module (obtained from torch.export / torch.compile), the resulting graph module returns a tuple of tensors, in this case `(output,)`.

This PR sets codegen on the graphs produced by the `_SplitterBase` partitioner. Setting this ensures pytree unflatten nodes are added automatically to handle unflattening of the output, so single outputs are returned directly.
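A small illustration of the underlying FX behavior (not the splitter itself): the argument handed to the graph's output node decides whether callers get a bare tensor or a 1-tuple, which is the calling-convention mismatch this PR addresses.

```python
import torch
import torch.fx as fx

def build(return_tuple: bool) -> fx.GraphModule:
    g = fx.Graph()
    x = g.placeholder("x")
    y = g.call_function(torch.relu, (x,))
    # Generates `return relu` vs `return (relu,)` in the forward.
    g.output((y,) if return_tuple else y)
    return fx.GraphModule(torch.nn.Module(), g)

t = torch.randn(3)
print(type(build(False)(t)))  # <class 'torch.Tensor'>
print(type(build(True)(t)))   # <class 'tuple'>
```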

Pull Request resolved: https://github.com/pytorch/pytorch/pull/120361
Approved by: https://github.com/angelayi
2024-02-24 02:27:20 +00:00
2b48891e62 [AOTInductor] Add Runtime Constant-folding for AOTInductor (#118765)
Summary:
Add Runtime Constant-folding for AOTInductor.
This also include the invocation of constant folding at load time.

The constant-folding lowering is a two-step process.
First, we split the graph into two modules, one of which is the constant module. It doesn't depend on any input, so the whole module can be inferred (constant-folded) once and reused. The constant module is lowered and code-generated as usual, then cached (call this the constant code). The constant code reuses the whole lowering/profiling/etc. process; the only difference is that we do not generate any headers or initialization for it.
Second, after handling the constant module, we take care of the main module (the part that depends on the user input). Compared with a normal lowering, the main module takes in one additional component: the constant code. The extra step is that we inject the constant code into the code-generated main module and create a caller for the main module to consume the result of the constant module.
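A toy sketch of the general idea in plain eager code (not the AOTInductor implementation): the input-independent part is computed once, analogous to folding at load time, and the main path only consumes the folded result.

```python
import torch

class ConstFoldedLinear(torch.nn.Module):
    def __init__(self, w1: torch.Tensor, w2: torch.Tensor):
        super().__init__()
        # "Constant module": depends only on the weights, so it is folded
        # once here (standing in for load time) and reused afterwards.
        self.register_buffer("folded_w", w2 @ w1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # "Main module": consumes the user input plus the folded constant.
        return x @ self.folded_w.t()

m = ConstFoldedLinear(torch.randn(8, 4), torch.randn(2, 8))
print(m(torch.randn(3, 4)).shape)  # torch.Size([3, 2])
```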

Test Plan: Unit tests included in commit.

Differential Revision: D53274382

Pull Request resolved: https://github.com/pytorch/pytorch/pull/118765
Approved by: https://github.com/chenyang78
2024-02-01 04:54:25 +00:00
675df7520a [tgif][multiforward] allow codegen to generate different func name (#111446)
Summary: see Shiyan's design doc for ATM TS publish weights dedupe https://fb.quip.com/HnUVAjUMaXMQ

Test Plan: Tested in N4454041 (after D50341352) that the multiforward method works for the TS model.

Differential Revision: D45750812

Pull Request resolved: https://github.com/pytorch/pytorch/pull/111446
Approved by: https://github.com/842974287
2023-10-19 21:19:30 +00:00
393fe9339a Back out "Revert D49107540: [pytorch][PR] split by tag" (#109332)
Summary:
Original commit changeset: 6391a068640b

Original Phabricator Diff: D49107540

Test Plan: same as D49107540

Differential Revision: D49297522

Pull Request resolved: https://github.com/pytorch/pytorch/pull/109332
Approved by: https://github.com/842974287
2023-09-16 05:29:16 +00:00
bf5622e965 Revert "split by tag (#108892)"
This reverts commit 89b6276be9f1b04491625cc0d05de01c15f75597.

Reverted https://github.com/pytorch/pytorch/pull/108892 on behalf of https://github.com/facebook-github-bot due to Diff reverted internally ([comment](https://github.com/pytorch/pytorch/pull/108892#issuecomment-1720249148))
2023-09-14 22:43:03 +00:00
89b6276be9 split by tag (#108892)
Differential Revision: D49107540

Pull Request resolved: https://github.com/pytorch/pytorch/pull/108892
Approved by: https://github.com/842974287
2023-09-14 21:49:11 +00:00
6a448816f5 [fx][split] Copy node metadata for placeholders (#107981)
- Follow-up to #107248 which copies metadata for placeholder nodes in the top-level FX graph
- Currently, top-level placeholders do not have their metadata copied over, causing loss of `TensorMetadata` in some `torch.compile` backends

Fixes https://github.com/pytorch/TensorRT/issues/2258
Pull Request resolved: https://github.com/pytorch/pytorch/pull/107981
Approved by: https://github.com/angelayi
2023-09-07 04:44:17 +00:00
2e73c86d45 [fx][split] make sure we copy node.meta over during split (#107248)
Summary: Previously, when we created placeholder nodes for subgraph modules, we didn't copy node.meta over.
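A sketch of the idea (not the actual splitter code): when a placeholder is created in a submodule's graph, carry the original node's `meta` dict along so downstream passes still see entries such as `tensor_meta`.

```python
import torch.fx as fx

def make_placeholder_with_meta(subgraph: fx.Graph, node: fx.Node) -> fx.Node:
    placeholder = subgraph.placeholder(node.name)
    # node.meta is a plain dict; a shallow copy is enough to keep entries
    # like meta["tensor_meta"] visible on the new placeholder.
    placeholder.meta = node.meta.copy()
    return placeholder
```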

Test Plan: CI

Differential Revision: D48330866

Pull Request resolved: https://github.com/pytorch/pytorch/pull/107248
Approved by: https://github.com/zyan0, https://github.com/houseroad, https://github.com/Neilblaze
2023-08-22 00:06:45 +00:00
105ef68f72 Fix typos under torch/fx directory (#97596)
This PR fixes typos in comments and messages of `.py` files under the `torch/fx` directory.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/97596
Approved by: https://github.com/dagitses, https://github.com/kit1980
2023-04-10 21:57:36 +00:00
20018aa766 modify split_by_tags to retain output order (#84136)
Summary: Currently `split_by_tags` determines submodule output order by iterating over `used_in_main`. Since this is a `Set`, insertion order is not retained, so we run into problems with submodule output order being "randomized" and inconsistent between splits. By using `Dict[Node, None]` we can implement `used_in_main` as an ordered set, so that output order is consistent when splitting the same model.
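A minimal sketch of the ordered-set trick, with strings standing in for fx nodes: dict keys preserve insertion order, so `Dict[Node, None]` gives set semantics with deterministic iteration.

```python
from typing import Dict

# A plain set gives no cross-run ordering guarantee for its elements:
unordered = {"node_c", "node_a", "node_b"}

# Dict keys keep insertion order (guaranteed since Python 3.7), so a
# Dict[..., None] works as an ordered set:
used_in_main: Dict[str, None] = {}
for name in ("node_c", "node_a", "node_b"):
    used_in_main.setdefault(name, None)

print(list(used_in_main))  # ['node_c', 'node_a', 'node_b'] every time
```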

Test Plan: CI

Differential Revision: D39039268

Pull Request resolved: https://github.com/pytorch/pytorch/pull/84136
Approved by: https://github.com/houseroad
2022-08-30 20:36:33 +00:00
ac5a94789f Refactor lift_subgraph_as_module as a fx.passes.util function (#80292)
lift_subgraph_as_module can be shared between fuser_utils.py and split_utils.py.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/80292
Approved by: https://github.com/jjsjann123, https://github.com/842974287
2022-06-29 22:35:39 +00:00
3bcc19b29a Add __all__ to various submodules in torch.fx, distributions, distributed, package (#80367)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/80367
Approved by: https://github.com/albanD
2022-06-27 21:27:30 +00:00
5a559a547d [FX] fix split_util by using getattr_recursive instead of getattr (#80011)
Summary: If the model contains a ModuleList, it's possible that some weight attributes come out as `module.sub.0.weight`. Plain `getattr` doesn't work in this case, and we have a dedicated function `getattr_recursive` for exactly that. Just use it.
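A sketch of the idea behind such a helper (illustrative, not the exact PyTorch implementation): walk the dotted path one attribute at a time, which also resolves numeric ModuleList entries because nn.Module exposes its children under string keys like "0".

```python
import torch

def getattr_recursive_sketch(obj, target: str):
    # getattr(model, "sub.0.weight") fails, but following the path
    # "sub" -> "0" -> "weight" one step at a time works.
    for atom in target.split("."):
        obj = getattr(obj, atom)
    return obj

model = torch.nn.Module()
model.sub = torch.nn.ModuleList([torch.nn.Linear(2, 2)])
print(getattr_recursive_sketch(model, "sub.0.weight").shape)  # torch.Size([2, 2])
```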

Reviewed By: houseroad

Differential Revision: D37326955

Pull Request resolved: https://github.com/pytorch/pytorch/pull/80011
Approved by: https://github.com/houseroad
2022-06-23 03:35:46 +00:00
5e86505693 Move util functions to a more common place (#73519)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/73519

Move `getattr_recursive()` and `setattr_recursive()` to fx main.

Test Plan: CI

Reviewed By: khabinov

Differential Revision: D34524723

fbshipit-source-id: a656e821d9dc1d446aa80cdc03a923bf0c05aeb5
(cherry picked from commit 4835965ac72d299487be14687823ea62394f4079)
2022-03-01 01:33:30 +00:00
84d4087874 Fix trt const_fold as output use case (#71194)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/71194

Reviewed By: jfix71, khabinov

Differential Revision: D33541168

fbshipit-source-id: dd5787430b272977963323a6ce38b3e15e979278
2022-01-12 16:57:19 -08:00
0559cb37cd [FX] Ensure BC coverage for all of torch.fx.passes (#65081)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/65081

Test Plan: Imported from OSS

Reviewed By: jbschlosser, khabinov

Differential Revision: D30967428

Pulled By: jamesr66a

fbshipit-source-id: 2ff83da728dc469f086cf504e71b43396db612d8
2021-09-17 09:32:43 -07:00
36a22967b7 [fx ir] Handle the case when output consumes get_attr directly (#57844)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/57844

Reviewed By: 842974287

Differential Revision: D28294298

fbshipit-source-id: db337fadca9f10f208324c9da6d95620178a189b
2021-05-10 22:04:43 -07:00
d896d1f4ce [fx splitter] Fix fusion group utility (#57280)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/57280

We've found an issue where a fusion group can result in a circular dependency. For example:
```
a -> b -> c -> d
|              ^
+ -------------+
```
Only `a` has a non-tensor output, and currently we would create a fusion group (a, b, d). This results in a circular dependency because the fusion group now depends on c, while c depends on the fusion group as well.

This diff implements the solution discussed before: when we add a node to the fusion group, we also add all the nodes that lie between the existing fusion group and the newly added node.

The same logic is used in the minimizer to build fusion groups.
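A minimal sketch of that closure rule on a toy successor map (the names and data structures are illustrative, not the splitter's actual implementation): any node reachable from the fusion group that can also reach the newly added node must join the group, otherwise the cycle above appears.

```python
from typing import Dict, List, Set

def nodes_between(succ: Dict[str, List[str]], group: Set[str], new_node: str) -> Set[str]:
    """Nodes that must be pulled into the fusion group when new_node joins."""
    def reachable(start: str) -> Set[str]:
        seen: Set[str] = set()
        stack = [start]
        while stack:
            for nxt in succ.get(stack.pop(), []):
                if nxt not in seen:
                    seen.add(nxt)
                    stack.append(nxt)
        return seen

    from_group: Set[str] = set().union(*(reachable(g) for g in group))
    can_reach_new = {n for n in succ if new_node in reachable(n)}
    # Leaving these nodes out would make the group depend on them while they
    # depend on the group -- the circular dependency described above.
    return (from_group & can_reach_new) - group - {new_node}

# The example from the diagram: a -> b -> c -> d plus a direct edge a -> d.
succ = {"a": ["b", "d"], "b": ["c"], "c": ["d"], "d": []}
print(nodes_between(succ, {"a", "b"}, "d"))  # {'c'}
```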

Test Plan: split_tests and net_min_tests

Reviewed By: khabinov

Differential Revision: D27917432

fbshipit-source-id: a3d99fe5929dbc9f8eb0f45bccd83fd7b173795a
2021-04-30 10:18:01 -07:00
45692fbef0 [fx splitter][fx net_min] Move Splitter, Minimizer and necessary deps to OSS (#56201)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/56201

Refactor Splitter and Minimizer into the superclasses `_SplitterBase` and `_MinimizerBase` and move them to OSS. This is needed to create an OSS example of GPU lowering with those tools.

Test Plan: CI

Reviewed By: jackm321

Differential Revision: D27629598

fbshipit-source-id: 0d4da02105ca509b31f1a6c4a39b1122c2bc7bf0
2021-04-24 15:19:12 -07:00