25 Commits

d5cdc36943 [BE][10/16] fix typos in torch/ (torch/csrc/jit/) (#156320)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/156320
Approved by: https://github.com/albanD
ghstack dependencies: #156318
2025-07-02 22:55:29 +00:00
cyy
70d7638b0d Fix clang-tidy suppression in torch/csrc/jit (#152271)
Remove some clang-tidy suppression in torch/csrc/jit by applying fixes or refactoring.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/152271
Approved by: https://github.com/Skylion007, https://github.com/malfet

Co-authored-by: Aaron Gokaslan <aaronGokaslan@gmail.com>
2025-04-27 21:18:39 +00:00
cyy
8f291e8c00 Fix clang-tidy warnings in torch/jit (#146963)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/146963
Approved by: https://github.com/davidberard98
2025-02-15 03:36:59 +00:00
cyy
1a73255102 Concat namespaces in jit code (#138976)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/138976
Approved by: https://github.com/Skylion007
2024-10-26 17:41:27 +00:00
ed327876f5 [codemod] c10::optional -> std::optional (#126135)
Generated by running the following from PyTorch root:
```
find . -regex ".*\.\(cpp\|h\|cu\|hpp\|cc\|cxx\)$" | grep -v "build/" | xargs -n 50 -P 4 perl -pi -e 's/c10::optional/std::optional/'
```

`c10::optional` is just an alias for `std::optional`. This removes usages of that alias in preparation for eliminating it entirely.
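
For context, the alias being removed amounts to little more than a using-declaration; a minimal sketch of it (the real definition in `c10/util/Optional.h` carries some extra compatibility shims):
```
// Sketch: c10::optional was (approximately) just an alias of std::optional,
// which is why a mechanical perl rewrite is sufficient.
#include <optional>

namespace c10 {
template <class T>
using optional = std::optional<T>;
} // namespace c10
```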

Pull Request resolved: https://github.com/pytorch/pytorch/pull/126135
Approved by: https://github.com/Skylion007, https://github.com/malfet, https://github.com/albanD, https://github.com/aaronenyeshi
2024-05-14 19:35:51 +00:00
cyy
483f748dd5 [BE] Enforce missing override keyword (#104032)
This PR enables `-Winconsistent-missing-destructor-override` and `-Winconsistent-missing-override`
and fixes violations.
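
A minimal sketch (not code from this PR) of the pattern these warnings catch and the fix they enforce:
```
struct Base {
  virtual ~Base() = default;
  virtual void run();
};

// Flagged: the destructor uses override, but run() overrides without it,
// so -Winconsistent-missing-override fires.
struct Derived : Base {
  ~Derived() override = default;
  virtual void run();  // warning: overrides but not marked 'override'
};

// Fixed: consistently spell out the override specifier.
struct DerivedFixed : Base {
  ~DerivedFixed() override = default;
  void run() override;
};
```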

### 🤖 Generated by Copilot at 47e904e

This pull request updates the code of various classes and operators in the `caffe2` and `aten` subdirectories to use the `override` specifier instead of the `virtual` keyword for destructors and other virtual functions that override a base class function. This improves the code readability, quality, and consistency with C++ best practices. It also modifies the `./CMakeLists.txt` file to enable warnings for these specifiers, but disable errors.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/104032
Approved by: https://github.com/malfet
2023-06-24 02:34:24 +00:00
fb18c29486 [BE] Tweak Meta copyright headers (#90805)
s/Facebook, Inc./Meta Platforms, Inc/
s/Confidential and proprietary./This source code is licensed under the BSD-style license/

Per https://www.internalfb.com/intern/wiki/Open_Source/Licenses/Straight_BSD/

Also, add a linter that prevents adding those in the future

Fixes https://github.com/pytorch/pytorch/issues/90187
Pull Request resolved: https://github.com/pytorch/pytorch/pull/90805
Approved by: https://github.com/zpao
2022-12-14 20:30:31 +00:00
496c8ae760 [xnnpack][lite-int] Handle Constant Data (#89445)
Handling constant data for xnnpack delegation. This allows us to handle new modules such as the following:

```
class Module(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self._constant = torch.ones(4, 4, 4)

    def forward(self, x):
        return x + self._constant
```

This is the precursor work to handling convolution, as we need to serialize constant data (weights).

Differential Revision: [D41050349](https://our.internmc.facebook.com/intern/diff/D41050349/)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/89445
Approved by: https://github.com/digantdesai
2022-11-22 02:20:54 +00:00
7beb151889 [xnnpack][executorch] remove unordered_set from xnn_compiler (#89231)
Removing unordered_set from XNNCompiler for Executorch.

While some STL libraries are unavoidable, and it should be OK for the delegate to pull these in, unordered_set wasn't really needed, and we should be serializing the number of external ids anyway.

After this, the backend classes should be good to `hg copy` into executorch.

Differential Revision: [D41227391](https://our.internmc.facebook.com/intern/diff/D41227391/)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/89231
Approved by: https://github.com/salilsdesai, https://github.com/cccclai
2022-11-18 07:07:19 +00:00
637e764ec5 [xnnpack][executorch] Pass xnnexecutor pointer to compileModel() (#89090)
Here we pass XNNExecutor* to compileModel() so that the XNNExecutor can be allocated by the runtime. This signature change is for Executorch:

```
XNNExecutor compileModel(void* buffer) --> void compileModel(void* buffer, XNNExecutor* executor)
```

The intended use case for allocating the executor and compiling the serialized flatbuffer:

```
XNNExecutor* executor = runtime_allocator->allocateList<jit::xnnpack::delegate::XNNExecutor>(1);
XNNCompiler::compileModel(processed.buffer, executor);

```

Differential Revision: [D41208387](https://our.internmc.facebook.com/intern/diff/D41208387/)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/89090
Approved by: https://github.com/digantdesai
2022-11-17 04:29:25 +00:00
d1f48f05ce [xnnpack][Bug Fix] Pass serialized model by reference (#89089)
Two changes
- Remove XNNCompiler Dependence on std::string by passing void*
- Grab ser_model by reference: This bug was causing data pointers given to xnn_runtime to be freed because ser_model was on the stack.
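
A minimal sketch of the lifetime bug (names are illustrative, not the actual XNNCompiler code):
```
#include <string>

// Stand-in for the accessor; returns a reference to long-lived storage.
const std::string& getSerializedModel() {
  static const std::string model = "<serialized flatbuffer bytes>";
  return model;
}

void before() {
  // Bug: copies the model into a stack local; pointers taken into its
  // buffer and handed to xnn_runtime dangle once this frame unwinds.
  std::string ser_model = getSerializedModel();
  const void* data = ser_model.data();
  (void)data;
}

void after() {
  // Fix: bind by reference so the buffer outlives the runtime setup.
  const std::string& ser_model = getSerializedModel();
  const void* data = ser_model.data();
  (void)data;
}
```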

Differential Revision: [D41208380](https://our.internmc.facebook.com/intern/diff/D41208380/)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/89089
Approved by: https://github.com/digantdesai
2022-11-17 04:17:23 +00:00
366f1b2c2f [xnnpack][lite-int] Freeze/Inline module to remove reference to self (#88863)
We need to inline the graph before converting from TorchScript to the xnnpack flatbuffer, removing the graph's dependence on self.

This will later help us work with constant data.
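
A hedged sketch of this step using the public torch::jit passes (the delegate's actual call site may differ):
```
#include <torch/csrc/jit/api/module.h>
#include <torch/csrc/jit/passes/freeze_module.h>
#include <torch/csrc/jit/passes/inliner.h>

// Sketch: freezing folds attribute accesses on %self into the graph as
// constants, and inlining removes the remaining call indirection, leaving
// a self-contained graph. Assumes the module is already in eval mode.
std::shared_ptr<torch::jit::Graph> inlineForLowering(
    const torch::jit::Module& module) {
  torch::jit::Module frozen = torch::jit::freeze_module(module);
  auto graph = frozen.get_method("forward").graph();
  torch::jit::Inline(*graph);
  return graph;
}
```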

Differential Revision: [D41049858](https://our.internmc.facebook.com/intern/diff/D41049858/)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/88863
Approved by: https://github.com/digantdesai
2022-11-17 04:14:57 +00:00
2452e3f99a Update xnnpack graph schema to use xnode and xvalue (#89036)
There are different node definitions, such as [Node in autograd](https://www.internalfb.com/code/fbsource/fbcode/caffe2/torch/csrc/autograd/function.h?lines=108-609&reveal=108-609), ONNX nodes, etc. A namespace can disambiguate where nodes from different definitions are used together, but it's still better to slightly differentiate the names.

Differential Revision: [D41002324](https://our.internmc.facebook.com/intern/diff/D41002324/)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/89036
Approved by: https://github.com/mcr229
2022-11-15 10:34:45 +00:00
8c46a5de3a Add debug handle to xnnpack schema (#89033)
As titled, add three things to the schema:
1. debug handle for each node
2. file identifier, so we can sanity check that we are getting an xnnpack-schema flatbuffers file instead of some other random binary (see the sketch after this list)
3. extension, so the dumped binary will end up with its own extension like `myschema.xnnpack` (maybe we can find a better name) instead of the default extension `.bin`
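
A sketch of how the file identifier pays off at load time ("XN00" is a made-up 4-character identifier for illustration; `BufferHasIdentifier` is the real flatbuffers helper):
```
#include <flatbuffers/flatbuffers.h>

// Sanity check that a buffer is an xnnpack-schema flatbuffer before
// attempting to parse it, rather than some other random binary.
bool looksLikeXnnpackSchema(const void* buf) {
  return flatbuffers::BufferHasIdentifier(buf, "XN00");
}
```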

Differential Revision: [D40906970](https://our.internmc.facebook.com/intern/diff/D40906970/)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/89033
Approved by: https://github.com/mcr229
2022-11-15 09:49:54 +00:00
e0c194f10b Fix typos in messages under torch (#88961)
This PR fixes typos in messages and params in C++ source and header files under the `torch` directory.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/88961
Approved by: https://github.com/albanD
2022-11-14 19:06:41 +00:00
37b468ac77 [xnnpack][lite-int][on-device] rebuild serialized modules at runtime (#88780)
This is the on-device runtime work. We replace the hacky compile and execute from before with what will actually be running at runtime.

First, we rebuild our graph from the serialized flatbuffer string. We also introduce a runtime wrapper, inheriting from CustomClassHolder, that allows us to forward the built xnngraph runtime along to our execute function (a rough sketch of the wrapper follows below).

Once the subgraph object has been rebuilt, we pass it to the runtime wrapper, which forwards it along to execute.

At execute we prep the inputs/outputs and invoke the runtime through our runtime wrapper, and finally return those results from execution.
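
A rough sketch of such a wrapper (class and member names are hypothetical; `torch::CustomClassHolder` is the real base class):
```
#include <torch/custom_class.h>

#include <memory>

class XNNExecutor;  // the runtime object rebuilt from the flatbuffer

// Inheriting from CustomClassHolder lets the built runtime travel through
// the delegate interface as a custom-class handle, so execute() can
// recover it and invoke the XNNPACK runtime.
struct XNNRuntimeWrapper : torch::CustomClassHolder {
  explicit XNNRuntimeWrapper(std::shared_ptr<XNNExecutor> executor)
      : executor_(std::move(executor)) {}
  std::shared_ptr<XNNExecutor> executor_;
};
```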

Differential Revision: [D39413031](https://our.internmc.facebook.com/intern/diff/D39413031/)

**NOTE FOR REVIEWERS**: This PR has internal Meta-specific changes or comments, please review them on [Phabricator](https://our.internmc.facebook.com/intern/diff/D39413031/)!
Pull Request resolved: https://github.com/pytorch/pytorch/pull/88780
Approved by: https://github.com/digantdesai
2022-11-10 21:35:28 +00:00
3a4e8736ad [xnnpack][on-device] compiler --> executor object (#88779)
#### XNN Compiler Object
This is purely to abstract the subgraph rebuild away from the flatbuffer object. compileModel returns an executor object, which we can use to set up inputs and run forward.

#### Executorch Considerations
We include ATen/Utils for TORCH_CHECK; this will be changed when moving to Executorch.

Differential Revision: [D40733163](https://our.internmc.facebook.com/intern/diff/D40733163/)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/88779
Approved by: https://github.com/digantdesai
2022-11-10 21:09:22 +00:00
d5e1e2f0fc [xnnpack][on-device] executor class (#88778)
# Executor Class

The executor object wraps our xnn_runtime object. The ideal flow for this object looks like this:

```
executor.set_inputs(vector<tensor> inputs, vector<tensor> outputs)
executor.forward()
```

This will likely be returned by our delegate's compile and handed over to execute in order to run inference using the xnn runtime.
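
A stripped-down sketch of what set_inputs/forward map to in terms of the actual XNNPACK C API (`xnn_setup_runtime` and `xnn_invoke_runtime`; the class shape here is illustrative):
```
#include <xnnpack.h>

#include <vector>

// Sketch: the executor owns an xnn_runtime_t plus the external values
// (id + data pointer) that set_inputs populates; forward hands them to
// the runtime and invokes it.
class ExecutorSketch {
 public:
  explicit ExecutorSketch(xnn_runtime_t runtime) : runtime_(runtime) {}

  void add_external(uint32_t id, void* data) {
    externals_.push_back(xnn_external_value{id, data});
  }

  xnn_status forward() {
    xnn_status status =
        xnn_setup_runtime(runtime_, externals_.size(), externals_.data());
    if (status != xnn_status_success) {
      return status;
    }
    return xnn_invoke_runtime(runtime_);
  }

 private:
  xnn_runtime_t runtime_;
  std::vector<xnn_external_value> externals_;
};
```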

##### Executorch Considerations
```
#include <ATen/Functions.h>
#include <ATen/Utils.h>
```
These ATen headers are included so we can use at::Tensor when setting the inputs. This will change for Executorch, because we will switch from at::Tensor to whatever tensor abstraction ET uses; it seems they have the same call for `.data_ptr<float>()`, so realistically all the logic here will stay the same.

ATen/Utils is used for TORCH_CHECK. We will switch to ET_CHECK_MESSAGE for Executorch.

Differential Revision: [D40733121](https://our.internmc.facebook.com/intern/diff/D40733121/)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/88778
Approved by: https://github.com/digantdesai
2022-11-10 21:01:46 +00:00
3aa7a52855 [xnnpack][lite-int][4/n] introduce serialization to delegate (#87908)
We introduce the serializer created in the previous diff into our XNNGraph builder; the purpose is to serialize parts of the graph as we build it. At the end, we finish and serialize the xnngraph into a std::string for use when we forward it along to the on-device runtime.

The next diff will rebuild the xnngraph from the serialization introduced here, so testing of graph serialization will be done there.

Differential Revision: [D39335580](https://our.internmc.facebook.com/intern/diff/D39335580/)

**NOTE FOR REVIEWERS**: This PR has internal Meta-specific changes or comments, please review them on [Phabricator](https://our.internmc.facebook.com/intern/diff/D39335580/)!
Pull Request resolved: https://github.com/pytorch/pytorch/pull/87908
Approved by: https://github.com/digantdesai
2022-11-01 01:48:32 +00:00
8287c1d964 [xnnpack][lite-int][3/n] flatbuffer serializer class (#87907)
Creating a serializer class that allows us to serialize the xnnpack graph-creation arguments. This essentially abstracts away the flatbuffer API manipulation and serialization that we deal with.

As a result we can call
```
XNNSerializer::serializeAddNode()
XNNSerializer::serializeTensorValue()
XNNSerializer::finishAndSerialize()
```
to serialize the graph
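
Under the hood this is a thin layer over flatbuffers; a hedged sketch of roughly what finishAndSerialize amounts to (the root offset comes from the generated schema code, which is elided here):
```
#include <flatbuffers/flatbuffers.h>

#include <string>

// Sketch: finish the buffer and copy it into a std::string, which is the
// payload handed to the delegate for on-device deserialization.
std::string finishAndSerializeSketch(flatbuffers::FlatBufferBuilder& builder,
                                     flatbuffers::Offset<void> root) {
  builder.Finish(root);
  return std::string(
      reinterpret_cast<const char*>(builder.GetBufferPointer()),
      builder.GetSize());
}
```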

Differential Revision: [D39196312](https://our.internmc.facebook.com/intern/diff/D39196312/)

**NOTE FOR REVIEWERS**: This PR has internal Meta-specific changes or comments, please review them on [Phabricator](https://our.internmc.facebook.com/intern/diff/D39196312/)!
Pull Request resolved: https://github.com/pytorch/pytorch/pull/87907
Approved by: https://github.com/digantdesai
2022-11-01 01:44:18 +00:00
7bf819b181 [xnnpack][lite-int][2/n] flatbuffer xnn_value schema (#87906)
serializer schema for xnnpack graphs

Differential Revision: [D39003170](https://our.internmc.facebook.com/intern/diff/D39003170/)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/87906
Approved by: https://github.com/digantdesai
2022-11-01 01:39:41 +00:00
905d532d39 [xnnpack][lite-int][1/n] flatbuffer buck rules (#87826)
Writing a placeholder schema.fbs file for now to set up the Buck gen rules. The generated schema file will be used in the xnnpack namespace and reserved for serialization/deserialization of our xnnpack lowered graph.

Steps accomplished:

- Buck rules to compile the flatbuffer schema
- Added header file to preprocess
- Everything compiles correctly

Differential Revision: [D38999169](https://our.internmc.facebook.com/intern/diff/D38999169/)

**NOTE FOR REVIEWERS**: This PR has internal Meta-specific changes or comments, please review them on [Phabricator](https://our.internmc.facebook.com/intern/diff/D38999169/)!
Pull Request resolved: https://github.com/pytorch/pytorch/pull/87826
Approved by: https://github.com/digantdesai
2022-11-01 01:36:52 +00:00
aa1f9a1bd7 [xnnpack][lite-int][graph-build] torchscript -> xnnpack graph (#87824)
At this point we perform conversion from TorchScript IR to the xnnpack graph. Currently we only support converting add nodes and fp32 tensor values.

As a caveat, we are not building this at runtime yet. For testing we just run the xnn graph once ahead of time with sample inputs and forward the results to execute; this is only for testing and will be changed in a later diff. It allows us to check that graph creation is sound.

Differential Revision: [D39838851](https://our.internmc.facebook.com/intern/diff/D39838851/)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/87824
Approved by: https://github.com/digantdesai, https://github.com/salilsdesai
2022-11-01 01:24:56 +00:00
b013eb5447 [xnnpack][lite-int][graph-build] graph passes and op checking (#87128)
The beginning of building the xnnpack graph from the TorchScript IR. We first massage the TorchScript graph using a few graph passes that perform things such as unused-self-argument removal and constant propagation.
This also performs tracing for us, so the model does not have to be prepped by tracing before being lowered.

The other check we perform walks the TorchScript IR to identify any nodes that are not lowerable/supported, throwing an error that calls out the specific offending nodes (a sketch of this check follows below).
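
A hedged sketch of that check (the supported set here is a stand-in; the real delegate's list is larger):
```
#include <torch/csrc/jit/ir/ir.h>

#include <string>
#include <unordered_set>
#include <vector>

// Sketch: walk the graph and collect the kinds of any nodes outside the
// supported set, so the error can name the offending ops.
std::vector<std::string> unsupportedNodes(const torch::jit::Graph& graph) {
  static const std::unordered_set<std::string> supported = {
      "prim::Constant", "aten::add"};
  std::vector<std::string> offending;
  for (const torch::jit::Node* node : graph.nodes()) {
    std::string kind = node->kind().toQualString();
    if (supported.count(kind) == 0) {
      offending.push_back(kind);
    }
  }
  return offending;
}
```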

Differential Revision: [D39838338](https://our.internmc.facebook.com/intern/diff/D39838338/)

**NOTE FOR REVIEWERS**: This PR has internal Meta-specific changes or comments, please review them on [Phabricator](https://our.internmc.facebook.com/intern/diff/D39838338/)!
Pull Request resolved: https://github.com/pytorch/pytorch/pull/87128
Approved by: https://github.com/salilsdesai
2022-10-25 22:08:29 +00:00
155b885806 [xnnpack][lite-int] preprocess (#86980)
Split up from the original preprocess diff:

This diff introduces the skeleton structure of the delegate APIs, starting with the method compile-spec error handling. For now it just outputs an empty tensor object upon execute, but it proves that the delegate APIs are working and that a new xnnpack delegate backend has been added.

Differential Revision: [D38562918](https://our.internmc.facebook.com/intern/diff/D38562918/)

**NOTE FOR REVIEWERS**: This PR has internal Meta-specific changes or comments, please review them on [Phabricator](https://our.internmc.facebook.com/intern/diff/D38562918/)!
Pull Request resolved: https://github.com/pytorch/pytorch/pull/86980
Approved by: https://github.com/salilsdesai, https://github.com/cccclai
2022-10-14 22:07:12 +00:00