140 Commits

Author SHA1 Message Date
3cef87f9fd [aarch64] add SLEEF dependency for aten_cpu (#89475)
Reviewed By: kimishpatel, dmm-fb

Differential Revision: D41350031

Pull Request resolved: https://github.com/pytorch/pytorch/pull/89475
Approved by: https://github.com/kimishpatel, https://github.com/ezyang
2022-11-29 15:17:58 +00:00
ced71e8e82 [Pytorch] add an option to disable TORCH_WARN and TORCH_WARN_ONCE log (#87188)
Summary: Add an option to disable TORCH_WARN; some ops can trigger spammy TORCH_WARN logs, which is not desired in certain scenarios.

Test Plan:
Tested with
-pt.disable_warn = 1 and -pt.disable_warn = 0

verified TORCH_WARN and TORCH_WARN_ONCE are properly handled

tested with
-pt.strip_error_messages = 1, -pt.disable_warn = 0

verified strip error message is respected when warn is printed
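
The TORCH_WARN vs TORCH_WARN_ONCE distinction is the usual "log every time" vs "log only the first time" split. Purely as an analogy (this is not the C++ macro machinery), Python's warnings filters exhibit the same always-vs-once behavior:

```python
import warnings

def noisy_op():
    # Stand-in for an op that triggers a spammy warning on every call.
    warnings.warn("non-finite values encountered")

def count_warnings(filter_action, calls=3):
    with warnings.catch_warnings(record=True) as caught:
        warnings.simplefilter(filter_action)
        for _ in range(calls):
            noisy_op()
    return len(caught)

always = count_warnings("always")  # TORCH_WARN-like: logs on every call
once = count_warnings("once")      # TORCH_WARN_ONCE-like: logs only once

print(always, once)
```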

Differential Revision: D40321550

Pull Request resolved: https://github.com/pytorch/pytorch/pull/87188
Approved by: https://github.com/kurtamohler, https://github.com/ezyang
2022-11-08 04:49:45 +00:00
5c3666cb81 [codev] Make backport work with flatbuffer models (#88127)
Summary: By adding flatbuffer as a dependency of backport.

Differential Revision: D40865452

Pull Request resolved: https://github.com/pytorch/pytorch/pull/88127
Approved by: https://github.com/cccclai
2022-11-01 16:11:30 +00:00
4b23905172 [torch] Add torch cpp cpu target for torch/csrc/api/src files (#87327)
Summary: Duplicating the fbcode target `fbcode//caffe2:torch-cpp-cpu` in xplat. In D40460749 our user wants to use the `torch::kNearest` enum, which is defined in `torch/csrc/api/src/enum.cpp`. Adding this target to support it.

Test Plan: Rely on CI

Differential Revision: D40532087

Pull Request resolved: https://github.com/pytorch/pytorch/pull/87327
Approved by: https://github.com/ezyang
2022-10-27 06:04:22 +00:00
0c1dec375f Revert "Back out "Revert D40198461: [pytorch][PR] Backport currently dont work with some models if:" (#87124)"
This reverts commit a42fbfa0cb467b582799a5132561c82a3d33b1b7.

Reverted https://github.com/pytorch/pytorch/pull/87124 on behalf of https://github.com/ZainRizvi due to This is causing periodic jobs to fail
2022-10-21 16:03:00 +00:00
a42fbfa0cb Back out "Revert D40198461: [pytorch][PR] Backport currently dont work with some models if:" (#87124)
Summary:
Reland after fixing the Windows build failure for OVR.

Notable change:
```
#if defined(FBCODE_CAFFE2) or defined(FB_XPLAT_BUILD)
```
changed to
```
#if defined(FBCODE_CAFFE2) || defined(FB_XPLAT_BUILD)
```
Apparently `-DFB_XPLAT_BUILD` wasn't getting picked up on Windows when using `or` as the connector.

Original commit changeset: 7a31fc4b455f

Original Phabricator Diff: D40198461

Test Plan: waitforsandcastle

Reviewed By: davidberard98, cccclai

Differential Revision: D40290932

Pull Request resolved: https://github.com/pytorch/pytorch/pull/87124
Approved by: https://github.com/gmagogsfm
2022-10-20 23:02:10 +00:00
e0d6898cbd Revert "Backport currently dont work with some models if: (#86510)"
This reverts commit 4bfb7341819b3bfcaf65ddc136f25d23983740a7.

Reverted https://github.com/pytorch/pytorch/pull/86510 on behalf of https://github.com/facebook-github-bot due to Diff reverted internally
2022-10-12 04:12:43 +00:00
4bfb734181 Backport currently dont work with some models if: (#86510)
Backport currently doesn't work with some models if:

* the model was originally exported with interface calls enabled (backport would disable them)
* the model is a flatbuffer (flatbuffer support is soft-enabled via a link-time registry), so we trigger it manually

Pull Request resolved: https://github.com/pytorch/pytorch/pull/86510
Approved by: https://github.com/cccclai
2022-10-12 00:39:25 +00:00
ea50e7f262 fix ovrsource pytorch build from D39769513 (#85708)
Test Plan: Tested locally, verifying with CI.

Reviewed By: h-friederich

Differential Revision: D39851831

Pull Request resolved: https://github.com/pytorch/pytorch/pull/85708
Approved by: https://github.com/zou3519
2022-09-27 23:31:51 +00:00
c4a5255df7 [Mobile Tracer] Use unified source file list for BUCK build (#84770)
Currently, the source list `torch_mobile_tracer_sources` in `build_variables.bzl` is used only for the OSS build. This resulted in a regression for OSS builds when `TRACING_BASED=1` was used to build the OSS model tracer binary. To prevent this from happening in the future, it makes sense to re-use this list for internal BUCK builds as well. This change does that.

#accept2ship

Differential Revision: [D39392010](https://our.internmc.facebook.com/intern/diff/D39392010/)

**NOTE FOR REVIEWERS**: This PR has internal Facebook specific changes or comments, please review them on [Phabricator](https://our.internmc.facebook.com/intern/diff/D39392010/)!
Pull Request resolved: https://github.com/pytorch/pytorch/pull/84770
Approved by: https://github.com/cccclai
2022-09-09 21:28:50 +00:00
0a89bdf989 Set up aten/src/ATen/functorch directory; move some files there (#84648)
This PR:
- sets up aten/src/ATen/functorch in PyTorch's build system
- moves `BatchedTensorImpl.h` and `BatchedTensorImpl.cpp` there as a test.

Test Plan:
- functorch build and test should pass

Differential Revision: [D39315051](https://our.internmc.facebook.com/intern/diff/D39315051)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/84648
Approved by: https://github.com/ezyang
2022-09-09 15:22:57 +00:00
673b35c847 Better reshape with autograd support (#82754) (#84154)
The original author is @YifanShenSZ  and the original PR is: #82754
# Summary:
The previous reshape ([#80981](https://github.com/pytorch/pytorch/pull/80981)) is fine for forward, but backward needs improvement: it must handle the "sometimes view, sometimes copy" behavior.

This pull request fixes it by:
1. add a new alias dispatch key, `CompositeImplicitAutogradNestedTensor`, which ideally works as the nested-tensor version of `CompositeImplicitAutograd`
2. register `reshape_nested` to `reshape` by `CompositeImplicitAutogradNestedTensor`

Side changes:
* add contiguous memory format support to `clone_nested`
* add `view_nested`
* add `reshape_as_nested`

Fixes issue [#83041](https://github.com/pytorch/pytorch/issues/83041)
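
The "sometimes view, sometimes copy" behavior that backward has to handle is the same contract dense reshape already follows. NumPy (used here purely as an illustration, not the nested-tensor code path) makes it easy to observe:

```python
import numpy as np

a = np.arange(6).reshape(2, 3)

# Contiguous input: reshape can return a view that shares memory.
b = a.reshape(3, 2)
print(np.shares_memory(a, b))  # True

# Non-contiguous input (a transpose): reshape must copy.
c = a.T.reshape(6)
print(np.shares_memory(a, c))  # False

# Consequence for autograd: a write through the view is visible in the base,
# so view and copy cases need different gradient handling.
b[0, 0] = 99
print(a[0, 0])  # 99
```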

Pull Request resolved: https://github.com/pytorch/pytorch/pull/82754

Test Plan:
Imported from GitHub, without a `Test Plan:` line.

Reviewed By: albanD

Differential Revision: D39023822

Pulled By: drisspg

Pull Request resolved: https://github.com/pytorch/pytorch/pull/84154
Approved by: https://github.com/bdhirsh, https://github.com/albanD
2022-09-01 20:01:39 +00:00
cfd18e105f [Pytorch][Ondevice quantization] Add device side API to convert model (#83807)
Summary:
This diff adds a device-side API which will convert the model to its
quantized equivalent. The input model must have been prepared AOT for
quantization.

API is implemented by:
- Running reset observers
- Running the observe method
- Running the quantize method
- Replacing each method, e.g. `forward`, with its quantized equivalent.
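
The steps above amount to calibrating observer state and then swapping a module's method for a quantized one at runtime. A toy sketch of that pattern in plain Python (the class and method names here are illustrative, not the actual on-device API):

```python
class TinyModule:
    """Stand-in for a mobile module prepared AOT for quantization."""

    def __init__(self):
        self.scale = None  # "observer" state collected at runtime

    def reset_observers(self):
        self.scale = None

    def observe(self, x):
        # Record calibration statistics from a sample input.
        self.scale = max(abs(v) for v in x) / 127.0

    def forward(self, x):
        return [v * 2.0 for v in x]  # float path

    def quantize(self):
        # Replace `forward` with its quantized equivalent (last step above).
        scale = self.scale

        def quantized_forward(x):
            q = [round(v / scale) for v in x]    # quantize inputs
            return [v * scale * 2.0 for v in q]  # integer math + dequantize

        self.forward = quantized_forward

m = TinyModule()
m.reset_observers()
m.observe([0.5, -1.27, 1.0])  # calibration pass
m.quantize()
print(m.forward([0.5, -0.5]))
```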

Test Plan:
test/quantization/jit/test_ondevice_quantization.py

Differential Revision: [D38889818](https://our.internmc.facebook.com/intern/diff/D38889818)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/83807
Approved by: https://github.com/iseeyuan
2022-08-29 17:57:38 +00:00
8948fdc525 Switch mobile targets to flatbuffers_mobile (#82829)
Differential Revision: [D38412635](https://our.internmc.facebook.com/intern/diff/D38412635/)

**NOTE FOR REVIEWERS**: This PR has internal Facebook specific changes or comments, please review them on [Phabricator](https://our.internmc.facebook.com/intern/diff/D38412635/)!

Differential Revision: [D38412635](https://our.internmc.facebook.com/intern/diff/D38412635)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/82829
Approved by: https://github.com/qihqi
2022-08-22 05:02:03 +00:00
c9475fa927 Create flatbuffers_mobile (#82828)
Differential Revision: [D38412636](https://our.internmc.facebook.com/intern/diff/D38412636/)

**NOTE FOR REVIEWERS**: This PR has internal Facebook specific changes or comments, please review them on [Phabricator](https://our.internmc.facebook.com/intern/diff/D38412636/)!

Differential Revision: [D38412636](https://our.internmc.facebook.com/intern/diff/D38412636)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/82828
Approved by: https://github.com/qihqi
2022-08-20 23:25:11 +00:00
7aba6f8e7b Rename flatbuffer_serializer to *_mobile or *_full_jit (#82827)
The target named `flatbuffer_serializer` in fbcode is tied to full JIT, while the one in xplat is for mobile only. Rename them accordingly:

```
flatbuffer_serializer in fbcode -> flatbuffer_serializer_full_jit
flatbuffer_serializer in xplat  -> flatbuffer_serializer_mobile
```

so it's more readable.

Differential Revision: [D38413369](https://our.internmc.facebook.com/intern/diff/D38413369/)

**NOTE FOR REVIEWERS**: This PR has internal Facebook specific changes or comments, please review them on [Phabricator](https://our.internmc.facebook.com/intern/diff/D38413369/)!

Differential Revision: [D38413369](https://our.internmc.facebook.com/intern/diff/D38413369)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/82827
Approved by: https://github.com/qihqi
2022-08-19 01:29:46 +00:00
451c6296af [kineto] deprecate USE_KINETO_UPDATED (#83305)
Summary: This was used for cross-repo updates but has not been cleaned up properly.

Test Plan: CI

Reviewed By: aaronenyeshi

Differential Revision: D38633379

Pull Request resolved: https://github.com/pytorch/pytorch/pull/83305
Approved by: https://github.com/aaronenyeshi
2022-08-17 22:31:49 +00:00
684a404def Rename flatbuffer_all to flatbuffers_jit (#82826)
The name `flatbuffer_all` is a bit confusing: it's actually for full JIT, so rename it accordingly. A follow-up change will create a target for mobile only.

Differential Revision: [D38412158](https://our.internmc.facebook.com/intern/diff/D38412158/)

**NOTE FOR REVIEWERS**: This PR has internal Facebook specific changes or comments, please review them on [Phabricator](https://our.internmc.facebook.com/intern/diff/D38412158/)!

Differential Revision: [D38412158](https://our.internmc.facebook.com/intern/diff/D38412158)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/82826
Approved by: https://github.com/qihqi
2022-08-14 22:26:15 +00:00
8163af7c30 Hide flatbuffer build dependencies (#82953)
Lock down the visibility and exporting of flatbuffer implementation details. Clients should not be able to see the generated header, and should avoid picking up the flatbuffer deps if possible.

Differential Revision: [D38495231](https://our.internmc.facebook.com/intern/diff/D38495231/)

**NOTE FOR REVIEWERS**: This PR has internal Facebook specific changes or comments, please review them on [Phabricator](https://our.internmc.facebook.com/intern/diff/D38495231/)!

Differential Revision: [D38495231](https://our.internmc.facebook.com/intern/diff/D38495231)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/82953
Approved by: https://github.com/cccclai
2022-08-10 16:39:56 +00:00
f9533560cc Use flatbuffer of alternate namespace (#82952)
Summary: Minimal change to make use of flatbuffer with fbsource namespace.

Test Plan: existing unit tests

Differential Revision: D38494999

Pull Request resolved: https://github.com/pytorch/pytorch/pull/82952
Approved by: https://github.com/cccclai
2022-08-09 07:40:59 +00:00
c2ac3e6831 Typo thrid->third (#82578)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/82578
Approved by: https://github.com/huydhn
2022-08-01 17:20:19 +00:00
0b3a239e85 [pocket fft] turning on pocketfft flag (#81670)
Summary:
Enabling the AT_POCKETFFT_ENABLED flag and adding the appropriate dependencies to aten-cpu.

moved mkl files from
`aten_cpu_source_non_codegen_list` to
`aten_native_source_non_codegen_list`

Test Plan:
Built test binaries for both the Android and iOS targets.

### iOS
`fbcode/aibench/specifications/frameworks/pytorch/ios/build.sh`

Submitted benchmarks with the new binaries supporting pocketfft here:
https://www.internalfb.com/intern/aibench/details/245253003946591

### Android
`fbcode/aibench/specifications/frameworks/pytorch/android/arm64/build.sh`

Submitted Benchmarks with the new binaries supporting pocket fft here:
https://www.internalfb.com/intern/aibench/details/406253690682941

### Build Size Impact

Success: igios-pika on D37790257-V7

[pocket fft] turning on pocketfft flag
Diff: https://fburl.com/diff/exkploof
Unigraph Explorer: https://fburl.com/mbex/aipdzaqo

Changes for variation [arm64 + 3x assets]:
```
Compressed  : -473 B (-0.00%) => 86.69 MiB
Uncompressed: +2.4 KiB (+0.00%) => 187.71 MiB
```

Reviewed By: kimishpatel

Differential Revision: D37790257

Pull Request resolved: https://github.com/pytorch/pytorch/pull/81670
Approved by: https://github.com/kit1980
2022-07-21 02:45:20 +00:00
99cb5fde12 [PTE] Fix module level information in profiling (#81727)
Summary: We lost the SYMBOLICATE_MOBILE_DEBUG_HANDLE flag in some buck file refactoring; bringing it back to fetch module-level information in profiling.

Test Plan: Profiling output has module level information

Reviewed By: kimishpatel

Differential Revision: D37970958

Pull Request resolved: https://github.com/pytorch/pytorch/pull/81727
Approved by: https://github.com/linbinyu
2022-07-20 00:00:22 +00:00
9d3c35d1e1 Back out "Revert D37720837: Back out "Revert D37228314: [Profiler] Include ActivityType from Kineto"" (#81450)
Differential Revision: [D37842341](https://our.internmc.facebook.com/intern/diff/D37842341/)

**NOTE FOR REVIEWERS**: This PR has internal Facebook specific changes or comments, please review them on [Phabricator](https://our.internmc.facebook.com/intern/diff/D37842341/)!
Pull Request resolved: https://github.com/pytorch/pytorch/pull/81450
Approved by: https://github.com/pbelevich
2022-07-15 18:25:40 +00:00
51cc614cb9 [pytorch] add missing -fexceptions flags (#81394)
Summary:
Add missing `-fexceptions` flags that are currently being passed through `exported_preprocessor_flags`. The exported preprocessor flags will be removed in a subsequent diff.

This is a rediff of D37386802 (3e1ac21c3b) with the changes split out to avoid reverts.

Test Plan:
Check flag is present:
```
$ buck uquery xplat/caffe2:common_core -a 'compiler_flags'
{
  "//xplat/caffe2:common_core" : {
    "compiler_flags" : [
      "-fexceptions",
      "-frtti",
      "-Os",
      "-Wno-unknown-pragmas",
      "-Wno-write-strings",
      "-Wno-unused-variable",
      "-Wno-unused-function",
      "-Wno-deprecated-declarations",
      "-Wno-shadow",
      "-Wno-global-constructors",
      "-Wno-missing-prototypes",
      "-std=gnu++17",
      "/EHsc",
      "/GR",
      "/O1",
      "/wd4101"
    ]
  }
}
```

Differential Revision: D37813869

Pull Request resolved: https://github.com/pytorch/pytorch/pull/81394
Approved by: https://github.com/linbinyu
2022-07-14 20:03:17 +00:00
36d2c44cce Revert "Back out "Revert D37228314: [Profiler] Include ActivityType from Kineto" (#81122)"
This reverts commit 52a538868b9239378af3923ba64a33ad7e1fb4c6.

Reverted https://github.com/pytorch/pytorch/pull/81122 on behalf of https://github.com/clee2000 due to broke periodic buck build https://github.com/pytorch/pytorch/runs/7306516655?check_suite_focus=true
2022-07-12 18:20:00 +00:00
e3a870986e [JIT] Add may_alias in FunctionSchema with associated tests (#80918)
- Created a may_alias method in FunctionSchema to expose aliasing information about the inputs and outputs of a schema.
- Tested the may_alias method for basic functionality, exceptions, and wildcard handling.

**Cases where elements of a container alias another argument will be handled by a new may_contain_alias method, to be created in a later PR.**
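
As a rough sketch of what such an alias query has to consider (this models the idea with plain Python sets, not the actual FunctionSchema implementation): two arguments may alias if their alias-set annotations intersect, and a wildcard argument may alias any annotated argument.

```python
WILDCARD = "*"

def may_alias(alias_sets_a, alias_sets_b):
    """Toy alias query: True if the two arguments may share memory."""
    if WILDCARD in alias_sets_a or WILDCARD in alias_sets_b:
        # A wildcard may alias anything that carries an alias annotation.
        return bool(alias_sets_a) and bool(alias_sets_b)
    return not alias_sets_a.isdisjoint(alias_sets_b)

# `self: Tensor(a)` and an output `Tensor(a)` share alias set "a".
print(may_alias({"a"}, {"a"}))   # True
# `self: Tensor(a)` vs `other: Tensor(b)`: disjoint sets, no aliasing.
print(may_alias({"a"}, {"b"}))   # False
# A wildcard annotation `Tensor(*)` may alias any annotated argument.
print(may_alias({"*"}, {"b"}))   # True
```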
Pull Request resolved: https://github.com/pytorch/pytorch/pull/80918
Approved by: https://github.com/davidberard98
2022-07-12 18:07:23 +00:00
52a538868b Back out "Revert D37228314: [Profiler] Include ActivityType from Kineto" (#81122)
Reland

Differential Revision: [D37720837](https://our.internmc.facebook.com/intern/diff/D37720837/)

**NOTE FOR REVIEWERS**: This PR has internal Facebook specific changes or comments, please review them on [Phabricator](https://our.internmc.facebook.com/intern/diff/D37720837/)!
Pull Request resolved: https://github.com/pytorch/pytorch/pull/81122
Approved by: https://github.com/chaekit
2022-07-12 14:54:01 +00:00
e608befae4 Revert "[c10] move fexceptions to compiler_flags (#80387)"
This reverts commit 3e1ac21c3bcbc0e27dcf058900e9572d3c135a20.

Reverted https://github.com/pytorch/pytorch/pull/80387 on behalf of https://github.com/facebook-github-bot due to Diff reverted internally
2022-07-12 14:50:55 +00:00
3e1ac21c3b [c10] move fexceptions to compiler_flags (#80387)
Summary: Move `-fexceptions` out of the exported preprocessor flags and in to the libraries compiler flags. Apply the same changes to all rdeps of this library in the caffe2 subtree.

Test Plan:
Verify that no rdeps with cpp sources are missing `-fexceptions`:
```
% buck uquery 'kind(cxx*, rdeps(//xplat/caffe2/..., //xplat/caffe2/c10:c10, 1))' > /tmp/rdeps
% buck uquery '%Ss - attrfilter(preprocessor_flags, "-fexceptions", %Ss) - attrfilter(compiler_flags, "-fexceptions", %Ss)' @/tmp/rdeps
//xplat/pytorch_models/build/pytorch_dev_mobilenetv3/v1/nnc:asm
//xplat/pytorch_models/build/aot_test_model/v1/nnc:asm
//xplat/pytorch_models/build/pytorch_dev_linear/v1/nnc:asm
//xplat/pytorch_models/build/bi_bytedoc_nnc/v1/nnc:asm
//xplat/pytorch_models/build/bi_bytedoc_nnc/v2/nnc:asm
```

Differential Revision: D37386802

Pull Request resolved: https://github.com/pytorch/pytorch/pull/80387
Approved by: https://github.com/linbinyu
2022-07-12 14:49:16 +00:00
a965a67492 Revert "[Profiler] Include ActivityType from Kineto (#80750)"
This reverts commit 2f6f7391efd109f1ea12bbebdda58aa9169f4e9c.

Reverted https://github.com/pytorch/pytorch/pull/80750 on behalf of https://github.com/facebook-github-bot due to Diff reverted internally
2022-07-08 05:16:56 +00:00
2f6f7391ef [Profiler] Include ActivityType from Kineto (#80750)
We don't want to compile with Kineto on all platforms, but if we're going to have significant integration between the profiler and Kineto, the profiler will need to be able to rely on simple API constructs like the Kineto enums.

Differential Revision: [D37228314](https://our.internmc.facebook.com/intern/diff/D37228314/)

**NOTE FOR REVIEWERS**: This PR has internal Facebook specific changes or comments, please review them on [Phabricator](https://our.internmc.facebook.com/intern/diff/D37228314/)!
Pull Request resolved: https://github.com/pytorch/pytorch/pull/80750
Approved by: https://github.com/aaronenyeshi
2022-07-08 04:59:06 +00:00
fe73528a90 minor fix for shared build (#80739)
Summary: Fixed some issues for the shared buck build

Test Plan: CI

Reviewed By: JacobSzwejbka, larryliu0820

Differential Revision: D37564552

Pull Request resolved: https://github.com/pytorch/pytorch/pull/80739
Approved by: https://github.com/osalpekar, https://github.com/malfet
2022-07-01 02:41:04 +00:00
b62d39eda0 Consolidate all python targets in the tools folder (#80408)
Summary:
All buck targets that point to the caffe2/tools folder are now moved to tools/BUCK.
This also eliminates all python library/binary imports in pt_defs.bzl, which caused T124308913.

Test Plan: CI

Differential Revision: D37468313

Pull Request resolved: https://github.com/pytorch/pytorch/pull/80408
Approved by: https://github.com/seemethere, https://github.com/malfet
2022-06-29 23:27:47 +00:00
edf76cd9c2 Move qnnpack to shared BUCK build (#80260)
Differential Revision: D37434340

Pull Request resolved: https://github.com/pytorch/pytorch/pull/80260
Approved by: https://github.com/larryliu0820, https://github.com/malfet
2022-06-29 22:40:37 +00:00
8a45ef23f5 [5] move XNNPACK to shared BUCK build (#80209)
Summary: Reland D36529332 (b8b46f932b) with shared buck build structure.

Test Plan:
buck build //xplat/third-party/XNNPACK:XNNPACK
Sandcastle

Differential Revision: D37407609

Pull Request resolved: https://github.com/pytorch/pytorch/pull/80209
Approved by: https://github.com/kimishpatel
2022-06-28 02:25:07 +00:00
e98e7fe428 [4] move pt_operator_library to shared BUCK file (#80170)
Summary:
Move pt_operator_library to pt_ops.bzl and make it shared with the OSS BUCK build.

This will replace D36912042. I will update all load statements in future diffs.

Test Plan: Sandcastle, OSS CI

Differential Revision: D37390060

Pull Request resolved: https://github.com/pytorch/pytorch/pull/80170
Approved by: https://github.com/JacobSzwejbka
2022-06-24 21:51:20 +00:00
736bbe1ec7 [3] move aten targets to shared buck file (#79966)
Summary: Move ATen-related targets to the shared buck file.

Test Plan: Github CI, sandcastle

Differential Revision: D37323401

Pull Request resolved: https://github.com/pytorch/pytorch/pull/79966
Approved by: https://github.com/iseeyuan
2022-06-23 15:16:44 +00:00
c9cbdb411d [2] move more pytorch buck targets to shared build (#79330)
Summary: as title

Test Plan: sandcastle and oss CI

Differential Revision: D37087229

Pull Request resolved: https://github.com/pytorch/pytorch/pull/79330
Approved by: https://github.com/dhruvbird
2022-06-22 03:14:23 +00:00
430955b3a8 [Test] create shared targets for xplat aten (#78345)
Differential Revision: D36694963

Pull Request resolved: https://github.com/pytorch/pytorch/pull/78345
Approved by: https://github.com/kit1980
2022-06-08 22:20:03 +00:00