Summary: Add an option to disable TORCH_WARN; some ops can trigger spammy TORCH_WARN logs, which is not desired in certain scenarios.
Test Plan:
Tested with
`-pt.disable_warn = 1` and `-pt.disable_warn = 0`;
verified that TORCH_WARN and TORCH_WARN_ONCE are properly handled.
Tested with
`-pt.strip_error_messages = 1`, `-pt.disable_warn = 0`;
verified that error message stripping is respected when a warning is printed.
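For reference, a minimal sketch of how such a runtime kill switch can gate the macro; the names below (`warn_disabled`, `set_warn_disabled`, `MY_TORCH_WARN`) are illustrative assumptions, not the actual PyTorch internals:
```
// Hedged sketch: one way a runtime switch can suppress TORCH_WARN-style logs.
// Names here are illustrative assumptions, not the real PyTorch API.
#include <atomic>
#include <iostream>

namespace c10 {
std::atomic<bool> g_warn_disabled{false}; // would be set from -pt.disable_warn

void set_warn_disabled(bool disabled) { g_warn_disabled = disabled; }
bool warn_disabled() { return g_warn_disabled; }
} // namespace c10

// A TORCH_WARN-like macro that consults the switch before logging.
#define MY_TORCH_WARN(msg)                       \
  do {                                           \
    if (!c10::warn_disabled()) {                 \
      std::cerr << "Warning: " << (msg) << "\n"; \
    }                                            \
  } while (0)

int main() {
  MY_TORCH_WARN("printed");     // flag off: warning is emitted
  c10::set_warn_disabled(true); // -pt.disable_warn = 1
  MY_TORCH_WARN("suppressed");  // flag on: warning is dropped
}
```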
Differential Revision: D40321550
Pull Request resolved: https://github.com/pytorch/pytorch/pull/87188
Approved by: https://github.com/kurtamohler, https://github.com/ezyang
Summary: Duplicating the fbcode target `fbcode//caffe2:torch-cpp-cpu` in xplat. In D40460749 our user wants to use the `torch::kNearest` enum, which is defined in `torch/csrc/api/src/enum.cpp`. Adding this target to support it.
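For context, a minimal libtorch snippet of the kind of usage this unblocks; `F::interpolate` is just one API that consumes `torch::kNearest`:
```
#include <torch/torch.h>
#include <iostream>

int main() {
  // torch::kNearest is the enum defined in torch/csrc/api/src/enum.cpp;
  // nn::functional::interpolate is one consumer of it.
  namespace F = torch::nn::functional;
  auto input = torch::randn({1, 3, 8, 8});
  auto out = F::interpolate(
      input,
      F::InterpolateFuncOptions()
          .size(std::vector<int64_t>{16, 16})
          .mode(torch::kNearest));
  std::cout << out.sizes() << "\n"; // [1, 3, 16, 16]
}
```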
Test Plan: Rely on CI
Differential Revision: D40532087
Pull Request resolved: https://github.com/pytorch/pytorch/pull/87327
Approved by: https://github.com/ezyang
Summary:
Reland after fixing a Windows build failure for OVR.
Notable change:
```
#if defined(FBCODE_CAFFE2) or defined(FB_XPLAT_BUILD)
```
changed to
```
#if defined(FBCODE_CAFFE2) || defined(FB_XPLAT_BUILD)
```
Apparently `-DFB_XPLAT_BUILD` wasn't getting picked up on Windows when `or` was used to connect the conditions.
Original commit changeset: 7a31fc4b455f
Original Phabricator Diff: D40198461
Test Plan: waitforsandcastle
Reviewed By: davidberard98, cccclai
Differential Revision: D40290932
Pull Request resolved: https://github.com/pytorch/pytorch/pull/87124
Approved by: https://github.com/gmagogsfm
Backport currently doesn't work with some models if:
* the model was originally exported with interface calls enabled (backport would disable them)
* the model is a flatbuffer (flatbuffer support is soft-enabled via a link-time registry), so we manually trigger it; see the sketch below
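For reference, a sketch of the backport entry point involved; the header path and signature are assumptions based on `torch/csrc/jit/mobile/compatibility/backport.h` and may not match this PR exactly:
```
#include <fstream>
// Assumed declaration; treat the path and signature as a best-effort sketch.
#include <torch/csrc/jit/mobile/compatibility/backport.h>

int main() {
  std::ifstream in("model_flatbuffer.ptl", std::ios::binary);
  std::ofstream out("model_backported.ptl", std::ios::binary);
  // Backport to an older bytecode version; with flatbuffer inputs the
  // flatbuffer loader must be linked in (the soft registration noted above).
  bool ok = torch::jit::_backport_for_mobile(in, out, /*to_version=*/5);
  return ok ? 0 : 1;
}
```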
Fixes #ISSUE_NUMBER
Pull Request resolved: https://github.com/pytorch/pytorch/pull/86510
Approved by: https://github.com/cccclai
Currently, the source list `torch_mobile_tracer_sources` in `build_variables.bzl` is used only for the OSS build. This caused a regression in OSS builds when `TRACING_BASED=1` was used to build the OSS model tracer binary. To prevent this from happening again, it makes sense to re-use this list for internal BUCK builds as well. This change does that.
#accept2ship
Differential Revision: [D39392010](https://our.internmc.facebook.com/intern/diff/D39392010/)
**NOTE FOR REVIEWERS**: This PR has internal Facebook specific changes or comments, please review them on [Phabricator](https://our.internmc.facebook.com/intern/diff/D39392010/)!
Pull Request resolved: https://github.com/pytorch/pytorch/pull/84770
Approved by: https://github.com/cccclai
Summary:
This diff adds a device-side API which converts the model to its
quantized equivalent. The input model must have been prepared AOT for
quantization.
The API is implemented by:
- Running the reset-observers method
- Running the observe method
- Running the quantize method
- Replacing the method, e.g. `forward`, with its quantized equivalent
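A hedged sketch of this flow from the caller's side; the method names and argument lists are assumptions derived from the step list above, not the verified generated API:
```
#include <torch/script.h>

// Illustrative only: drives an AOT-prepared model through the steps above.
// Method names/arguments are assumptions based on this summary.
void quantize_on_device(torch::jit::Module& m, const torch::Tensor& calib) {
  m.run_method("reset_observers_forward", calib); // reset observer state
  m.run_method("observe_forward", calib);         // record activation statistics
  m.run_method("quantize_forward", calib);        // compute qparams, quantize weights
  auto out = m.forward({calib});                  // forward now runs the quantized path
}
```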
Test Plan:
test/quantization/jit/test_ondevice_quantization.py
Differential Revision: [D38889818](https://our.internmc.facebook.com/intern/diff/D38889818)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/83807
Approved by: https://github.com/iseeyuan
Summary:
Enabling the `AT_POCKETFFT_ENABLED` flag and adding the appropriate dependencies to `aten-cpu`.
Moved the MKL files from
`aten_cpu_source_non_codegen_list` to
`aten_native_source_non_codegen_list`.
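As a quick sanity check of what the flag enables, any spectral op should now work in mobile builds; a minimal example using the standard libtorch API (nothing here is specific to this diff):
```
#include <torch/torch.h>
#include <iostream>

int main() {
  // Without an FFT backend (MKL on server, pocketfft here) spectral ops
  // raise an error; with AT_POCKETFFT_ENABLED this succeeds.
  auto x = torch::randn({8});
  auto X = torch::fft::fft(x);
  std::cout << X << "\n";
}
```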
Test Plan:
Built and tested binaries for both Android and iOS targets.
### iOS
`fbcode/aibench/specifications/frameworks/pytorch/ios/build.sh`
Submitted benchmarks with the new binaries supporting pocketfft here:
https://www.internalfb.com/intern/aibench/details/245253003946591
### Android
`fbcode/aibench/specifications/frameworks/pytorch/android/arm64/build.sh`
Submitted benchmarks with the new binaries supporting pocketfft here:
https://www.internalfb.com/intern/aibench/details/406253690682941
### Build Size Impact
Success: igios-pika on D37790257-V7
[pocketfft] turning on pocketfft flag
Diff: https://fburl.com/diff/exkploof
Unigraph Explorer: https://fburl.com/mbex/aipdzaqo
Changes for variation [arm64 + 3x assets]:
```
Compressed : -473 B (-0.00%) => 86.69 MiB
Uncompressed: +2.4 KiB (+0.00%) => 187.71 MiB
```
Reviewed By: kimishpatel
Differential Revision: D37790257
Pull Request resolved: https://github.com/pytorch/pytorch/pull/81670
Approved by: https://github.com/kit1980
Summary: We lost the `SYMBOLICATE_MOBILE_DEBUG_HANDLE` flag in some Buck file refactoring; bringing it back to fetch module-level information in profiling.
Test Plan: Profiling output has module-level information
Reviewed By: kimishpatel
Differential Revision: D37970958
Pull Request resolved: https://github.com/pytorch/pytorch/pull/81727
Approved by: https://github.com/linbinyu
Summary:
Add missing `-fexceptions` flags that are currently being passed through `exported_preprocessor_flags`. The exported preprocessor flags will be removed in a subsequent diff.
This is a rediff of D37386802 (3e1ac21c3b) with the changes split out to avoid reverts.
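For context on why `-fexceptions` must reach the compiler for every translation unit (rather than ride along as an exported preprocessor flag), a trivial repro assuming clang or gcc:
```
// exceptions.cpp -- compiles with -fexceptions, fails with -fno-exceptions:
//   clang++ -fno-exceptions -c exceptions.cpp  -> error: cannot use 'throw'
#include <stdexcept>

int parse_or_throw(int v) {
  if (v < 0) {
    throw std::invalid_argument("negative"); // requires exception support
  }
  return v;
}
```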
Test Plan:
Check flag is present:
```
$ buck uquery xplat/caffe2:common_core -a 'compiler_flags'
{
  "//xplat/caffe2:common_core" : {
    "compiler_flags" : [
      "-fexceptions",
      "-frtti",
      "-Os",
      "-Wno-unknown-pragmas",
      "-Wno-write-strings",
      "-Wno-unused-variable",
      "-Wno-unused-function",
      "-Wno-deprecated-declarations",
      "-Wno-shadow",
      "-Wno-global-constructors",
      "-Wno-missing-prototypes",
      "-std=gnu++17",
      "/EHsc",
      "/GR",
      "/O1",
      "/wd4101"
    ]
  }
}
```
Differential Revision: D37813869
Pull Request resolved: https://github.com/pytorch/pytorch/pull/81394
Approved by: https://github.com/linbinyu
- Created a `may_alias` method in `FunctionSchema` to expose aliasing information about the inputs and outputs of a schema (see the sketch below).
- Tested the `may_alias` method for basic functionality, exceptions, and wildcard handling.
**Cases where elements of a container alias another argument will be handled by a new `may_contain_alias` method in a later PR.**
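A hedged sketch of the new query; the `SchemaArgument`/`SchemaArgType` spelling below is an assumption based on this description, so check the PR for the exact types:
```
#include <torch/csrc/jit/frontend/function_schema_parser.h>
#include <iostream>

int main() {
  // In-place add: `self` is annotated (a!) and aliases the output.
  auto schema = torch::jit::parseSchema(
      "aten::add_.Tensor(Tensor(a!) self, Tensor other, *, Scalar alpha=1) -> Tensor(a!)");
  // Assumed call shape: may_alias(argument, argument) -> bool.
  bool aliased = schema.may_alias(
      {c10::SchemaArgType::input, 0}, {c10::SchemaArgType::output, 0});
  std::cout << std::boolalpha << aliased << "\n"; // expected: true
}
```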
Pull Request resolved: https://github.com/pytorch/pytorch/pull/80918
Approved by: https://github.com/davidberard98
Summary: Move `-fexceptions` out of the exported preprocessor flags and into the library's compiler flags. Apply the same changes to all rdeps of this library in the caffe2 subtree.
Test Plan:
Verify that no rdeps with cpp sources are missing `-fexceptions`:
```
% buck uquery 'kind(cxx*, rdeps(//xplat/caffe2/..., //xplat/caffe2/c10:c10, 1))' > /tmp/rdeps
% buck uquery '%Ss - attrfilter(preprocessor_flags, "-fexceptions", %Ss) - attrfilter(compiler_flags, "-fexceptions", %Ss)' @/tmp/rdeps
//xplat/pytorch_models/build/pytorch_dev_mobilenetv3/v1/nnc:asm
//xplat/pytorch_models/build/aot_test_model/v1/nnc:asm
//xplat/pytorch_models/build/pytorch_dev_linear/v1/nnc:asm
//xplat/pytorch_models/build/bi_bytedoc_nnc/v1/nnc:asm
//xplat/pytorch_models/build/bi_bytedoc_nnc/v2/nnc:asm
```
Differential Revision: D37386802
Pull Request resolved: https://github.com/pytorch/pytorch/pull/80387
Approved by: https://github.com/linbinyu
Summary:
Move `pt_operator_library` to `pt_ops.bzl` and share it with the OSS BUCK build.
This replaces D36912042. I will update all load statements in future diffs.
Test Plan: sandcastle, OSS CI
Differential Revision: D37390060
Pull Request resolved: https://github.com/pytorch/pytorch/pull/80170
Approved by: https://github.com/JacobSzwejbka