Summary:
In SelectiveBuildOperator, we can specify the argument `include_all_overloads`. If it is True, all overloaded operators sharing a base name (for example, `aten::to.dtype_layout` and `aten::to.prim_Device` are both overloads of `aten::to`) are built and linked into the final binary. This can significantly increase the final binary size, which can be a deal breaker for on-device deployment.
In this diff, we make backward-compatible changes that add the new arguments `--not-include-all-overloads-static-root-ops` and `--not-include-all-overloads-closure-ops`. When they are set, we set the `include_all_overloads` flag to False for static root ops and closure ops, and rely on the code analyzer to decide which overloaded operators are actually used.
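The selection logic can be illustrated with a minimal sketch (the function and its signature are hypothetical, not the actual `gen_operators_yaml` implementation): with `include_all_overloads=True`, a traced op pulls in every overload of its base name; with False, only the exact overloads observed by the code analyzer are kept.

```python
def select_operators(traced_ops, all_ops, include_all_overloads):
    """Return the set of operators to build and link into the binary.

    Illustrative only: operator names follow the "namespace::base.overload"
    convention, e.g. "aten::to.dtype_layout" has base name "aten::to".
    """
    if not include_all_overloads:
        # Keep only the exact overloads the analyzer saw being used.
        return set(traced_ops)
    # Expand each traced op to every overload of its base name, e.g.
    # "aten::to.dtype_layout" also pulls in "aten::to.prim_Device".
    base_names = {op.split(".")[0] for op in traced_ops}
    return {op for op in all_ops if op.split(".")[0] in base_names}
```

With the new flags, static root ops and closure ops take the first branch, shrinking the linked op set to what is actually used.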
Test Plan:
- unit test
```
buck test //xplat/caffe2/tools:gen_operators_yaml_test
```
- See test plan in D48771544 where we reduce the shared lib file `libmrengine.lib` from 16653072 bytes to 13686032 bytes.
- See detailed document: https://fburl.com/gdoc/mc93h6kb
Reviewed By: larryliu0820
Differential Revision: D48772302
Pull Request resolved: https://github.com/pytorch/pytorch/pull/108396
Approved by: https://github.com/larryliu0820
Summary: Removes the dependency on the unified YAML file
Test Plan:
Smoke test via some caffe2 tests.
```
buck2 run xplat/caffe2:supported_mobile_models_test
```
Build a major FoA app that uses model tracing and confirm it still works.
```
buck2 build fb4a
```
CI/CD for the rest. If operator tracing / bundling were broken, I'd hope the 1000+ tests spawned by this change would catch it.
Differential Revision: D44946368
Pull Request resolved: https://github.com/pytorch/pytorch/pull/99122
Approved by: https://github.com/dhruvbird
Summary:
See [this post](https://fb.workplace.com/groups/devinfra.capacity.eng/permalink/1200060064273920/) for context and specifically [this solution](https://fb.workplace.com/groups/devinfra.capacity.eng/posts/1200060064273920/?comment_id=1200166060929987&reply_comment_id=1200177124262214) which this diff implements.
The gist is that updating a `bzl` file is *very* expensive for diff-time testing and triggers many flaky tests when attempting to land a model update from EdgeML. The purpose of these bzl files (from what I can tell) is to unit test models via a CXX resources map. Since they are only used for CXX resource generation, the same thing can be accomplished by generating an `fb_xplat_cxx_library` BUCK target instead. This required shuffling around some existing BUCK files due to buck rules around file ownership.
Since the EdgeML process already generates code to begin with, this is straightforward: change the codegen to emit a BUCK file instead of bzl files, point the existing targets at it, and delete the old bzl files.
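The codegen change amounts to emitting a BUCK target string instead of a bzl resource map. A minimal sketch (the function name and file names are hypothetical; the real generator lives in the EdgeML tooling):

```python
def emit_buck_target(name, model_files):
    """Render an fb_xplat_cxx_library BUCK target that exposes the
    given model files as resources, replacing the old bzl resource map.
    """
    resources = "\n".join(f'        "{f}",' for f in sorted(model_files))
    return (
        "fb_xplat_cxx_library(\n"
        f'    name = "{name}",\n'
        "    resources = [\n"
        f"{resources}\n"
        "    ],\n"
        ")\n"
    )
```

Existing targets then depend on the generated library instead of loading the bzl file, so edits to model configs no longer touch any `bzl` file.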
Test Plan:
Run the model gen script.
```
buck2 run mode/opt caffe2/torch/fb/mobile/cli:cli -- --concat_all_model_configs
```
Sanity test the new BUCK target.
```
buck2 build xplat/pytorch_models/build:test_resources
```
Run the model unit tests and confirm they still work.
```
buck2 run xplat/caffe2:for_each_prod_ptl_model_test
```
CI/CD for the rest.
I expect some flaky tests, given that the `bzl` file deletion triggers a ton of unrelated tests.
Differential Revision: D44699671
Pull Request resolved: https://github.com/pytorch/pytorch/pull/98450
Approved by: https://github.com/JacobSzwejbka
Summary: We use a static op library in a test for PyTorch C++ usage, but don't want to pull in all base ops: the goal is to check whether a given model can run on an exact op collection (e.g., the fbios ops or fb4a ops), and these base ops are not present in the real apps. So this diff adds an option to disable that feature.
Test Plan: Build. Expect no change to existing targets.
Differential Revision: D39164021
Pull Request resolved: https://github.com/pytorch/pytorch/pull/84360
Approved by: https://github.com/kit1980
Summary:
Move pt_operator_library to pt_ops.bzl and make it shared with OSS BUCK build
This will replace D36912042. I will update all load statements in future diffs.
Test Plan: Sandcastle, OSS CI
Differential Revision: D37390060
Pull Request resolved: https://github.com/pytorch/pytorch/pull/80170
Approved by: https://github.com/JacobSzwejbka