Did some easy fixes from enabling TRY200. Most of these seem like oversights rather than intentional choices. The proper way to silence intentional cases is with `from None`, which signals that you considered whether the exception should carry its cause and decided against it.
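For illustration, a minimal sketch of the two patterns TRY200 distinguishes (the functions here are hypothetical, not from the PR):

```python
import json

def parse_config(raw: str) -> dict:
    try:
        return json.loads(raw)
    except json.JSONDecodeError as e:
        # Keep the original exception as the cause (what TRY200 wants).
        raise ValueError(f"invalid config: {raw!r}") from e

def parse_flag(raw: str) -> bool:
    try:
        return bool(int(raw))
    except ValueError:
        # Intentional suppression: `from None` documents that the cause
        # was considered and deliberately dropped.
        raise ValueError(f"flag must be an integer, got {raw!r}") from None
```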
Pull Request resolved: https://github.com/pytorch/pytorch/pull/111496
Approved by: https://github.com/malfet
This PR re-lands
- [Typing] Fix PEP 484 Violation (#105022)
- Update mypy to 1.4.1 (#91983)
which were reverted due to a conflict with the internal source repo.
Mostly fixes for PEP 484 violations (i.e. when a default arg is set to None but the type is not annotated as Optional).
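A minimal illustration of the violation being fixed (the function is hypothetical):

```python
from typing import Optional

# Rejected by recent mypy: a None default does not make the
# parameter implicitly Optional.
#   def lookup(key: str, default: str = None): ...

# PEP 484-compliant version with the Optional spelled out:
def lookup(key: str, default: Optional[str] = None) -> Optional[str]:
    return default
```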
Plus a few real fixes:
- Add missing `_get_upgraders_entry_map` to `torch/_C/__init__.pyi`
- Add missing return statement to `torch._export.deserialize_graph`
- Fix error message in `torch.ao.ns.fx.weight_utils.get_lstm_mod_weights`
- Add assert in `torch/optim/optimizer.py` that the Optional list is not None
TODO (in a follow-up PR):
- Fix erroneous `isinstance` check in `torch/ao/quantization/_pt2e/qat_utils.py`
Unrelated, to bypass CI failures caused by the gcc9 dependency update in Ubuntu-18.04:
- Add a hack to `.ci/docker/install_conda.sh` that squashes the older libstdc++ from the conda environment in favor of the one from the OS
- Update bazel CUDA builds to Focal, as with libstdc++-6.0.32 bazel builds lose the ability to catch exceptions (probably because they link with cupti statically, but I could not find where that is done)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/105227
Approved by: https://github.com/atalman, https://github.com/albanD, https://github.com/Skylion007
Prior to this PR, we compared torch args/kwargs with the OnnxFunction OpSchema without normalizing the args/kwargs first. Essentially, the function signatures differ between ATen and OnnxFunction, and onnx-script preprocesses these args/kwargs with an internal tool, `param_manipulation`, for both eager mode and graph mode. This PR uses that internal tool to normalize the torch args/kwargs before feeding them to the OnnxFunction during op_level_debug and dispatching. This significantly reduces how often dispatching has to fall back to the nearest-match mechanism.
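The general idea of the normalization, sketched here with `inspect.signature` rather than onnx-script's actual `param_manipulation` tool (which this sketch does not reproduce):

```python
import inspect

def normalize_args_kwargs(fn, args, kwargs):
    # Bind the caller's positional and keyword arguments against the
    # target signature so both sides compare the same canonical form.
    bound = inspect.signature(fn).bind(*args, **kwargs)
    bound.apply_defaults()  # fill in unspecified defaults
    return bound.arguments  # mapping of parameter name -> value

def add(x, y, alpha=1):
    return x + alpha * y

# Positional and keyword call sites normalize to the same mapping:
assert normalize_args_kwargs(add, (1, 2), {}) == \
       normalize_args_kwargs(add, (1,), {"y": 2, "alpha": 1})
```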
Pull Request resolved: https://github.com/pytorch/pytorch/pull/104679
Approved by: https://github.com/BowenBao
Prior to this PR, to support the onnxscript function proto in the TorchScript exporter, every registered custom symbolic function was forced through the `.function_proto` API as if it were an onnxscript function. This PR makes sure the custom function actually is an onnxscript function before using that API; to avoid a hard dependency on onnxscript, `hasattr` is used instead of an `isinstance` check.
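A rough sketch of that duck-typed guard, taking the `.function_proto` attribute named above at face value:

```python
def maybe_get_function_proto(symbolic_fn):
    # Only objects that actually expose `.function_proto` (i.e.
    # onnxscript functions) go through the proto path; plain Python
    # symbolic functions fall through untouched.
    if hasattr(symbolic_fn, "function_proto"):
        return symbolic_fn.function_proto
    return None  # not an onnxscript function; no proto to register
```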
Pull Request resolved: https://github.com/pytorch/pytorch/pull/104785
Approved by: https://github.com/BowenBao
Fixes #97728
Fixes #98622
Fixes https://github.com/microsoft/onnx-script/issues/393
Provide op_level_debug in the exporter, which creates randomized `torch.Tensor` inputs, based on the real shapes from FakeTensorProp, for both torch ops and ONNX symbolic functions. The PR leverages the Transformer class to create a new `fx.Graph`, but shares the same Module with the original one to save memory.
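Conceptually, something like the following (the helper name and value ranges are mine, not the PR's):

```python
import torch

def materialize_random(shape, dtype):
    # Materialize a random real tensor matching the shape/dtype that
    # FakeTensorProp recorded, so the torch op and the ONNX symbolic
    # function can both be executed and their outputs compared.
    if dtype.is_floating_point:
        return torch.rand(shape, dtype=dtype)
    if dtype == torch.bool:
        return torch.rand(shape) < 0.5
    return torch.randint(0, 8, shape, dtype=dtype)  # arbitrary int range

x = materialize_random((2, 3), torch.float32)
```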
The test is different from [op_correctness_test.py](https://github.com/microsoft/onnx-script/blob/main/onnxscript/tests/function_libs/torch_aten/ops_correctness_test.py) in that op_level_debug generates real tensors based on the fake tensors in the model.
Limitations:
1. Some trace_only functions are not supported due to the lack of a param_schema, which leads to wrongly split args/kwargs and ndarray wrapping. (WARNINGS in SARIF)
2. Ops with dim/indices (INT64) are not supported, as they need shape information from other input args. (WARNINGS in SARIF)
3. sym_size and built-in ops are not supported.
4. op_level_debug only labels results in SARIF. It doesn't stop the exporter.
5. Introduce an ONNX-owned FakeTensorProp that supports int/float/bool.
6. Parametrize op_level_debug and dynamic_shapes in the FX tests.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/97494
Approved by: https://github.com/justinchuby, https://github.com/BowenBao