Updated PyTorch ONNX exporter (markdown)

Justin Chu
2024-05-15 10:54:35 -07:00
parent fd96d91474
commit 1452446f99

@ -342,9 +342,13 @@ Set the environment variable `TORCH_LOGS="onnx_diagnostics"` to capture detailed
* Python code: [torch/onnx/](https://github.com/pytorch/pytorch/tree/onnx_ms_1/torch/onnx)
* C++ code: [torch/csrc/jit/passes/onnx/](https://github.com/pytorch/pytorch/tree/onnx_ms_1/torch/csrc/jit/passes/onnx)
# Features
## Decomposition and pre-dispatch
## Quantized model export
https://github.com/pytorch/pytorch/issues/116684
To support quantized model export, we need to unpack the quantized tensor inputs and the `PackedParams` weights (<https://github.com/pytorch/pytorch/pull/69232>). We wrap them with `TupleConstruct` to preserve a 1-to-1 input mapping,
so that we can use the `replaceAllUsesWith` API on their successors. In addition, we support exporting operators in the quantized namespace, so developers can conveniently add more symbolic functions for quantized operators within the current framework.