* Python code: [torch/onnx/](https://github.com/pytorch/pytorch/tree/onnx_ms_1/torch/onnx)
* C++ code: [torch/csrc/jit/passes/onnx/](https://github.com/pytorch/pytorch/tree/onnx_ms_1/torch/csrc/jit/passes/onnx)
# Features
## Decomposition and pre-dispatch
## Quantized model export
https://github.com/pytorch/pytorch/issues/116684
To support quantized model export, we unpack the quantized tensor inputs and the `PackedParam` weights (<https://github.com/pytorch/pytorch/pull/69232>). The unpacked values are wrapped with `TupleConstruct` to preserve a 1-to-1 input mapping, so that the `replaceAllUsesWith` API can be applied to their successors. In addition, we support export of the quantized namespace, so developers can conveniently add symbolic functions for more quantized operators within the current framework.
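As a minimal sketch of what "unpacking" a quantized tensor means (illustrative only, not the exporter's actual graph pass): a quantized tensor bundles an integer representation with its quantization parameters, and those components can be pulled out as plain values.

```python
import torch

# A quantized tensor carries scale and zero_point alongside its
# integer data; the exporter works with these unpacked components.
x = torch.tensor([0.5, 1.0, 1.5])
qx = torch.quantize_per_tensor(x, scale=0.1, zero_point=0, dtype=torch.quint8)

int_repr = qx.int_repr()        # raw uint8 values: [5, 10, 15]
scale = qx.q_scale()            # 0.1
zero_point = qx.q_zero_point()  # 0
print(int_repr, scale, zero_point)
```

In the exporter, these unpacked components become separate graph inputs, which is why grouping them with `TupleConstruct` is needed to keep the mapping back to the original single quantized input.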