pytorch/test/onnx/test_pytorch_common.py
BowenBao 57d1df071f [ONNX] Support inplace operations on inplace indexing (#52063) (#53306)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/53306

* [ONNX] Fix for sequence of mutations in blocks (#51577)

Fixes consecutive mutations of a tensor inside blocks, and adds support for append and pop inside blocks.
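
As a hypothetical illustration (this exact snippet is not taken from the PR's tests), the kind of model this enables exporting performs an inplace op on an inplace-indexed slice:

    import torch

    class InplaceIndexing(torch.nn.Module):
        def forward(self, x):
            x[1:] += 1  # inplace add on an indexed slice (lowers to aten mutation ops)
            return x

    # Scripting preserves the mutation in the TorchScript graph; the updated
    # passes convert it during ONNX export. example_outputs was required when
    # exporting ScriptModules in this era of the exporter.
    x = torch.zeros(3, 4)
    m = torch.jit.script(InplaceIndexing())
    torch.onnx.export(m, (x,), "inplace_indexing.onnx",
                      opset_version=11, example_outputs=m(x.clone()))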

* Support inplace operations + indexing

* Clean up old pass for remove mutations

* Add loop test

* Fixes for set attr in loops

* Removing the new jit API flag

* [ONNX] Redesign onnx pass to enable shape type dependent pattern conversion - cont (#51795)

With the introduction of ONNX shape inference, shape and type are inferred on the fly as operators are converted from ATen to ONNX while running the symbolic functions. This satisfies the shape/type requirement of the symbolic functions. The pre-onnx passes, however, cannot be supported by shape inference, since at that stage the operators in the graph are still ATen operators.

This PR updates the design of the ONNX pass to enable a mechanism that captures subgraphs of ATen operators matching certain patterns and converts them later, once shape/type information of upstream operators is available.

The new design requires pre-onnx passes that need shape/type information to be written in two parts: encapsulation and conversion.

    The encapsulation part finds the nodes matching a pattern, just as pre-onnx passes did previously. Instead of converting the nodes, however, it encapsulates them into the sub-block of a new placeholder node. This part is called before the onnx pass, so it runs before any symbolic function is called.

    The conversion part is called inside the onnx pass, where run_symbolic_func is called for each node in topological order. When it reaches the placeholder node, the conversion part is invoked to convert the nodes inside its sub-block based on the pattern; by that time, the shape/type of upstream operators is available. Once the conversion is complete, the placeholder node is removed, run_symbolic_func is called for the nodes that were inside its sub-block, and they are converted from ATen operators to ONNX operators, as the sketch below illustrates.
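
A minimal sketch of the two parts, using toy stand-ins for the JIT graph and node types (not the actual C++ pass API):

    # Toy node: a kind string plus an optional sub-block of nested nodes.
    class Node:
        def __init__(self, kind, subblock=None):
            self.kind = kind
            self.subblock = subblock or []

    def encapsulate(graph):
        # Part 1, called before the onnx pass: find nodes matching a pattern
        # and wrap them in a placeholder node instead of converting them now.
        out = []
        for node in graph:
            if node.kind.startswith("aten::index_put"):
                out.append(Node("onnx::Placeholder", subblock=[node]))
            else:
                out.append(node)
        return out

    def onnx_pass(graph, run_symbolic_func):
        # Part 2: visit nodes in topological order. On reaching a placeholder,
        # convert the nodes inside its sub-block (shape/type of upstream
        # operators has been inferred by now), then drop the placeholder.
        converted = []
        for node in graph:
            if node.kind == "onnx::Placeholder":
                converted.extend(run_symbolic_func(n) for n in node.subblock)
            else:
                converted.append(run_symbolic_func(node))
        return converted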

This PR includes several other fixes, listed below.
* ~~replace helper.cpp with onnx_utils.cpp for holding utility functions.~~
* fix EraseNumberTypes on the Bool type; the code predated the introduction of the Bool type.
* ~~enable onnx shape inference in export with parameter/initializer data.~~
* other code clean ups.
* fix insertion of identity nodes for loop opset 13 sequence output.

~~PR depends on #51603~~

* Fix after merge

* clang

* Fix clang

* Fix clang

* Fix warning message.

* Fixes for non-model param attributes

* Fix for caffe2

* Additional test

* clang

* Skip test for lower opsets

* fix clang-tidy

* Update init.cpp

* Update remove_inplace_ops_for_onnx.cpp

* Update remove_inplace_ops_for_onnx.cpp

* Update remove_inplace_ops_for_onnx.cpp

* Fix for clang formatting

Test Plan: Imported from OSS

Reviewed By: pbelevich, malfet

Differential Revision: D26922416

Pulled By: SplitInfinity

fbshipit-source-id: e7108620b39b6404c594910786c4d275fee59d84

Co-authored-by: Bowen Bao <bowbao@microsoft.com>
2021-03-12 02:49:11 -08:00


import functools
import os
import sys
import unittest

import torch
import torch.autograd.function as function

pytorch_test_dir = os.path.dirname(os.path.dirname(os.path.realpath(__file__)))
sys.path.insert(-1, pytorch_test_dir)

from torch.testing._internal.common_utils import *  # noqa: F401

torch.set_default_tensor_type('torch.FloatTensor')

BATCH_SIZE = 2
RNN_BATCH_SIZE = 7
RNN_SEQUENCE_LENGTH = 11
RNN_INPUT_SIZE = 5
RNN_HIDDEN_SIZE = 3

# Builds a skip decorator from a zero-argument condition and a reason message.
def _skipper(condition, reason):
    def decorator(f):
        @functools.wraps(f)
        def wrapper(*args, **kwargs):
            if condition():
                raise unittest.SkipTest(reason)
            return f(*args, **kwargs)
        return wrapper
    return decorator


skipIfNoCuda = _skipper(lambda: not torch.cuda.is_available(),
                        'CUDA is not available')

skipIfTravis = _skipper(lambda: os.getenv('TRAVIS'),
                        'Skip in Travis')

# Skips tests for all opset versions below min_opset_version.
# If exporting an op is only supported from a specific opset version onward,
# add this decorator so the test is skipped whenever the currently tested
# opset_version is smaller than min_opset_version.
def skipIfUnsupportedMinOpsetVersion(min_opset_version):
    def skip_dec(func):
        def wrapper(self):
            if self.opset_version < min_opset_version:
                raise unittest.SkipTest("Skip verify test for unsupported opset_version")
            return func(self)
        return wrapper
    return skip_dec

# Skips tests for all opset versions above max_opset_version.
def skipIfUnsupportedMaxOpsetVersion(max_opset_version):
    def skip_dec(func):
        def wrapper(self):
            if self.opset_version > max_opset_version:
                raise unittest.SkipTest("Skip verify test for unsupported opset_version")
            return func(self)
        return wrapper
    return skip_dec

# Enables tests for scripting, instead of only tracing the model.
def enableScriptTest():
    def script_dec(func):
        def wrapper(self):
            self.is_script_test_enabled = True
            return func(self)
        return wrapper
    return script_dec

# Disable tests for scripting.
def disableScriptTest():
    def script_dec(func):
        def wrapper(self):
            self.is_script_test_enabled = False
            return func(self)
        return wrapper
    return script_dec

# Skips tests for the opset versions listed in unsupported_opset_versions.
# If the caffe2 test cannot be run for a specific opset version, add this
# decorator (for example, when an op was modified but the change is not
# supported in caffe2).
def skipIfUnsupportedOpsetVersion(unsupported_opset_versions):
    def skip_dec(func):
        def wrapper(self):
            if self.opset_version in unsupported_opset_versions:
                raise unittest.SkipTest("Skip verify test for unsupported opset_version")
            return func(self)
        return wrapper
    return skip_dec

# Skips tests when the test's ONNX shape inference setting equals
# onnx_shape_inference.
def skipIfONNXShapeInference(onnx_shape_inference):
    def skip_dec(func):
        def wrapper(self):
            if self.onnx_shape_inference is onnx_shape_inference:
                raise unittest.SkipTest("Skip verify test for unsupported ONNX shape inference setting")
            return func(self)
        return wrapper
    return skip_dec

# Flattens an arbitrarily nested structure into a tuple of its tensors.
def flatten(x):
    return tuple(function._iter_filter(lambda o: isinstance(o, torch.Tensor))(x))
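
Hypothetical usage of these decorators (assuming a test class that defines opset_version and is_script_test_enabled, as the ONNX export test suites do):

    class TestONNXExample(unittest.TestCase):
        opset_version = 11
        is_script_test_enabled = False

        # Runs only when the tested opset_version is >= 13.
        @skipIfUnsupportedMinOpsetVersion(13)
        def test_sequence_output(self):
            ...

        # Skipped when the tested opset_version is 12.
        @skipIfUnsupportedOpsetVersion([12])
        def test_op_unsupported_in_caffe2(self):
            ...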