Updated Boxing and Unboxing in the PyTorch Operator Library (markdown)
@ -63,5 +63,5 @@ The following diagram shows a high level overview of how these pieces interconne
6. An unboxed operator implementation is an operator implementation written in terms of unboxed values like Tensor, int64_t, and std::string. This is usually a plain C++ function that is registered as an implementation for an operator. Custom operators are written this way. We also have codegen that registers the kernels for the operators in native_functions.yaml as unboxed operator implementations.
7. When the unboxed calling API wants to call an operator implementation that is written as a boxed kernel, it needs to box the arguments into a stack of IValues. We have a piece of metaprogramming that does this, and it works for almost all of our operators. There are a few corner cases missing, such as operators taking DimnameList arguments, and we’re planning to fix those. This boxing code needs to live in the 5->3 call arrow because the metaprogramming needs to know the actual parameter types. That means this code sits above the “op call time / op registration time” line and is part of “op call time”: at op call time, when calling an unboxed kernel, we have the necessary information to generate this code; at registration time, when registering a boxed kernel, we do not.
8. When the boxed calling API wants to call a kernel that is written as an unboxed operator implementation, it needs to unbox the arguments into actual values like Tensor, int64_t, or std::string. This is also solved by a piece of metaprogramming. As in point (7), this metaprogramming needs to know the actual parameter types, so it has to be generated in the unboxed world. That means it lives below the “op call time / op registration time” line and is part of “op registration time”: at op registration time, when registering an unboxed kernel, we have the necessary type information to generate this code; at op call time, when calling a boxed kernel, we do not. So what actually happens is that whenever an unboxed kernel is registered, we auto-generate a boxed wrapper kernel (3) for it and register that boxed kernel to be called when the boxed calling API wants to call this kernel. This metaprogramming currently works for everything the custom op API supports, but only for about 25% of the operators in native_functions.yaml. We’re planning to grow that to 100%.
9. This is code-generated unboxing logic for the operators that aren’t yet supported by the templated unboxing logic. It is generated by gen_unboxing_wrappers.py and written to generated_unboxing_wrappers.cpp. This box is going to be deleted soon.
10. Before [https://github.com/pytorch/pytorch/pull/36838](https://github.com/pytorch/pytorch/pull/36838), JIT had its own unboxing logic, separate from c10: a file called register_aten_ops.cpp contained code-generated logic for each individual operator, telling JIT how to unbox it. This no longer exists.