Updates alias pattern (and torch.absolute to use it) (#42586)

Summary:
This PR canonicalizes our (current) pattern for adding aliases to PyTorch. That pattern is:

- Copy the original function's native_functions.yaml entry, but replace the original function's name with the alias's name.
- Implement the corresponding functions and have them redispatch to the original function.
- Add docstrings to the new functions that reference the original function.
- Update the alias_map in torch/csrc/jit/passes/normalize_ops.cpp.
- Update the op_alias_mappings in torch/testing/_internal/jit_utils.py.
- Add a test validating that the alias's behavior matches the original function's (a sketch follows this list).
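
For the last step, a minimal sketch of such a test might look like the following (the test name and exact checks here are illustrative assumptions, not the test added by this PR):

```
# Sketch: verify the alias agrees with the original op across the
# function, method, and out= variants. Illustrative only.
import torch

def test_absolute_matches_abs():
    t = torch.randn(5, 5)
    assert torch.equal(torch.absolute(t), torch.abs(t))  # function variant
    assert torch.equal(t.absolute(), t.abs())            # method variant
    out = torch.empty_like(t)
    torch.absolute(t, out=out)                           # out= variant
    assert torch.equal(out, torch.abs(t))

test_absolute_matches_abs()
```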

An alternative pattern would be to use Python and C++ language features to alias ops directly. For example in Python:

```
torch.absolute = torch.abs
```
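
Because the assignment just rebinds a name to the existing function object, the alias and the original are literally the same callable, which is where the identity and zero-overhead properties listed below come from.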

Call the pattern in this PR the "native function" pattern and the alternative the "language pattern." There are pros and cons to both approaches:

**Pros of the "Language Pattern"**
- torch.absolute is torch.abs.
- no (or very little) overhead for calling the alias.
- no native_functions.yaml redundancy or possibility of "drift" between the original function's entries and the alias's.

**Cons of the "Language Pattern"**
- requires manually adding doc entries
- requires updating Python alias and C++ alias lists
- requires hand-writing alias methods on Tensor (technically this should require a C++ test to validate)
- no single list of all PyTorch ops -- have to check native_functions.yaml and one of the separate alias lists

**Pros of the "Native Function" pattern**

- alias declarations stay in native_functions.yaml
- doc entries are written as normal

**Cons of the "Native Function" pattern**

- aliases redispatch to the original functions
- torch.absolute is not torch.abs (requires a test to validate that behavior matches; see the example after this list)
- possibility of drift between original's and alias's native_functions.yaml entries
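
A quick illustration of the redispatch point (assuming a PyTorch build that includes this change): the alias returns the same results as the original, but the two callables are distinct objects.

```
import torch

t = torch.tensor([-1.0, 2.0, -3.0])
print(torch.equal(torch.absolute(t), torch.abs(t)))  # True: absolute redispatches to abs
print(torch.absolute is torch.abs)                   # False: they are distinct callables
```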

While either approach is reasonable, I suggest the "native function" pattern since it preserves "native_functions.yaml" as a source of truth and minimizes the number of alias lists that need to be maintained. In the future, entries in native_functions.yaml may support an "alias" argument and replace whatever pattern we choose now.

Ops that are likely to use aliasing are:

- div (divide, true_divide)
- mul (multiply)
- bucketize (digitize)
- cat (concatenate)
- clamp (clip)
- conj (conjugate)
- rad2deg (degrees)
- trunc (fix)
- neg (negative)
- deg2rad (radians)
- round (rint)
- acos (arccos)
- acosh (arccosh)
- asin (arcsin)
- asinh (arcsinh)
- atan (arctan)
- atan2 (arctan2)
- atanh (arctanh)
- bartlett_window (bartlett)
- hamming_window (hamming)
- hann_window (hanning)
- bitwise_not (invert)
- gt (greater)
- ge (greater_equal)
- lt (less)
- le (less_equal)
- ne (not_equal)
- ger (outer)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/42586

Reviewed By: ngimel

Differential Revision: D22991086

Pulled By: mruberry

fbshipit-source-id: d6ac96512d095b261ed2f304d7dddd38cf45e7b0
commit 73642d9425 (parent cb1ac94069)
Author: Mike Ruberry
Date: 2020-08-07 00:22:38 -07:00
Committed by: Facebook GitHub Bot

3 files changed, 29 insertions(+), 11 deletions(-)


@@ -151,6 +151,17 @@ Tensor abs(const Tensor& self) {
}
Tensor& abs_(Tensor& self) { return unary_op_impl_(self, at::abs_out); }
// Absolute, alias for abs
Tensor& absolute_out(Tensor& result, const Tensor& self) {
return at::abs_out(result, self);
}
Tensor absolute(const Tensor& self) {
return self.abs();
}
Tensor& absolute_(Tensor& self) {
return self.abs_();
}
Tensor& angle_out(Tensor& result, const Tensor& self) {
return unary_op_impl_with_complex_to_float_out(result, self, angle_stub);
}


@@ -220,24 +220,34 @@
variants: function, method
- func: abs_(Tensor(a!) self) -> Tensor(a!)
variants: function, method
variants: method
- func: abs.out(Tensor self, *, Tensor(a!) out) -> Tensor(a!)
# Note [Adding an alias]
# To add an alias do the following:
#
# 1) Copy the original function's native_functions.yaml entry, but replace the
#    original function's name with the alias's name.
# 2) Implement the corresponding functions and have them redispatch to the
# original function.
# 3) Add docstrings to the new functions that reference the original function.
# 4) Update the alias_map in torch/csrc/jit/passes/normalize_ops.cpp.
# 5) Update the op_alias_mappings in torch/testing/_internal/jit_utils.py.
# 6) Add a test validating the alias's behavior is the same as the original
# function's.
#
# See torch.absolute, an alias for torch.abs, as an example.
# Absolute, alias for abs
- func: absolute(Tensor self) -> Tensor
use_c10_dispatcher: full
variants: function, method
dispatch:
CPU, CUDA: abs
- func: absolute_(Tensor(a!) self) -> Tensor(a!)
variants: function, method
dispatch:
CPU, CUDA: abs_
variants: method
- func: absolute.out(Tensor self, *, Tensor(a!) out) -> Tensor(a!)
dispatch:
CPU, CUDA: abs_out
- func: angle(Tensor self) -> Tensor
use_c10_dispatcher: full


@@ -161,9 +161,6 @@
- name: abs(Tensor self) -> Tensor
self: grad * self.sign()
- name: absolute(Tensor self) -> Tensor
self: grad * self.sign()
- name: acos(Tensor self) -> Tensor
self: grad * -((-self * self + 1).rsqrt())