add hardsigmoid FP operator to PyTorch (#34545)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34545

This adds the hardsigmoid activation function for common operator coverage,
since it is widely used. A future PR will add the quantized version.
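For context, hardsigmoid is the piecewise-linear approximation of sigmoid,
defined as relu6(x + 3) / 6. A minimal pure-Python sketch of the scalar math
(the real operator works elementwise on tensors):

```python
def hardsigmoid(x: float) -> float:
    """Piecewise-linear approximation of sigmoid: relu6(x + 3) / 6.

    Clamps x + 3 to [0, 6], then rescales to [0, 1]. Saturates at 0
    for x <= -3 and at 1 for x >= 3; passes through 0.5 at x = 0.
    """
    return min(max(x + 3.0, 0.0), 6.0) / 6.0
```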

Some initial questions for reviewers, since this is my first FP operator
diff:
* do we need a backwards.out method for this?
* do we need CUDA support? If yes, should it be in this PR, or is it OK to split it out?

Test Plan:
```
// test
python test/test_torch.py TestTorchDeviceTypeCPU.test_hardsigmoid_cpu_float32

// benchmark
python -m pt.hardsigmoid_test
...
Forward Execution Time (us) : 40.315

Forward Execution Time (us) : 42.603
```

Imported from OSS

Differential Revision: D20371692

fbshipit-source-id: 95668400da9577fd1002ce3f76b9777c6f96c327
Author: Vasiliy Kuznetsov
Date: 2020-03-16 15:19:02 -07:00
Committed by: Facebook GitHub Bot
Parent: 6d8649dc53
Commit: 1bac5fd0d3
12 changed files with 230 additions and 6 deletions


@@ -4,7 +4,7 @@ Python implementation of __torch_function__
While most of the torch API and handling for __torch_function__ happens
at the C++ level, some of the torch API is written in Python so we need
python-level handling for __torch_function__ overrides as well. The main
developer-facing functionality in this file are handle_torch_function and
has_torch_function. See torch/functional.py and test/test_overrides.py
for usage examples.
@@ -126,6 +126,7 @@ def get_ignored_functions():
torch.nn.functional.has_torch_function,
torch.nn.functional.handle_torch_function,
torch.nn.functional.sigmoid,
torch.nn.functional.hardsigmoid,
torch.nn.functional.tanh,
)
@@ -134,7 +135,7 @@ def get_testing_overrides():
Returns
-------
A dictionary that maps overridable functions in the PyTorch API to
lambda functions that have the same signature as the real function
and unconditionally return -1. These lambda functions are useful
for testing API coverage for a type that defines __torch_function__.