Mirror of https://github.com/pytorch/pytorch.git, synced 2025-10-20 21:14:14 +08:00
Summary: In TorchScript and C++ extensions we currently advocate a mix of `torch::` and `at::` namespace usage. In the C++ frontend I had instead exported all symbols from `at::` and some from `c10::` into the `torch::` namespace. This is far, far easier for users to understand, and also avoids bugs around creating tensors vs. variables. The same should from now on be true for the TorchScript C++ API (for running and loading models) and for all C++ extensions. Note that since we're just talking about typedefs, this change does not break any existing code. Once this lands I will update stuff in `pytorch/tutorials` too.

zdevito ezyang gchanan

Pull Request resolved: https://github.com/pytorch/pytorch/pull/13523
Differential Revision: D12942787
Pulled By: goldsborough
fbshipit-source-id: 76058936bd8707b33d9e5bbc2d0705fc3d820763
20 lines
732 B
C++
#include <torch/extension.h>

// Declare the function from cuda_extension.cu. It will be compiled
// separately with nvcc and linked with the object file of cuda_extension.cpp
// into one shared library.
void sigmoid_add_cuda(const float* x, const float* y, float* output, int size);

torch::Tensor sigmoid_add(torch::Tensor x, torch::Tensor y) {
  AT_CHECK(x.type().is_cuda(), "x must be a CUDA tensor");
  AT_CHECK(y.type().is_cuda(), "y must be a CUDA tensor");
  auto output = torch::zeros_like(x);
  sigmoid_add_cuda(
      x.data<float>(), y.data<float>(), output.data<float>(), output.numel());
  return output;
}

PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) {
  m.def("sigmoid_add", &sigmoid_add, "sigmoid(x) + sigmoid(y)");
}