Use torch:: instead of at:: in all C++ APIs (#13523)

Summary:
In TorchScript and C++ extensions we currently advocate a mix of `torch::` and `at::` namespace usage. In the C++ frontend I had instead exported all symbols from `at::` and some from `c10::` into the `torch::` namespace. This is far, far easier for users to understand, and also avoids bugs around creating tensors vs. variables. From now on the same should be true for the TorchScript C++ API (for running and loading models) and for all C++ extensions.
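As a minimal sketch of the convention this change advocates (the header, function name, and binding below are illustrative, not part of this PR), a C++ extension would be written against `torch::` symbols only:

```cpp
#include <torch/extension.h>

// Illustrative op spelled entirely with torch:: rather than at::.
torch::Tensor scaled_add(torch::Tensor x, torch::Tensor y, double alpha) {
  // Factory functions such as torch::zeros_like also live in torch::
  // and create variables, so the result participates in autograd.
  torch::Tensor output = torch::zeros_like(x);
  output = x + y * alpha;
  return output;
}

PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) {
  m.def("scaled_add", &scaled_add, "x + y * alpha, written with torch:: only");
}
```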

Note that since we're just talking about typedefs, this change does not break any existing code.
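For instance (a hypothetical snippet, not taken from this diff), a function declared with the old spelling keeps compiling and interoperates with the new one:

```cpp
// torch::Tensor is a typedef for at::Tensor, so the old and new spellings
// name the same type and can be mixed without any conversion.
at::Tensor old_style(at::Tensor x) {
  torch::Tensor y = x;          // same type, no copy or cast involved
  return torch::zeros_like(y);  // factory functions are available under torch:: as well
}
```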

Once this lands I will update the corresponding code in `pytorch/tutorials` too.

zdevito ezyang gchanan
Pull Request resolved: https://github.com/pytorch/pytorch/pull/13523

Differential Revision: D12942787

Pulled By: goldsborough

fbshipit-source-id: 76058936bd8707b33d9e5bbc2d0705fc3d820763
Author: Peter Goldsborough
Date: 2018-11-06 14:28:20 -08:00
Committed by: Facebook GitHub Bot
Parent: be424de869
Commit: 393ad6582d

90 changed files with 158 additions and 164 deletions


@@ -5,10 +5,10 @@
 // into one shared library.
 void sigmoid_add_cuda(const float* x, const float* y, float* output, int size);
-at::Tensor sigmoid_add(at::Tensor x, at::Tensor y) {
+torch::Tensor sigmoid_add(torch::Tensor x, torch::Tensor y) {
   AT_CHECK(x.type().is_cuda(), "x must be a CUDA tensor");
   AT_CHECK(y.type().is_cuda(), "y must be a CUDA tensor");
-  auto output = at::zeros_like(x);
+  auto output = torch::zeros_like(x);
   sigmoid_add_cuda(
       x.data<float>(), y.data<float>(), output.data<float>(), output.numel());
   return output;