pytorch/torch/csrc/api/src/nn/modules/linear.cpp
Peter Goldsborough 393ad6582d Use torch:: instead of at:: in all C++ APIs (#13523)
Summary:
In TorchScript and C++ extensions we currently advocate a mix of `torch::` and `at::` namespace usage. In the C++ frontend I had instead exported all symbols from `at::` and some from `c10::` into the `torch::` namespace. This is far, far easier for users to understand, and it also avoids bugs around creating tensors vs. variables. From now on, the same should be true for the TorchScript C++ API (for running and loading models) and all C++ extensions.

Note that since we're just talking about typedefs, this change does not break any existing code.
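
For illustration, a minimal sketch (my own example, assuming the unified namespace described above) of why creating tensors through `torch::` rather than `at::` matters in the C++ frontend: the `torch::` factory functions return autograd-aware variables, so `requires_grad` and `backward()` behave as users expect.

```cpp
#include <torch/torch.h>
#include <iostream>

int main() {
  // Factory functions exposed under torch:: produce autograd-aware
  // tensors (variables), so gradients flow through them.
  auto x = torch::ones({2, 2}, torch::requires_grad());
  auto loss = (x * x).sum();
  loss.backward();
  std::cout << x.grad() << std::endl;  // gradient of sum(x*x) is 2 * x
  return 0;
}
```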

Once this lands I will update stuff in `pytorch/tutorials` too.

zdevito ezyang gchanan
Pull Request resolved: https://github.com/pytorch/pytorch/pull/13523

Differential Revision: D12942787

Pulled By: goldsborough

fbshipit-source-id: 76058936bd8707b33d9e5bbc2d0705fc3d820763
2018-11-06 14:32:25 -08:00


#include <torch/nn/modules/linear.h>

#include <torch/types.h>
#include <torch/utils.h>

#include <cmath>
#include <cstdint>

namespace torch {
namespace nn {
LinearOptions::LinearOptions(int64_t in, int64_t out) : in_(in), out_(out) {}

LinearImpl::LinearImpl(LinearOptions options) : options(std::move(options)) {
  reset();
}

void LinearImpl::reset() {
  weight =
      register_parameter("weight", torch::empty({options.out_, options.in_}));
  if (options.with_bias_) {
    bias = register_parameter("bias", torch::empty(options.out_));
  }

  const auto stdv = 1.0 / std::sqrt(weight.size(1));
  NoGradGuard no_grad;
  for (auto& p : parameters()) {
    p->uniform_(-stdv, stdv);
  }
}

Tensor LinearImpl::forward(Tensor input) {
  AT_ASSERT(!options.with_bias_ || bias.defined());
  return torch::linear(input, weight, bias);
}
} // namespace nn
} // namespace torch
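
As a usage note (a minimal sketch of my own, not part of the file above): this module is typically used through the `torch::nn::Linear` holder, which constructs a `LinearImpl` from `LinearOptions(in, out)` and runs `reset()` to register and initialize the parameters.

```cpp
#include <torch/torch.h>
#include <iostream>

int main() {
  // Construct a Linear module with 3 input and 4 output features.
  torch::nn::Linear linear(3, 4);

  // Forward a batch of two inputs; weight and bias were initialized
  // uniformly in reset().
  auto input = torch::randn({2, 3});
  auto output = linear->forward(input);

  std::cout << output.sizes() << std::endl;  // [2, 4]
  return 0;
}
```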