pytorch/test/cpp_extensions/extension.cpp
Edward Z. Yang 756a86d52c Support large negative SymInt (#99157)
The strategy is that we will heap allocate a LargeNegativeIntSymNodeImpl whenever we have a large negative int, so that we can keep the old `is_symbolic` test (now called `is_heap_allocated`) on SymInt. Whenever we need to do something with these ints, though, we convert them back into a plain `int64_t` (and then, e.g., wrap that in whatever user-specified SymNodeImpl is needed). We cannot wrap directly in the user-specified SymNodeImpl as we generally do not know what the "tracing context" is from C++. We expect large negative ints to be rare, so we don't apply optimizations like singleton-ifying INT_MIN. Here's the order to review:

* c10/core/SymInt.h and cpp
  * `is_symbolic` renamed to `is_heap_allocated`, as I needed to audit all use sites: the old `is_symbolic` test would return true for a large negative int, but it would be wrong to then try to dispatch on the LargeNegativeIntSymNodeImpl, which supports very few operations. In this file, I had to update expect_int.
  * If you pass in a large negative integer, we instead heap allocate it in `promote_to_negative`. The function is written in a funny way to keep compact constructor code for SymInt (the heap allocation happens out of line)
  * clone is now moved out-of-line
  * New method `maybe_as_int`, which gives you a constant int whenever possible, either because it's stored inline or in LargeNegativeIntSymNodeImpl. This is the preferred replacement for the previous pattern of is_symbolic() followed by as_int_unchecked() (a sketch of this pattern follows the list).
  * Rename toSymNodeImpl to toSymNode, which is more correct (since it returns a SymNode)
  * Complete rewrite of `normalize_symints.cpp` to use the new `maybe_as_int`. Cannot easily use the old code structure, so it's now done with a macro and by typing out each case manually (it's actually not that bad).
  * Reimplementations of all the unary operators by hand to use `maybe_as_int`, relatively simple.
* c10/core/LargeNegativeIntSymNodeImpl.h - Just stores an int64_t value, but it has to be big and negative. Most methods are not implemented, since we will rewrap the large negative int in the real SymNodeImpl subclass before doing operations with it (a rough sketch of its shape also follows the list).
* The rest of the files just rewrite code to use `maybe_as_int`. There is a nontrivial comment in c10/core/SymIntArrayRef.h.
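
To make the `maybe_as_int` migration concrete, here is a minimal sketch of the before/after pattern. The helper name and fallback value are illustrative only, and it assumes `maybe_as_int()` returns `c10::optional<int64_t>`; check c10/core/SymInt.h for the exact signature.

```cpp
#include <c10/core/SymInt.h>
#include <c10/util/Optional.h>

// Illustrative helper (not part of the PR): return the concrete value of a
// SymInt whenever one is available, otherwise a caller-provided default.
int64_t concrete_or(const c10::SymInt& s, int64_t fallback) {
  // Old pattern: check !s.is_symbolic(), then call s.as_int_unchecked().
  // New pattern: maybe_as_int() also yields the constant when the value is a
  // heap-allocated large negative int, not just when it is stored inline.
  if (auto i = s.maybe_as_int()) {
    return *i;
  }
  return fallback; // genuinely symbolic: dispatch on the SymNode instead
}
```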
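
And a rough sketch of the shape described for LargeNegativeIntSymNodeImpl. This is not the actual PyTorch class (which derives from SymNodeImpl in c10/core), only a standalone illustration of a node that carries a big negative constant and supports almost nothing else, since callers are expected to extract the int64_t and rewrap it; the inline-range bound below is hypothetical.

```cpp
#include <cstdint>
#include <stdexcept>

// Standalone illustration only; the real class derives from c10::SymNodeImpl.
class LargeNegativeIntNodeSketch {
 public:
  // Hypothetical bound (not PyTorch's actual constant): values at or above
  // this could have been stored inline in SymInt and should never land here.
  static constexpr int64_t kHypotheticalInlineMin = -(int64_t{1} << 62);

  explicit LargeNegativeIntNodeSketch(int64_t value) : value_(value) {
    if (value >= kHypotheticalInlineMin) {
      throw std::logic_error("expected a large negative int");
    }
  }

  // The one useful operation: hand the constant back so it can be rewrapped
  // in whatever SymNodeImpl the tracing context actually uses.
  int64_t constant_int() const {
    return value_;
  }

  // Arithmetic, comparisons, etc. are deliberately left unimplemented.

 private:
  int64_t value_;
};
```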

Very minor test adjustment in c10/test/core/SymInt_test.cpp. Plan to exercise this properly in the next PR.

Companion XLA PR: https://github.com/pytorch/xla/pull/4882

Signed-off-by: Edward Z. Yang <ezyang@meta.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/99157
Approved by: https://github.com/albanD
2023-04-15 22:43:51 +00:00


#include <torch/extension.h>
// test include_dirs in setuptools.setup with relative path
#include <tmp.h>

torch::Tensor sigmoid_add(torch::Tensor x, torch::Tensor y) {
  return x.sigmoid() + y.sigmoid();
}

struct MatrixMultiplier {
  MatrixMultiplier(int A, int B) {
    tensor_ =
        torch::ones({A, B}, torch::dtype(torch::kFloat64).requires_grad(true));
  }
  torch::Tensor forward(torch::Tensor weights) {
    return tensor_.mm(weights);
  }
  torch::Tensor get() const {
    return tensor_;
  }

 private:
  torch::Tensor tensor_;
};

bool function_taking_optional(c10::optional<torch::Tensor> tensor) {
  return tensor.has_value();
}

torch::Tensor random_tensor() {
  return torch::randn({1});
}

PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) {
  m.def("sigmoid_add", &sigmoid_add, "sigmoid(x) + sigmoid(y)");
  m.def(
      "function_taking_optional",
      &function_taking_optional,
      "function_taking_optional");
  py::class_<MatrixMultiplier>(m, "MatrixMultiplier")
      .def(py::init<int, int>())
      .def("forward", &MatrixMultiplier::forward)
      .def("get", &MatrixMultiplier::get);
  m.def("get_complex", []() { return c10::complex<double>(1.0, 2.0); });
  m.def("get_device", []() { return at::device_of(random_tensor()).value(); });
  m.def("get_generator", []() { return at::detail::getDefaultCPUGenerator(); });
  m.def("get_intarrayref", []() { return at::IntArrayRef({1, 2, 3}); });
  m.def("get_memory_format", []() { return c10::get_contiguous_memory_format(); });
  m.def("get_storage", []() { return random_tensor().storage(); });
  m.def("get_symfloat", []() { return c10::SymFloat(1.0); });
  m.def("get_symint", []() { return c10::SymInt(1); });
  m.def("get_symintarrayref", []() { return at::SymIntArrayRef({1, 2, 3}); });
  m.def("get_tensor", []() { return random_tensor(); });
}