Remove BUILD_NAMEDTENSOR macros (#30894)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/30894

This PR begins the process of removing BUILD_NAMEDTENSOR macros. There will be followups.

Reasons for removing the macros:
- BUILD_NAMEDTENSOR is always on and has been since PyTorch 1.3.0.
- Since we don't test building without it, keeping it around is useless.
- The code reads better without the macros.

Reason for keeping the macros:
- potential for feature flagging

Now, I argue against needing a feature flag. The main reason to feature-flag is the ability to disable the feature: we'd want a fast switch to turn named tensors off if someone discovers in the future that they cause a regression in some existing workflow.

In https://github.com/pytorch/pytorch/pull/25798, I ran a variety of macro- and micro-benchmarks to determine the performance impact of named tensors on regular tensors. [The microbenchmarks](https://github.com/pytorch/pytorch/pull/25798#issuecomment-529014810) were not very stable, and running them for more iterations doesn't actually help because the noise is not distributed in a nice way. Instead of microbenchmarks, I ran a [profiler (perf)](https://github.com/pytorch/pytorch/pull/25798#issuecomment-555707645) to estimate how much overhead named tensors add to unnamed code. I estimated the overhead to be less than 100ns for `add` and even smaller for `mm`; there are ways to optimize further if we find this to be a problem.

[Initial macrobenchmarks](https://github.com/pytorch/pytorch/pull/25798#issuecomment-530539104) were also not very stable. I ran ImageNet for some number of epochs; to make the runs more stable, I got rid of the data loading, which seemed to vary between runs. [In some benchmarks without data loading](https://github.com/pytorch/pytorch/pull/25798#issuecomment-562214053), the results are less noisy, and they support the claim that there is no noticeable speed regression.

Test Plan: wait for CI

Differential Revision: D18858543

Pulled By: zou3519

fbshipit-source-id: 08bf3853a9f506c6b084808dc9ddd1e835f48c13
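For a sense of what "less than 100ns per call" means in practice, here is a minimal sketch of a per-op overhead microbenchmark (my illustration, not the actual harness from #25798; it assumes a local libtorch build to compile and link against):

```cpp
// Illustrative microbenchmark: average wall time per at::add call on
// small unnamed tensors. At this tensor size, per-call dispatch overhead
// (including any named-tensor bookkeeping) dominates the measurement.
#include <ATen/ATen.h>
#include <chrono>
#include <cstdio>

int main() {
  at::Tensor a = at::ones({1});
  at::Tensor b = at::ones({1});

  // Warm up so one-time allocation and dispatcher setup are excluded.
  for (int i = 0; i < 10000; ++i) {
    (void)at::add(a, b);
  }

  constexpr int kIters = 1000000;
  auto start = std::chrono::steady_clock::now();
  for (int i = 0; i < kIters; ++i) {
    (void)at::add(a, b);
  }
  auto end = std::chrono::steady_clock::now();

  double ns = std::chrono::duration_cast<std::chrono::nanoseconds>(end - start).count();
  std::printf("at::add: %.1f ns/call\n", ns / kIters);
}
```

As the summary notes, loop timings like this are noisy; the PR's conclusion rests on perf profiles rather than on such a loop.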
Committed by: Facebook Github Bot
Parent: f48a8901c5
Commit: e05ee4c421
@@ -25,9 +25,7 @@
 #include <torch/csrc/utils/tensor_new.h>
 #include <torch/csrc/jit/tracer.h>
 #include <ATen/core/EnableNamedTensor.h>
-#ifdef BUILD_NAMEDTENSOR
 #include <ATen/NamedTensorUtils.h>
-#endif
 
 #include <ATen/ATen.h>
 #include <pybind11/pybind11.h>
@@ -323,7 +321,6 @@ PyObject *THPVariable_get_ndim(THPVariable *self, void *unused)
   END_HANDLE_TH_ERRORS
 }
 
-#ifdef BUILD_NAMEDTENSOR
 PyObject *THPVariable_get_names(THPVariable *self, void *unused)
 {
   HANDLE_TH_ERRORS
@@ -370,7 +367,6 @@ int THPVariable_set_names(THPVariable *self, PyObject *names) {
   return 0;
   END_HANDLE_TH_ERRORS_RET(-1)
 }
-#endif
 
 int THPVariable_set_requires_grad(THPVariable *self, PyObject *obj, void *unused)
 {
@@ -524,9 +520,7 @@ static struct PyGetSetDef THPVariable_properties[] = {
   {"layout", (getter)THPVariable_layout, nullptr, nullptr, nullptr},
   {"device", (getter)THPVariable_device, nullptr, nullptr, nullptr},
   {"ndim", (getter)THPVariable_get_ndim, nullptr, nullptr, nullptr},
-#ifdef BUILD_NAMEDTENSOR
   {"names", (getter)THPVariable_get_names, (setter)THPVariable_set_names, nullptr, nullptr},
-#endif
   {nullptr}
 };
 
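With the guards gone, the property table reads straight through. A sketch of the resulting registration (abbreviated to the entries visible in the diff above):

```cpp
// After macro removal: the names getter/setter is registered
// unconditionally, with no #ifdef noise around the entry.
static struct PyGetSetDef THPVariable_properties[] = {
  {"layout", (getter)THPVariable_layout, nullptr, nullptr, nullptr},
  {"device", (getter)THPVariable_device, nullptr, nullptr, nullptr},
  {"ndim", (getter)THPVariable_get_ndim, nullptr, nullptr, nullptr},
  {"names", (getter)THPVariable_get_names, (setter)THPVariable_set_names, nullptr, nullptr},
  {nullptr}
};
```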