Add flag to temporarily disable MKL-DNN conv (#23837)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/23837

This is a temporary workaround for an issue in MKL-DNN's convolution backward implementation: https://github.com/pytorch/pytorch/issues/23825

It is used only to enable testing of quantization.

Test Plan: Imported from OSS

Differential Revision: D16659081

Pulled By: jamesr66a

fbshipit-source-id: de18ebe98dec2a042f28b23373e20da2b44a42a2
Author: James Reed <2019-08-06 11:11:45 -07:00>
Committed by: Facebook Github Bot
Parent: 9588cd921e
Commit: 6ba60ec9b0
5 changed files with 123 additions and 69 deletions


@@ -2,6 +2,8 @@
 #include <torch/csrc/utils/init.h>
 #include <torch/csrc/utils/throughput_benchmark.h>
+#include <ATen/native/Convolution.h>
+#include <pybind11/functional.h>
 
 namespace torch {
@@ -44,6 +46,14 @@ void initThroughputBenchmarkBindings(PyObject* module) {
         AutoNoGIL no_gil_guard;
         return self.benchmark(config);
       });
+  m.def("_enable_mkldnn_conv", []() {
+    at::native::disable_mkldnn_conv.exchange(false);
+  });
+  m.def("_disable_mkldnn_conv", []() {
+    at::native::disable_mkldnn_conv.exchange(true);
+  });
 }
 
 } // namespace throughput_benchmark
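
For context, a minimal sketch of how a test might toggle this flag from Python. It assumes the bindings above are registered on the torch._C module (as other functions in torch/csrc/utils/init.cpp are); the module path and the try/finally pattern are assumptions for illustration, not part of this PR:

import torch

# Assumption: initThroughputBenchmarkBindings() is invoked on torch._C,
# so the new functions appear as torch._C._disable_mkldnn_conv() etc.
torch._C._disable_mkldnn_conv()  # route convolutions away from MKL-DNN
try:
    # Run quantization tests that would otherwise hit the backward bug in
    # https://github.com/pytorch/pytorch/issues/23825.
    pass
finally:
    torch._C._enable_mkldnn_conv()  # restore the default MKL-DNN conv path

Wrapping the restore call in finally keeps the global flag from leaking into unrelated tests if the quantization test fails.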