Integrate custom op tests with CI (#10611)

Summary:
This PR is stacked on https://github.com/pytorch/pytorch/pull/10610, and only adds changes in one file, `.jenkins/pytorch/test.sh`, where we now build and run the custom op tests.

I'd also like to use this PR to discuss whether the [`TorchConfig.cmake`](https://github.com/pytorch/pytorch/blob/master/cmake/TorchConfig.cmake.in) I wrote is robust enough (the CI will also tell us). orionr Yangqing dzhulgakov, what do you think?
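
For reviewers unfamiliar with the consumer side: a downstream project is expected to call `find_package(Torch REQUIRED)` against this config and link `${TORCH_LIBRARIES}`. As a minimal sketch (a hypothetical smoke test, not code from this PR), the resulting binary only needs something like:

```cpp
// Hypothetical smoke test for a project built via TorchConfig.cmake.
// Assumes find_package(Torch REQUIRED) succeeded and the executable
// links ${TORCH_LIBRARIES}; this file is not part of the PR.
#include <torch/torch.h>
#include <cassert>

int main() {
  // Mirrors the kind of check the custom op tests perform.
  at::Tensor t = torch::ones(5) * 2;
  assert(t.allclose(torch::ones(5) + torch::ones(5)));
  return 0;
}
```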

Also cc ezyang for the CI changes.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/10611

Differential Revision: D9597627

Pulled By: goldsborough

fbshipit-source-id: f5af8164c076894f448cef7e5b356a6b3159f8b3
Author: Peter Goldsborough
Date: 2018-09-10 15:24:47 -07:00
Committed by: Facebook Github Bot
Parent: 3e665cc29b
Commit: a0d4106c07

13 changed files with 153 additions and 48 deletions


@@ -22,9 +22,12 @@ void get_operator_from_registry_and_execute() {
   std::vector<at::Tensor> output;
   torch::jit::pop(stack, output);
+  const auto manual = custom_op(torch::ones(5), 2.0, 3);
   assert(output.size() == 3);
-  for (const auto& tensor : output) {
-    assert(tensor.allclose(torch::ones(5) * 2));
+  for (size_t i = 0; i < output.size(); ++i) {
+    assert(output[i].allclose(torch::ones(5) * 2));
+    assert(output[i].allclose(manual[i]));
   }
 }
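
For context, `custom_op` is the operator under test here; from the call site `custom_op(torch::ones(5), 2.0, 3)` and the assertions above, it evidently returns `repeat` copies of `tensor * scalar`. A hedged reconstruction (signature inferred from the call site; the registration call reflects the `torch::jit::RegisterOperators` API of this era and the `custom::op` name mentioned in the hunk below):

```cpp
#include <torch/torch.h>
#include <vector>

// Inferred: returns `repeat` copies of `tensor * scalar`, so that
// custom_op(torch::ones(5), 2.0, 3) yields 3 tensors allclose to
// torch::ones(5) * 2, matching the assertions in the test above.
std::vector<at::Tensor> custom_op(
    at::Tensor tensor,
    double scalar,
    int64_t repeat) {
  std::vector<at::Tensor> output;
  output.reserve(repeat);
  for (int64_t i = 0; i < repeat; ++i) {
    output.push_back(tensor * scalar);
  }
  return output;
}

// Registration under the schema name "custom::op" (assumed; this is
// the name the old error message in the next hunk refers to).
static auto registry =
    torch::jit::RegisterOperators("custom::op", &custom_op);
```
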
@@ -71,10 +74,9 @@ void test_argument_checking_for_serialized_modules(
     module->forward({});
     assert(false);
   } catch (const at::Error& error) {
-    std::cout << error.what_without_backtrace() << std::endl;
     assert(
         std::string(error.what_without_backtrace())
-            .find("custom::op() is missing value for argument 'tensor'") == 0);
+            .find("forward() is missing value for argument 'input'") == 0);
   }
 }
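
For context, the enclosing test presumably loads a script module exported from Python and calls `forward()` with no arguments, expecting argument checking to reject the call. A hedged reconstruction of the surrounding function (the header and load API are assumptions based on the JIT serialization work this PR stacks on, not shown in this diff):

```cpp
#include <torch/script.h>  // assumed header for torch::jit::load
#include <cassert>
#include <string>

void test_argument_checking_for_serialized_modules(
    const std::string& path_to_exported_script_module) {
  // Load the script module serialized on the Python side.
  std::shared_ptr<torch::jit::script::Module> module =
      torch::jit::load(path_to_exported_script_module);
  try {
    // forward() declares a tensor argument named 'input'; calling it
    // with no inputs must raise an at::Error up front.
    module->forward({});
    assert(false);  // unreachable if argument checking works
  } catch (const at::Error& error) {
    assert(std::string(error.what_without_backtrace())
               .find("forward() is missing value for argument 'input'") == 0);
  }
}
```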