pytorch/test/cpp/common/main.cpp
Peter Goldsborough db8d01b248 Move JIT tests to gtest (#12030)
Summary:
In our #better-engineering quest to remove all uses of Catch in favor of gtest, this PR ports the JIT tests to gtest. Once #11846 lands, we will be able to delete Catch.

I don't claim to use or write these tests much (though I wrote the custom operator tests), so please do scrutinize whether this is the way you want to write tests. Basically:

1. One function declaration per "test case" in test/cpp/jit/test.h.
2. One definition in test/cpp/jit/test.cpp.
3. If you want to run it from Python, add it to `runJitTests()`, which the Python tests call.
4. If you want to run it from C++, add a `JIT_TEST` line in test/cpp/jit/gtest.cpp (see the sketch after this list).
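As a rough end-to-end sketch of that workflow (the test name and the exact JIT_TEST macro body below are hypothetical illustrations, not copied from the PR):

    // test/cpp/jit/test.h -- one declaration per test case (name made up here)
    namespace torch { namespace jit {
    void testTrivialExample();
    }} // namespace torch::jit

    // test/cpp/jit/test.cpp -- the corresponding definition
    namespace torch { namespace jit {
    void testTrivialExample() {
      // ... build a graph / run the pass under test and assert on the result ...
    }
    }} // namespace torch::jit

    // test/cpp/jit/gtest.cpp -- one JIT_TEST line exposes the function to gtest.
    // A plausible shape for the macro (assumed, not the actual definition):
    #include <gtest/gtest.h>
    #define JIT_TEST(name) \
      TEST(JitTest, name) { torch::jit::test##name(); }
    JIT_TEST(TrivialExample)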

Note also that I was able to share support code between the C++ frontend and JIT tests, which is healthy.

ezyang apaszke zdevito
Pull Request resolved: https://github.com/pytorch/pytorch/pull/12030

Differential Revision: D10207745

Pulled By: goldsborough

fbshipit-source-id: d4bae087e4d03818b72b8853cd5802d79a4cf32e
2018-10-06 23:09:44 -07:00


#include <gtest/gtest.h>
#include <torch/cuda.h>

#include <iostream>
#include <string>

// Appends `flag` to the current gtest filter as a negative (excluded) pattern.
// In gtest filter syntax, patterns after the first '-' are exclusions and ':'
// separates individual patterns.
std::string add_negative_flag(const std::string& flag) {
  std::string filter = ::testing::GTEST_FLAG(filter);
  if (filter.find('-') == std::string::npos) {
    filter.push_back('-');
  } else {
    filter.push_back(':');
  }
  filter += flag;
  return filter;
}

int main(int argc, char* argv[]) {
  ::testing::InitGoogleTest(&argc, argv);
  if (!torch::cuda::is_available()) {
    std::cout << "CUDA not available. Disabling CUDA and MultiCUDA tests"
              << std::endl;
    ::testing::GTEST_FLAG(filter) = add_negative_flag("*_CUDA:*_MultiCUDA");
  } else if (torch::cuda::device_count() < 2) {
    std::cout << "Only one CUDA device detected. Disabling MultiCUDA tests"
              << std::endl;
    ::testing::GTEST_FLAG(filter) = add_negative_flag("*_MultiCUDA");
  }
  return RUN_ALL_TESTS();
}
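For context, the filtering above relies on gtest's name-based test filter: with the default filter "*", a machine without CUDA ends up running with "*-*_CUDA:*_MultiCUDA", i.e. everything except tests whose full "Suite.Name" matches "*_CUDA" or "*_MultiCUDA". A minimal sketch of how tests opt into that naming convention (suite and test names below are hypothetical):

    #include <gtest/gtest.h>

    // Runs everywhere.
    TEST(ExampleTest, Basic) {
      EXPECT_EQ(2 + 2, 4);
    }

    // Matches "*_CUDA", so it is filtered out when CUDA is unavailable.
    TEST(ExampleTest, Basic_CUDA) {
      EXPECT_EQ(2 + 2, 4);
    }

    // Matches "*_MultiCUDA", so it only runs with at least two CUDA devices.
    TEST(ExampleTest, Basic_MultiCUDA) {
      EXPECT_EQ(2 + 2, 4);
    }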