Summary: Currently the C++ API and C++ extensions are effectively two different, entirely orthogonal code paths. This PR unifies the C++ API with the C++ extension API by adding an element of Python binding support to the C++ API. This means the `torch/torch.h` included by C++ extensions, which currently routes to `torch/csrc/torch.h`, can now be rerouted to `torch/csrc/api/include/torch/torch.h` -- i.e. the main C++ API header. This header then includes Python binding support conditioned on a define (`TORCH_WITH_PYTHON_BINDINGS`), *which is only passed when building a C++ extension*.

Currently stacked on top of https://github.com/pytorch/pytorch/pull/11498

Why is this useful?

1. One less code path. In particular, there has been trouble again and again due to the two `torch/torch.h` header files and the ambiguity when both ended up in the include path. This is now fixed.
2. I have found that it is quite common to want to bind a C++ API module back into Python. This could be for simple experimentation, or to have your training loop in Python but your models in C++. This PR makes this easier by adding pybind11 support to the C++ API.
3. The C++ extension API simply becomes richer by gaining access to the C++ API headers.

soumith ezyang apaszke

Pull Request resolved: https://github.com/pytorch/pytorch/pull/11510
Reviewed By: ezyang
Differential Revision: D9998835
Pulled By: goldsborough
fbshipit-source-id: 7a94b44a9d7e0377b7f1cfc99ba2060874d51535
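To make point 2 concrete, here is a minimal, hypothetical sketch of what an extension can do once the unified header is in place: it includes `torch/extension.h`, uses a C++ frontend module (`torch::nn::Linear`) internally, and exposes the result to Python through the same `PYBIND11_MODULE` macro extensions already use. The function name `linear_forward` and its signature are illustrative only and not part of this PR.

#include <torch/extension.h>

// Illustrative only: construct a C++ frontend module (torch::nn::Linear),
// apply it to the input tensor, and return the result to Python.
torch::Tensor linear_forward(
    torch::Tensor input, int64_t in_features, int64_t out_features) {
  torch::nn::Linear layer(in_features, out_features);
  return layer->forward(input);
}

PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) {
  m.def("linear_forward", &linear_forward,
        "Run a torch::nn::Linear from the C++ API and return its output");
}

For comparison, the CUDA extension test file touched by this change is shown below.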
#include <torch/extension.h>

// Declare the function from cuda_extension.cu. It will be compiled
// separately with nvcc and linked with the object file of cuda_extension.cpp
// into one shared library.
void sigmoid_add_cuda(const float* x, const float* y, float* output, int size);

at::Tensor sigmoid_add(at::Tensor x, at::Tensor y) {
  AT_CHECK(x.type().is_cuda(), "x must be a CUDA tensor");
  AT_CHECK(y.type().is_cuda(), "y must be a CUDA tensor");
  auto output = at::zeros_like(x);
  sigmoid_add_cuda(
      x.data<float>(), y.data<float>(), output.data<float>(), output.numel());
  return output;
}

PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) {
  m.def("sigmoid_add", &sigmoid_add, "sigmoid(x) + sigmoid(y)");
}
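As with other C++/CUDA extensions, a file like this is typically compiled either ahead of time via setuptools using `torch.utils.cpp_extension.CUDAExtension` (together with the `cuda_extension.cu` file that provides `sigmoid_add_cuda`), or just-in-time with `torch.utils.cpp_extension.load`; the resulting Python module then exposes `sigmoid_add` directly. The exact build setup for this test file is not shown here.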