TH to ATen porting guide
You're here because you are working on a task involving porting legacy TH code into ATen.
Let us take `max_unpool1d` as an example. max_unpool1d, max_unpool2d, and max_unpool3d are neural network functions that are currently implemented in our legacy THNN (CPU) / THCUNN (CUDA) libraries. It is generally better if these live in our new library ATen, since it is more feature complete and reduces cognitive overhead. Here is the documentation for one of these functions: https://pytorch.org/docs/master/nn.html?highlight=adaptive#torch.nn.MaxUnpool1d You can find how the function is bound to ATen in nn.yaml (https://github.com/pytorch/pytorch/blob/b30c803662d4c980588b087beebf98982b3b653c/aten/src/ATen/nn.yaml#L185-L195) and how its frontend API is defined in native_functions.yaml (https://github.com/pytorch/pytorch/blob/b30c803662d4c980588b087beebf98982b3b653c/aten/src/ATen/native/native_functions.yaml#L3334-L3356).
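
For orientation, a declaration in native_functions.yaml has roughly the following shape. This is an illustrative sketch rather than a verbatim copy of the entry; the exact signature and schema syntax are in the lines linked above.

```yaml
# Illustrative sketch only -- consult the linked native_functions.yaml lines
# for the exact schema used for max_unpool1d.
- func: max_unpool1d(Tensor self, Tensor indices, int[1] output_size) -> Tensor
  python_module: nn
```
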
The goal here is to remove the entry in nn.yaml, so that the implementation is done completely in ATen. There are a few reasonable steps you could take to do this (these could all be separate PRs):
* Implement a CPU-only replacement. You can still dispatch to THCUNN for the CUDA implementation by specifying backend-specific dispatch in native_functions.yaml (search for "dispatch:" for examples; see the sketch after this list). The implementation of FractionalMaxPool3d (https://github.com/pytorch/pytorch/blob/master/aten/src/ATen/native/FractionalMaxPool3d.cpp) can give you some inspiration here.
* Implement a CUDA-only replacement. As before, the CUDA implementation of FractionalMaxPool3d (https://github.com/pytorch/pytorch/blob/master/aten/src/ATen/native/cuda/FractionalMaxPool3d.cu) can give you some inspiration.
* Implement a CPU-only replacement for the backward functions (these are suffixed with _backward and _backward_out; a sketch of their declarations also follows this list).
* Implement a CUDA-only replacement for the backward functions.
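
Here is a rough sketch of what a backend-specific dispatch entry for the first step could look like. The kernel names (max_unpool1d_cpu and the THCUNN-backed placeholder) are hypothetical and only illustrate the mechanism; the signature should match whatever is already declared for the function in native_functions.yaml.

```yaml
# Hypothetical sketch of a mixed dispatch entry: CPU goes to a new
# ATen-native kernel, while CUDA keeps forwarding to the legacy
# THCUNN-backed binding until a native CUDA port lands.
- func: max_unpool1d(Tensor self, Tensor indices, int[1] output_size) -> Tensor
  python_module: nn
  dispatch:
    CPU: max_unpool1d_cpu              # new native kernel (hypothetical name)
    CUDA: legacy_thnn_max_unpool1d     # THCUNN-backed path (hypothetical name)
```
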
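The backward declarations follow the _backward / _backward_out naming convention mentioned above. Again, this is only a hedged sketch with placeholder signatures and kernel names; the real declarations are in the native_functions.yaml lines linked earlier.

```yaml
# Hypothetical sketch of the backward declarations and their dispatch.
- func: max_unpool1d_backward(Tensor grad_output, Tensor self, Tensor indices, int[1] output_size) -> Tensor
  python_module: nn
  dispatch:
    CPU: max_unpool1d_backward_cpu
    CUDA: max_unpool1d_backward_cuda

- func: max_unpool1d_backward_out(Tensor grad_output, Tensor self, Tensor indices, int[1] output_size, *, Tensor(a!) grad_input) -> Tensor(a!)
  python_module: nn
  dispatch:
    CPU: max_unpool1d_backward_out_cpu
    CUDA: max_unpool1d_backward_out_cuda
```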