mirror of
https://github.com/pytorch/pytorch.git
synced 2025-10-21 05:34:18 +08:00
free up dispatch key space (in C++) (#69633)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/69633

Test Plan: Imported from OSS
Reviewed By: albanD
Differential Revision: D33255193
Pulled By: bdhirsh
fbshipit-source-id: 79773e9c15bf4f2f27675121a49ff5ffd1375238
(cherry picked from commit eac0b1300569e035f3de28a1f0fdce03f60bd270)
Committed by: PyTorch MergeBot
Parent: 1cec719448
Commit: 20b8653dfa
@@ -15,9 +15,9 @@ keys for a single example of each use case. These use cases are listed below:
 - CPU/AutogradCPU: represents in-tree backends which we usually have dedicated inference &
   autograd kernel in pytorch core library.
   E.g. CPU, CUDA
-- QuantizedCPU/AutogradOther: represents in-tree backends which we usually have backend specific
+- FPGA/AutogradOther: represents in-tree backends which we usually have backend specific
   inference kernels, but they share the same autograd kernel specified in AutogradOther.
-  E.g. QuantizedCPU, QuantizedCUDA
+  E.g. FPGA, SparseCsrCPU
 - XLA/AutogradXLA: represents out-of-tree backends which we don't have either inference or autograd
   kernel defined in pytorch core library. Backend owner is responsible for registering both
   inference & autograd kernels in their extensions(e.g. torch-xla) for the operators they support.
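The three use cases above can be sketched as a simple mapping from runtime backend keys to autograd keys. This is a hypothetical illustration only, not the actual C++ dispatcher; the names `AUTOGRAD_KEY_FOR_BACKEND` and `autograd_key` are invented for this sketch.

```python
# Hypothetical sketch of the backend -> autograd key relationship described
# above (NOT PyTorch's real dispatcher implementation).
AUTOGRAD_KEY_FOR_BACKEND = {
    "CPU": "AutogradCPU",         # in-tree: dedicated autograd kernel in core
    "CUDA": "AutogradCUDA",       # in-tree: dedicated autograd kernel in core
    "FPGA": "AutogradOther",      # in-tree: shares the AutogradOther kernel
    "SparseCsrCPU": "AutogradOther",  # also shares AutogradOther
    "XLA": "AutogradXLA",         # out-of-tree: backend registers its own
}

def autograd_key(backend: str) -> str:
    """Return the autograd dispatch key paired with a runtime backend key."""
    return AUTOGRAD_KEY_FOR_BACKEND[backend]

print(autograd_key("FPGA"))          # AutogradOther
print(autograd_key("SparseCsrCPU"))  # AutogradOther
```

The point of the shared `AutogradOther` key, as the diff's comment explains, is that several backend-specific inference kernels can reuse one autograd kernel instead of each claiming a dedicated key.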
@@ -53,7 +53,7 @@ class PythonDispatcher:
     name = "foo"
     runtime_keys = [
         "CPU", "AutogradCPU",
-        "QuantizedCPU", "AutogradOther",
+        "FPGA", "AutogradOther",
         "XLA", "AutogradXLA",
         "Lazy", "AutogradLazy",
     ]
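To show how a `runtime_keys` list like the one in the diff could feed a dispatch table, here is a toy stand-in. This is an assumption-laden sketch, not `torch._python_dispatcher.PythonDispatcher` itself; `build_table` and the `"fallthrough"` placeholder are invented for illustration.

```python
# Toy sketch (hypothetical, NOT torch._python_dispatcher): fill a dispatch
# table for the runtime keys from the diff, marking unregistered keys as
# "fallthrough" so they can defer to a shared or default kernel.
runtime_keys = [
    "CPU", "AutogradCPU",
    "FPGA", "AutogradOther",
    "XLA", "AutogradXLA",
    "Lazy", "AutogradLazy",
]

def build_table(registered_kernels: dict) -> dict:
    """Map every runtime key to its registered kernel name, or 'fallthrough'."""
    return {key: registered_kernels.get(key, "fallthrough") for key in runtime_keys}

# A backend (here FPGA) that registers only an inference kernel leaves its
# autograd slot ("AutogradOther") as a fallthrough.
table = build_table({"CPU": "fn_cpu", "AutogradCPU": "fn_autograd_cpu", "FPGA": "fn_fpga"})
```

In the real dispatcher, a fallthrough on `AutogradOther` is what lets FPGA and the other "Other" backends share one autograd kernel while keeping backend-specific inference kernels.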