Mirror of https://github.com/pytorch/pytorch.git (synced 2025-10-21 05:34:18 +08:00)
Rename Math to CompositeImplicitAutograd (#54466)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/54466

I had to very carefully audit all the use sites, since there are a lot of other uses of the string Math; I did most of the conversion by grepping for all occurrences of Math and then doing a search and replace. I also updated the documentation for clarity.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Test Plan: Imported from OSS
Reviewed By: ngimel
Differential Revision: D27253239
Pulled By: ezyang
fbshipit-source-id: afb485d07ff39575742a4f0e1e205179b60bc953
Committed by: Facebook GitHub Bot
Parent: 87989a6cf9
Commit: 145bc5cd51
@@ -26,16 +26,16 @@ keys for a single example of each use case. These use cases are listed below:
   Kernels registered to this key MUST work for inference for all backends.
 - Autograd: alias key mapped to autograd of all backends like AutogradCPU, AutogradXLA, AutogradOther.
   Kernels registered to this key MUST work for autograd for all backends.
-- Math: alias key Math = DefaultBackend + Autograd
+- CompositeImplicitAutograd: alias key CompositeImplicitAutograd = DefaultBackend + Autograd
   Kernels registered to this key MUST work for both inference + autograd for all backends.

 Note we only allow registrations to alias keys inside pytorch core library. E.g you shouldn't register
-a Math or DefaultBackend kernel from torch-xla extension, instead you should upstream the kernel into
+a CompositeImplicitAutograd or DefaultBackend kernel from torch-xla extension, instead you should upstream the kernel into
 pytorch/pytorch repo so that it's available for all backends and continuously tested even without the extension.

 Usage:
   dispatcher = PythonDispatcher()
-  dispatcher.register(["CPU", "XLA", "Math"])
+  dispatcher.register(["CPU", "XLA", "CompositeImplicitAutograd"])
   print(dispatcher.dispatchTable()) # This tells you exactly which kernel is used for certain backend.
   # For more debugging information
   # print(dispatcher.keys())
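The alias-key arithmetic described in the docstring above (CompositeImplicitAutograd = DefaultBackend + Autograd, i.e. one kernel that serves every backend unless the backend registered its own) can be sketched as a small standalone Python function. This is an illustrative model only, not the real torch._python_dispatcher; the backend list and `fn_*` kernel names are made up for the example.

```python
# Illustrative model of alias-key resolution in the dispatcher.
# NOT the actual torch._python_dispatcher implementation; the
# backend list and "fn_*" kernel names are hypothetical.

BACKENDS = ["CPU", "XLA"]

def dispatch_table(registrations):
    """Compute a per-backend dispatch table from a list of registered keys.

    A backend-specific kernel wins; otherwise the backend falls through
    to the CompositeImplicitAutograd (formerly "Math") alias kernel,
    which must work for both inference and autograd on all backends.
    """
    table = {}
    for backend in BACKENDS:
        if backend in registrations:
            table[backend] = f"fn_{backend}"
        elif "CompositeImplicitAutograd" in registrations:
            table[backend] = "fn_CompositeImplicitAutograd"
        else:
            table[backend] = "fn_Undefined"
    return table

# CPU has its own kernel; XLA falls through to the composite kernel.
print(dispatch_table(["CPU", "CompositeImplicitAutograd"]))
# {'CPU': 'fn_CPU', 'XLA': 'fn_CompositeImplicitAutograd'}
```

This mirrors the Usage example in the docstring: registering ["CPU", "XLA", "CompositeImplicitAutograd"] gives both backends their own kernels, while any backend without a direct registration is still covered by the composite one.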
@@ -57,7 +57,7 @@ class PythonDispatcher:
     alias_keys = [
         "DefaultBackend",
         "Autograd",
-        "Math",
+        "CompositeImplicitAutograd",
     ]
     supported_keys = runtime_keys + alias_keys
@@ -85,8 +85,8 @@ class PythonDispatcher:
         if len(set(dispatchKeys)) != len(dispatchKeys):
             raise RuntimeError(f"Overriden is not allowed but found duplicates in {dispatchKeys}.")
         # We currently forbid this in codegen instead of C++ dispatcher.
-        if 'Math' in dispatchKeys and 'DefaultBackend' in dispatchKeys:
-            raise RuntimeError("Registration to both Math and DefaultBackend is not allowed.")
+        if 'CompositeImplicitAutograd' in dispatchKeys and 'DefaultBackend' in dispatchKeys:
+            raise RuntimeError("Registration to both CompositeImplicitAutograd and DefaultBackend is not allowed.")
         for key in dispatchKeys:
             if key not in self.supported_keys:
                 raise RuntimeError(f"{key} is not supported, please select a dispatch key in {self.supported_keys}.")
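The validation logic touched by the last hunk rejects duplicate keys, rejects registering both the composite alias and DefaultBackend for the same operator, and rejects unknown keys. A minimal standalone sketch of that check (a plain function rather than the real PythonDispatcher class, with a hypothetical `supported_keys` list) might look like:

```python
# Standalone sketch of the registration checks from the diff above;
# not the actual PythonDispatcher class. supported_keys is a
# hypothetical subset chosen for illustration.

supported_keys = ["CPU", "XLA", "DefaultBackend", "Autograd",
                  "CompositeImplicitAutograd"]

def register(dispatchKeys):
    # No duplicate registrations for the same operator.
    if len(set(dispatchKeys)) != len(dispatchKeys):
        raise RuntimeError(
            f"Overriden is not allowed but found duplicates in {dispatchKeys}.")
    # CompositeImplicitAutograd already implies DefaultBackend coverage,
    # so registering both would be ambiguous.
    if ("CompositeImplicitAutograd" in dispatchKeys
            and "DefaultBackend" in dispatchKeys):
        raise RuntimeError(
            "Registration to both CompositeImplicitAutograd and "
            "DefaultBackend is not allowed.")
    # Only known runtime/alias keys may be registered.
    for key in dispatchKeys:
        if key not in supported_keys:
            raise RuntimeError(
                f"{key} is not supported, please select a dispatch key "
                f"in {supported_keys}.")
    return list(dispatchKeys)

register(["CPU", "XLA", "CompositeImplicitAutograd"])  # accepted
```

Note the conflict check only guards the alias-key pair; per the code comment in the diff, this constraint is enforced in codegen rather than in the C++ dispatcher itself.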