Mirror of https://github.com/pytorch/pytorch.git, synced 2025-10-21 05:34:18 +08:00
Add "mps" device to PyTorch framework.
Remove the "mlc" device for Mac platforms. This commit will be followed up with: * adding MPS runtime components * PyTorch ops for MPS device Fixes #ISSUE_NUMBER Pull Request resolved: https://github.com/pytorch/pytorch/pull/76291 Approved by: https://github.com/albanD
Committed by: PyTorch MergeBot
Parent: a0bf0f5611
Commit: 54c75e1e8f
@@ -21,7 +21,7 @@ keys for a single example of each use case. These use cases are listed below:
 - XLA/AutogradXLA: represents out-of-tree backends which we don't have either inference or autograd
   kernel defined in pytorch core library. Backend owner is responsible for registering both
   inference & autograd kernels in their extensions(e.g. torch-xla) for the operators they support.
-  E.g. XLA, XPU, MLC
+  E.g. XLA, XPU, MPS
 - CompositeExplicitAutograd: alias key mapped to inference kernels of all backends like CPU, CUDA, XLA etc.
   Kernels registered to this key MUST work for inference for all backends.
 - Autograd: alias key mapped to autograd of all backends like AutogradCPU, AutogradXLA, AutogradOther.