(torch-library-docs)=

# torch.library

.. py:module:: torch.library
.. currentmodule:: torch.library

`torch.library` is a collection of APIs for extending PyTorch's core library of operators. It contains utilities for testing custom operators, creating new custom operators, and extending operators defined with PyTorch's C++ operator registration APIs (e.g. aten operators).

For a detailed guide on effectively using these APIs, please see the PyTorch Custom Operators Landing Page.

## Testing custom ops

Use {func}`torch.library.opcheck` to test custom ops for incorrect usage of the Python `torch.library` and/or C++ `TORCH_LIBRARY` APIs. Also, if your operator supports training, use {func}`torch.autograd.gradcheck` to test that the gradients are mathematically correct.

.. autofunction:: opcheck

## Creating new custom ops in Python

Use {func}`torch.library.custom_op` to create new custom ops.

.. autofunction:: custom_op
.. autofunction:: triton_op
.. autofunction:: wrap_triton

## Extending custom ops (created from Python or C++)

Use the `register.*` methods, such as {func}`torch.library.register_kernel` and {func}`torch.library.register_fake`, to add implementations for any operators (they may have been created using {func}`torch.library.custom_op` or via PyTorch's C++ operator registration APIs).

.. autofunction:: register_kernel
.. autofunction:: register_autocast
.. autofunction:: register_autograd
.. autofunction:: register_fake
.. autofunction:: register_vmap
.. autofunction:: impl_abstract
.. autofunction:: get_ctx
.. autofunction:: register_torch_dispatch
.. autofunction:: infer_schema
.. autoclass:: torch._library.custom_ops.CustomOpDef
   :members: set_kernel_enabled
.. autofunction:: get_kernel

## Low-level APIs

The following APIs are direct bindings to PyTorch's C++ low-level operator registration APIs.

.. warning:: The low-level operator registration APIs and the PyTorch Dispatcher are complicated PyTorch concepts. We recommend you use the higher-level APIs above (that do not require a `torch.library.Library` object) when possible. `This blog post <http://blog.ezyang.com/2020/09/lets-talk-about-the-pytorch-dispatcher/>`_ is a good starting point to learn about the PyTorch Dispatcher.

A tutorial that walks you through some examples on how to use this API is available on Google Colab.
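A minimal sketch of the low-level flow, define a schema in a fresh namespace, then register a kernel for a dispatch key (the namespace `libdoc_d` and op `offset` are hypothetical example names):

```python
import torch
from torch.library import Library

# Create a new operator namespace ("DEF" means we may define new schemas)
# and declare an operator schema in it.
lib = Library("libdoc_d", "DEF")
lib.define("offset(Tensor x) -> Tensor")

def offset_cpu(x):
    return x + 1

# Register the Python function as the kernel for the CPU dispatch key.
lib.impl("offset", offset_cpu, "CPU")

# The op is now callable through the torch.ops namespace.
out = torch.ops.libdoc_d.offset(torch.zeros(2))
```

Keep the `Library` object alive for as long as the registrations are needed; when it is garbage collected, its registrations are removed.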

.. autoclass:: torch.library.Library
  :members:

.. autofunction:: fallthrough_kernel

.. autofunction:: define

.. autofunction:: impl