* Add the `kernels check` subcommand
This subcommand checks a given kernel. Currently it applies the same ABI
checks as `kernel-abi-check` in `kernel-builder`.
* Print an error when `build` contains files
* Forgot to update `has_issues` in two places
* Support registering inference/training-specific layers
This change makes it possible to register kernels that are specialized for
inference, training, and/or `torch.compile`. To do so, the mapping
notation is extended so that a kernel can be registered for a specific
'mode'. For instance, the following mapping,
```python
kernel_layer_mapping = {
    "SiluAndMul": {
        "cuda": {
            Mode.DEFAULT: LayerRepository(
                repo_id="kernels-community/activation",
                layer_name="SiluAndMul",
            ),
            Mode.TRAINING | Mode.TORCH_COMPILE: LayerRepository(
                repo_id="kernels-community/activation-training-optimized",
                layer_name="SiluAndMul",
            ),
        }
    }
}
```
uses `kernels-community/activation` by default, but will switch to
using `kernels-community/activation-training-optimized` if a model
is kernelized for training and `torch.compile`.
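Such a mapping is registered with `register_kernel_mapping`. A minimal sketch, using the mapping above and assuming the top-level import from the `kernels` package:
```python
from kernels import register_kernel_mapping

# Register the mode-aware mapping defined above so that a later
# `kernelize` call can select the kernel matching the requested mode.
register_kernel_mapping(kernel_layer_mapping)
```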
To make it easier to add more modes in the future and to unify the
`register_kernel_mapping` and `kernelize` signatures, the `training`
and `needs_torch_compile` arguments of `kernelize` are replaced by
a single `mode` argument:
```python
model = MyModel(...)
model = kernelize(model, mode=Mode.TRAINING | Mode.TORCH_COMPILE)
```
* Documentation fixes
Co-authored-by: Mohamed Mekkouri <93391238+MekkCyber@users.noreply.github.com>
* Add note on when the fallback is used
* Tighten up some Mode checks
* Fix ruff check
* Attempt to fix mypy errors
* More typing fixes
* Ignore Python < 3.11 type check SNAFU
---------
Co-authored-by: Mohamed Mekkouri <93391238+MekkCyber@users.noreply.github.com>
* Add `generate-readme` subcommand for generating a README
The generated README documents all top-level functions, using their
docstrings when available.
* CI: attempt README generation
* Add PyYAML dependencies
* Typing fixes
* Remove old build backend
* Add types, use `Path` where possible
* Remove unused `get_metadata` function
This function is also problematic because it assumes that `build.toml`
is always present.
* Only import torch when needed
This avoids the costly torch import when, for example, setuptools hooks
run in downstream packages.
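As a rough illustration of the pattern (the function name here is hypothetical, not the actual code):
```python
def _torch_version() -> str:
    # Deferred import: torch is only loaded when this function is called,
    # so importing the module itself (e.g. from a setuptools build hook)
    # stays cheap.
    import torch

    return torch.__version__
```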
* Lock Python/Torch versions
Also update to Torch 2.5.1/2.6.0.
* Set the minimum Python version to 3.9
* Change step description
* PoC: allow users to lock the kernel revisions
This change allows Python projects that use kernels to lock kernel
revisions on a per-project basis. For this to work, the user only has
to include `hf-kernels` as a build dependency. During the build, a lock
file is written to the package's pkg-info. At runtime, the lock file is
read and the locked revision is used. When a kernel is not locked, the
revision that is provided as an argument is used instead.
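Purely as an illustration of the runtime side (the helper name and the JSON lock-file format are assumptions, not the actual implementation), reading a lock file from the installed package metadata could look roughly like this:
```python
import json
from importlib.metadata import PackageNotFoundError, distribution
from typing import Optional


def read_kernel_lock(package: str) -> Optional[dict]:
    # Hypothetical sketch: look for a lock file that the build step copied
    # into the installed distribution; None means "not locked".
    try:
        text = distribution(package).read_text("hf-kernels.lock")
    except PackageNotFoundError:
        return None
    return json.loads(text) if text else None
```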
* Generate lock files with `hf-lock-kernels`, copy to egg
* Various improvements
* Name CLI `hf-kernels`, add `download` subcommand
* `hf-kernels.lock`
* Bump version to 0.1.1
* Use setuptools for testing the wheel
* Factor out `tomllib` module selection
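This is the usual pattern for that selection, sketched here (the actual helper in the repo may differ):
```python
import sys

if sys.version_info >= (3, 11):
    import tomllib  # standard library from Python 3.11 onwards
else:
    import tomli as tomllib  # third-party backport for older Pythons
```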
* Pass through `local_files_only` in `get_metadata`
* Do not reuse implementation in `load_kernel`
* The tests install `hf-kernels` from PyPI, but should use the local package
* docker: package is in subdirectory