Mirror of https://github.com/huggingface/kernels.git (synced 2025-10-20, commit c9d6ba261ad10e6233cf4781ac7cd394e8f007d3)
## `tool.kernels` to `tool.hf-kernels`

From the `pyproject.toml` spec:

> A mechanism is needed to allocate names within the `tool.*` namespace,
> to make sure that different projects do not attempt to use the same
> sub-table and collide. Our rule is that a project can use the subtable
> `tool.$NAME` if, and only if, they own the entry for $NAME in the
> Cheeseshop/PyPI.

https://packaging.python.org/en/latest/specifications/pyproject-toml/#arbitrary-tool-configuration-the-tool-table
# kernels
Make sure you have `torch==2.5.1+cu124` installed.
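The CUDA variant is encoded in the version's local segment (`+cu124`), which can be checked programmatically. A minimal standard-library sketch (the version string is hardcoded for illustration; in practice you would use `torch.__version__`):

```python
# Split a PEP 440 version such as "2.5.1+cu124" into release and local parts.
version = "2.5.1+cu124"  # in practice: torch.__version__

release, _, local = version.partition("+")
assert release == "2.5.1", "this example expects torch 2.5.1"
assert local.startswith("cu"), "expected a CUDA build of torch"
print(release, local)  # -> 2.5.1 cu124
```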
```python
import torch

from kernels import get_kernel

# Download optimized kernels from the Hugging Face hub
activation = get_kernel("kernels-community/activation")

# Random tensor
x = torch.randn((10, 10), dtype=torch.float16, device="cuda")

# Run the kernel
y = torch.empty_like(x)
activation.gelu_fast(y, x)
print(y)
```
## Docker Reference

Build and run the reference `example/basic.py` in a Docker container with the following commands:
```bash
docker build --platform linux/amd64 -t kernels-reference -f docker/Dockerfile.reference .
docker run --gpus all -it --rm -e HF_TOKEN=$HF_TOKEN kernels-reference
```