A high-throughput and memory-efficient inference and serving engine for LLMs
Updated 2025-10-20 03:47:19 +08:00
Tensors and Dynamic neural networks in Python with strong GPU acceleration
Updated 2025-10-20 03:20:45 +08:00
Train transformer language models with reinforcement learning.
Updated 2025-10-20 01:27:03 +08:00
Community maintained hardware plugin for vLLM on Ascend
Updated 2025-10-19 17:06:05 +08:00
verl: Volcano Engine Reinforcement Learning for LLMs
Updated 2025-10-19 08:50:34 +08:00
Unified Efficient Fine-Tuning of 100+ LLMs & VLMs (ACL 2024)
Updated 2025-10-18 11:58:07 +08:00
Load compute kernels from the Hub
Updated 2025-10-17 23:26:19 +08:00
AlphaFold 3 inference pipeline.
Updated 2025-10-17 23:06:15 +08:00
🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning.
Updated 2025-10-17 22:24:46 +08:00
Official git repository for Biopython (originally converted from CVS)
Updated 2025-10-15 21:49:15 +08:00
DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
Updated 2025-10-15 09:58:53 +08:00
🚀 A simple way to launch, train, and use PyTorch models on almost any device and distributed configuration, automatic mixed precision (including fp8), and easy-to-configure FSDP and DeepSpeed support
Updated 2025-10-14 20:11:32 +08:00
NPU-fused RMSNorm kernel for Transformers
Updated 2025-09-24 16:23:46 +08:00
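The entry above provides a fused RMSNorm kernel for NPUs. As a reference for what the kernel computes, here is a minimal plain-Python sketch of RMSNorm (this only illustrates the math; it is not the repository's fused implementation):

```python
import math

def rms_norm(x, weight, eps=1e-6):
    # RMSNorm: divide each element by the root-mean-square of the vector
    # (no mean subtraction, unlike LayerNorm), then apply a per-channel gain.
    rms = math.sqrt(sum(v * v for v in x) / len(x) + eps)
    return [v / rms * w for v, w in zip(x, weight)]
```

A fused kernel computes the same result in one pass on-device, avoiding intermediate memory traffic.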
🤗 Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX.
Updated 2025-09-22 11:27:37 +08:00
File repository
Updated 2025-09-15 17:09:52 +08:00
Updated 2025-09-14 09:01:41 +08:00
Python script for downloading the entire PDB database; supports asynchronous downloads and resumable transfers, and automatically creates a new storage folder once the current folder's inodes are exhausted
Updated 2025-08-19 16:34:20 +08:00
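The last entry describes a bulk PDB downloader with resumable transfers. A minimal sketch of the resume mechanism using the stdlib and an HTTP Range header (function names are illustrative, not taken from the repository):

```python
import os
import urllib.request

def range_header(existing_size: int) -> dict:
    # Build the HTTP Range header that asks the server to resume
    # sending from byte offset `existing_size`; empty dict for a fresh start.
    return {"Range": f"bytes={existing_size}-"} if existing_size else {}

def resume_download(url: str, dest: str) -> None:
    # If a partial file exists, request only the remaining bytes
    # and append them, so an interrupted transfer can be resumed.
    start = os.path.getsize(dest) if os.path.exists(dest) else 0
    req = urllib.request.Request(url, headers=range_header(start))
    with urllib.request.urlopen(req) as resp, open(dest, "ab") as f:
        while chunk := resp.read(1 << 16):
            f.write(chunk)
```

The inode workaround the description mentions would sit above this: a counter that switches the destination directory once a configured number of files has been written.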