mirror of
https://github.com/huggingface/peft.git
synced 2025-10-20 15:33:48 +08:00
add details about transformers
@@ -130,7 +130,7 @@ You can refer to the [Google Colab](https://colab.research.google.com/drive/12GT
 
 ## EETQ quantization
 
-You can also perform LoRA fine-tuning on EETQ quantized models. [EETQ](https://github.com/NetEase-FuXi/EETQ) package offers simple and efficient way to perform 8-bit quantization, which is claimed to be faster than the `LLM.int8()` algorithm.
+You can also perform LoRA fine-tuning on EETQ quantized models. [EETQ](https://github.com/NetEase-FuXi/EETQ) package offers simple and efficient way to perform 8-bit quantization, which is claimed to be faster than the `LLM.int8()` algorithm. First, make sure that you have a transformers version that is compatible with EETQ (e.g. by installing it from latest pypi or from source).
 
 ```py
 import torch