add details about transformers

Author: Younes Belkada
Date: 2024-04-26 09:40:25 +02:00
Parent: fb03c0d9e8
Commit: ece3ce2474


@@ -130,7 +130,7 @@ You can refer to the [Google Colab](https://colab.research.google.com/drive/12GT
## EETQ quantization
-You can also perform LoRA fine-tuning on EETQ quantized models. The [EETQ](https://github.com/NetEase-FuXi/EETQ) package offers a simple and efficient way to perform 8-bit quantization, which is claimed to be faster than the `LLM.int8()` algorithm.
+You can also perform LoRA fine-tuning on EETQ quantized models. The [EETQ](https://github.com/NetEase-FuXi/EETQ) package offers a simple and efficient way to perform 8-bit quantization, which is claimed to be faster than the `LLM.int8()` algorithm. First, make sure that you have a version of transformers that is compatible with EETQ (e.g. by installing the latest release from PyPI or installing from source).
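
The added sentence mentions two install paths; as a minimal sketch (standard pip commands, not part of this commit's diff), either of the following works:

```bash
# install the latest transformers release from PyPI
pip install -U transformers

# or install transformers from source
pip install git+https://github.com/huggingface/transformers.git
```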
```py
import torch