Fine-tuning a multilayer perceptron using LoRA and 🤗 PEFT
PEFT supports fine-tuning any type of model as long as the layers being used are supported; for instance, the model does not have to be a transformers model. To demonstrate this, the accompanying notebook multilayer_perceptron_lora.ipynb shows how to apply LoRA to a simple multilayer perceptron and train it on a classification task, along the lines of the sketch below.
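The following is a minimal sketch of the idea: wrap a plain PyTorch MLP with `get_peft_model` and point `LoraConfig` at the `nn.Linear` sub-modules by name. The layer sizes, module names (`seq.0`, `seq.2`, `seq.4`), and LoRA hyperparameters here are illustrative assumptions, not the exact values used in the notebook.

```python
import torch
from torch import nn
from peft import LoraConfig, get_peft_model

# A plain PyTorch multilayer perceptron -- not a transformers model.
class MLP(nn.Module):
    def __init__(self, num_features=20, hidden=2000, num_classes=2):
        super().__init__()
        self.seq = nn.Sequential(
            nn.Linear(num_features, hidden),
            nn.ReLU(),
            nn.Linear(hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, num_classes),
        )

    def forward(self, x):
        return self.seq(x)

# Apply LoRA to the named Linear layers; names and values below are examples.
config = LoraConfig(
    r=8,
    target_modules=["seq.0", "seq.2"],   # inject LoRA adapters into these layers
    modules_to_save=["seq.4"],           # train the classification head fully
)

model = MLP()
peft_model = get_peft_model(model, config)
peft_model.print_trainable_parameters()
```

The resulting `peft_model` can then be trained with an ordinary PyTorch training loop; only the LoRA adapter weights and the modules listed in `modules_to_save` receive gradient updates.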