
Fine-tuning a multilayer perceptron using LoRA and 🤗 PEFT


PEFT supports fine-tuning any type of model as long as the layers being used are supported; for instance, the model does not have to be a transformers model. To demonstrate this, the accompanying notebook multilayer_perceptron_lora.ipynb shows how to apply LoRA to a simple multilayer perceptron and train it on a classification task. A minimal sketch of the idea follows.
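The sketch below illustrates the general pattern, assuming PyTorch and PEFT are installed. The layer names passed to `target_modules`, the hidden size, and the use of `modules_to_save` for the output layer are illustrative choices and may differ from what the notebook actually uses.

```python
import torch
from torch import nn
from peft import LoraConfig, get_peft_model

# A plain multilayer perceptron for a classification task (not a transformers model).
class MLP(nn.Module):
    def __init__(self, num_features=20, hidden_dim=2000, num_classes=2):
        super().__init__()
        self.seq = nn.Sequential(
            nn.Linear(num_features, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, num_classes),
            nn.LogSoftmax(dim=-1),
        )

    def forward(self, x):
        return self.seq(x)

# LoRA is applied by naming the Linear submodules in target_modules.
# modules_to_save keeps the final classification layer fully trainable.
config = LoraConfig(
    r=8,
    target_modules=["seq.0", "seq.2"],
    modules_to_save=["seq.4"],
)
model = MLP()
peft_model = get_peft_model(model, config)
peft_model.print_trainable_parameters()

# The wrapped model is trained like any other PyTorch module.
x = torch.randn(4, 20)
logits = peft_model(x)
print(logits.shape)  # torch.Size([4, 2])
```

After wrapping, only the LoRA adapter weights (plus any modules listed in `modules_to_save`) are trainable, so the usual PyTorch training loop can be used unchanged.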