Mirror of https://github.com/huggingface/peft.git (synced 2025-10-20 23:43:47 +08:00)

Compare commits: v0.12.0...docs_relat (1 commit)

Commit b169484659
```diff
@@ -14,7 +14,7 @@ specific language governing permissions and limitations under the License.
 
 Some fine-tuning techniques, such as prompt tuning, are specific to language models. That means in 🤗 PEFT, it is
 assumed a 🤗 Transformers model is being used. However, other fine-tuning techniques - like
-[LoRA](./conceptual_guides/lora) - are not restricted to specific model types.
+[LoRA](../conceptual_guides/lora) - are not restricted to specific model types.
 
 In this guide, we will see how LoRA can be applied to a multilayer perceptron and a computer vision model from the [timm](https://huggingface.co/docs/timm/index) library.
 
```
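The first hunk touches the guide on applying LoRA to a multilayer perceptron. As a quick illustration of the mechanism that guide covers, here is a minimal sketch of the LoRA update itself (all dimensions and names are hypothetical; this is not the PEFT implementation): the frozen weight `W` is augmented with a trainable low-rank product `B @ A`, scaled by `alpha / r`.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, r, alpha = 8, 8, 2, 16  # hypothetical sizes and rank

W = rng.standard_normal((d_out, d_in))      # frozen base weight
A = rng.standard_normal((r, d_in)) * 0.01   # trainable down-projection
B = np.zeros((d_out, r))                    # trainable up-projection, zero-initialized

def forward(x):
    # base path plus scaled low-rank adapter path
    return x @ W.T + (x @ A.T @ B.T) * (alpha / r)

x = rng.standard_normal((1, d_in))
# with B zero-initialized, the adapter contributes nothing at first,
# so the adapted forward pass matches the base model exactly
assert np.allclose(forward(x), x @ W.T)
```

Zero-initializing `B` is what lets the adapter be added to a model without changing its initial behavior; only `A` and `B` receive gradients during fine-tuning.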
```diff
@@ -17,7 +17,7 @@ The development of this API has been motivated by the need for super users to no
 
 ## Supported tuner types
 
-Currently the supported adapter types are the 'injectable' adapters, meaning adapters where an inplace modification of the model is sufficient to correctly perform the fine tuning. As such, only [LoRA](./conceptual_guides/lora), AdaLoRA and [IA3](./conceptual_guides/ia3) are currently supported in this API.
+Currently the supported adapter types are the 'injectable' adapters, meaning adapters where an inplace modification of the model is sufficient to correctly perform the fine tuning. As such, only [LoRA](../conceptual_guides/lora), AdaLoRA and [IA3](../conceptual_guides/ia3) are currently supported in this API.
 
 ## `inject_adapter_in_model` method
 
```
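The second hunk is in the docs for `inject_adapter_in_model`, whose premise is that 'injectable' adapters only need an in-place modification of the model. A hypothetical sketch of that idea (illustrative only, not the PEFT implementation): walk the model's submodules and swap each targeted `nn.Linear` for a wrapper that adds a low-rank path, leaving the rest of the model untouched.

```python
import torch
import torch.nn as nn

class LoraLinear(nn.Module):
    """Wraps a frozen nn.Linear and adds a trainable low-rank path."""
    def __init__(self, base: nn.Linear, r: int = 4, alpha: int = 8):
        super().__init__()
        self.base = base
        self.base.weight.requires_grad_(False)          # freeze the base weight
        self.lora_A = nn.Linear(base.in_features, r, bias=False)
        self.lora_B = nn.Linear(r, base.out_features, bias=False)
        nn.init.zeros_(self.lora_B.weight)              # adapter is a no-op at init
        self.scaling = alpha / r

    def forward(self, x):
        return self.base(x) + self.lora_B(self.lora_A(x)) * self.scaling

def inject(model: nn.Module, target_names: set) -> nn.Module:
    # in-place modification: replace matching child modules, recurse elsewhere
    for name, module in model.named_children():
        if name in target_names and isinstance(module, nn.Linear):
            setattr(model, name, LoraLinear(module))
        else:
            inject(module, target_names)
    return model

class Mlp(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(8, 16)
        self.fc2 = nn.Linear(16, 4)

    def forward(self, x):
        return self.fc2(torch.relu(self.fc1(x)))

model = inject(Mlp(), {"fc1"})      # only fc1 is wrapped; fc2 stays a plain Linear
y = model(torch.randn(2, 8))
```

Because the modification is purely structural, the wrapped model keeps its original forward signature, which is what lets this style of API work on arbitrary `torch` models rather than only 🤗 Transformers ones.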