Mirror of https://github.com/huggingface/transformers.git, synced 2025-10-21 01:23:56 +08:00
Compare commits
63 Commits
reference_ ... v4.54.0
Commits (SHA1):

3fd456b200, 97f8c71f52, d6e9f71a6e, 5da6ad2731, c06d4cd6ce, 3e4d584a5b, 300d42a43e, abaa043d60,
6630c5b714, ed9a96bc6d, d913b39ef3, a26f0fabb8, 69cff312f5, a98bbc294c, 45c7bfb157, 2670da66ce,
4b125e2993, 4f17bf0572, ddb0546d14, a91653561e, 5d0ba3e479, 850bdeaa95, 17f02102c5, f90de364c2,
3b3f9c0c46, 2a82cf06ad, e3760501b0, 91f591f7bc, c46c17db57, 4600c27c4f, c392d47c9b, 565c035a2e,
5a81d7e0b3, ad6fd2da0e, 4741e1f1b7, 12b612830d, 947a37e8f5, 7b897fe583, 9b7244f189, ec8a09a5fe,
c5a80dd6c4, 7a4e2e7868, 596a75f6e9, a0e5a7d34b, ea56eb6bed, 0fe03afeb8, 82603b6cc2, 10c990f7e2,
e7e6efcbbd, a62f65a989, 623ab01039, eb1a007f7f, b357cbb19d, 019b74977d, 5dba4bc7b2, d9b35c635e,
6e9972962f, c6d0500d15, 4884b6bf41, 075a65657a, 2936902a76, cbcb8e6c1f, 601260fd96
@@ -44,7 +44,7 @@ limitations under the License.
         <a href="https://github.com/huggingface/transformers/blob/main/i18n/README_ja.md">日本語</a> |
         <a href="https://github.com/huggingface/transformers/blob/main/i18n/README_hd.md">हिन्दी</a> |
         <a href="https://github.com/huggingface/transformers/blob/main/i18n/README_ru.md">Русский</a> |
-        <a href="https://github.com/huggingface/transformers/blob/main/i18n/README_pt-br.md">Рortuguês</a> |
+        <a href="https://github.com/huggingface/transformers/blob/main/i18n/README_pt-br.md">Português</a> |
         <a href="https://github.com/huggingface/transformers/blob/main/i18n/README_te.md">తెలుగు</a> |
         <a href="https://github.com/huggingface/transformers/blob/main/i18n/README_fr.md">Français</a> |
         <a href="https://github.com/huggingface/transformers/blob/main/i18n/README_de.md">Deutsch</a> |
@@ -78,6 +78,9 @@ RUN git clone https://github.com/NetEase-FuXi/EETQ.git && cd EETQ/ && git submod
 # RUN python3 -m pip install --no-cache-dir flute-kernel==0.4.1
 # RUN python3 -m pip install --no-cache-dir git+https://github.com/Dao-AILab/fast-hadamard-transform.git
 
+# Add fp-quant for quantization testing
+RUN python3 -m pip install --no-cache-dir "fp-quant>=0.1.6"
+
 # Add compressed-tensors for quantization testing
 RUN python3 -m pip install --no-cache-dir compressed-tensors
 
@@ -280,7 +280,7 @@ resnet50d.model.load_state_dict(pretrained_model.state_dict())
 Now, to push the model to the Hub, make sure you are logged in. Either run this in your terminal:
 
 ```bash
-huggingface-cli login
+hf auth login
 ```
 
 Or from a notebook:
@@ -41,7 +41,7 @@ picture-in-picture" allowfullscreen></iframe>
 Before sharing a model on the Hub, you will need your Hugging Face account credentials. If you are using a terminal, run the following command in the virtual environment where 🤗 Transformers is installed. This stores your access token in your Hugging Face cache folder (`~/.cache/` by default):
 
 ```bash
-huggingface-cli login
+hf auth login
 ```
 
 If you are using a notebook like Jupyter or Colaboratory, make sure you have the [`huggingface_hub`](https://huggingface.co/docs/hub/adding-a-library) library installed. This library allows you to interact programmatically with the Hub.
@@ -324,7 +324,7 @@ python examples/pytorch/summarization/run_summarization.py
 All scripts can upload your final model to the [Model Hub](https://huggingface.co/models). Make sure you are logged in to Hugging Face before you begin:
 
 ```bash
-huggingface-cli login
+hf auth login
 ```
 
 Then add the `push_to_hub` argument to the script. This argument creates a repository with your Hugging Face username and the folder name specified in `output_dir`.
@@ -56,7 +56,7 @@ Files can also be edited easily in a repository, and you can
 Before you share a model to the Hub, you will need your Hugging Face credentials. If you have access to a terminal, run the following command in the virtual environment where 🤗 Transformers is installed. This stores your credentials in your Hugging Face cache folder (`~/.cache/` by default):
 
 ```bash
-huggingface-cli login
+hf auth login
 ```
 
 If you are using a notebook like Jupyter or Colaboratory, make sure you have the [`huggingface_hub`](https://huggingface.co/docs/hub/adding-a-library) library installed. This library allows you to interact with the Hub programmatically.
@@ -324,7 +324,7 @@ python examples/pytorch/summarization/run_summarization.py
 All scripts can upload your final model to the [Model Hub](https://huggingface.co/models). Make sure you are logged in to Hugging Face before you begin:
 
 ```bash
-huggingface-cli login
+hf auth login
 ```
 
 Then add the `push_to_hub` argument to the script. This argument creates a repository with your Hugging Face username and the folder name specified in `output_dir`.
@@ -179,6 +179,8 @@
       title: FBGEMM
     - local: quantization/finegrained_fp8
       title: Fine-grained FP8
+    - local: quantization/fp_quant
+      title: FP-Quant
     - local: gguf
       title: GGUF
     - local: quantization/gptq
@@ -451,6 +453,8 @@
       title: ErnieM
     - local: model_doc/esm
       title: ESM
+    - local: model_doc/exaone4
+      title: EXAONE-4.0
     - local: model_doc/falcon
       title: Falcon
     - local: model_doc/falcon3
@@ -695,6 +699,8 @@
       title: XLM-V
     - local: model_doc/xlnet
       title: XLNet
+    - local: model_doc/xlstm
+      title: xLSTM
     - local: model_doc/yoso
       title: YOSO
     - local: model_doc/zamba
@@ -723,6 +729,10 @@
       title: DAB-DETR
     - local: model_doc/deepseek_v2
       title: DeepSeek-V2
+    - local: model_doc/deepseek_vl
+      title: DeepseekVL
+    - local: model_doc/deepseek_vl_hybrid
+      title: DeepseekVLHybrid
     - local: model_doc/deformable_detr
       title: Deformable DETR
     - local: model_doc/deit
@@ -973,6 +983,8 @@
       title: Donut
     - local: model_doc/emu3
       title: Emu3
+    - local: model_doc/evolla
+      title: Evolla
     - local: model_doc/flava
       title: FLAVA
     - local: model_doc/gemma3
@@ -271,7 +271,7 @@ The model is ready to be pushed to the Hub now. Log in to your Hugging Face acco
 <hfoption id="huggingface-CLI">
 
 ```bash
-huggingface-cli login
+hf auth login
 ```
 
 </hfoption>
@@ -33,6 +33,7 @@ By default, `TrainingArguments.report_to` is set to `"all"`, so a [`Trainer`] wi
 it's the second one).
 - [`~integrations.TensorBoardCallback`] if tensorboard is accessible (either through PyTorch >= 1.4
   or tensorboardX).
+- [`~integrations.TrackioCallback`] if [trackio](https://github.com/gradio-app/trackio) is installed.
 - [`~integrations.WandbCallback`] if [wandb](https://www.wandb.com/) is installed.
 - [`~integrations.CometCallback`] if [comet_ml](https://www.comet.com/site/) is installed.
 - [`~integrations.MLflowCallback`] if [mlflow](https://www.mlflow.org/) is installed.
@@ -72,6 +73,9 @@ Here is the list of the available [`TrainerCallback`] in the library:
 
 [[autodoc]] integrations.TensorBoardCallback
 
+[[autodoc]] integrations.TrackioCallback
+    - setup
+
 [[autodoc]] integrations.WandbCallback
     - setup
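In practice, which of these callbacks gets attached is controlled by `TrainingArguments.report_to`. Below is a minimal sketch; the `"trackio"` key and the availability of the named backends are assumptions for illustration, not documented behavior:

```python
from transformers import TrainingArguments

# Limit reporting to specific integrations instead of the default "all".
# Each named backend (here trackio and wandb) must be installed for the
# corresponding callback to be created.
args = TrainingArguments(
    output_dir="out",
    report_to=["trackio", "wandb"],  # use "none" to disable reporting callbacks entirely
)
print(args.report_to)
```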
@@ -93,6 +93,10 @@ Learn how to quantize models in the [Quantization](../quantization) guide.
 
 [[autodoc]] QuarkConfig
 
+## FPQuantConfig
+
+[[autodoc]] FPQuantConfig
+
 ## AutoRoundConfig
 
 [[autodoc]] AutoRoundConfig
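For context, [`FPQuantConfig`] is passed through `quantization_config` like the other configs on this page. The sketch below is assumption-level only: it presumes the default constructor arguments work, uses a placeholder checkpoint, and requires the `fp-quant` package added in the Dockerfile above plus a CUDA device.

```python
import torch
from transformers import AutoModelForCausalLM, FPQuantConfig

# Placeholder checkpoint; FPQuantConfig() with defaults is an assumption.
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-3.1-8B-Instruct",
    torch_dtype=torch.bfloat16,
    device_map="auto",
    quantization_config=FPQuantConfig(),
)
```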
docs/source/en/model_doc/deepseek_vl.md (new file, 220 lines)
@@ -0,0 +1,220 @@
<!--Copyright 2025 Deepseek AI and The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->

<div style="float: right;">
    <div class="flex flex-wrap space-x-1">
        <img alt="PyTorch" src="https://img.shields.io/badge/PyTorch-DE3412?style=flat&logo=pytorch&logoColor=white">
        <img alt="FlashAttention" src="https://img.shields.io/badge/%E2%9A%A1%EF%B8%8E%20FlashAttention-eae0c8?style=flat">
        <img alt="SDPA" src="https://img.shields.io/badge/SDPA-DE3412?style=flat&logo=pytorch&logoColor=white">
    </div>
</div>

# DeepseekVL

[Deepseek-VL](https://arxiv.org/abs/2403.05525) was introduced by the DeepSeek AI team. It is a vision-language model (VLM) designed to process both text and images for generating contextually relevant responses. The model leverages [LLaMA](./llama) as its text encoder, while [SigLip](./siglip) is used for encoding images.

You can find all the original Deepseek-VL checkpoints under the [DeepSeek-community](https://huggingface.co/deepseek-community) organization.

> [!TIP]
> Click on the Deepseek-VL models in the right sidebar for more examples of how to apply Deepseek-VL to different vision and language tasks.

The example below demonstrates how to generate text based on an image with [`Pipeline`] or the [`AutoModel`] class.

<hfoptions id="usage">
<hfoption id="Pipeline">

```py
import torch
from transformers import pipeline

pipe = pipeline(
    task="image-text-to-text",
    model="deepseek-community/deepseek-vl-1.3b-chat",
    device=0,
    torch_dtype=torch.float16
)

messages = [
    {
        "role": "user",
        "content": [
            {
                "type": "image",
                "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg",
            },
            { "type": "text", "text": "Describe this image."},
        ]
    }
]

pipe(text=messages, max_new_tokens=20, return_full_text=False)
```
</hfoption>

<hfoption id="AutoModel">

```py
import torch
from transformers import DeepseekVLForConditionalGeneration, AutoProcessor

model = DeepseekVLForConditionalGeneration.from_pretrained(
    "deepseek-community/deepseek-vl-1.3b-chat",
    torch_dtype=torch.float16,
    device_map="auto",
    attn_implementation="sdpa"
)

processor = AutoProcessor.from_pretrained("deepseek-community/deepseek-vl-1.3b-chat")

messages = [
    {
        "role": "user",
        "content": [
            {
                "type": "image",
                "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg"
            },
            {
                "type": "text",
                "text": "Describe this image."
            }
        ]
    }
]

inputs = processor.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt"
).to(model.device, dtype=model.dtype)

generated_ids = model.generate(**inputs, max_new_tokens=128)
generated_ids_trimmed = [
    out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
    generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)

print(output_text)
```
</hfoption>
</hfoptions>

Quantization reduces the memory burden of large models by representing the weights in a lower precision. Refer to the [Quantization](../quantization/overview) overview for more available quantization backends.

The example below uses [torchao](../quantization/torchao) to only quantize the weights to int4.

```python
import torch
from transformers import TorchAoConfig, DeepseekVLForConditionalGeneration, AutoProcessor

quantization_config = TorchAoConfig(
    "int4_weight_only",
    group_size=128
)

model = DeepseekVLForConditionalGeneration.from_pretrained(
    "deepseek-community/deepseek-vl-1.3b-chat",
    torch_dtype=torch.bfloat16,
    device_map="auto",
    quantization_config=quantization_config
)
```
### Notes

- Do inference with multiple images in a single conversation.

  ```py
  import torch
  from transformers import DeepseekVLForConditionalGeneration, AutoProcessor

  model = DeepseekVLForConditionalGeneration.from_pretrained(
      "deepseek-community/deepseek-vl-1.3b-chat",
      torch_dtype=torch.float16,
      device_map="auto",
      attn_implementation="sdpa"
  )

  processor = AutoProcessor.from_pretrained("deepseek-community/deepseek-vl-1.3b-chat")

  messages = [
      [
          {
              "role": "user",
              "content": [
                  {"type": "text", "text": "What’s the difference between"},
                  {"type": "image", "url": "http://images.cocodataset.org/val2017/000000039769.jpg"},
                  {"type": "text", "text": " and "},
                  {"type": "image", "url": "https://www.ilankelman.org/stopsigns/australia.jpg"}
              ]
          }
      ],
      [
          {
              "role": "user",
              "content": [
                  {"type": "image", "url": "https://huggingface.co/microsoft/kosmos-2-patch14-224/resolve/main/snowman.jpg"},
                  {"type": "text", "text": "What do you see in this image?"}
              ]
          }
      ]
  ]

  inputs = processor.apply_chat_template(
      messages,
      add_generation_prompt=True,
      padding=True,
      truncation=True,
      tokenize=True,
      return_dict=True,
      return_tensors="pt"
  ).to(model.device, dtype=model.dtype)

  generated_ids = model.generate(**inputs, max_new_tokens=128)
  generated_ids_trimmed = [
      out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
  ]
  output_text = processor.batch_decode(
      generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
  )

  print(output_text)
  ```

## DeepseekVLConfig

[[autodoc]] DeepseekVLConfig

## DeepseekVLProcessor

[[autodoc]] DeepseekVLProcessor

## DeepseekVLImageProcessor

[[autodoc]] DeepseekVLImageProcessor

## DeepseekVLModel

[[autodoc]] DeepseekVLModel
    - forward

## DeepseekVLForConditionalGeneration

[[autodoc]] DeepseekVLForConditionalGeneration
    - forward
docs/source/en/model_doc/deepseek_vl_hybrid.md (new file, 219 lines)
@@ -0,0 +1,219 @@
<!--Copyright 2025 Deepseek AI and The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->

<div style="float: right;">
    <div class="flex flex-wrap space-x-1">
        <img alt="PyTorch" src="https://img.shields.io/badge/PyTorch-DE3412?style=flat&logo=pytorch&logoColor=white">
        <img alt="SDPA" src="https://img.shields.io/badge/SDPA-DE3412?style=flat&logo=pytorch&logoColor=white">
    </div>
</div>

# DeepseekVLHybrid

[Deepseek-VL-Hybrid](https://arxiv.org/abs/2403.05525) was introduced by the DeepSeek AI team. It is a vision-language model (VLM) designed to process both text and images for generating contextually relevant responses. The model leverages [LLaMA](./llama) as its text encoder, while [SigLip](./siglip) is used for encoding low-resolution images and [SAM (Segment Anything Model)](./sam) is incorporated to handle high-resolution image encoding, enhancing the model’s ability to process fine-grained visual details. Deepseek-VL-Hybrid is a variant of Deepseek-VL that uses [SAM (Segment Anything Model)](./sam) to handle high-resolution image encoding.

You can find all the original Deepseek-VL-Hybrid checkpoints under the [DeepSeek-community](https://huggingface.co/deepseek-community) organization.

> [!TIP]
> Click on the Deepseek-VL-Hybrid models in the right sidebar for more examples of how to apply Deepseek-VL-Hybrid to different vision and language tasks.

The example below demonstrates how to generate text based on an image with [`Pipeline`] or the [`AutoModel`] class.

<hfoptions id="usage">
<hfoption id="Pipeline">

```py
import torch
from transformers import pipeline

pipe = pipeline(
    task="image-text-to-text",
    model="deepseek-community/deepseek-vl-7b-chat",
    device=0,
    torch_dtype=torch.float16
)

messages = [
    {
        "role": "user",
        "content": [
            {
                "type": "image",
                "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg",
            },
            { "type": "text", "text": "Describe this image."},
        ]
    }
]

pipe(text=messages, max_new_tokens=20, return_full_text=False)
```
</hfoption>

<hfoption id="AutoModel">

```py
import torch
from transformers import DeepseekVLHybridForConditionalGeneration, AutoProcessor

model = DeepseekVLHybridForConditionalGeneration.from_pretrained(
    "deepseek-community/deepseek-vl-7b-chat",
    torch_dtype=torch.float16,
    device_map="auto",
    attn_implementation="sdpa"
)

processor = AutoProcessor.from_pretrained("deepseek-community/deepseek-vl-7b-chat")

messages = [
    {
        "role": "user",
        "content": [
            {
                "type": "image",
                "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg"
            },
            {
                "type": "text",
                "text": "Describe this image."
            }
        ]
    }
]

inputs = processor.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt"
).to(model.device, dtype=model.dtype)

generated_ids = model.generate(**inputs, max_new_tokens=128)
generated_ids_trimmed = [
    out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
    generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)

print(output_text)
```
</hfoption>
</hfoptions>

Quantization reduces the memory burden of large models by representing the weights in a lower precision. Refer to the [Quantization](../quantization/overview) overview for more available quantization backends.

The example below uses [torchao](../quantization/torchao) to only quantize the weights to int4.

```python
import torch
from transformers import TorchAoConfig, DeepseekVLHybridForConditionalGeneration, AutoProcessor

quantization_config = TorchAoConfig(
    "int4_weight_only",
    group_size=128
)

model = DeepseekVLHybridForConditionalGeneration.from_pretrained(
    "deepseek-community/deepseek-vl-7b-chat",
    torch_dtype=torch.bfloat16,
    device_map="auto",
    quantization_config=quantization_config
)
```
### Notes

- Do inference with multiple images in a single conversation.

  ```py
  import torch
  from transformers import DeepseekVLHybridForConditionalGeneration, AutoProcessor

  model = DeepseekVLHybridForConditionalGeneration.from_pretrained(
      "deepseek-community/deepseek-vl-7b-chat",
      torch_dtype=torch.float16,
      device_map="auto",
      attn_implementation="sdpa"
  )

  processor = AutoProcessor.from_pretrained("deepseek-community/deepseek-vl-7b-chat")

  messages = [
      [
          {
              "role": "user",
              "content": [
                  {"type": "text", "text": "What’s the difference between"},
                  {"type": "image", "url": "http://images.cocodataset.org/val2017/000000039769.jpg"},
                  {"type": "text", "text": " and "},
                  {"type": "image", "url": "https://www.ilankelman.org/stopsigns/australia.jpg"}
              ]
          }
      ],
      [
          {
              "role": "user",
              "content": [
                  {"type": "image", "url": "https://huggingface.co/microsoft/kosmos-2-patch14-224/resolve/main/snowman.jpg"},
                  {"type": "text", "text": "What do you see in this image?"}
              ]
          }
      ]
  ]

  inputs = processor.apply_chat_template(
      messages,
      add_generation_prompt=True,
      padding=True,
      truncation=True,
      tokenize=True,
      return_dict=True,
      return_tensors="pt"
  ).to(model.device, dtype=model.dtype)

  generated_ids = model.generate(**inputs, max_new_tokens=128)
  generated_ids_trimmed = [
      out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
  ]
  output_text = processor.batch_decode(
      generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
  )

  print(output_text)
  ```

## DeepseekVLHybridConfig

[[autodoc]] DeepseekVLHybridConfig

## DeepseekVLHybridProcessor

[[autodoc]] DeepseekVLHybridProcessor

## DeepseekVLHybridImageProcessor

[[autodoc]] DeepseekVLHybridImageProcessor

## DeepseekVLHybridModel

[[autodoc]] DeepseekVLHybridModel
    - forward

## DeepseekVLHybridForConditionalGeneration

[[autodoc]] DeepseekVLHybridForConditionalGeneration
    - forward
@@ -47,7 +47,8 @@ Here is a quick example of how to encode and decode an audio using this model:
 >>> inputs = processor(raw_audio=audio_sample, sampling_rate=processor.sampling_rate, return_tensors="pt")
 
 >>> encoder_outputs = model.encode(inputs["input_values"], inputs["padding_mask"])
->>> audio_values = model.decode(encoder_outputs.audio_codes, encoder_outputs.audio_scales, inputs["padding_mask"])[0]
+>>> # `encoder_outputs.audio_codes` contains discrete codes
+>>> audio_values = model.decode(**encoder_outputs, padding_mask=inputs["padding_mask"])[0]
 >>> # or the equivalent with a forward pass
 >>> audio_values = model(inputs["input_values"], inputs["padding_mask"]).audio_values
 ```
@@ -31,7 +31,7 @@ The Ernie 4.5 model was released in the [Ernie 4.5 Model Family](https://ernie.b
 This family of models contains multiple different architectures and model sizes. This model in specific targets the base text
 model without mixture of experts (moe) with 0.3B parameters in total. It uses the standard [Llama](./llama.md) at its core.
 
-Other models from the family can be found at [Ernie 4.5 MoE](./ernie4_5_moe.md).
+Other models from the family can be found at [Ernie 4.5 Moe](./ernie4_5_moe.md).
 
 <div class="flex justify-center">
     <img src="https://ernie.baidu.com/blog/posts/ernie4.5/overview.png"/>
@@ -23,11 +23,11 @@ rendered properly in your Markdown viewer.
     </div>
 </div>
 
-# Ernie 4.5 MoE
+# Ernie 4.5 Moe
 
 ## Overview
 
-The Ernie 4.5 MoE model was released in the [Ernie 4.5 Model Family](https://ernie.baidu.com/blog/posts/ernie4.5/) release by baidu.
+The Ernie 4.5 Moe model was released in the [Ernie 4.5 Model Family](https://ernie.baidu.com/blog/posts/ernie4.5/) release by baidu.
 This family of models contains multiple different architectures and model sizes. This model in specific targets the base text
 model with mixture of experts (moe) - one with 21B total, 3B active parameters and another one with 300B total, 47B active parameters.
 It uses the standard [Llama](./llama.md) at its core combined with a specialized MoE based on [Mixtral](./mixtral.md) with additional shared
@@ -167,17 +167,17 @@ This model was contributed by [Anton Vlasjuk](https://huggingface.co/AntonV).
 The original code can be found [here](https://github.com/PaddlePaddle/ERNIE).
 
 
-## Ernie4_5_MoEConfig
+## Ernie4_5_MoeConfig
 
-[[autodoc]] Ernie4_5_MoEConfig
+[[autodoc]] Ernie4_5_MoeConfig
 
-## Ernie4_5_MoEModel
+## Ernie4_5_MoeModel
 
-[[autodoc]] Ernie4_5_MoEModel
+[[autodoc]] Ernie4_5_MoeModel
     - forward
 
-## Ernie4_5_MoEForCausalLM
+## Ernie4_5_MoeForCausalLM
 
-[[autodoc]] Ernie4_5_MoEForCausalLM
+[[autodoc]] Ernie4_5_MoeForCausalLM
     - forward
     - generate
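To make the renamed classes concrete, here is a minimal generation sketch. The checkpoint id is an assumption based on Baidu's released ERNIE 4.5 MoE weights and may differ; `AutoModelForCausalLM` is used so the class name is resolved automatically.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical checkpoint id for the 21B-total / 3B-active MoE variant.
model_id = "baidu/ERNIE-4.5-21B-A3B-PT"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)  # resolves to Ernie4_5_MoeForCausalLM

inputs = tokenizer("The Ernie 4.5 family of models", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```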
docs/source/en/model_doc/evolla.md (new file, 95 lines)
@@ -0,0 +1,95 @@
<!--Copyright 2025 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->

# Evolla

## Overview

The Evolla model was proposed in [Decoding the Molecular Language of Proteins with Evolla](https://doi.org/10.1101/2025.01.05.630192) by [Zhou et al.](https://doi.org/10.1101/2025.01.05.630192).

Evolla is an advanced 80-billion-parameter protein-language generative model designed to decode the molecular language of proteins. It integrates information from protein sequences, structures, and user queries to generate precise and contextually nuanced insights into protein function. Trained on an unprecedented AI-generated dataset of 546 million protein question-answer pairs and 150 billion word tokens, Evolla significantly advances research in proteomics and functional genomics, providing expert-level insights and shedding light on the molecular logic encoded in proteins.

The abstract from the paper is the following:

*Proteins, nature’s intricate molecular machines, are the products of billions of years of evolution and play fundamental roles in sustaining life. Yet, deciphering their molecular language - that is, understanding how protein sequences and structures encode and determine biological functions - remains a corner-stone challenge in modern biology. Here, we introduce Evolla, an 80 billion frontier protein-language generative model designed to decode the molecular language of proteins. By integrating information from protein sequences, structures, and user queries, Evolla generates precise and contextually nuanced insights into protein function. A key innovation of Evolla lies in its training on an unprecedented AI-generated dataset: 546 million protein question-answer pairs and 150 billion word tokens, designed to reflect the immense complexity and functional diversity of proteins. Post-pretraining, Evolla integrates Direct Preference Optimization (DPO) to refine the model based on preference signals and Retrieval-Augmented Generation (RAG) for external knowledge incorporation, improving response quality and relevance. To evaluate its performance, we propose a novel framework, Instructional Response Space (IRS), demonstrating that Evolla delivers expert-level insights, advancing research in proteomics and functional genomics while shedding light on the molecular logic encoded in proteins. The online demo is available at http://www.chat-protein.com/.*

Examples:

```python
import torch
from transformers import EvollaForProteinText2Text, EvollaProcessor

processor = EvollaProcessor.from_pretrained("westlake-repl/Evolla-10B-DPO-hf")
model = EvollaForProteinText2Text.from_pretrained("westlake-repl/Evolla-10B-DPO-hf")

# aa_seq should have same length as foldseek
protein_inputs = [
    {
        "aa_seq": "MATGGRRG...",
        "foldseek": "###lqpfd...",  # hashtag means the low-confidence foldseek tokens
    },
    {
        "aa_seq": "MLPGLALL...",
        "foldseek": "dfwwkwad...",
    }
]
message_list = [
    [
        {
            "role": "system",
            "content": "You are an AI expert that can answer any questions about protein.",
        },
        {"role": "user", "content": "What is the function of this protein?"},
    ],
    [
        {
            "role": "system",
            "content": "You are an AI expert that can answer any questions about protein.",
        },
        {"role": "user", "content": "What is the function of this protein?"},
    ]
]
input_dict = processor(
    protein_inputs, message_list, return_tensors="pt", text_max_length=512, protein_max_length=1024
)
with torch.no_grad():
    generated_ids = model.generate(**input_dict)
generated_texts = processor.batch_decode(
    generated_ids, skip_special_tokens=True
)
```

Tips:

- This model was contributed by [Xibin Bayes Zhou](https://huggingface.co/XibinBayesZhou).
- The original code can be found [here](https://github.com/westlake-repl/Evolla).


## EvollaConfig

[[autodoc]] EvollaConfig

## EvollaModel

[[autodoc]] EvollaModel
    - forward

## EvollaForProteinText2Text

[[autodoc]] EvollaForProteinText2Text
    - forward

## EvollaProcessor

[[autodoc]] EvollaProcessor
    - __call__
docs/source/en/model_doc/exaone4.md (new file, 208 lines)
@@ -0,0 +1,208 @@
<!--Copyright 2025 The LG AI Research and The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->

# EXAONE 4

## Overview

**[EXAONE 4.0](https://github.com/LG-AI-EXAONE/EXAONE-4.0)** is a language model that integrates a **Non-reasoning mode** and a **Reasoning mode** to achieve both the excellent usability of [EXAONE 3.5](https://github.com/LG-AI-EXAONE/EXAONE-3.5) and the advanced reasoning abilities of [EXAONE Deep](https://github.com/LG-AI-EXAONE/EXAONE-Deep). To pave the way for the agentic AI era, EXAONE 4.0 incorporates essential features such as agentic tool use, and its multilingual capabilities are extended
to support Spanish in addition to English and Korean.

The EXAONE 4.0 model series consists of two sizes: a mid-size **32B** model optimized for high performance, and a small-size **1.2B** model designed for on-device applications.

In the EXAONE 4.0 architecture, we apply new architectural changes compared to previous EXAONE models as below (a short sketch of the resulting layer pattern follows this list):

1. **Hybrid Attention**: For the 32B model, we adopt a hybrid attention scheme, which combines *Local attention (sliding window attention)* with *Global attention (full attention)* in a 3:1 ratio. We do not use RoPE (Rotary Positional Embedding) for global attention for better global context understanding.
2. **QK-Reorder-Norm**: We reorder the LayerNorm position from the traditional Pre-LN scheme by applying LayerNorm directly to the attention and MLP outputs, and we add RMS normalization right after the Q and K projection. It helps yield better performance on downstream tasks despite consuming more computation.
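The following is an illustrative sketch only (not EXAONE configuration code) of what a 3:1 Local:Global attention schedule looks like over the 64 layers of the 32B model:

```python
# Illustrative only: derive a per-layer attention schedule for a 3:1
# Local:Global hybrid, as described above for the 64-layer 32B model.
num_layers = 64

layer_types = [
    "full_attention" if (i + 1) % 4 == 0 else "sliding_attention"
    for i in range(num_layers)
]

# Three sliding-window (local) layers are followed by one full-attention
# (global) layer; per the description, the global layers do not use RoPE.
print(layer_types[:8])
```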
For more details, please refer to our [technical report](https://arxiv.org/abs/2507.11407), [HuggingFace paper](https://huggingface.co/papers/2507.11407), [blog](https://www.lgresearch.ai/blog/view?seq=576), and [GitHub](https://github.com/LG-AI-EXAONE/EXAONE-4.0).

All model weights including quantized versions are available at [Huggingface Collections](https://huggingface.co/collections/LGAI-EXAONE/exaone-40-686b2e0069800c835ed48375).


## Model Details

### Model Specifications

| Model Configuration | 32B | 1.2B |
|:-------------------|:-----:|:------:|
| d_model | 5,120 | 2,048 |
| Number of layers | 64 | 30 |
| Normalization | QK-Reorder-LN | QK-Reorder-LN |
| Non-linearity | SwiGLU | SwiGLU |
| Feedforward dimension | 27,392 | 4,096 |
| Attention type | Hybrid (3:1 Local-Global) | Global |
| Head type | GQA | GQA |
| Number of heads | 40 | 32 |
| Number of KV heads | 8 | 8 |
| Head size | 128 | 64 |
| Max sequence length | 131,072 | 65,536 |
| RoPE theta | 1,000,000 | 1,000,000 |
| Tokenizer | BBPE | BBPE |
| Vocab size | 102,400 | 102,400 |
| Tied word embedding | False | True |
| Knowledge cut-off | Nov. 2024 | Nov. 2024 |


## Usage tips

### Non-reasoning mode

For general use, you can use the EXAONE 4.0 models with the following example:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "LGAI-EXAONE/EXAONE-4.0-32B"

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="bfloat16",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# choose your prompt
prompt = "Explain how wonderful you are"
prompt = "Explica lo increíble que eres"
prompt = "너가 얼마나 대단한지 설명해 봐"

messages = [
    {"role": "user", "content": prompt}
]
input_ids = tokenizer.apply_chat_template(
    messages,
    tokenize=True,
    add_generation_prompt=True,
    return_tensors="pt"
)

output = model.generate(
    input_ids.to(model.device),
    max_new_tokens=128,
    do_sample=False,
)
print(tokenizer.decode(output[0]))
```

### Reasoning mode

The EXAONE 4.0 models have reasoning capabilities for handling complex problems. You can activate reasoning mode by using the `enable_thinking=True` argument with the tokenizer, which opens a reasoning block that starts with `<think>` tag without closing it.

```python
messages = [
    {"role": "user", "content": "Which one is bigger, 3.12 vs 3.9?"}
]
input_ids = tokenizer.apply_chat_template(
    messages,
    tokenize=True,
    add_generation_prompt=True,
    return_tensors="pt",
    enable_thinking=True,
)

output = model.generate(
    input_ids.to(model.device),
    max_new_tokens=128,
    do_sample=True,
    temperature=0.6,
    top_p=0.95
)
print(tokenizer.decode(output[0]))
```

> [!IMPORTANT]
> The model generation with reasoning mode can be affected sensitively by sampling parameters, so please refer to the [Usage Guideline](https://github.com/LG-AI-EXAONE/EXAONE-4.0#usage-guideline) on official GitHub page for better quality.

### Agentic tool use

The EXAONE 4.0 models can be used as agents with their tool calling capabilities. You can provide tool schemas to the model for effective tool calling.

```python
import random

def roll_dice(max_num: int):
    return random.randint(1, max_num)

tools = [
    {
        "type": "function",
        "function": {
            "name": "roll_dice",
            "description": "Roll a dice with the number 1 to N. User can select the number N.",
            "parameters": {
                "type": "object",
                "required": ["max_num"],
                "properties": {
                    "max_num": {
                        "type": "int",
                        "description": "Max number of the dice"
                    }
                }
            }
        }
    }
]

messages = [
    {"role": "user", "content": "Roll D6 dice twice!"}
]
input_ids = tokenizer.apply_chat_template(
    messages,
    tokenize=True,
    add_generation_prompt=True,
    return_tensors="pt",
    tools=tools,
)

output = model.generate(
    input_ids.to(model.device),
    max_new_tokens=1024,
    do_sample=True,
    temperature=0.6,
    top_p=0.95,
)
print(tokenizer.decode(output[0]))
```

## Exaone4Config

[[autodoc]] Exaone4Config

## Exaone4Model

[[autodoc]] Exaone4Model
    - forward

## Exaone4ForCausalLM

[[autodoc]] Exaone4ForCausalLM
    - forward

## Exaone4ForSequenceClassification

[[autodoc]] Exaone4ForSequenceClassification
    - forward

## Exaone4ForTokenClassification

[[autodoc]] Exaone4ForTokenClassification
    - forward

## Exaone4ForQuestionAnswering

[[autodoc]] Exaone4ForQuestionAnswering
    - forward
@@ -48,6 +48,32 @@ for i in output:

This HF implementation is contributed by [Sukriti Sharma](https://huggingface.co/SukritiSharma) and [Alexander Brooks](https://huggingface.co/abrooks9944).

## Notes

- `GraniteMoeHybridForCausalLM` supports padding-free training which concatenates distinct training examples while still processing inputs as separate batches. It can significantly accelerate inference by [~2x](https://github.com/huggingface/transformers/pull/35861#issue-2807873129) (depending on model and data distribution) and reduce memory-usage if there are examples of varying lengths by avoiding unnecessary compute and memory overhead from padding tokens.

  Padding-free training requires the `flash-attn`, `mamba-ssm`, and `causal-conv1d` packages and the following arguments must be passed to the model in addition to `input_ids` and `labels`.

  - `position_ids: torch.LongTensor`: the position index of each token in each sequence.
  - `seq_idx: torch.IntTensor`: the index of each sequence in the batch.
  - Each of the [`FlashAttentionKwargs`]
    - `cu_seq_lens_q: torch.LongTensor`: the cumulative sequence lengths of all queries.
    - `cu_seq_lens_k: torch.LongTensor`: the cumulative sequence lengths of all keys.
    - `max_length_q: int`: the longest query length in the batch.
    - `max_length_k: int`: the longest key length in the batch.

  The `attention_mask` inputs should not be provided. The [`DataCollatorWithFlattening`] programmatically generates the set of additional arguments above using `return_seq_idx=True` and `return_flash_attn_kwargs=True`. See the [Improving Hugging Face Training Efficiency Through Packing with Flash Attention](https://huggingface.co/blog/packing-with-FA2) blog post for additional information.

  ```python
  from transformers import DataCollatorWithFlattening

  # Example of using padding-free training
  data_collator = DataCollatorWithFlattening(
      tokenizer=tokenizer,
      return_seq_idx=True,
      return_flash_attn_kwargs=True
  )
  ```

## GraniteMoeHybridConfig

@@ -61,4 +87,4 @@ This HF implementation is contributed by [Sukriti Sharma](https://huggingface.co

## GraniteMoeHybridForCausalLM

[[autodoc]] GraniteMoeHybridForCausalLM
    - forward
@@ -77,4 +77,12 @@ The resource should ideally demonstrate something new instead of duplicating an
     - encode_inputs
     - post_process_semantic_segmentation
     - post_process_instance_segmentation
     - post_process_panoptic_segmentation
+
+## Mask2FormerImageProcessorFast
+
+[[autodoc]] Mask2FormerImageProcessorFast
+    - preprocess
+    - post_process_semantic_segmentation
+    - post_process_instance_segmentation
+    - post_process_panoptic_segmentation
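For reference, the fast image processor is used the same way as the existing one. A minimal sketch, assuming the `facebook/mask2former-swin-tiny-ade-semantic` checkpoint is available and that `use_fast=True` returns the fast processor:

```python
import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor, Mask2FormerForUniversalSegmentation

# use_fast=True is assumed to resolve to Mask2FormerImageProcessorFast when available
processor = AutoImageProcessor.from_pretrained(
    "facebook/mask2former-swin-tiny-ade-semantic", use_fast=True
)
model = Mask2FormerForUniversalSegmentation.from_pretrained(
    "facebook/mask2former-swin-tiny-ade-semantic"
)

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# resize the predicted segmentation map back to the original image size
segmentation = processor.post_process_semantic_segmentation(
    outputs, target_sizes=[image.size[::-1]]
)[0]
print(segmentation.shape)
```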
@@ -76,6 +76,14 @@ This model was contributed by [francesco](https://huggingface.co/francesco). The
     - post_process_instance_segmentation
     - post_process_panoptic_segmentation
 
+## MaskFormerImageProcessorFast
+
+[[autodoc]] MaskFormerImageProcessorFast
+    - preprocess
+    - post_process_semantic_segmentation
+    - post_process_instance_segmentation
+    - post_process_panoptic_segmentation
+
 ## MaskFormerFeatureExtractor
 
 [[autodoc]] MaskFormerFeatureExtractor
@@ -13,116 +13,125 @@ specific language governing permissions and limitations under the License.
rendered properly in your Markdown viewer.

-->
<div style="float: right;">
    <div class="flex flex-wrap space-x-1">
        <img alt="PyTorch" src="https://img.shields.io/badge/PyTorch-DE3412?style=flat&logo=pytorch&logoColor=white">
    </div>
</div>

# Mistral3
# Mistral 3

## Overview
[Mistral 3](https://mistral.ai/news/mistral-small-3) is a latency optimized model with a lot fewer layers to reduce the time per forward pass. This model adds vision understanding and supports long context lengths of up to 128K tokens without compromising performance.

Building upon Mistral Small 3 (2501), Mistral Small 3.1 (2503) adds state-of-the-art vision understanding and enhances long context capabilities up to 128k tokens without compromising text performance. With 24 billion parameters, this model achieves top-tier capabilities in both text and vision tasks.
You can find the original Mistral 3 checkpoints under the [Mistral AI](https://huggingface.co/mistralai/models?search=mistral-small-3) organization.

It is ideal for:
- Fast-response conversational agents.
- Low-latency function calling.
- Subject matter experts via fine-tuning.
- Local inference for hobbyists and organizations handling sensitive data.
- Programming and math reasoning.
- Long document understanding.
- Visual understanding.

This model was contributed by [cyrilvallez](https://huggingface.co/cyrilvallez) and [yonigozlan](https://huggingface.co/yonigozlan).
> [!TIP]
> This model was contributed by [cyrilvallez](https://huggingface.co/cyrilvallez) and [yonigozlan](https://huggingface.co/yonigozlan).
> Click on the Mistral3 models in the right sidebar for more examples of how to apply Mistral3 to different tasks.

The original code can be found [here](https://github.com/vllm-project/vllm/blob/main/vllm/model_executor/models/pixtral.py) and [here](https://github.com/mistralai/mistral-common).
The example below demonstrates how to generate text for an image with [`Pipeline`] and the [`AutoModel`] class.

## Usage example
<hfoptions id="usage">
<hfoption id="Pipeline">

### Inference with Pipeline
```py
import torch
from transformers import pipeline

Here is how you can use the `image-text-to-text` pipeline to perform inference with the `Mistral3` models in just a few lines of code:
```python
>>> from transformers import pipeline
messages = [
{"role": "user",
"content":[
{"type": "image",
"image": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/bee.jpg",},
{"type": "text", "text": "Describe this image."}
,]
,}
,]

>>> messages = [
...     {
...         "role": "user",
...         "content": [
...             {
...                 "type": "image",
...                 "image": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/bee.jpg",
...             },
...             {"type": "text", "text": "Describe this image."},
...         ],
...     },
... ]
pipeline = pipeline(
    task="image-text-to-text",
    model="mistralai/Mistral-Small-3.1-24B-Instruct-2503",
    torch_dtype=torch.bfloat16,
    device=0
)
outputs = pipeline(text=messages, max_new_tokens=50, return_full_text=False)

>>> pipe = pipeline("image-text-to-text", model="mistralai/Mistral-Small-3.1-24B-Instruct-2503", torch_dtype=torch.bfloat16)
>>> outputs = pipe(text=messages, max_new_tokens=50, return_full_text=False)
>>> outputs[0]["generated_text"]
outputs[0]["generated_text"]
'The image depicts a vibrant and lush garden scene featuring a variety of wildflowers and plants. The central focus is on a large, pinkish-purple flower, likely a Greater Celandine (Chelidonium majus), with a'
```
### Inference on a single image
</hfoption>
<hfoption id="AutoModel">

This example demonstrates how to perform inference on a single image with the Mistral3 models using chat templates.
```py
import torch
from transformers import AutoProcessor, AutoModelForImageTextToText

```python
>>> from transformers import AutoProcessor, AutoModelForImageTextToText
>>> import torch
torch_device = "cuda"
model_checkpoint = "mistralai/Mistral-Small-3.1-24B-Instruct-2503"
processor = AutoProcessor.from_pretrained(model_checkpoint)
model = AutoModelForImageTextToText.from_pretrained(
    model_checkpoint,
    device_map=torch_device,
    torch_dtype=torch.bfloat16
)

>>> torch_device = "cuda"
>>> model_checkpoint = "mistralai/Mistral-Small-3.1-24B-Instruct-2503"
>>> processor = AutoProcessor.from_pretrained(model_checkpoint)
>>> model = AutoModelForImageTextToText.from_pretrained(model_checkpoint, device_map=torch_device, torch_dtype=torch.bfloat16)
messages = [
{"role": "user",
"content":[
{"type": "image",
"image": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/bee.jpg",},
{"type": "text", "text": "Describe this image."}
,]
,}
,]

>>> messages = [
...     {
...         "role": "user",
...         "content": [
...             {"type": "image", "url": "http://images.cocodataset.org/val2017/000000039769.jpg"},
...             {"type": "text", "text": "Describe this image"},
...         ],
...     }
... ]
inputs = processor.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True, return_dict=True,
    return_tensors="pt").to(model.device, dtype=torch.bfloat16)

>>> inputs = processor.apply_chat_template(messages, add_generation_prompt=True, tokenize=True, return_dict=True, return_tensors="pt").to(model.device, dtype=torch.bfloat16)
generate_ids = model.generate(**inputs, max_new_tokens=20)
decoded_output = processor.decode(generate_ids[0, inputs["input_ids"].shape[1] :], skip_special_tokens=True)

>>> generate_ids = model.generate(**inputs, max_new_tokens=20)
>>> decoded_output = processor.decode(generate_ids[0, inputs["input_ids"].shape[1] :], skip_special_tokens=True)

>>> decoded_output
"The image depicts two cats lying on a pink blanket. The larger cat, which appears to be an"...
decoded_output
'The image depicts a vibrant and lush garden scene featuring a variety of wildflowers and plants. The central focus is on a large, pinkish-purple flower, likely a Greater Celandine (Chelidonium majus), with a'
```
</hfoption>
</hfoptions>

### Text-only generation
This example shows how to generate text using the Mistral3 model without providing any image input.
## Notes

- Mistral 3 supports text-only generation.
```py
from transformers import AutoProcessor, AutoModelForImageTextToText
import torch

````python
>>> from transformers import AutoProcessor, AutoModelForImageTextToText
>>> import torch
torch_device = "cuda"
model_checkpoint = ".mistralai/Mistral-Small-3.1-24B-Instruct-2503"
processor = AutoProcessor.from_pretrained(model_checkpoint)
model = AutoModelForImageTextToText.from_pretrained(model_checkpoint, device_map=torch_device, torch_dtype=torch.bfloat16)

>>> torch_device = "cuda"
>>> model_checkpoint = ".mistralai/Mistral-Small-3.1-24B-Instruct-2503"
>>> processor = AutoProcessor.from_pretrained(model_checkpoint)
>>> model = AutoModelForImageTextToText.from_pretrained(model_checkpoint, device_map=torch_device, torch_dtype=torch.bfloat16)
SYSTEM_PROMPT = "You are a conversational agent that always answers straight to the point, always end your accurate response with an ASCII drawing of a cat."
user_prompt = "Give me 5 non-formal ways to say 'See you later' in French."

>>> SYSTEM_PROMPT = "You are a conversational agent that always answers straight to the point, always end your accurate response with an ASCII drawing of a cat."
>>> user_prompt = "Give me 5 non-formal ways to say 'See you later' in French."
messages = [
    {"role": "system", "content": SYSTEM_PROMPT},
    {"role": "user", "content": user_prompt},
]

>>> messages = [
...     {"role": "system", "content": SYSTEM_PROMPT},
...     {"role": "user", "content": user_prompt},
... ]
text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = processor(text=text, return_tensors="pt").to(0, dtype=torch.float16)
generate_ids = model.generate(**inputs, max_new_tokens=50, do_sample=False)
decoded_output = processor.batch_decode(generate_ids[:, inputs["input_ids"].shape[1] :], skip_special_tokens=True)[0]

>>> text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
>>> inputs = processor(text=text, return_tensors="pt").to(0, dtype=torch.float16)
>>> generate_ids = model.generate(**inputs, max_new_tokens=50, do_sample=False)
>>> decoded_output = processor.batch_decode(generate_ids[:, inputs["input_ids"].shape[1] :], skip_special_tokens=True)[0]

>>> print(decoded_output)
print(decoded_output)
"1. À plus tard!
2. Salut, à plus!
3. À toute!
4. À la prochaine!
5. Je me casse, à plus!
2. Salut, à plus!
3. À toute!
4. À la prochaine!
5. Je me casse, à plus!

```
/\_/\
@ -131,98 +140,93 @@ This example shows how to generate text using the Mistral3 model without providi
|
||||
```"
|
||||
````
|
||||
|
||||
- Mistral 3 accepts batched image and text inputs.

```py
from transformers import AutoProcessor, AutoModelForImageTextToText
import torch

torch_device = "cuda"
model_checkpoint = "mistralai/Mistral-Small-3.1-24B-Instruct-2503"
processor = AutoProcessor.from_pretrained(model_checkpoint)
model = AutoModelForImageTextToText.from_pretrained(model_checkpoint, device_map=torch_device, torch_dtype=torch.bfloat16)

messages = [
    [
        {
            "role": "user",
            "content": [
                {"type": "image", "url": "https://llava-vl.github.io/static/images/view.jpg"},
                {"type": "text", "text": "Write a haiku for this image"},
            ],
        },
    ],
    [
        {
            "role": "user",
            "content": [
                {"type": "image", "url": "https://www.ilankelman.org/stopsigns/australia.jpg"},
                {"type": "text", "text": "Describe this image"},
            ],
        },
    ],
]

inputs = processor.apply_chat_template(messages, padding=True, add_generation_prompt=True, tokenize=True, return_dict=True, return_tensors="pt").to(model.device, dtype=torch.bfloat16)

output = model.generate(**inputs, max_new_tokens=25)

decoded_outputs = processor.batch_decode(output, skip_special_tokens=True)
decoded_outputs
["Write a haiku for this imageCalm waters reflect\nWhispers of the forest's breath\nPeace on wooden path"
, "Describe this imageThe image depicts a vibrant street scene in what appears to be a Chinatown district. The focal point is a traditional Chinese"]
```
- Mistral 3 also supports batched image and text inputs with a different number of images for each text. The example below quantizes the model to 4-bit with bitsandbytes.

```py
from transformers import AutoProcessor, AutoModelForImageTextToText, BitsAndBytesConfig
import torch

torch_device = "cuda"
model_checkpoint = "mistralai/Mistral-Small-3.1-24B-Instruct-2503"
processor = AutoProcessor.from_pretrained(model_checkpoint)
quantization_config = BitsAndBytesConfig(load_in_4bit=True)
model = AutoModelForImageTextToText.from_pretrained(
    model_checkpoint, quantization_config=quantization_config
)

messages = [
    [
        {
            "role": "user",
            "content": [
                {"type": "image", "url": "https://llava-vl.github.io/static/images/view.jpg"},
                {"type": "text", "text": "Write a haiku for this image"},
            ],
        },
    ],
    [
        {
            "role": "user",
            "content": [
                {"type": "image", "url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"},
                {"type": "image", "url": "https://thumbs.dreamstime.com/b/golden-gate-bridge-san-francisco-purple-flowers-california-echium-candicans-36805947.jpg"},
                {"type": "text", "text": "These images depict two different landmarks. Can you identify them?"},
            ],
        },
    ],
]

inputs = processor.apply_chat_template(messages, padding=True, add_generation_prompt=True, tokenize=True, return_dict=True, return_tensors="pt").to(model.device, dtype=torch.bfloat16)

output = model.generate(**inputs, max_new_tokens=25)

decoded_outputs = processor.batch_decode(output, skip_special_tokens=True)
decoded_outputs
["Write a haiku for this imageSure, here is a haiku inspired by the image:\n\nCalm lake's wooden path\nSilent forest stands guard\n", "These images depict two different landmarks. Can you identify them? Certainly! The images depict two iconic landmarks:\n\n1. The first image shows the Statue of Liberty in New York City."]
```

## Mistral3Config

[[autodoc]] Mistral3Config
@ -38,7 +38,7 @@ This model was contributed by [Jitesh Jain](https://huggingface.co/praeclarumjj3

## Usage tips

- OneFormer requires two inputs during inference: *image* and *task token*.
- During training, OneFormer only uses panoptic annotations.
- If you want to train the model in a distributed environment across multiple nodes, then one should update the
  `get_num_masks` function inside the `OneFormerLoss` class of `modeling_oneformer.py`. When training on multiple nodes, this should be
@ -69,7 +69,14 @@ The resource should ideally demonstrate something new instead of duplicating an

[[autodoc]] OneFormerImageProcessor
- preprocess
- encode_inputs
- post_process_semantic_segmentation
- post_process_instance_segmentation
- post_process_panoptic_segmentation

## OneFormerImageProcessorFast

[[autodoc]] OneFormerImageProcessorFast
- preprocess
- post_process_semantic_segmentation
- post_process_instance_segmentation
- post_process_panoptic_segmentation
@ -87,4 +94,3 @@ The resource should ideally demonstrate something new instead of duplicating an

[[autodoc]] OneFormerForUniversalSegmentation
- forward
@ -1,194 +1,101 @@
<!--Copyright 2022 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->
<div style="float: right;">
<div class="flex flex-wrap space-x-1">
<img alt="PyTorch" src="https://img.shields.io/badge/PyTorch-DE3412?style=flat&logo=pytorch&logoColor=white">
<img alt="TensorFlow" src="https://img.shields.io/badge/TensorFlow-FF6F00?style=flat&logo=tensorflow&logoColor=white">
<img alt="Flax" src="https://img.shields.io/badge/Flax-29a79b.svg?style=flat&logo=data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAC0AAAAtCAMAAAANxBKoAAAC7lBMVEAAADg5vYHPVgAoJH+/v76+v39/f9JbLP///9+AIgAnY3///+mcqzt8fXy9fgkXa3Ax9709fr+///9/f8qXq49qp5AaLGMwrv8/P0eW60VWawxYq8yqJzG2dytt9Wyu9elzci519Lf3O3S2efY3OrY0+Xp7PT///////+dqNCexMc6Z7AGpJeGvbenstPZ5ejQ1OfJzOLa7ejh4+/r8fT29vpccbklWK8PVa0AS6ghW63O498vYa+lsdKz1NDRt9Kw1c672tbD3tnAxt7R6OHp5vDe7OrDyuDn6vLl6/EAQKak0MgATakkppo3ZK/Bz9y8w9yzu9jey97axdvHzeG21NHH4trTwthKZrVGZLSUSpuPQJiGAI+GAI8SWKydycLL4d7f2OTi1+S9xNzL0ePT6OLGzeEAo5U0qJw/aLEAo5JFa7JBabEAp5Y4qZ2QxLyKmsm3kL2xoMOehrRNb7RIbbOZgrGre68AUqwAqZqNN5aKJ5N9lMq+qsd8kMa4pcWzh7muhLMEV69juq2kbKqgUaOTR5uMMZWLLZSGAI5VAIdEAH+ovNDHuNCnxcy3qcaYx8K8msGplrx+wLahjbYdXrV6vbMvYK9DrZ8QrZ8tqJuFmsnSos6sw8ecy8RffsNVeMCvmb43aLltv7Q4Y7EZWK4QWa1gt6meZKUdr6GOAZVeA4xPAISyveLUwtivxtKTpNJ2jcqfvcltiMiwwcfAoMVxhL+Kx7xjdrqTe60tsaNQsKKaRKACrJ6UTZwkqpqTL5pkHY4AloSgsd2ptNXPvNOOncuxxsqFl8lmg8apt8FJcr9EbryGxLqlkrkrY7dRa7ZGZLQ5t6iXUZ6PPpgVpZeJCJFKAIGareTa0+KJod3H0deY2M+esM25usmYu8d2zsJOdcBVvrCLbqcAOaaHaKQAMaScSKSBqKCXMJ2RHpiLF5NmJZAdAHN2kta11dKu1M+DkcZLdb+Mcql3TppyRJdzQ5ZtNZNlIY+DF4+voCOQAAAAZ3RSTlMABAT+MEEJ/RH+/TP+Zlv+pUo6Ifz8+fco/fz6+evr39S9nJmOilQaF/7+/f38+smmoYp6b1T+/v7++vj189zU0tDJxsGzsrKSfv34+Pf27dDOysG9t6+n/vv6+vr59uzr1tG+tZ6Qg9Ym3QAABR5JREFUSMeNlVVUG1EQhpcuxEspXqS0SKEtxQp1d3d332STTRpIQhIISQgJhODu7lAoDoUCpe7u7u7+1puGpqnCPOyZvffbOXPm/PsP9JfQgyCC+tmTABTOcbxDz/heENS7/1F+9nhvkHePG0wNDLbGWwdXL+rbLWvpmZHXD82gMfBjTh+aSe6Gnn7lwQIOTR0c8wfX3PWgv7avbdKwf/ZoBp1Gp/PvuvXW3vw5ib7emnTW4OR+3D4jB9vjNJ/7gNvfWWeH/TO/JyYrsiKCRjVEZA3UB+96kON+DxOQ/NLE8PE5iUYgIXjFnCOlxEQMaSGVxjg4gxOnEycGz8bptuNjVx08LscIgrzH3umcn+KKtiBIyvzOO2O99aAdR8cF19oZalnCtbREUw79tCd5sow1g1UKM6kXqUx4T8wsi3sTjJ3yzDmmhenLXLpo8u45eG5y4Vvbk6kkC4LLtJMowkSQxmk4ggVJEG+7c6QpHT8vvW9X7/o7+3ELmiJi2mEzZJiz8cT6TBlanBk70cB5GGIGC1gRDdZ00yADLW1FL6gqhtvNXNG5S9gdSrk4M1qu7JAsmYshzDS4peoMrU/gT7qQdqYGZaYhxZmVbGJAm/CS/HloWyhRUlknQ9KYcExTwS80d3VNOxUZJpITYyspl0LbhArhpZCD9cRWEQuhYkNGMHToQ/2Cs6swJlb39CsllxdXX6IUKh/H5jbnSsPKjgmoaFQ1f8wRLR0UnGE/RcDEjj2jXG1WVTwUs8+zxfcrVO+vSsuOpVKxCfYZiQ0/aPKuxQbQ8lIz+DClxC8u+snlcJ7Yr1z1JPqUH0V+GDXbOwAib931Y4Imaq0NTIXPXY+N5L18GJ37SVWu+hwXff8l72Ds9XuwYIBaXPq6Shm4l+Vl/5QiOlV+uTk6YR9PxKsI9xNJny31ygK1e+nIRC1N97EGkFPI+jCpiHe5PCEy7oWqWSwRrpOvhFzcbTWMbm3ZJAOn1rUKpYIt/lDhW/5RHHteeWFN60qo98YJuoq1nK3uW5AabyspC1BcIEpOhft+SZAShYoLSvnmSfnYADUERP5jJn2h5XtsgCRuhYQqAvwTwn33+YWEKUI72HX5AtfSAZDe8F2DtPPm77afhl0EkthzuCQU0BWApgQIH9+KB0JhopMM7bJrdTRoleM2JAVNMyPF+wdoaz+XJpGoVAQ7WXUkcV7gT3oUZyi/ISIJAVKhgNp+4b4veCFhYVJw4locdSjZCp9cPUhLF9EZ3KKzURepMEtCDPP3VcWFx4UIiZIklIpFNfHpdEafIF2aRmOcrUmjohbT2WUllbmRvgfbythbQO3222fpDJoufaQPncYYuqoGtUEsCJZL6/3PR5b4syeSjZMQG/T2maGANlXT2v8S4AULWaUkCxfLyW8iW4kdka+nEMjxpL2NCwsYNBp+Q61PF43zyDg9Bm9+3NNySn78jMZUUkumqE4Gp7JmFOdP1vc8PpRrzj9+wPinCy8K1PiJ4aYbnTYpCCbDkBSbzhu2QJ1Gd82t8jI8TH51+OzvXoWbnXUOBkNW+0mWFwGcGOUVpU81/n3TOHb5oMt2FgYGjzau0Nif0Ss7Q3XB33hjjQHjHA5E5aOyIQc8CBrLdQSs3j92VG23nNEjbkbdbBr9zm04ruvw37vh0QKOdeGIkckc80fX3KH/h7PT4BOjgCty8VZ5ux1MoO5Cf5naca2LAsEgehI+drX8o/0Nu+W0m6K/I9gGPd/dfx/EN/wN62AhsBWuAAAAAElFTkSuQmCC
">
<img alt="FlashAttention" src="https://img.shields.io/badge/%E2%9A%A1%EF%B8%8E%20FlashAttention-eae0c8?style=flat">
<img alt="SDPA" src="https://img.shields.io/badge/SDPA-DE3412?style=flat&logo=pytorch&logoColor=white">
</div>
</div>

# OPT

[OPT](https://huggingface.co/papers/2205.01068) is a suite of open-source decoder-only pre-trained transformers whose parameters range from 125M to 175B. OPT models are designed for causal language modeling and aim to enable responsible and reproducible research at scale. OPT-175B is comparable in performance to GPT-3 with only 1/7th the carbon footprint.

You can find all the original OPT checkpoints under the [OPT](https://huggingface.co/collections/facebook/opt-66ed00e15599f02966818844) collection.

## Overview

The OPT model was proposed in [Open Pre-trained Transformer Language Models](https://huggingface.co/papers/2205.01068) by Meta AI.
OPT is a series of open-sourced large causal language models which perform similarly to GPT-3.

The abstract from the paper is the following:

*Large language models, which are often trained for hundreds of thousands of compute days, have shown remarkable capabilities for zero- and few-shot learning. Given their computational cost, these models are difficult to replicate without significant capital. For the few that are available through APIs, no access is granted to the full model weights, making them difficult to study. We present Open Pre-trained Transformers (OPT), a suite of decoder-only pre-trained transformers ranging from 125M to 175B parameters, which we aim to fully and responsibly share with interested researchers. We show that OPT-175B is comparable to GPT-3, while requiring only 1/7th the carbon footprint to develop. We are also releasing our logbook detailing the infrastructure challenges we faced, along with code for experimenting with all of the released models.*

The original code can be found [here](https://github.com/facebookresearch/metaseq).

> [!TIP]
> This model was contributed by [ArthurZ](https://huggingface.co/ArthurZ), [ybelkada](https://huggingface.co/ybelkada), and [patrickvonplaten](https://huggingface.co/patrickvonplaten).
>
> Click on the OPT models in the right sidebar for more examples of how to apply OPT to different language tasks.

The example below demonstrates how to generate text with [`Pipeline`], [`AutoModel`], and from the command line.
<hfoptions id="usage">
<hfoption id="Pipeline">

```py
import torch
from transformers import pipeline

pipeline = pipeline(task="text-generation", model="facebook/opt-125m", torch_dtype=torch.float16, device=0)
pipeline("Once upon a time, in a land far, far away,", max_length=50, num_return_sequences=1)
```

> [!NOTE]
> The `head_mask` argument is ignored when using any attention implementation other than "eager". If you have a `head_mask` and want it to have an effect, load the model with `XXXModel.from_pretrained(model_id, attn_implementation="eager")`.
</hfoption>
<hfoption id="AutoModel">

```py
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda"

model = AutoModelForCausalLM.from_pretrained("facebook/opt-350m", torch_dtype=torch.float16, attn_implementation="sdpa")
tokenizer = AutoTokenizer.from_pretrained("facebook/opt-350m")

prompt = ("Once upon a time, in a land far, far away, ")

model_inputs = tokenizer([prompt], return_tensors="pt").to(device)
model.to(device)

generated_ids = model.generate(**model_inputs, max_new_tokens=30, do_sample=False)
tokenizer.batch_decode(generated_ids)[0]
```
</hfoption>
<hfoption id="transformers CLI">

```bash
echo -e "Plants create energy through a process known as" | transformers run --task text-generation --model facebook/opt-125m --device 0
```

</hfoption>
</hfoptions>
Quantization reduces the memory burden of large models by representing the weights in a lower precision. Refer to the [Quantization](../quantization/overview) overview for more available quantization backends.

The example below uses [bitsandbytes](../quantization/bitsandbytes) to quantize the weights to 8-bits.
```py
import torch
from transformers import BitsAndBytesConfig, AutoTokenizer, AutoModelForCausalLM

device = "cuda"

bnb_config = BitsAndBytesConfig(load_in_8bit=True)
model = AutoModelForCausalLM.from_pretrained("facebook/opt-13b", torch_dtype=torch.float16, attn_implementation="sdpa", quantization_config=bnb_config)
tokenizer = AutoTokenizer.from_pretrained("facebook/opt-13b")

prompt = ("Once upon a time, in a land far, far away, ")

model_inputs = tokenizer([prompt], return_tensors="pt").to(device)
model.to(device)

generated_ids = model.generate(**model_inputs, max_new_tokens=30, do_sample=False)
tokenizer.batch_decode(generated_ids)[0]
```
## Notes

- OPT has the same architecture as [`BartDecoder`].
- OPT adds an `EOS` token `</s>` to the beginning of every prompt.
- The `head_mask` argument is ignored if the attention implementation isn't `"eager"`. Set `attn_implementation="eager"` to enable the `head_mask`, as shown in the sketch below.
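A minimal sketch of the eager setup (the checkpoint choice here is only illustrative):

```py
import torch
from transformers import AutoModelForCausalLM

# load with eager attention so that a head_mask passed to forward() is honored
model = AutoModelForCausalLM.from_pretrained(
    "facebook/opt-125m", torch_dtype=torch.float16, attn_implementation="eager"
)
```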
## Resources

A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with OPT. If you're
interested in submitting a resource to be included here, please feel free to open a Pull Request and we will review it.
The resource should ideally demonstrate something new instead of duplicating an existing resource.

<PipelineTag pipeline="text-generation" />

- A notebook on [fine-tuning OPT with PEFT, bitsandbytes, and Transformers](https://colab.research.google.com/drive/1jCkpikz0J2o20FBQmYmAGdiKmJGOMo-o?usp=sharing). 🌎
- A blog post on [decoding strategies with OPT](https://huggingface.co/blog/introducing-csearch#62-example-two---opt).
- [Causal language modeling](https://huggingface.co/course/en/chapter7/6?fw=pt#training-a-causal-language-model-from-scratch) chapter of the 🤗 Hugging Face Course.
- [`OPTForCausalLM`] is supported by this [causal language modeling example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/language-modeling#gpt-2gpt-and-causal-language-modeling) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling.ipynb).
- [`TFOPTForCausalLM`] is supported by this [causal language modeling example script](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/language-modeling#run_clmpy) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling-tf.ipynb).
- [`FlaxOPTForCausalLM`] is supported by this [causal language modeling example script](https://github.com/huggingface/transformers/tree/main/examples/flax/language-modeling#causal-language-modeling).

<PipelineTag pipeline="text-classification" />

- [Text classification task guide](sequence_classification.md)
- [`OPTForSequenceClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/text-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/text_classification.ipynb).

<PipelineTag pipeline="question-answering" />

- [`OPTForQuestionAnswering`] is supported by this [question answering example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/question-answering) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/question_answering.ipynb).
- [Question answering](https://huggingface.co/course/chapter7/7?fw=pt) chapter of the 🤗 Hugging Face Course.

⚡️ Inference

- A blog post on [How 🤗 Accelerate runs very large models thanks to PyTorch](https://huggingface.co/blog/accelerate-large-models) with OPT.
## Combining OPT and Flash Attention 2

First, make sure to install the latest version of Flash Attention 2 to include the sliding window attention feature.

```bash
pip install -U flash-attn --no-build-isolation
```

Make sure that your hardware is compatible with Flash Attention 2; read more about it in the official documentation of the flash-attn repository. Also make sure to load your model in half-precision (e.g. `torch.float16`).
To load and run a model using Flash Attention 2, refer to the snippet below:

```python
>>> import torch
>>> from transformers import OPTForCausalLM, GPT2Tokenizer
>>> device = "cuda" # the device to load the model onto

>>> model = OPTForCausalLM.from_pretrained("facebook/opt-350m", torch_dtype=torch.float16, attn_implementation="flash_attention_2")
>>> tokenizer = GPT2Tokenizer.from_pretrained("facebook/opt-350m")

>>> prompt = ("A chat between a curious human and the Statue of Liberty.\n\nHuman: What is your name?\nStatue: I am the "
...     "Statue of Liberty.\nHuman: Where do you live?\nStatue: New York City.\nHuman: How long have you lived "
...     "there?")

>>> model_inputs = tokenizer([prompt], return_tensors="pt").to(device)
>>> model.to(device)

>>> generated_ids = model.generate(**model_inputs, max_new_tokens=30, do_sample=False)
>>> tokenizer.batch_decode(generated_ids)[0]
'</s>A chat between a curious human and the Statue of Liberty.\n\nHuman: What is your name?\nStatue: I am the Statue of Liberty.\nHuman: Where do you live?\nStatue: New York City.\nHuman: How long have you lived there?\nStatue: I have lived here for about a year.\nHuman: What is your favorite place to eat?\nStatue: I love'
```
### Expected speedups

Below is an expected speedup diagram that compares pure inference time between the native implementation in transformers using the `facebook/opt-2.7b` checkpoint and the Flash Attention 2 version of the model using two different sequence lengths.

<div style="text-align: center">
<img src="https://user-images.githubusercontent.com/49240599/281101546-d2fca6d2-ee44-48f3-9534-ba8d5bee4531.png">
</div>

Below is an expected speedup diagram that compares pure inference time between the native implementation in transformers using the `facebook/opt-350m` checkpoint and the Flash Attention 2 version of the model using two different sequence lengths.

<div style="text-align: center">
<img src="https://user-images.githubusercontent.com/49240599/281101682-d1144e90-0dbc-46f4-8fc8-c6206cb793c9.png">
</div>
### Using Scaled Dot Product Attention (SDPA)

PyTorch includes a native scaled dot-product attention (SDPA) operator as part of `torch.nn.functional`. This function
encompasses several implementations that can be applied depending on the inputs and the hardware in use. See the
[official documentation](https://pytorch.org/docs/stable/generated/torch.nn.functional.scaled_dot_product_attention.html)
or the [GPU Inference](https://huggingface.co/docs/transformers/main/en/perf_infer_gpu_one#pytorch-scaled-dot-product-attention)
page for more information.

SDPA is used by default for `torch>=2.1.1` when an implementation is available, but you may also set
`attn_implementation="sdpa"` in `from_pretrained()` to explicitly request SDPA to be used.

```python
import torch
from transformers import OPTForCausalLM

model = OPTForCausalLM.from_pretrained("facebook/opt-350m", torch_dtype=torch.float16, attn_implementation="sdpa")
...
```

For the best speedups, we recommend loading the model in half-precision (e.g. `torch.float16` or `torch.bfloat16`).

On a local benchmark (L40S-45GB, PyTorch 2.4.0, OS Debian GNU/Linux 11) using `float16` with
[facebook/opt-350m](https://huggingface.co/facebook/opt-350m), we saw the
following speedups during training and inference.

### Training
| batch_size | seq_len | Time per batch (eager - s) | Time per batch (sdpa - s) | Speedup (%) | Eager peak mem (MB) | sdpa peak mem (MB) | Mem saving (%) |
|--------------:|-----------:|:------------------------------|-----------------------------:|:---------------|:-----------------------|----------------------:|:------------------|
| 1 | 128 | 0.047 | 0.037 | 26.360 | 1474.611 | 1474.32 | 0.019 |
| 1 | 256 | 0.046 | 0.037 | 24.335 | 1498.541 | 1499.49 | -0.063 |
| 1 | 512 | 0.046 | 0.037 | 24.959 | 1973.544 | 1551.35 | 27.215 |
| 1 | 1024 | 0.062 | 0.038 | 65.135 | 4867.113 | 1698.35 | 186.578 |
| 1 | 2048 | 0.230 | 0.039 | 483.933 | 15662.224 | 2715.75 | 476.718 |
| 2 | 128 | 0.045 | 0.037 | 20.455 | 1498.164 | 1499.49 | -0.089 |
| 2 | 256 | 0.046 | 0.037 | 24.027 | 1569.367 | 1551.35 | 1.161 |
| 2 | 512 | 0.045 | 0.037 | 20.965 | 3257.074 | 1698.35 | 91.778 |
| 2 | 1024 | 0.122 | 0.038 | 225.958 | 9054.405 | 2715.75 | 233.403 |
| 2 | 2048 | 0.464 | 0.067 | 593.646 | 30572.058 | 4750.55 | 543.548 |
| 4 | 128 | 0.045 | 0.037 | 21.918 | 1549.448 | 1551.35 | -0.123 |
| 4 | 256 | 0.044 | 0.038 | 18.084 | 2451.768 | 1698.35 | 44.361 |
| 4 | 512 | 0.069 | 0.037 | 84.421 | 5833.180 | 2715.75 | 114.791 |
| 4 | 1024 | 0.262 | 0.062 | 319.475 | 17427.842 | 4750.55 | 266.860 |
| 4 | 2048 | OOM | 0.062 | Eager OOM | OOM | 4750.55 | Eager OOM |
| 8 | 128 | 0.044 | 0.037 | 18.436 | 2049.115 | 1697.78 | 20.694 |
| 8 | 256 | 0.048 | 0.036 | 32.887 | 4222.567 | 2715.75 | 55.484 |
| 8 | 512 | 0.153 | 0.06 | 154.862 | 10985.391 | 4750.55 | 131.245 |
| 8 | 1024 | 0.526 | 0.122 | 330.697 | 34175.763 | 8821.18 | 287.428 |
| 8 | 2048 | OOM | 0.122 | Eager OOM | OOM | 8821.18 | Eager OOM |

### Inference

| batch_size | seq_len | Per token latency eager (ms) | Per token latency SDPA (ms) | Speedup (%) | Mem eager (MB) | Mem BT (MB) | Mem saved (%) |
|--------------:|-----------:|--------------------------------:|-------------------------------:|---------------:|------------------:|---------------:|-----------------:|
| 1 | 128 | 11.634 | 8.647 | 34.546 | 717.676 | 717.674 | 0 |
| 1 | 256 | 11.593 | 8.86 | 30.851 | 742.852 | 742.845 | 0.001 |
| 1 | 512 | 11.515 | 8.816 | 30.614 | 798.232 | 799.593 | -0.17 |
| 1 | 1024 | 11.556 | 8.915 | 29.628 | 917.265 | 895.538 | 2.426 |
| 2 | 128 | 12.724 | 11.002 | 15.659 | 762.434 | 762.431 | 0 |
| 2 | 256 | 12.704 | 11.063 | 14.83 | 816.809 | 816.733 | 0.009 |
| 2 | 512 | 12.757 | 10.947 | 16.535 | 917.383 | 918.339 | -0.104 |
| 2 | 1024 | 13.018 | 11.018 | 18.147 | 1162.65 | 1114.81 | 4.291 |
| 4 | 128 | 12.739 | 10.959 | 16.243 | 856.335 | 856.483 | -0.017 |
| 4 | 256 | 12.718 | 10.837 | 17.355 | 957.298 | 957.674 | -0.039 |
| 4 | 512 | 12.813 | 10.822 | 18.393 | 1158.44 | 1158.45 | -0.001 |
| 4 | 1024 | 13.416 | 11.06 | 21.301 | 1653.42 | 1557.19 | 6.18 |
| 8 | 128 | 12.763 | 10.891 | 17.193 | 1036.13 | 1036.51 | -0.036 |
| 8 | 256 | 12.89 | 11.104 | 16.085 | 1236.98 | 1236.87 | 0.01 |
| 8 | 512 | 13.327 | 10.939 | 21.836 | 1642.29 | 1641.78 | 0.031 |
| 8 | 1024 | 15.181 | 11.175 | 35.848 | 2634.98 | 2443.35 | 7.843 |

## OPTConfig
@ -106,6 +106,13 @@ Usage of OWLv2 is identical to [OWL-ViT](owlvit) with a new, updated image proce
- post_process_object_detection
- post_process_image_guided_detection

## Owlv2ImageProcessorFast

[[autodoc]] Owlv2ImageProcessorFast
- preprocess
- post_process_object_detection
- post_process_image_guided_detection

## Owlv2Processor

[[autodoc]] Owlv2Processor
@ -37,7 +37,11 @@ Voxtral builds on Ministral-3B by adding audio processing capabilities:

## Usage

### Audio Instruct Mode

The model supports audio-text instructions, including multi-turn and multi-audio interactions, all processed in batches.

➡️ audio + text instruction
```python
from transformers import VoxtralForConditionalGeneration, AutoProcessor
import torch

@ -47,14 +51,7 @@ repo_id = "mistralai/Voxtral-Mini-3B-2507"

processor = AutoProcessor.from_pretrained(repo_id)
model = VoxtralForConditionalGeneration.from_pretrained(repo_id, torch_dtype=torch.bfloat16, device_map=device)

conversation = [
    {
        "role": "user",
@ -82,6 +79,15 @@ print("=" * 80)
➡️ multi-audio + text instruction
```python
from transformers import VoxtralForConditionalGeneration, AutoProcessor
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
repo_id = "mistralai/Voxtral-Mini-3B-2507"

processor = AutoProcessor.from_pretrained(repo_id)
model = VoxtralForConditionalGeneration.from_pretrained(repo_id, torch_dtype=torch.bfloat16, device_map=device)

conversation = [
    {
        "role": "user",
@ -113,6 +119,15 @@ print("=" * 80)
➡️ multi-turn:
```python
from transformers import VoxtralForConditionalGeneration, AutoProcessor
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
repo_id = "mistralai/Voxtral-Mini-3B-2507"

processor = AutoProcessor.from_pretrained(repo_id)
model = VoxtralForConditionalGeneration.from_pretrained(repo_id, torch_dtype=torch.bfloat16, device_map=device)

conversation = [
    {
        "role": "user",
@ -158,6 +173,15 @@ print("=" * 80)
➡️ text only:
```python
from transformers import VoxtralForConditionalGeneration, AutoProcessor
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
repo_id = "mistralai/Voxtral-Mini-3B-2507"

processor = AutoProcessor.from_pretrained(repo_id)
model = VoxtralForConditionalGeneration.from_pretrained(repo_id, torch_dtype=torch.bfloat16, device_map=device)

conversation = [
    {
        "role": "user",
@ -184,6 +208,15 @@ print("=" * 80)
➡️ audio only:
```python
from transformers import VoxtralForConditionalGeneration, AutoProcessor
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
repo_id = "mistralai/Voxtral-Mini-3B-2507"

processor = AutoProcessor.from_pretrained(repo_id)
model = VoxtralForConditionalGeneration.from_pretrained(repo_id, torch_dtype=torch.bfloat16, device_map=device)

conversation = [
    {
        "role": "user",
@ -210,6 +243,15 @@ print("=" * 80)
➡️ batched inference!
```python
from transformers import VoxtralForConditionalGeneration, AutoProcessor
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
repo_id = "mistralai/Voxtral-Mini-3B-2507"

processor = AutoProcessor.from_pretrained(repo_id)
model = VoxtralForConditionalGeneration.from_pretrained(repo_id, torch_dtype=torch.bfloat16, device_map=device)

conversations = [
    [
        {
@ -262,7 +304,16 @@ for decoded_output in decoded_outputs:
Use the model to transcribe audio (supports English, Spanish, French, Portuguese, Hindi, German, Dutch, Italian)!

```python
from transformers import VoxtralForConditionalGeneration, AutoProcessor
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
repo_id = "mistralai/Voxtral-Mini-3B-2507"

processor = AutoProcessor.from_pretrained(repo_id)
model = VoxtralForConditionalGeneration.from_pretrained(repo_id, torch_dtype=torch.bfloat16, device_map=device)

inputs = processor.apply_transcription_request(language="en", audio="https://huggingface.co/datasets/hf-internal-testing/dummy-audio-samples/resolve/main/obama.mp3", model_id=repo_id)
inputs = inputs.to(device, dtype=torch.bfloat16)

outputs = model.generate(**inputs, max_new_tokens=500)
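# a plausible completion (the diff hunk ends here): decode only the newly
# generated tokens and print the transcription
decoded_outputs = processor.batch_decode(outputs[:, inputs.input_ids.shape[1]:], skip_special_tokens=True)
print(decoded_outputs[0])
```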

docs/source/en/model_doc/xlstm.md (new file)
@ -0,0 +1,47 @@
<!--Copyright 2025 NXAI GmbH. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->

# xLSTM

## Overview

The xLSTM model was proposed in [xLSTM: Extended Long Short-Term Memory](https://openreview.net/forum?id=ARAxPPIAhq) by Maximilian Beck*, Korbinian Pöppel*, Markus Spanring, Andreas Auer, Oleksandra Prudnikova, Michael Kopp, Günter Klambauer, Johannes Brandstetter and Sepp Hochreiter.
xLSTM updates the original LSTM architecture to be competitive with Transformer models by introducing exponential gating, matrix memory expansion, and parallelizable training and ingestion.

The [7B model](https://hf.co/NX-AI/xLSTM-7b) variant was trained by the xLSTM team Maximilian Beck, Korbinian Pöppel, Phillip Lippe, Richard Kurle, Patrick Blies, Sebastian Böck and Sepp Hochreiter at NXAI.

The abstract from the paper is the following:

*In the 1990s, the constant error carousel and gating were introduced as the central ideas of the Long Short-Term Memory (LSTM). Since then, LSTMs have stood the test of time and contributed to numerous deep learning success stories, in particular they constituted the first Large Language Models (LLMs). However, the advent of the Transformer technology with parallelizable self-attention at its core marked the dawn of a new era, outpacing LSTMs at scale. We now raise a simple question: How far do we get in language modeling when scaling LSTMs to billions of parameters, leveraging the latest techniques from modern LLMs, but mitigating known limitations of LSTMs? Firstly, we introduce exponential gating with appropriate normalization and stabilization techniques. Secondly, we modify the LSTM memory structure, obtaining: (i) sLSTM with a scalar memory, a scalar update, and new memory mixing, (ii) mLSTM that is fully parallelizable with a matrix memory and a covariance update rule. Integrating these LSTM extensions into residual block backbones yields xLSTM blocks that are then residually stacked into xLSTM architectures. Exponential gating and modified memory structures boost xLSTM capabilities to perform favorably when compared to state-of-the-art Transformers and State Space Models, both in performance and scaling.*

This model was contributed by [NX-AI](https://huggingface.co/NX-AI).
The original code can be found [here](https://github.com/NX-AI/xlstm).
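
A minimal text-generation sketch, assuming the [7B checkpoint](https://hf.co/NX-AI/xLSTM-7b) loads through the standard causal LM auto classes:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# assumption: NX-AI/xLSTM-7b is wired into the auto classes (see xLSTMForCausalLM below)
tokenizer = AutoTokenizer.from_pretrained("NX-AI/xLSTM-7b")
model = AutoModelForCausalLM.from_pretrained("NX-AI/xLSTM-7b", device_map="auto")

inputs = tokenizer("The xLSTM architecture extends the LSTM with", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```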

## xLSTMConfig

[[autodoc]] xLSTMConfig

## xLSTMModel

[[autodoc]] xLSTMModel
- forward

## xLSTMLMHeadModel

[[autodoc]] xLSTMForCausalLM
- forward
@ -13,76 +13,95 @@ specific language governing permissions and limitations under the License.
rendered properly in your Markdown viewer.

-->
<div style="float: right;">
<div class="flex flex-wrap space-x-1">
<img alt="PyTorch" src="https://img.shields.io/badge/PyTorch-DE3412?style=flat&logo=pytorch&logoColor=white">
<img alt="FlashAttention" src="https://img.shields.io/badge/%E2%9A%A1%EF%B8%8E%20FlashAttention-eae0c8?style=flat">
<img alt="SDPA" src="https://img.shields.io/badge/SDPA-DE3412?style=flat&logo=pytorch&logoColor=white">
</div>
</div>

# YOLOS

[YOLOS](https://huggingface.co/papers/2106.00666) uses a [Vision Transformer (ViT)](./vit) for object detection with minimal modifications and region priors. It can achieve performance comparable to specialized object detection models and frameworks with knowledge about 2D spatial structures.
You can find all the original YOLOS checkpoints under the [HUST Vision Lab](https://huggingface.co/hustvl/models?search=yolos) organization.

## Overview

The YOLOS model was proposed in [You Only Look at One Sequence: Rethinking Transformer in Vision through Object Detection](https://huggingface.co/papers/2106.00666) by Yuxin Fang, Bencheng Liao, Xinggang Wang, Jiemin Fang, Jiyang Qi, Rui Wu, Jianwei Niu, Wenyu Liu.
YOLOS proposes to just leverage the plain [Vision Transformer (ViT)](vit) for object detection, inspired by DETR. It turns out that a base-sized encoder-only Transformer can also achieve 42 AP on COCO, similar to DETR and much more complex frameworks such as Faster R-CNN.

The abstract from the paper is the following:

*Can Transformer perform 2D object- and region-level recognition from a pure sequence-to-sequence perspective with minimal knowledge about the 2D spatial structure? To answer this question, we present You Only Look at One Sequence (YOLOS), a series of object detection models based on the vanilla Vision Transformer with the fewest possible modifications, region priors, as well as inductive biases of the target task. We find that YOLOS pre-trained on the mid-sized ImageNet-1k dataset only can already achieve quite competitive performance on the challenging COCO object detection benchmark, e.g., YOLOS-Base directly adopted from BERT-Base architecture can obtain 42.0 box AP on COCO val. We also discuss the impacts as well as limitations of current pre-train schemes and model scaling strategies for Transformer in vision through YOLOS.*

<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/yolos_architecture.png" alt="drawing" width="600"/>

<small> YOLOS architecture. Taken from the <a href="https://huggingface.co/papers/2106.00666">original paper</a>.</small>

The original code can be found [here](https://github.com/hustvl/YOLOS).

> [!TIP]
> This model was contributed by [nielsr](https://huggingface.co/nielsr).
> Click on the YOLOS models in the right sidebar for more examples of how to apply YOLOS to different object detection tasks.

The example below demonstrates how to detect objects with [`Pipeline`] or the [`AutoModel`] class.
<hfoptions id="usage">
<hfoption id="Pipeline">

```py
import torch
from transformers import pipeline

detector = pipeline(
    task="object-detection",
    model="hustvl/yolos-base",
    torch_dtype=torch.float16,
    device=0
)
detector("https://huggingface.co/datasets/Narsil/image_dummy/raw/main/parrots.png")
```

</hfoption>
<hfoption id="AutoModel">

```py
import torch
from PIL import Image
import requests
from transformers import AutoImageProcessor, AutoModelForObjectDetection

processor = AutoImageProcessor.from_pretrained("hustvl/yolos-base")
model = AutoModelForObjectDetection.from_pretrained("hustvl/yolos-base", torch_dtype=torch.float16, attn_implementation="sdpa").to("cuda")

url = "https://huggingface.co/datasets/Narsil/image_dummy/raw/main/parrots.png"
image = Image.open(requests.get(url, stream=True).raw).convert("RGB")
inputs = processor(images=image, return_tensors="pt").to("cuda")

with torch.no_grad():
    outputs = model(**inputs)
logits = outputs.logits.softmax(-1)
scores, labels = logits[..., :-1].max(-1)
boxes = outputs.pred_boxes

threshold = 0.3
keep = scores[0] > threshold

filtered_scores = scores[0][keep]
filtered_labels = labels[0][keep]
filtered_boxes = boxes[0][keep]

width, height = image.size
pixel_boxes = filtered_boxes * torch.tensor([width, height, width, height], device=boxes.device)

for score, label, box in zip(filtered_scores, filtered_labels, pixel_boxes):
    x0, y0, x1, y1 = box.tolist()
    print(f"Label {model.config.id2label[label.item()]}: {score:.2f} at [{x0:.0f}, {y0:.0f}, {x1:.0f}, {y1:.0f}]")
```

</hfoption>
</hfoptions>

### Using Scaled Dot Product Attention (SDPA)

PyTorch includes a native scaled dot-product attention (SDPA) operator as part of `torch.nn.functional`. This function
encompasses several implementations that can be applied depending on the inputs and the hardware in use. See the
[official documentation](https://pytorch.org/docs/stable/generated/torch.nn.functional.scaled_dot_product_attention.html)
or the [GPU Inference](https://huggingface.co/docs/transformers/main/en/perf_infer_gpu_one#pytorch-scaled-dot-product-attention)
page for more information.

SDPA is used by default for `torch>=2.1.1` when an implementation is available, but you may also set
`attn_implementation="sdpa"` in `from_pretrained()` to explicitly request SDPA to be used.

```py
import torch
from transformers import AutoModelForObjectDetection

model = AutoModelForObjectDetection.from_pretrained("hustvl/yolos-base", attn_implementation="sdpa", torch_dtype=torch.float16)
...
```

For the best speedups, we recommend loading the model in half-precision (e.g. `torch.float16` or `torch.bfloat16`).

On a local benchmark (A100-40GB, PyTorch 2.3.0, OS Ubuntu 22.04) with `float32` and the `hustvl/yolos-base` model, we saw the following speedups during inference.

| Batch size | Average inference time (ms), eager mode | Average inference time (ms), sdpa model | Speed up, Sdpa / Eager (x) |
|--------------|-------------------------------------------|-------------------------------------------|------------------------------|
| 1 | 106 | 76 | 1.39 |
| 2 | 154 | 90 | 1.71 |
| 4 | 222 | 116 | 1.91 |
| 8 | 368 | 168 | 2.19 |
## Notes

- Use [`YolosImageProcessor`] for preparing images (and optional targets) for the model. Contrary to [DETR](./detr), YOLOS doesn't require a `pixel_mask`.

## Resources

A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with YOLOS.

<PipelineTag pipeline="object-detection"/>

- All example notebooks illustrating inference + fine-tuning [`YolosForObjectDetection`] on a custom dataset can be found [here](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/YOLOS).
- Scripts for finetuning [`YolosForObjectDetection`] with [`Trainer`] or [Accelerate](https://huggingface.co/docs/accelerate/index) can be found [here](https://github.com/huggingface/transformers/tree/main/examples/pytorch/object-detection).
- See also: [Object detection task guide](../tasks/object_detection)

If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.

## YolosConfig
@ -28,7 +28,7 @@ To share a model to the Hub, you need a Hugging Face [account](https://hf.co/joi
<hfoption id="huggingface-CLI">

```bash
hf auth login
```

</hfoption>
@ -94,7 +94,7 @@ ValueError: You defined `RobertaEmbeddings` in the modular_roberta.py, it should

## Implementing a modular file

The easiest way to start is by browsing Transformers for a model similar to yours in order to inherit from it. Some good starting points are [Mistral](./model_doc/mistral), [Qwen2](./model_doc/qwen2), [Cohere](./model_doc/cohere) and [Cohere2](./model_doc/cohere2), and [Llama](./model_doc/llama). Refer to the table below for components your model might be using and where you can inherit from, and see the sketch after the table for the overall shape of a modular file.

| Component | Model |
|---|---|
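As an illustration only (the file and class names below are hypothetical), a modular file that reuses Llama components can be as small as:

```python
# modular_mymodel.py -- hypothetical sketch; the doc-builder expands modular
# files like this into a full modeling_mymodel.py
from transformers.models.llama.modeling_llama import LlamaForCausalLM, LlamaModel


class MyModelModel(LlamaModel):
    # inherits the full Llama architecture unchanged
    pass


class MyModelForCausalLM(LlamaForCausalLM):
    pass
```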
docs/source/en/quantization/fp_quant.md (new file)
@ -0,0 +1,66 @@
<!--Copyright 2025 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->

# FP-Quant

[FP-Quant](https://github.com/IST-DASLab/FP-Quant) is a family of quantization algorithms tailored for the Blackwell generation of Nvidia GPUs. The goal is to allow for efficient post-training quantization (PTQ) and quantization-aware training (QAT) of LLMs in the [MXFP4 and NVFP4 data-types](https://www.opencompute.org/documents/ocp-microscaling-formats-mx-v1-0-spec-final-pdf).
Currently, only PTQ with MXFP4 is supported. Models can either be quantized on the fly with `quantization_config=FPQuantConfig()`:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, FPQuantConfig
import torch

model = AutoModelForCausalLM.from_pretrained(
    "qwen/Qwen3-8B",
    quantization_config=FPQuantConfig(),
    device_map="cuda",
    torch_dtype=torch.bfloat16,
)
```
or pre-processed with GPTQ for better quality (see the [FP Format Quantization Harness](https://github.com/IST-DASLab/FP-Quant)).

A **Blackwell-generation GPU is required** to run the kernels. Runtime support for FP-Quant is implemented through the [QuTLASS](https://github.com/IST-DASLab/qutlass) library and a lightweight PyTorch interface lib [`fp_quant`](https://github.com/IST-DASLab/FP-Quant/tree/master/inference_lib). We recommend installing the former **from source** and the latter with `pip install fp_quant`.

Users **without a Blackwell-generation GPU** can use the method with `quantization_config=FPQuantConfig(pseudoquant=True)` without having to install [QuTLASS](https://github.com/IST-DASLab/qutlass). This provides no speedups but fully emulates the effect of quantization.
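
For example, a minimal pseudo-quantization sketch (the same setup as above with only `pseudoquant=True` changed):

```python
from transformers import AutoModelForCausalLM, FPQuantConfig
import torch

# pseudoquant=True numerically emulates MXFP4 quantization, so no Blackwell GPU
# or QuTLASS install is needed (no speedup, accuracy effects only)
model = AutoModelForCausalLM.from_pretrained(
    "qwen/Qwen3-8B",
    quantization_config=FPQuantConfig(pseudoquant=True),
    device_map="cuda",
    torch_dtype=torch.bfloat16,
)
```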
> [!TIP]
> Find models pre-quantized with FP-Quant in the official ISTA-DASLab [collection](https://huggingface.co/collections/ISTA-DASLab/fp-quant-6877c186103a21d3a02568ee).
## torch.compile

FP-Quant is fully compatible with [torch.compile](https://pytorch.org/tutorials/intermediate/torch_compile_tutorial.html).

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, FPQuantConfig

model = AutoModelForCausalLM.from_pretrained(
    "qwen/Qwen3-8B",
    quantization_config=FPQuantConfig(),
    device_map="cuda",
    torch_dtype=torch.bfloat16,
)

model.forward = torch.compile(model.forward, mode="max-autotune", fullgraph=True)
```
## Speedups

FP-Quant currently performs best for very large batch size processing.

See the [QuTLASS README](https://github.com/IST-DASLab/qutlass/blob/main/README.md) for speedups.
@ -30,6 +30,7 @@ Use the Space below to help you pick a quantization method depending on your har
| [bitsandbytes](./bitsandbytes) | 🟢 | 🟡 | 🟢 | 🟡 | 🔴 | 🟡 | 🟢 | 4/8 | 🟢 | 🟢 | 🟢 | https://github.com/bitsandbytes-foundation/bitsandbytes |
| [compressed-tensors](./compressed_tensors) | 🔴 | 🟢 | 🟢 | 🟢 | 🔴 | 🔴 | 🔴 | 1/8 | 🟢 | 🟢 | 🟢 | https://github.com/neuralmagic/compressed-tensors |
| [EETQ](./eetq) | 🟢 | 🔴 | 🟢 | 🔴 | 🔴 | 🔴 | ? | 8 | 🟢 | 🟢 | 🟢 | https://github.com/NetEase-FuXi/EETQ |
| [FP-Quant](./fp_quant) | 🟢 | 🔴 | 🟢 | 🔴 | 🔴 | 🔴 | 🟢 | 4 | 🔴 | 🟢 | 🟢 | https://github.com/IST-DASLab/FP-Quant |
| [GGUF / GGML (llama.cpp)](../gguf) | 🟢 | 🟢 | 🟢 | 🔴 | 🟢 | 🔴 | 🔴 | 1/8 | 🔴 | [See Notes](../gguf) | [See Notes](../gguf) | https://github.com/ggerganov/llama.cpp |
| [GPTQModel](./gptq) | 🔴 | 🟢 | 🟢 | 🟢 | 🟢 | 🟢 | 🔴 | 2/3/4/8 | 🟢 | 🟢 | 🟢 | https://github.com/ModelCloud/GPTQModel |
| [AutoGPTQ](./gptq) | 🔴 | 🔴 | 🟢 | 🟢 | 🔴 | 🔴 | 🔴 | 2/3/4/8 | 🟢 | 🟢 | 🟢 | https://github.com/AutoGPTQ/AutoGPTQ |
@ -49,7 +49,7 @@ notebook_login()
Make sure the [huggingface_hub[cli]](https://huggingface.co/docs/huggingface_hub/guides/cli#getting-started) package is installed and run the command below. Paste your User Access Token when prompted to log in.

```bash
hf auth login
```

</hfoption>
@ -289,7 +289,7 @@ You could also create and use your own dataset if you prefer to train with the [
    }
)

# step 3: push to Hub (assumes you have run the hf auth login command in a terminal/notebook)
dataset.push_to_hub("your-name/dataset-repo")

# optionally, you can push to a private repo on the Hub
@ -74,7 +74,7 @@ model = AutoModelForSequenceClassification.from_pretrained("google-bert/bert-bas
```

> [!TIP]
> The message above is a reminder that the model's pretrained head is discarded and replaced with a randomly initialized classification head. The randomly initialized head needs to be fine-tuned on your specific task to output meaningful predictions.

With the model loaded, set up your training hyperparameters in [`TrainingArguments`]. Hyperparameters are variables that control the training process - such as the learning rate, batch size, and number of epochs - which in turn impact model performance. Selecting the correct hyperparameters is important and you should experiment with them to find the best configuration for your task.
@ -40,7 +40,7 @@ vllm serve meta-llama/Llama-3.2-1B \
    --model-impl transformers
```

Refer to the [vLLM docs](https://docs.vllm.ai/en/latest/models/transformers_backend.html) for more usage examples and tips on using a Transformers as the backend.
Refer to the [vLLM docs](https://docs.vllm.ai/en/latest/models/supported_models.html#transformers) for more usage examples and tips on using a Transformers as the backend.


## SGLang
@ -38,6 +38,8 @@
  sections:
  - local: tasks/asr
    title: Reconocimiento automático del habla
  - local: tasks/audio_classification
    title: Clasificación de audio
  title: Audio
- isExpanded: false
  sections:
@ -285,7 +285,7 @@ resnet50d.model.load_state_dict(pretrained_model.state_dict())
Ahora, para enviar el modelo al Hub, asegúrate de haber iniciado sesión. Ejecuta en tu terminal:

```bash
huggingface-cli login
hf auth login
```

o desde un _notebook_:
@ -56,7 +56,7 @@ Los archivos son editados fácilmente dentro de un repositorio. Incluso puedes o
Antes de compartir un modelo al Hub necesitarás tus credenciales de Hugging Face. Si tienes acceso a una terminal ejecuta el siguiente comando en el entorno virtual donde 🤗 Transformers esté instalado. Esto guardará tu token de acceso dentro de tu carpeta cache de Hugging Face (`~/.cache/` por defecto):

```bash
huggingface-cli login
hf auth login
```

Si usas un notebook como Jupyter o Colaboratory, asegúrate de tener instalada la biblioteca [`huggingface_hub`](https://huggingface.co/docs/hub/adding-a-library). Esta biblioteca te permitirá interactuar por código con el Hub.
@ -324,7 +324,7 @@ python examples/pytorch/summarization/run_summarization.py
Todos los scripts pueden cargar tu modelo final en el [Model Hub](https://huggingface.co/models). Asegúrate de haber iniciado sesión en Hugging Face antes de comenzar:

```bash
huggingface-cli login
hf auth login
```

Luego agrega el argumento `push_to_hub` al script. Este argumento creará un repositorio con tu nombre de usuario Hugging Face y el nombre de la carpeta especificado en `output_dir`.
323
docs/source/es/tasks/audio_classification.md
Normal file
@ -0,0 +1,323 @@
|
||||
<!--Copyright 2022 The HuggingFace Team. All rights reserved.
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
|
||||
the License. You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
|
||||
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
|
||||
specific language governing permissions and limitations under the License.
|
||||
|
||||
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
|
||||
rendered properly in your Markdown viewer.
|
||||
|
||||
-->
|
||||
|
||||
# Clasificación de audio
|
||||
|
||||
[[open-in-colab]]
|
||||
|
||||
<Youtube id="KWwzcmG98Ds"/>
|
||||
|
||||
La clasificación de audio, al igual que la de texto, asigna una etiqueta de clase como salida a partir de los datos de entrada. La única diferencia es que, en vez de texto, la entrada son formas de onda de audio. Algunas aplicaciones prácticas de la clasificación de audio incluyen identificar la intención del hablante, la identificación del idioma y la clasificación de animales por sus sonidos.
|
||||
|
||||
En esta guía te mostraremos cómo:
|
||||
|
||||
1. Hacer fine-tuning al modelo [Wav2Vec2](https://huggingface.co/facebook/wav2vec2-base) en el dataset [MInDS-14](https://huggingface.co/datasets/PolyAI/minds14) para clasificar la intención del hablante.
|
||||
2. Usar tu modelo ajustado para tareas de inferencia.
|
||||
|
||||
|
||||
<Tip>
|
||||
|
||||
Consulta la [página de la tarea](https://huggingface.co/tasks/audio-classification) de clasificación de audio para acceder a más información sobre los modelos, datasets, y métricas asociados.
|
||||
|
||||
</Tip>
|
||||
|
||||
Antes de comenzar, asegúrate de haber instalado todas las librerías necesarias:
|
||||
|
||||
```bash
|
||||
pip install transformers datasets evaluate
|
||||
```
|
||||
|
||||
Te aconsejamos iniciar sesión con tu cuenta de Hugging Face para que puedas subir tu modelo y compartirlo con la comunidad. Cuando se te solicite, ingresa tu token para iniciar sesión:
|
||||
|
||||
```py
|
||||
>>> from huggingface_hub import notebook_login
|
||||
|
||||
>>> notebook_login()
|
||||
```
|
||||
|
||||
## Carga el dataset MInDS-14
|
||||
|
||||
Comencemos cargando el dataset MInDS-14 con la biblioteca de 🤗 Datasets:
|
||||
|
||||
```py
|
||||
>>> from datasets import load_dataset, Audio
|
||||
|
||||
>>> minds = load_dataset("PolyAI/minds14", name="en-US", split="train")
|
||||
```
|
||||
|
||||
Divide el conjunto de `train` (entrenamiento) en un conjunto de entrenamiento y otro de prueba más pequeños con el método [`~datasets.Dataset.train_test_split`]. De esta forma, tendrás la oportunidad de experimentar y asegurarte de que todo funcione antes de invertir más tiempo entrenando con el dataset entero.
|
||||
|
||||
```py
|
||||
>>> minds = minds.train_test_split(test_size=0.2)
|
||||
```
|
||||
|
||||
Ahora échale un vistazo al dataset:
|
||||
|
||||
```py
|
||||
>>> minds
|
||||
DatasetDict({
|
||||
train: Dataset({
|
||||
features: ['path', 'audio', 'transcription', 'english_transcription', 'intent_class', 'lang_id'],
|
||||
num_rows: 450
|
||||
})
|
||||
test: Dataset({
|
||||
features: ['path', 'audio', 'transcription', 'english_transcription', 'intent_class', 'lang_id'],
|
||||
num_rows: 113
|
||||
})
|
||||
})
|
||||
```
|
||||
|
||||
Aunque el dataset contiene mucha información útil, como los campos `lang_id` (identificador del idioma) y `english_transcription` (transcripción al inglés), en esta guía nos enfocaremos en los campos `audio` y `intent_class` (clase de intención). Puedes quitar las otras columnas con el método [`~datasets.Dataset.remove_columns`]:
|
||||
|
||||
```py
|
||||
>>> minds = minds.remove_columns(["path", "transcription", "english_transcription", "lang_id"])
|
||||
```
|
||||
|
||||
Aquí está un ejemplo:
|
||||
|
||||
```py
|
||||
>>> minds["train"][0]
|
||||
{'audio': {'array': array([ 0. , 0. , 0. , ..., -0.00048828,
|
||||
-0.00024414, -0.00024414], dtype=float32),
|
||||
'path': '/root/.cache/huggingface/datasets/downloads/extracted/f14948e0e84be638dd7943ac36518a4cf3324e8b7aa331c5ab11541518e9368c/en-US~APP_ERROR/602b9a5fbb1e6d0fbce91f52.wav',
|
||||
'sampling_rate': 8000},
|
||||
'intent_class': 2}
|
||||
```
|
||||
|
||||
Hay dos campos:
|
||||
|
||||
- `audio`: un `array` (arreglo) unidimensional de la señal de voz que se obtiene al cargar y volver a muestrear el archivo de audio.
|
||||
- `intent_class`: representa el identificador de la clase de la intención del hablante.
|
||||
|
||||
Crea un diccionario que asigne el nombre de la etiqueta a un número entero y viceversa para facilitar la obtención del nombre de la etiqueta a partir de su identificador.
|
||||
|
||||
```py
|
||||
>>> labels = minds["train"].features["intent_class"].names
|
||||
>>> label2id, id2label = dict(), dict()
|
||||
>>> for i, label in enumerate(labels):
|
||||
... label2id[label] = str(i)
|
||||
... id2label[str(i)] = label
|
||||
```
|
||||
|
||||
Ahora puedes convertir el identificador de la etiqueta a un nombre de etiqueta:
|
||||
|
||||
```py
|
||||
>>> id2label[str(2)]
|
||||
'app_error'
|
||||
```
|
||||
|
||||
## Preprocesamiento
|
||||
|
||||
Seguidamente carga el feature extractor (función de extracción de características) de Wav2Vec para procesar la señal de audio:
|
||||
|
||||
```py
|
||||
>>> from transformers import AutoFeatureExtractor
|
||||
|
||||
>>> feature_extractor = AutoFeatureExtractor.from_pretrained("facebook/wav2vec2-base")
|
||||
```
|
||||
|
||||
El dataset MInDS-14 tiene una tasa de muestreo de 8kHz (puedes encontrar esta información en su [tarjeta de dataset](https://huggingface.co/datasets/PolyAI/minds14)), lo que significa que tendrás que volver a muestrear el dataset a 16kHz para poder usar el modelo Wav2Vec2 preentrenado:
|
||||
|
||||
```py
|
||||
>>> minds = minds.cast_column("audio", Audio(sampling_rate=16_000))
|
||||
>>> minds["train"][0]
|
||||
{'audio': {'array': array([ 2.2098757e-05, 4.6582241e-05, -2.2803260e-05, ...,
|
||||
-2.8419291e-04, -2.3305941e-04, -1.1425107e-04], dtype=float32),
|
||||
'path': '/root/.cache/huggingface/datasets/downloads/extracted/f14948e0e84be638dd7943ac36518a4cf3324e8b7aa331c5ab11541518e9368c/en-US~APP_ERROR/602b9a5fbb1e6d0fbce91f52.wav',
|
||||
'sampling_rate': 16000},
|
||||
'intent_class': 2}
|
||||
```
|
||||
|
||||
Ahora vamos a crear una función de preprocesamiento:
|
||||
|
||||
1. Invoca la columna `audio` para cargar y, si es necesario, volver a muestrear el archivo de audio.
|
||||
2. Comprueba si la frecuencia de muestreo del archivo de audio coincide con la frecuencia de muestreo de los datos de audio con los que se entrenó previamente el modelo. Puedes encontrar esta información en la [tarjeta de modelo](https://huggingface.co/facebook/wav2vec2-base) de Wav2Vec2.
|
||||
3. Establece una longitud de entrada máxima para agrupar entradas más largas sin truncarlas.
|
||||
|
||||
```py
|
||||
>>> def preprocess_function(examples):
|
||||
... audio_arrays = [x["array"] for x in examples["audio"]]
|
||||
... inputs = feature_extractor(
|
||||
... audio_arrays, sampling_rate=feature_extractor.sampling_rate, max_length=16000, truncation=True
|
||||
... )
|
||||
... return inputs
|
||||
```
|
||||
|
||||
Para aplicar la función de preprocesamiento a todo el dataset, puedes usar la función [`~datasets.Dataset.map`] de 🤗 Datasets. Acelera la función `map` haciendo `batched=True` para procesar varios elementos del dataset a la vez. Quita las columnas que no necesites con el método [`~datasets.Dataset.remove_columns`] y cambia el nombre de `intent_class` a `label`, como requiere el modelo.
|
||||
|
||||
```py
|
||||
>>> encoded_minds = minds.map(preprocess_function, remove_columns="audio", batched=True)
|
||||
>>> encoded_minds = encoded_minds.rename_column("intent_class", "label")
|
||||
```
|
||||
|
||||
## Evaluación
|
||||
A menudo es útil incluir una métrica durante el entrenamiento para evaluar el rendimiento de tu modelo. Puedes cargar un método de evaluación rápidamente con la biblioteca 🤗 [Evaluate](https://huggingface.co/docs/evaluate/index). Para esta tarea, puedes usar la métrica de [exactitud](https://huggingface.co/spaces/evaluate-metric/accuracy) (accuracy). Puedes ver la [guía rápida](https://huggingface.co/docs/evaluate/a_quick_tour) de 🤗 Evaluate para aprender más sobre cómo cargar y calcular una métrica:
|
||||
|
||||
```py
|
||||
>>> import evaluate
|
||||
|
||||
>>> accuracy = evaluate.load("accuracy")
|
||||
```
|
||||
|
||||
Ahora crea una función que le pase tus predicciones y etiquetas a [`~evaluate.EvaluationModule.compute`] para calcular la exactitud:
|
||||
|
||||
```py
|
||||
>>> import numpy as np
|
||||
|
||||
|
||||
>>> def compute_metrics(eval_pred):
|
||||
... predictions = np.argmax(eval_pred.predictions, axis=1)
|
||||
... return accuracy.compute(predictions=predictions, references=eval_pred.label_ids)
|
||||
```
|
||||
|
||||
Ahora tu función `compute_metrics` (computar métricas) está lista y podrás usarla cuando estés preparando tu entrenamiento.
|
||||
|
||||
## Entrenamiento
|
||||
|
||||
<frameworkcontent>
|
||||
<pt>
|
||||
<Tip>
|
||||
|
||||
¡Si no tienes experiencia haciendo *fine-tuning* a un modelo con el [`Trainer`], échale un vistazo al tutorial básico [aquí](../training#train-with-pytorch-trainer)!
|
||||
|
||||
</Tip>
|
||||
|
||||
¡Ya puedes empezar a entrenar tu modelo! Carga Wav2Vec2 con [`AutoModelForAudioClassification`], especifica el número de etiquetas y pásale al modelo los *mappings* entre el identificador entero de la etiqueta y la clase de etiqueta.
|
||||
|
||||
```py
|
||||
>>> from transformers import AutoModelForAudioClassification, TrainingArguments, Trainer
|
||||
|
||||
>>> num_labels = len(id2label)
|
||||
>>> model = AutoModelForAudioClassification.from_pretrained(
|
||||
... "facebook/wav2vec2-base", num_labels=num_labels, label2id=label2id, id2label=id2label
|
||||
... )
|
||||
```
|
||||
|
||||
Al llegar a este punto, solo quedan tres pasos:
|
||||
|
||||
1. Define tus hiperparámetros de entrenamiento en [`TrainingArguments`]. El único parámetro obligatorio es `output_dir` (carpeta de salida), el cual especifica dónde guardar tu modelo. Puedes subir este modelo al Hub haciendo `push_to_hub=True` (debes haber iniciado sesión en Hugging Face para subir tu modelo). Al final de cada época, el [`Trainer`] evaluará la exactitud y guardará el punto de control del entrenamiento.
|
||||
2. Pásale los argumentos del entrenamiento al [`Trainer`] junto con el modelo, el dataset, el tokenizer, el data collator y la función `compute_metrics`.
|
||||
3. Llama el método [`~Trainer.train`] para hacerle fine-tuning a tu modelo.
|
||||
|
||||
```py
|
||||
>>> training_args = TrainingArguments(
|
||||
... output_dir="my_awesome_mind_model",
|
||||
... eval_strategy="epoch",
|
||||
... save_strategy="epoch",
|
||||
... learning_rate=3e-5,
|
||||
... per_device_train_batch_size=32,
|
||||
... gradient_accumulation_steps=4,
|
||||
... per_device_eval_batch_size=32,
|
||||
... num_train_epochs=10,
|
||||
... warmup_ratio=0.1,
|
||||
... logging_steps=10,
|
||||
... load_best_model_at_end=True,
|
||||
... metric_for_best_model="accuracy",
|
||||
... push_to_hub=True,
|
||||
... )
|
||||
|
||||
>>> trainer = Trainer(
|
||||
... model=model,
|
||||
... args=training_args,
|
||||
... train_dataset=encoded_minds["train"],
|
||||
... eval_dataset=encoded_minds["test"],
|
||||
... processing_class=feature_extractor,
|
||||
... compute_metrics=compute_metrics,
|
||||
... )
|
||||
|
||||
>>> trainer.train()
|
||||
```
|
||||
|
||||
Una vez que el entrenamiento haya sido completado, comparte tu modelo en el Hub con el método [`~transformers.Trainer.push_to_hub`] para que todo el mundo pueda usar tu modelo.
|
||||
|
||||
```py
|
||||
>>> trainer.push_to_hub()
|
||||
```
|
||||
</pt>
|
||||
</frameworkcontent>
|
||||
|
||||
<Tip>
|
||||
|
||||
Para ver un ejemplo más detallado de cómo hacerle fine-tuning a un modelo para clasificación, échale un vistazo al correspondiente [PyTorch notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/audio_classification.ipynb).
|
||||
|
||||
</Tip>
|
||||
|
||||
## Inferencia
|
||||
|
||||
¡Genial! Ahora que le has hecho *fine-tuning* a un modelo, puedes usarlo para hacer inferencia.
|
||||
|
||||
Carga el archivo de audio sobre el que quieras hacer inferencia. Recuerda volver a muestrear el archivo de audio para que su tasa de muestreo coincida con la del modelo, si es necesario.
|
||||
|
||||
```py
|
||||
>>> from datasets import load_dataset, Audio
|
||||
|
||||
>>> dataset = load_dataset("PolyAI/minds14", name="en-US", split="train")
|
||||
>>> dataset = dataset.cast_column("audio", Audio(sampling_rate=16000))
|
||||
>>> sampling_rate = dataset.features["audio"].sampling_rate
|
||||
>>> audio_file = dataset[0]["audio"]["path"]
|
||||
```
|
||||
|
||||
La manera más simple de probar tu modelo para hacer inferencia es usarlo en un [`pipeline`]. Puedes instanciar un `pipeline` para clasificación de audio con tu modelo y pasarle tu archivo de audio:
|
||||
|
||||
```py
|
||||
>>> from transformers import pipeline
|
||||
|
||||
>>> classifier = pipeline("audio-classification", model="stevhliu/my_awesome_minds_model")
|
||||
>>> classifier(audio_file)
|
||||
[
|
||||
{'score': 0.09766869246959686, 'label': 'cash_deposit'},
|
||||
{'score': 0.07998877018690109, 'label': 'app_error'},
|
||||
{'score': 0.0781070664525032, 'label': 'joint_account'},
|
||||
{'score': 0.07667109370231628, 'label': 'pay_bill'},
|
||||
{'score': 0.0755252093076706, 'label': 'balance'}
|
||||
]
|
||||
```
|
||||
|
||||
También puedes replicar de forma manual los resultados del `pipeline` si lo deseas:
|
||||
|
||||
<frameworkcontent>
|
||||
<pt>
|
||||
Carga el feature extractor para preprocesar el archivo de audio y devuelve el `input` como un tensor de PyTorch:
|
||||
|
||||
```py
|
||||
>>> from transformers import AutoFeatureExtractor
|
||||
|
||||
>>> feature_extractor = AutoFeatureExtractor.from_pretrained("stevhliu/my_awesome_minds_model")
|
||||
>>> inputs = feature_extractor(dataset[0]["audio"]["array"], sampling_rate=sampling_rate, return_tensors="pt")
|
||||
```
|
||||
|
||||
Pásale tus entradas al modelo y devuelve los logits:
|
||||
|
||||
```py
|
||||
>>> from transformers import AutoModelForAudioClassification
|
||||
|
||||
>>> model = AutoModelForAudioClassification.from_pretrained("stevhliu/my_awesome_minds_model")
|
||||
>>> with torch.no_grad():
|
||||
... logits = model(**inputs).logits
|
||||
```
|
||||
|
||||
Obtén el identificador de la clase con mayor probabilidad y usa el *mapping* `id2label` del modelo para convertirlo en una etiqueta:
|
||||
|
||||
```py
|
||||
>>> import torch
|
||||
|
||||
>>> predicted_class_ids = torch.argmax(logits).item()
|
||||
>>> predicted_label = model.config.id2label[predicted_class_ids]
|
||||
>>> predicted_label
|
||||
'cash_deposit'
|
||||
```
|
||||
</pt>
|
||||
</frameworkcontent>
|
@ -327,7 +327,7 @@ python examples/pytorch/summarization/run_summarization.py
|
||||
Tous les scripts peuvent télécharger votre modèle final sur le Model Hub. Assurez-vous que vous êtes connecté à Hugging Face avant de commencer :
|
||||
|
||||
```bash
|
||||
huggingface-cli login
|
||||
hf auth login
|
||||
```
|
||||
|
||||
Ensuite, ajoutez l'argument `push_to_hub` au script. Cet argument créera un dépôt avec votre nom d'utilisateur Hugging Face et le nom du dossier spécifié dans `output_dir`.
|
||||
|
@ -285,7 +285,7 @@ resnet50d.model.load_state_dict(pretrained_model.state_dict())
|
||||
Adesso, per inviare il modello all'Hub, assicurati di aver effettuato l'accesso. Lancia dal tuo terminale:
|
||||
|
||||
```bash
|
||||
huggingface-cli login
|
||||
hf auth login
|
||||
```
|
||||
|
||||
O da un notebook:
|
||||
|
@ -56,7 +56,7 @@ Anche i file possono essere modificati facilmente in un repository ed è possibi
|
||||
Prima di condividere un modello nell'Hub, hai bisogno delle tue credenziali di Hugging Face. Se hai accesso ad un terminale, esegui il seguente comando nell'ambiente virtuale in cui è installata la libreria 🤗 Transformers. Questo memorizzerà il tuo token di accesso nella cartella cache di Hugging Face (di default `~/.cache/`):
|
||||
|
||||
```bash
|
||||
huggingface-cli login
|
||||
hf auth login
|
||||
```
|
||||
|
||||
Se stai usando un notebook come Jupyter o Colaboratory, assicurati di avere la libreria [`huggingface_hub`](https://huggingface.co/docs/hub/adding-a-library) installata. Questa libreria ti permette di interagire in maniera programmatica con l'Hub.
|
||||
|
@ -324,7 +324,7 @@ python examples/pytorch/summarization/run_summarization.py
|
||||
Tutti gli script possono caricare il tuo modello finale al [Model Hub](https://huggingface.co/models). Prima di iniziare, assicurati di aver effettuato l'accesso su Hugging Face:
|
||||
|
||||
```bash
|
||||
huggingface-cli login
|
||||
hf auth login
|
||||
```
|
||||
|
||||
Poi, aggiungi l'argomento `push_to_hub` allo script. Questo argomento consentirà di creare un repository con il tuo username Hugging Face e la cartella specificata in `output_dir`.
|
||||
|
@ -270,7 +270,7 @@ resnet50d.model.load_state_dict(pretrained_model.state_dict())
|
||||
モデルをHubに送信するには、ログインしていることを確認してください。ターミナルで次のコマンドを実行します:
|
||||
|
||||
```bash
|
||||
huggingface-cli login
|
||||
hf auth login
|
||||
```
|
||||
|
||||
またはノートブックから:
|
||||
|
@ -56,7 +56,7 @@ Model Hubの組み込みバージョニングはgitおよび[git-lfs](https://gi
|
||||
モデルをHubに共有する前に、Hugging Faceの認証情報が必要です。ターミナルへのアクセス権がある場合、🤗 Transformersがインストールされている仮想環境で以下のコマンドを実行します。これにより、アクセストークンがHugging Faceのキャッシュフォルダに保存されます(デフォルトでは `~/.cache/` に保存されます):
|
||||
|
||||
```bash
|
||||
huggingface-cli login
|
||||
hf auth login
|
||||
```
|
||||
|
||||
JupyterやColaboratoryのようなノートブックを使用している場合、[`huggingface_hub`](https://huggingface.co/docs/hub/adding-a-library)ライブラリがインストールされていることを確認してください。
|
||||
|
@ -337,7 +337,7 @@ python examples/pytorch/summarization/run_summarization.py
|
||||
すべてのスクリプトは、最終的なモデルを [Model Hub](https://huggingface.co/models) にアップロードできます。開始する前に Hugging Face にログインしていることを確認してください。
|
||||
|
||||
```bash
|
||||
huggingface-cli login
|
||||
hf auth login
|
||||
```
|
||||
|
||||
次に、スクリプトに `push_to_hub` 引数を追加します。この引数は、Hugging Face のユーザー名と `output_dir` で指定したフォルダ名でリポジトリを作成します。
|
||||
|
File diff suppressed because it is too large
@ -277,7 +277,7 @@ resnet50d.model.load_state_dict(pretrained_model.state_dict())
|
||||
터미널에서 다음 코드를 실행해 확인할 수 있습니다:
|
||||
|
||||
```bash
|
||||
huggingface-cli login
|
||||
hf auth login
|
||||
```
|
||||
|
||||
주피터 노트북의 경우에는 다음과 같습니다:
|
||||
|
@ -342,60 +342,92 @@ generation_output[:2]
|
||||
|
||||
## 캐시 (Caches) [[transformers.Cache]]
|
||||
|
||||
[[autodoc]] Cache
|
||||
- update
|
||||
|
||||
[[autodoc]] DynamicCache
|
||||
[[autodoc]] CacheLayerMixin
|
||||
- update
|
||||
- get_seq_length
|
||||
- get_mask_sizes
|
||||
- get_max_cache_shape
|
||||
- reset
|
||||
- reorder_cache
|
||||
|
||||
[[autodoc]] DynamicLayer
|
||||
- update
|
||||
- crop
|
||||
- batch_repeat_interleave
|
||||
- batch_select_indices
|
||||
|
||||
[[autodoc]] StaticLayer
|
||||
- update
|
||||
|
||||
[[autodoc]] SlidingWindowLayer
|
||||
- update
|
||||
|
||||
[[autodoc]] CacheProcessor
|
||||
- pre_update
|
||||
- post_update
|
||||
|
||||
[[autodoc]] OffloadedCacheProcessor
|
||||
- pre_update
|
||||
|
||||
[[autodoc]] QuantizedCacheProcessor
|
||||
- post_update
|
||||
|
||||
[[autodoc]] QuantoQuantizedCacheProcessor
|
||||
- post_update
|
||||
|
||||
[[autodoc]] HQQQuantizedCacheProcessor
|
||||
- post_update
|
||||
|
||||
[[autodoc]] Cache
|
||||
- update
|
||||
- get_seq_length
|
||||
- get_mask_sizes
|
||||
- get_max_cache_shape
|
||||
- reset
|
||||
- reorder_cache
|
||||
- crop
|
||||
- batch_repeat_interleave
|
||||
- batch_select_indices
|
||||
|
||||
[[autodoc]] DynamicCache
|
||||
- to_legacy_cache
|
||||
- from_legacy_cache
|
||||
|
||||
[[autodoc]] QuantizedCache
|
||||
- update
|
||||
- get_seq_length
|
||||
|
||||
[[autodoc]] QuantoQuantizedCache
|
||||
|
||||
[[autodoc]] QuantoQuantizedCacheProcessor
|
||||
|
||||
[[autodoc]] HQQQuantizedCache
|
||||
|
||||
[[autodoc]] HQQQuantizedCacheProcessor
|
||||
|
||||
[[autodoc]] OffloadedCache
|
||||
- update
|
||||
- prefetch_layer
|
||||
- evict_previous_layer
|
||||
|
||||
[[autodoc]] StaticCache
|
||||
- update
|
||||
- get_seq_length
|
||||
- reset
|
||||
|
||||
[[autodoc]] OffloadedStaticCache
|
||||
- update
|
||||
- get_seq_length
|
||||
- reset
|
||||
|
||||
[[autodoc]] HybridCache
|
||||
- update
|
||||
- get_seq_length
|
||||
- reset
|
||||
|
||||
[[autodoc]] HybridChunkedCache
|
||||
|
||||
[[autodoc]] SlidingWindowCache
|
||||
- update
|
||||
- reset
|
||||
|
||||
[[autodoc]] EncoderDecoderCache
|
||||
- get_seq_length
|
||||
- to_legacy_cache
|
||||
- from_legacy_cache
|
||||
- reset
|
||||
- reorder_cache
|
||||
|
||||
[[autodoc]] MambaCache
|
||||
- update_conv_state
|
||||
- update_ssm_state
|
||||
- reset
|
||||
|
||||
[[autodoc]] CacheConfig
|
||||
|
||||
[[autodoc]] QuantizedCacheConfig
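For orientation, a minimal usage sketch of the cache classes listed above (assuming a recent `transformers` release; the checkpoint is only an example):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, DynamicCache

tokenizer = AutoTokenizer.from_pretrained("qwen/Qwen3-8B")  # example checkpoint
model = AutoModelForCausalLM.from_pretrained("qwen/Qwen3-8B", device_map="auto")

inputs = tokenizer("KV caching lets generation reuse past keys and values:", return_tensors="pt").to(model.device)

# Pass an explicit DynamicCache so generate() stores past key/values in it
past_key_values = DynamicCache()
output = model.generate(**inputs, max_new_tokens=32, past_key_values=past_key_values)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```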
|
||||
|
||||
## 워터마크 유틸리티 (Watermark Utils) [[transformers.WatermarkDetector]]
|
||||
|
||||
[[autodoc]] WatermarkDetector
|
||||
|
207
docs/source/ko/model_doc/exaone4.md
Normal file
@ -0,0 +1,207 @@
|
||||
<!--Copyright 2025 The LG AI Research and The HuggingFace Team. All rights reserved.
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
|
||||
the License. You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
|
||||
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
|
||||
specific language governing permissions and limitations under the License.
|
||||
|
||||
⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
|
||||
rendered properly in your Markdown viewer.
|
||||
|
||||
-->
|
||||
|
||||
# EXAONE 4
|
||||
|
||||
## 개요
|
||||
|
||||
**[EXAONE 4.0](https://github.com/LG-AI-EXAONE/EXAONE-4.0)** 모델군은 [EXAONE 3.5](https://github.com/LG-AI-EXAONE/EXAONE-3.5) 모델군의 높은 실용성과 [EXAONE Deep](https://github.com/LG-AI-EXAONE/EXAONE-Deep) 모델군의 향상된 사고 추론 능력을 각각 Non-reasoning mode와 Reasoning mode로 통합한 자연어 모델(language model)입니다. 에이전틱(agentic) AI 시대에 발맞춰 EXAONE 4.0은 에이전틱 도구 사용 능력과 같은 핵심 기능을 통합했고, 기존의 다국어 능력을 영어, 한국어와 더불어 스페인어까지 확장했습니다.
|
||||
|
||||
EXAONE 4.0 모델군은 두 개의 모델: 높은 성능을 위해 최적화된 32B 중형 모델, 그리고 온-디바이스 활용을 위해 디자인된 1.2B 소형 모델으로 구성되어 있습니다.
|
||||
|
||||
EXAONE 4.0의 모델 구조는 이전 EXAONE 모델들과 다른 아키텍처 디자인을 채택했습니다.
|
||||
|
||||
1. **Hybrid Attention**: 32B 모델은 *Local attention (sliding window attention)*과 *Global attention (full attention)*을 3:1 비율로 연결한 hybrid attention 구조를 채택했습니다. 또한 전체 문맥을 더 잘 이해할 수 있도록 global attention에서 RoPE를 사용하지 않았습니다.
|
||||
2. **QK-Reorder-Norm**: 더 나은 downstream tasks 성능을 위해 연산량의 증가를 감수하며 전통적으로 사용되고 있던 Pre-LN 방식을 변경했습니다. LayerNorm의 위치를 attention과 MLP의 출력에 적용되도록 재배치했고, Q와 K projection 직후에도 RMS normalization을 추가했습니다.
|
||||
|
||||
더 자세한 정보는 [기술 보고서](https://arxiv.org/abs/2507.11407), [HuggingFace 논문](https://huggingface.co/papers/2507.11407), [블로그](https://www.lgresearch.ai/blog/view?seq=576), [공식 GitHub](https://github.com/LG-AI-EXAONE/EXAONE-4.0) 페이지를 참고해주시길 바랍니다.
|
||||
|
||||
공개된 모든 모델 체크포인트는 [HuggingFace 콜렉션](https://huggingface.co/collections/LGAI-EXAONE/exaone-40-686b2e0069800c835ed48375)에서 확인할 수 있습니다.
|
||||
|
||||
|
||||
## 모델 세부 정보
|
||||
|
||||
| Model Configuration | 32B | 1.2B |
|
||||
|:-------------------|:-----:|:------:|
|
||||
| d_model | 5,120 | 2,048 |
|
||||
| Number of layers | 64 | 30 |
|
||||
| Normalization | QK-Reorder-LN | QK-Reorder-LN |
|
||||
| Non-linearity | SwiGLU | SwiGLU |
|
||||
| Feedforward dimension | 27,392 | 4,096 |
|
||||
| Attention type | Hybrid (3:1 Local-Global) | Global |
|
||||
| Head type | GQA | GQA |
|
||||
| Number of heads | 40 | 32 |
|
||||
| Number of KV heads | 8 | 8 |
|
||||
| Head size | 128 | 64 |
|
||||
| Max sequence length | 131,072 | 65,536 |
|
||||
| RoPE theta | 1,000,000 | 1,000,000 |
|
||||
| Tokenizer | BBPE | BBPE |
|
||||
| Vocab size | 102,400 | 102,400 |
|
||||
| Tied word embedding | False | True |
|
||||
| Knowledge cut-off | Nov. 2024 | Nov. 2024 |
|
||||
|
||||
|
||||
## 사용 팁
|
||||
|
||||
### Non-reasoning mode
|
||||
|
||||
일반적인 대화의 경우 아래 예제와 같이 EXAONE 4.0을 사용할 수 있습니다.
|
||||
|
||||
```python
|
||||
from transformers import AutoModelForCausalLM, AutoTokenizer
|
||||
|
||||
model_name = "LGAI-EXAONE/EXAONE-4.0-32B"
|
||||
|
||||
model = AutoModelForCausalLM.from_pretrained(
|
||||
model_name,
|
||||
torch_dtype="bfloat16",
|
||||
device_map="auto"
|
||||
)
|
||||
tokenizer = AutoTokenizer.from_pretrained(model_name)
|
||||
|
||||
# 원하는 입력을 선택하세요
|
||||
prompt = "Explain how wonderful you are"
|
||||
prompt = "Explica lo increíble que eres"
|
||||
prompt = "너가 얼마나 대단한지 설명해 봐"
|
||||
|
||||
messages = [
|
||||
{"role": "user", "content": prompt}
|
||||
]
|
||||
input_ids = tokenizer.apply_chat_template(
|
||||
messages,
|
||||
tokenize=True,
|
||||
add_generation_prompt=True,
|
||||
return_tensors="pt"
|
||||
)
|
||||
|
||||
output = model.generate(
|
||||
input_ids.to(model.device),
|
||||
max_new_tokens=128,
|
||||
do_sample=False,
|
||||
)
|
||||
print(tokenizer.decode(output[0]))
|
||||
```
|
||||
|
||||
### Reasoning mode
|
||||
|
||||
The EXAONE 4.0 models have reasoning capabilities for handling complex problems. You can activate reasoning mode by using the `enable_thinking=True` argument with the tokenizer, which opens a reasoning block that starts with `<think>` tag without closing it.
|
||||
|
||||
EXAONE 4.0 모델군은 복잡한 문제를 해결하기 위한 사고 추론 능력을 갖추고 있습니다. 토크나이저에서 `enable_thinking=True` 인자를 사용해서 reasoning mode로 모델을 사용할 수 있습니다. 이 경우 `<think>` 토큰으로 추론 블록을 연 뒤, 닫지 않고 추론을 시작합니다.
|
||||
|
||||
```python
|
||||
messages = [
|
||||
{"role": "user", "content": "Which one is bigger, 3.12 vs 3.9?"}
|
||||
]
|
||||
input_ids = tokenizer.apply_chat_template(
|
||||
messages,
|
||||
tokenize=True,
|
||||
add_generation_prompt=True,
|
||||
return_tensors="pt",
|
||||
enable_thinking=True,
|
||||
)
|
||||
|
||||
output = model.generate(
|
||||
input_ids.to(model.device),
|
||||
max_new_tokens=128,
|
||||
do_sample=True,
|
||||
temperature=0.6,
|
||||
top_p=0.95
|
||||
)
|
||||
print(tokenizer.decode(output[0]))
|
||||
```
|
||||
|
||||
> [!IMPORTANT]
|
||||
> 모델을 reasoning mode로 사용할 경우, 생성되는 답변이 sampling parameters에 굉장히 민감합니다. 따라서 더 나은 생성 품질을 위해 공식 [Usage Guideline](https://github.com/LG-AI-EXAONE/EXAONE-4.0#usage-guideline)를 참조해 주시길 바랍니다.
|
||||
|
||||
### Agentic tool use
|
||||
|
||||
EXAONE 4.0 모델은 도구 사용 능력을 갖춘 덕분에 Agent로 사용할 수 있습니다. 이를 위해서는 아래 예제와 같이 도구 명세를 모델에게 제공해 주어야 합니다.
|
||||
|
||||
```python
|
||||
import random
|
||||
|
||||
def roll_dice(max_num: int):
|
||||
return random.randint(1, max_num)
|
||||
|
||||
tools = [
|
||||
{
|
||||
"type": "function",
|
||||
"function": {
|
||||
"name": "roll_dice",
|
||||
"description": "Roll a dice with the number 1 to N. User can select the number N.",
|
||||
"parameters": {
|
||||
"type": "object",
|
||||
"required": ["max_num"],
|
||||
"properties": {
|
||||
"max_num": {
|
||||
"type": "int",
|
||||
"description": "Max number of the dice"
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
]
|
||||
|
||||
messages = [
|
||||
{"role": "user", "content": "Roll D6 dice twice!"}
|
||||
]
|
||||
input_ids = tokenizer.apply_chat_template(
|
||||
messages,
|
||||
tokenize=True,
|
||||
add_generation_prompt=True,
|
||||
return_tensors="pt",
|
||||
tools=tools,
|
||||
)
|
||||
|
||||
output = model.generate(
|
||||
input_ids.to(model.device),
|
||||
max_new_tokens=1024,
|
||||
do_sample=True,
|
||||
temperature=0.6,
|
||||
top_p=0.95,
|
||||
)
|
||||
print(tokenizer.decode(output[0]))
|
||||
```
|
||||
|
||||
## Exaone4Config
|
||||
|
||||
[[autodoc]] Exaone4Config
|
||||
|
||||
## Exaone4Model
|
||||
|
||||
[[autodoc]] Exaone4Model
|
||||
- forward
|
||||
|
||||
## Exaone4ForCausalLM
|
||||
|
||||
[[autodoc]] Exaone4ForCausalLM
|
||||
- forward
|
||||
|
||||
## Exaone4ForSequenceClassification
|
||||
|
||||
[[autodoc]] Exaone4ForSequenceClassification
|
||||
- forward
|
||||
|
||||
## Exaone4ForTokenClassification
|
||||
|
||||
[[autodoc]] Exaone4ForTokenClassification
|
||||
- forward
|
||||
|
||||
## Exaone4ForQuestionAnswering
|
||||
|
||||
[[autodoc]] Exaone4ForQuestionAnswering
|
||||
- forward
|
@ -56,7 +56,7 @@ picture-in-picture" allowfullscreen></iframe>
|
||||
모델을 허브에 공유하기 전에 Hugging Face 자격 증명이 필요합니다. 터미널에 액세스할 수 있는 경우, 🤗 Transformers가 설치된 가상 환경에서 다음 명령을 실행합니다. 그러면 Hugging Face 캐시 폴더(기본적으로 `~/.cache/`)에 액세스 토큰을 저장합니다:
|
||||
|
||||
```bash
|
||||
huggingface-cli login
|
||||
hf auth login
|
||||
```
|
||||
|
||||
Jupyter 또는 Colaboratory와 같은 노트북을 사용 중인 경우, [`huggingface_hub`](https://huggingface.co/docs/hub/adding-a-library) 라이브러리가 설치되었는지 확인하세요. 이 라이브러리를 사용하면 API로 허브와 상호 작용할 수 있습니다.
|
||||
|
@ -347,7 +347,7 @@ python examples/pytorch/summarization/run_summarization.py
|
||||
모든 스크립트는 최종 모델을 [Model Hub](https://huggingface.co/models)에 업로드할 수 있습니다.
|
||||
시작하기 전에 Hugging Face에 로그인했는지 확인하세요:
|
||||
```bash
|
||||
huggingface-cli login
|
||||
hf auth login
|
||||
```
|
||||
|
||||
그런 다음 스크립트에 `push_to_hub` 인수를 추가합니다.
|
||||
|
@ -284,7 +284,7 @@ resnet50d.model.load_state_dict(pretrained_model.state_dict())
|
||||
Agora para enviar o modelo para o Hub, certifique-se de estar logado. Ou execute no seu terminal:
|
||||
|
||||
```bash
|
||||
huggingface-cli login
|
||||
hf auth login
|
||||
```
|
||||
|
||||
ou a partir do notebook:
|
||||
|
@ -327,7 +327,7 @@ python examples/pytorch/summarization/run_summarization.py
|
||||
Todos os scripts podem enviar seu modelo final para o [Model Hub](https://huggingface.co/models). Certifique-se de estar conectado ao Hugging Face antes de começar:
|
||||
|
||||
```bash
|
||||
huggingface-cli login
|
||||
hf auth login
|
||||
```
|
||||
|
||||
Em seguida, adicione o argumento `push_to_hub` ao script. Este argumento criará um repositório com seu nome de usuário do Hugging Face e o nome da pasta especificado em `output_dir`.
|
||||
|
@ -246,7 +246,7 @@ resnet50d.model.load_state_dict(pretrained_model.state_dict())
|
||||
现在要将模型推送到集线器,请确保你已登录。你看可以在终端中运行以下命令:
|
||||
|
||||
```bash
|
||||
huggingface-cli login
|
||||
hf auth login
|
||||
```
|
||||
|
||||
或者在笔记本中运行以下代码:
|
||||
|
@ -56,7 +56,7 @@ Model Hub的内置版本控制基于git和[git-lfs](https://git-lfs.github.com/)
|
||||
|
||||
|
||||
```bash
|
||||
huggingface-cli login
|
||||
hf auth login
|
||||
```
|
||||
|
||||
如果您正在使用像Jupyter或Colaboratory这样的`notebook`,请确保您已安装了[`huggingface_hub`](https://huggingface.co/docs/hub/adding-a-library)库。该库允许您以编程方式与Hub进行交互。
|
||||
|
@ -331,7 +331,7 @@ python examples/pytorch/summarization/run_summarization.py
|
||||
所有脚本都可以将您的最终模型上传到[Model Hub](https://huggingface.co/models)。在开始之前,请确保您已登录Hugging Face:
|
||||
|
||||
```bash
|
||||
huggingface-cli login
|
||||
hf auth login
|
||||
```
|
||||
|
||||
然后,在脚本中添加`push_to_hub`参数。这个参数会创建一个带有您Hugging Face用户名和`output_dir`中指定的文件夹名称的仓库。
|
||||
|
@ -79,5 +79,5 @@ To specify a given repository name, use the `--hub_model_id` argument. You will
|
||||
|
||||
A few notes on this integration:
|
||||
|
||||
- you will need to be logged in to the Hugging Face website locally for it to work, the easiest way to achieve this is to run `huggingface-cli login` and then type your username and password when prompted. You can also pass along your authentication token with the `--hub_token` argument.
|
||||
- you will need to be logged in to the Hugging Face website locally for it to work, the easiest way to achieve this is to run `hf auth login` and then type your username and password when prompted. You can also pass along your authentication token with the `--hub_token` argument.
|
||||
- the `output_dir` you pick will either need to be a new folder or a local clone of the distant repository you are using.
|
||||
|
@ -186,7 +186,7 @@ class ModelArguments:
|
||||
metadata={
|
||||
"help": (
|
||||
"The token to use as HTTP bearer authorization for remote files. If not specified, will use the token "
|
||||
"generated when running `huggingface-cli login` (stored in `~/.huggingface`)."
|
||||
"generated when running `hf auth login` (stored in `~/.huggingface`)."
|
||||
)
|
||||
},
|
||||
)
|
||||
|
@ -172,7 +172,7 @@ class ModelArguments:
|
||||
metadata={
|
||||
"help": (
|
||||
"The token to use as HTTP bearer authorization for remote files. If not specified, will use the token "
|
||||
"generated when running `huggingface-cli login` (stored in `~/.huggingface`)."
|
||||
"generated when running `hf auth login` (stored in `~/.huggingface`)."
|
||||
)
|
||||
},
|
||||
)
|
||||
|
@ -173,7 +173,7 @@ class ModelArguments:
|
||||
metadata={
|
||||
"help": (
|
||||
"The token to use as HTTP bearer authorization for remote files. If not specified, will use the token "
|
||||
"generated when running `huggingface-cli login` (stored in `~/.huggingface`)."
|
||||
"generated when running `hf auth login` (stored in `~/.huggingface`)."
|
||||
)
|
||||
},
|
||||
)
|
||||
|
@ -179,7 +179,7 @@ class ModelArguments:
|
||||
metadata={
|
||||
"help": (
|
||||
"The token to use as HTTP bearer authorization for remote files. If not specified, will use the token "
|
||||
"generated when running `huggingface-cli login` (stored in `~/.huggingface`)."
|
||||
"generated when running `hf auth login` (stored in `~/.huggingface`)."
|
||||
)
|
||||
},
|
||||
)
|
||||
|
@ -173,7 +173,7 @@ class ModelArguments:
|
||||
metadata={
|
||||
"help": (
|
||||
"The token to use as HTTP bearer authorization for remote files. If not specified, will use the token "
|
||||
"generated when running `huggingface-cli login` (stored in `~/.huggingface`)."
|
||||
"generated when running `hf auth login` (stored in `~/.huggingface`)."
|
||||
)
|
||||
},
|
||||
)
|
||||
|
@ -60,7 +60,7 @@ from transformers.utils import check_min_version, send_example_telemetry
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
# Will error if the minimal version of Transformers is not installed. Remove at your own risks.
|
||||
check_min_version("4.54.0.dev0")
|
||||
check_min_version("4.54.0")
|
||||
|
||||
Array = Any
|
||||
Dataset = datasets.arrow_dataset.Dataset
|
||||
@ -159,7 +159,7 @@ class ModelArguments:
|
||||
metadata={
|
||||
"help": (
|
||||
"The token to use as HTTP bearer authorization for remote files. If not specified, will use the token "
|
||||
"generated when running `huggingface-cli login` (stored in `~/.huggingface`)."
|
||||
"generated when running `hf auth login` (stored in `~/.huggingface`)."
|
||||
)
|
||||
},
|
||||
)
|
||||
|
@ -59,7 +59,7 @@ from transformers.utils.versions import require_version
|
||||
|
||||
|
||||
# Will error if the minimal version of Transformers is not installed. Remove at your own risk.
|
||||
check_min_version("4.54.0.dev0")
|
||||
check_min_version("4.54.0")
|
||||
|
||||
require_version("datasets>=2.14.0", "To fix: pip install -r examples/flax/speech-recognition/requirements.txt")
|
||||
|
||||
|
@ -192,7 +192,7 @@ class ModelArguments:
|
||||
metadata={
|
||||
"help": (
|
||||
"The token to use as HTTP bearer authorization for remote files. If not specified, will use the token "
|
||||
"generated when running `huggingface-cli login` (stored in `~/.huggingface`)."
|
||||
"generated when running `hf auth login` (stored in `~/.huggingface`)."
|
||||
)
|
||||
},
|
||||
)
|
||||
|
@ -55,7 +55,7 @@ from transformers.utils import check_min_version, send_example_telemetry
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
# Will error if the minimal version of Transformers is not installed. Remove at your own risks.
|
||||
check_min_version("4.54.0.dev0")
|
||||
check_min_version("4.54.0")
|
||||
|
||||
Array = Any
|
||||
Dataset = datasets.arrow_dataset.Dataset
|
||||
@ -107,7 +107,7 @@ class ModelArguments:
|
||||
metadata={
|
||||
"help": (
|
||||
"The token to use as HTTP bearer authorization for remote files. If not specified, will use the token "
|
||||
"generated when running `huggingface-cli login` (stored in `~/.huggingface`)."
|
||||
"generated when running `hf auth login` (stored in `~/.huggingface`)."
|
||||
)
|
||||
},
|
||||
)
|
||||
|
@ -56,7 +56,7 @@ from transformers.utils.versions import require_version
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
# Will error if the minimal version of Transformers is not installed. Remove at your own risks.
|
||||
check_min_version("4.54.0.dev0")
|
||||
check_min_version("4.54.0")
|
||||
|
||||
require_version("datasets>=1.8.0", "To fix: pip install -r examples/pytorch/token-classification/requirements.txt")
|
||||
|
||||
@ -155,7 +155,7 @@ class ModelArguments:
|
||||
metadata={
|
||||
"help": (
|
||||
"The token to use as HTTP bearer authorization for remote files. If not specified, will use the token "
|
||||
"generated when running `huggingface-cli login` (stored in `~/.huggingface`)."
|
||||
"generated when running `hf auth login` (stored in `~/.huggingface`)."
|
||||
)
|
||||
},
|
||||
)
|
||||
|
@ -163,7 +163,7 @@ class ModelArguments:
|
||||
metadata={
|
||||
"help": (
|
||||
"The token to use as HTTP bearer authorization for remote files. If not specified, will use the token "
|
||||
"generated when running `huggingface-cli login` (stored in `~/.huggingface`)."
|
||||
"generated when running `hf auth login` (stored in `~/.huggingface`)."
|
||||
)
|
||||
},
|
||||
)
|
||||
|
@ -294,7 +294,7 @@ class MyNewModel2PreTrainedModel(PreTrainedModel):
|
||||
_supports_flex_attn = True
|
||||
_supports_cache_class = True
|
||||
_supports_quantized_cache = True
|
||||
_supports_static_cache = True
|
||||
_can_compile_fullgraph = True
|
||||
_supports_attention_backend = True
|
||||
_can_record_outputs = {
|
||||
"hidden_states": MyNewModel2DecoderLayer,
|
||||
|
@ -94,7 +94,7 @@ class NewTaskModelPreTrainedModel(PreTrainedModel):
|
||||
_skip_keys_device_placement = "past_key_values"
|
||||
_supports_cache_class = True
|
||||
_supports_quantized_cache = True
|
||||
_supports_static_cache = True
|
||||
_can_compile_fullgraph = True
|
||||
_supports_flash_attn = True
|
||||
_supports_sdpa = True
|
||||
_supports_flex_attn = True
|
||||
|
@ -293,7 +293,7 @@ class SuperPreTrainedModel(PreTrainedModel):
|
||||
_supports_flex_attn = True
|
||||
_supports_cache_class = True
|
||||
_supports_quantized_cache = True
|
||||
_supports_static_cache = True
|
||||
_can_compile_fullgraph = True
|
||||
_supports_attention_backend = True
|
||||
_can_record_outputs = {
|
||||
"hidden_states": SuperDecoderLayer,
|
||||
|
@ -90,7 +90,7 @@ To specify a given repository name, use the `--hub_model_id` argument. You will
|
||||
|
||||
A few notes on this integration:
|
||||
|
||||
- you will need to be logged in to the Hugging Face website locally for it to work, the easiest way to achieve this is to run `huggingface-cli login` and then type your username and password when prompted. You can also pass along your authentication token with the `--hub_token` argument.
|
||||
- you will need to be logged in to the Hugging Face website locally for it to work, the easiest way to achieve this is to run `hf auth login` and then type your username and password when prompted. You can also pass along your authentication token with the `--hub_token` argument.
|
||||
- the `output_dir` you pick will either need to be a new folder or a local clone of the distant repository you are using.
|
||||
|
||||
## Distributed training and mixed precision
|
||||
|
@ -115,10 +115,10 @@ On 4 V100 GPUs (16GB), this script should run in ~1 hour and yield accuracy of *
|
||||
$ apt install git-lfs
|
||||
```
|
||||
|
||||
2. Log in with your HuggingFace account credentials using `huggingface-cli`
|
||||
2. Log in with your HuggingFace account credentials using `hf`
|
||||
|
||||
```bash
|
||||
$ huggingface-cli login
|
||||
$ hf auth login
|
||||
# ...follow the prompts
|
||||
```
|
||||
|
||||
|
@ -1,5 +1,5 @@
|
||||
datasets>=1.14.0
|
||||
datasets[audio]>=1.14.0
|
||||
evaluate
|
||||
librosa
|
||||
torchaudio
|
||||
torch>=1.6
|
||||
torch>=1.6
|
||||
|
@ -13,6 +13,17 @@
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
|
||||
# /// script
|
||||
# dependencies = [
|
||||
# "transformers==4.54.0",
|
||||
# "datasets[audio]>=1.14.0",
|
||||
# "evaluate",
|
||||
# "librosa",
|
||||
# "torchaudio",
|
||||
# "torch>=1.6",
|
||||
# ]
|
||||
# ///
|
||||
|
||||
import logging
|
||||
import os
|
||||
import sys
|
||||
@ -44,7 +55,7 @@ from transformers.utils.versions import require_version
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
# Will error if the minimal version of Transformers is not installed. Remove at your own risks.
|
||||
check_min_version("4.54.0.dev0")
|
||||
check_min_version("4.54.0")
|
||||
|
||||
require_version("datasets>=1.14.0", "To fix: pip install -r examples/pytorch/audio-classification/requirements.txt")
|
||||
|
||||
@ -156,7 +167,7 @@ class ModelArguments:
|
||||
metadata={
|
||||
"help": (
|
||||
"The token to use as HTTP bearer authorization for remote files. If not specified, will use the token "
|
||||
"generated when running `huggingface-cli login` (stored in `~/.huggingface`)."
|
||||
"generated when running `hf auth login` (stored in `~/.huggingface`)."
|
||||
)
|
||||
},
|
||||
)
|
||||
|
@ -12,6 +12,16 @@
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
|
||||
# /// script
|
||||
# dependencies = [
|
||||
# "transformers==4.54.0",
|
||||
# "torch>=1.5.0",
|
||||
# "torchvision>=0.6.0",
|
||||
# "datasets>=1.8.0",
|
||||
# ]
|
||||
# ///
|
||||
|
||||
"""
|
||||
Training a CLIP like dual encoder models using text and vision encoders in the library.
|
||||
|
||||
@ -53,7 +63,7 @@ from transformers.utils.versions import require_version
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
# Will error if the minimal version of Transformers is not installed. Remove at your own risks.
|
||||
check_min_version("4.54.0.dev0")
|
||||
check_min_version("4.54.0")
|
||||
|
||||
require_version("datasets>=1.8.0", "To fix: pip install -r examples/pytorch/contrastive-image-text/requirements.txt")
|
||||
|
||||
@ -90,7 +100,7 @@ class ModelArguments:
|
||||
metadata={
|
||||
"help": (
|
||||
"The token to use as HTTP bearer authorization for remote files. If not specified, will use the token "
|
||||
"generated when running `huggingface-cli login` (stored in `~/.huggingface`)."
|
||||
"generated when running `hf auth login` (stored in `~/.huggingface`)."
|
||||
)
|
||||
},
|
||||
)
|
||||
|
@ -129,7 +129,7 @@ dataset = load_dataset("imagefolder", data_files={"train": ["path/to/file1", "pa
|
||||
Next, push it to the hub!
|
||||
|
||||
```python
|
||||
# assuming you have ran the huggingface-cli login command in a terminal
|
||||
# assuming you have ran the hf auth login command in a terminal
|
||||
dataset.push_to_hub("name_of_your_dataset")
|
||||
|
||||
# if you want to push to a private repo, simply pass private=True:
|
||||
@ -152,10 +152,10 @@ $ git config --global user.email "you@example.com"
|
||||
$ git config --global user.name "Your Name"
|
||||
```
|
||||
|
||||
2. Log in with your HuggingFace account credentials using `huggingface-cli`:
|
||||
2. Log in with your HuggingFace account credentials using `hf`:
|
||||
|
||||
```bash
|
||||
$ huggingface-cli login
|
||||
$ hf auth login
|
||||
# ...follow the prompts
|
||||
```
|
||||
|
||||
|
@ -12,6 +12,18 @@
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
|
||||
# /// script
|
||||
# dependencies = [
|
||||
# "transformers==4.54.0",
|
||||
# "accelerate>=0.12.0",
|
||||
# "torch>=1.5.0",
|
||||
# "torchvision>=0.6.0",
|
||||
# "datasets>=2.14.0",
|
||||
# "evaluate",
|
||||
# "scikit-learn",
|
||||
# ]
|
||||
# ///
|
||||
|
||||
import logging
|
||||
import os
|
||||
import sys
|
||||
@ -56,7 +68,7 @@ from transformers.utils.versions import require_version
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
# Will error if the minimal version of Transformers is not installed. Remove at your own risks.
|
||||
check_min_version("4.54.0.dev0")
|
||||
check_min_version("4.54.0")
|
||||
|
||||
require_version("datasets>=2.14.0", "To fix: pip install -r examples/pytorch/image-classification/requirements.txt")
|
||||
|
||||
@ -156,7 +168,7 @@ class ModelArguments:
|
||||
metadata={
|
||||
"help": (
|
||||
"The token to use as HTTP bearer authorization for remote files. If not specified, will use the token "
|
||||
"generated when running `huggingface-cli login` (stored in `~/.huggingface`)."
|
||||
"generated when running `hf auth login` (stored in `~/.huggingface`)."
|
||||
)
|
||||
},
|
||||
)
|
||||
|
@ -11,6 +11,19 @@
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
# limitations under the License.
|
||||
|
||||
# /// script
|
||||
# dependencies = [
|
||||
# "transformers==4.54.0",
|
||||
# "accelerate>=0.12.0",
|
||||
# "torch>=1.5.0",
|
||||
# "torchvision>=0.6.0",
|
||||
# "datasets>=2.14.0",
|
||||
# "evaluate",
|
||||
# "scikit-learn",
|
||||
# ]
|
||||
# ///
|
||||
|
||||
"""Finetuning any 🤗 Transformers model for image classification leveraging 🤗 Accelerate."""
|
||||
|
||||
import argparse
|
||||
@ -48,7 +61,7 @@ from transformers.utils.versions import require_version
|
||||
|
||||
|
||||
# Will error if the minimal version of Transformers is not installed. Remove at your own risks.
|
||||
check_min_version("4.54.0.dev0")
|
||||
check_min_version("4.54.0")
|
||||
|
||||
logger = get_logger(__name__)
|
||||
|
||||
|
@ -239,10 +239,10 @@ $ git config --global user.email "you@example.com"
|
||||
$ git config --global user.name "Your Name"
|
||||
```
|
||||
|
||||
2. Log in with your HuggingFace account credentials using `huggingface-cli`
|
||||
2. Log in with your HuggingFace account credentials using `hf`
|
||||
|
||||
```bash
|
||||
$ huggingface-cli login
|
||||
$ hf auth login
|
||||
# ...follow the prompts
|
||||
```
|
||||
|
||||
|
@ -12,6 +12,15 @@
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
|
||||
# /// script
|
||||
# dependencies = [
|
||||
# "transformers==4.54.0",
|
||||
# "torch>=1.5.0",
|
||||
# "torchvision>=0.6.0",
|
||||
# "datasets>=1.8.0",
|
||||
# ]
|
||||
# ///
|
||||
|
||||
import logging
|
||||
import os
|
||||
import sys
|
||||
@ -42,7 +51,7 @@ from transformers.utils.versions import require_version
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
# Will error if the minimal version of Transformers is not installed. Remove at your own risks.
|
||||
check_min_version("4.54.0.dev0")
|
||||
check_min_version("4.54.0")
|
||||
|
||||
require_version("datasets>=1.8.0", "To fix: pip install -r examples/pytorch/image-pretraining/requirements.txt")
|
||||
|
||||
@ -147,7 +156,7 @@ class ModelArguments:
|
||||
metadata={
|
||||
"help": (
|
||||
"The token to use as HTTP bearer authorization for remote files. If not specified, will use the token "
|
||||
"generated when running `huggingface-cli login` (stored in `~/.huggingface`)."
|
||||
"generated when running `hf auth login` (stored in `~/.huggingface`)."
|
||||
)
|
||||
},
|
||||
)
|
||||
|
@ -12,6 +12,15 @@
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
|
||||
# /// script
|
||||
# dependencies = [
|
||||
# "transformers==4.54.0",
|
||||
# "torch>=1.5.0",
|
||||
# "torchvision>=0.6.0",
|
||||
# "datasets>=1.8.0",
|
||||
# ]
|
||||
# ///
|
||||
|
||||
import logging
|
||||
import os
|
||||
import sys
|
||||
@ -47,7 +56,7 @@ Any model supported by the AutoModelForMaskedImageModeling API can be used.
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
# Will error if the minimal version of Transformers is not installed. Remove at your own risks.
|
||||
check_min_version("4.54.0.dev0")
|
||||
check_min_version("4.54.0")
|
||||
|
||||
require_version("datasets>=1.8.0", "To fix: pip install -r examples/pytorch/image-pretraining/requirements.txt")
|
||||
|
||||
@ -157,7 +166,7 @@ class ModelArguments:
|
||||
metadata={
|
||||
"help": (
|
||||
"The token to use as HTTP bearer authorization for remote files. If not specified, will use the token "
|
||||
"generated when running `huggingface-cli login` (stored in `~/.huggingface`)."
|
||||
"generated when running `hf auth login` (stored in `~/.huggingface`)."
|
||||
)
|
||||
},
|
||||
)
|
||||
|
@ -12,6 +12,15 @@
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
|
||||
# /// script
|
||||
# dependencies = [
|
||||
# "transformers==4.54.0",
|
||||
# "torch>=1.5.0",
|
||||
# "torchvision>=0.6.0",
|
||||
# "datasets>=1.8.0",
|
||||
# ]
|
||||
# ///
|
||||
|
||||
import argparse
|
||||
import logging
|
||||
import math
|
||||
@ -52,7 +61,7 @@ Any model supported by the AutoModelForMaskedImageModeling API can be used.
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
# Will error if the minimal version of Transformers is not installed. Remove at your own risks.
|
||||
check_min_version("4.54.0.dev0")
|
||||
check_min_version("4.54.0")
|
||||
|
||||
require_version("datasets>=1.8.0", "To fix: pip install -r examples/pytorch/image-pretraining/requirements.txt")
|
||||
|
||||
@ -191,7 +200,7 @@ def parse_args():
|
||||
default=None,
|
||||
help=(
|
||||
"The token to use as HTTP bearer authorization for remote files. If not specified, will use the token "
|
||||
"generated when running `huggingface-cli login` (stored in `~/.huggingface`)."
|
||||
"generated when running `hf auth login` (stored in `~/.huggingface`)."
|
||||
),
|
||||
)
|
||||
parser.add_argument(
|
||||
|
@ -12,6 +12,17 @@
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
|
||||
# /// script
|
||||
# dependencies = [
|
||||
# "transformers==4.54.0",
|
||||
# "albumentations >= 1.4.16",
|
||||
# "timm",
|
||||
# "datasets",
|
||||
# "torchmetrics",
|
||||
# "pycocotools",
|
||||
# ]
|
||||
# ///
|
||||
|
||||
"""Finetuning 🤗 Transformers model for instance segmentation leveraging the Trainer API."""
|
||||
|
||||
import logging
|
||||
@ -46,7 +57,7 @@ from transformers.utils.versions import require_version
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
# Will error if the minimal version of Transformers is not installed. Remove at your own risks.
|
||||
check_min_version("4.54.0.dev0")
|
||||
check_min_version("4.54.0")
|
||||
|
||||
require_version("datasets>=2.0.0", "To fix: pip install -r examples/pytorch/instance-segmentation/requirements.txt")
|
||||
|
||||
@ -86,7 +97,7 @@ class Arguments:
|
||||
metadata={
|
||||
"help": (
|
||||
"The token to use as HTTP bearer authorization for remote files. If not specified, will use the token "
|
||||
"generated when running `huggingface-cli login` (stored in `~/.huggingface`)."
|
||||
"generated when running `hf auth login` (stored in `~/.huggingface`)."
|
||||
)
|
||||
},
|
||||
)
|
||||
|
@ -12,6 +12,17 @@
|
||||
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
# See the License for the specific language governing permissions and
|
||||
|
||||
# /// script
|
||||
# dependencies = [
|
||||
# "transformers==4.54.0",
|
||||
# "albumentations >= 1.4.16",
|
||||
# "timm",
|
||||
# "datasets",
|
||||
# "torchmetrics",
|
||||
# "pycocotools",
|
||||
# ]
|
||||
# ///
|
||||
|
||||
"""Finetuning 🤗 Transformers model for instance segmentation with Accelerate 🚀."""
|
||||
|
||||
import argparse
|
||||
@ -52,7 +63,7 @@ from transformers.utils.versions import require_version
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
# Will error if the minimal version of Transformers is not installed. Remove at your own risks.
|
||||
check_min_version("4.54.0.dev0")
|
||||
check_min_version("4.54.0")
|
||||
|
||||
require_version("datasets>=2.0.0", "To fix: pip install -r examples/pytorch/instance-segmentation/requirements.txt")
|
||||
|
||||
|
@@ -12,6 +12,21 @@
 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 # See the License for the specific language governing permissions and
 # limitations under the License.

+# /// script
+# dependencies = [
+#     "transformers==4.54.0",
+#     "albumentations >= 1.4.16",
+#     "accelerate >= 0.12.0",
+#     "torch >= 1.3",
+#     "datasets >= 2.14.0",
+#     "sentencepiece != 0.1.92",
+#     "protobuf",
+#     "evaluate",
+#     "scikit-learn",
+# ]
+# ///
+
 """
 Fine-tuning the library models for causal language modeling (GPT, GPT-2, CTRL, ...) on a text file or a dataset.

@@ -54,7 +69,7 @@ from transformers.utils.versions import require_version


 # Will error if the minimal version of Transformers is not installed. Remove at your own risks.
-check_min_version("4.54.0.dev0")
+check_min_version("4.54.0")

 require_version("datasets>=2.14.0", "To fix: pip install -r examples/pytorch/language-modeling/requirements.txt")

@@ -115,7 +130,7 @@ class ModelArguments:
         metadata={
             "help": (
                 "The token to use as HTTP bearer authorization for remote files. If not specified, will use the token "
-                "generated when running `huggingface-cli login` (stored in `~/.huggingface`)."
+                "generated when running `hf auth login` (stored in `~/.huggingface`)."
             )
         },
     )
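The `token` option touched in these `ModelArguments` hunks is a dataclass field consumed by `HfArgumentParser`. A compact sketch of the wiring: only the field name, default, and help text are taken from the hunk above; the surrounding boilerplate is assumed, not quoted from the diff:

```python
from dataclasses import dataclass, field
from typing import Optional

from transformers import HfArgumentParser


@dataclass
class ModelArguments:
    token: Optional[str] = field(
        default=None,
        metadata={
            "help": (
                "The token to use as HTTP bearer authorization for remote files. If not specified, will use "
                "the token generated when running `hf auth login` (stored in `~/.huggingface`)."
            )
        },
    )


# `--token hf_xxx` on the command line fills the field; the metadata
# "help" string is what `--help` prints for it.
parser = HfArgumentParser(ModelArguments)
(model_args,) = parser.parse_args_into_dataclasses()
```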
@@ -12,6 +12,21 @@
 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 # See the License for the specific language governing permissions and
 # limitations under the License.

+# /// script
+# dependencies = [
+#     "transformers==4.54.0",
+#     "albumentations >= 1.4.16",
+#     "accelerate >= 0.12.0",
+#     "torch >= 1.3",
+#     "datasets >= 2.14.0",
+#     "sentencepiece != 0.1.92",
+#     "protobuf",
+#     "evaluate",
+#     "scikit-learn",
+# ]
+# ///
+
 """
 Fine-tuning the library models for causal language modeling (GPT, GPT-2, CTRL, ...)
 on a text file or a dataset without using HuggingFace Trainer.
@@ -56,7 +71,7 @@ from transformers.utils.versions import require_version


 # Will error if the minimal version of Transformers is not installed. Remove at your own risks.
-check_min_version("4.54.0.dev0")
+check_min_version("4.54.0")

 logger = get_logger(__name__)
@@ -12,6 +12,21 @@
 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 # See the License for the specific language governing permissions and
 # limitations under the License.

+# /// script
+# dependencies = [
+#     "transformers==4.54.0",
+#     "albumentations >= 1.4.16",
+#     "accelerate >= 0.12.0",
+#     "torch >= 1.3",
+#     "datasets >= 2.14.0",
+#     "sentencepiece != 0.1.92",
+#     "protobuf",
+#     "evaluate",
+#     "scikit-learn",
+# ]
+# ///
+
 """
 Fine-tuning the library models for causal language modeling using
 Fill-in-the middle (FIM) objective on a text file or a dataset.
@@ -57,7 +72,7 @@ from transformers.utils.versions import require_version


 # Will error if the minimal version of Transformers is not installed. Remove at your own risks.
-check_min_version("4.54.0.dev0")
+check_min_version("4.54.0")

 require_version("datasets>=2.14.0", "To fix: pip install -r examples/pytorch/language-modeling/requirements.txt")

@@ -118,7 +133,7 @@ class ModelArguments:
         metadata={
             "help": (
                 "The token to use as HTTP bearer authorization for remote files. If not specified, will use the token "
-                "generated when running `huggingface-cli login` (stored in `~/.huggingface`)."
+                "generated when running `hf auth login` (stored in `~/.huggingface`)."
             )
         },
     )
@@ -12,6 +12,21 @@
 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 # See the License for the specific language governing permissions and
 # limitations under the License.

+# /// script
+# dependencies = [
+#     "transformers==4.54.0",
+#     "albumentations >= 1.4.16",
+#     "accelerate >= 0.12.0",
+#     "torch >= 1.3",
+#     "datasets >= 2.14.0",
+#     "sentencepiece != 0.1.92",
+#     "protobuf",
+#     "evaluate",
+#     "scikit-learn",
+# ]
+# ///
+
 """
 Fine-tuning the library models for causal language modeling using
 Fill-in-the middle (FIM) objective on a text file or a dataset without using HuggingFace Trainer.
@@ -59,7 +74,7 @@ from transformers.utils.versions import require_version


 # Will error if the minimal version of Transformers is not installed. Remove at your own risks.
-check_min_version("4.54.0.dev0")
+check_min_version("4.54.0")

 logger = get_logger(__name__)
@@ -12,6 +12,21 @@
 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 # See the License for the specific language governing permissions and
 # limitations under the License.

+# /// script
+# dependencies = [
+#     "transformers==4.54.0",
+#     "albumentations >= 1.4.16",
+#     "accelerate >= 0.12.0",
+#     "torch >= 1.3",
+#     "datasets >= 2.14.0",
+#     "sentencepiece != 0.1.92",
+#     "protobuf",
+#     "evaluate",
+#     "scikit-learn",
+# ]
+# ///
+
 """
 Fine-tuning the library models for masked language modeling (BERT, ALBERT, RoBERTa...) on a text file or a dataset.

@@ -53,7 +68,7 @@ from transformers.utils.versions import require_version


 # Will error if the minimal version of Transformers is not installed. Remove at your own risks.
-check_min_version("4.54.0.dev0")
+check_min_version("4.54.0")

 require_version("datasets>=2.14.0", "To fix: pip install -r examples/pytorch/language-modeling/requirements.txt")

@@ -112,7 +127,7 @@ class ModelArguments:
         metadata={
             "help": (
                 "The token to use as HTTP bearer authorization for remote files. If not specified, will use the token "
-                "generated when running `huggingface-cli login` (stored in `~/.huggingface`)."
+                "generated when running `hf auth login` (stored in `~/.huggingface`)."
             )
         },
     )
@@ -12,6 +12,21 @@
 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 # See the License for the specific language governing permissions and
 # limitations under the License.

+# /// script
+# dependencies = [
+#     "transformers==4.54.0",
+#     "albumentations >= 1.4.16",
+#     "accelerate >= 0.12.0",
+#     "torch >= 1.3",
+#     "datasets >= 2.14.0",
+#     "sentencepiece != 0.1.92",
+#     "protobuf",
+#     "evaluate",
+#     "scikit-learn",
+# ]
+# ///
+
 """
 Fine-tuning the library models for masked language modeling (BERT, ALBERT, RoBERTa...)
 on a text file or a dataset without using HuggingFace Trainer.
@@ -56,7 +71,7 @@ from transformers.utils.versions import require_version


 # Will error if the minimal version of Transformers is not installed. Remove at your own risks.
-check_min_version("4.54.0.dev0")
+check_min_version("4.54.0")

 logger = get_logger(__name__)
 require_version("datasets>=2.14.0", "To fix: pip install -r examples/pytorch/language-modeling/requirements.txt")
@@ -12,6 +12,21 @@
 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 # See the License for the specific language governing permissions and
 # limitations under the License.

+# /// script
+# dependencies = [
+#     "transformers==4.54.0",
+#     "albumentations >= 1.4.16",
+#     "accelerate >= 0.12.0",
+#     "torch >= 1.3",
+#     "datasets >= 2.14.0",
+#     "sentencepiece != 0.1.92",
+#     "protobuf",
+#     "evaluate",
+#     "scikit-learn",
+# ]
+# ///
+
 """
 Fine-tuning the library models for permutation language modeling.
 """
@@ -46,7 +61,7 @@ from transformers.utils.versions import require_version


 # Will error if the minimal version of Transformers is not installed. Remove at your own risks.
-check_min_version("4.54.0.dev0")
+check_min_version("4.54.0")

 require_version("datasets>=2.14.0", "To fix: pip install -r examples/pytorch/language-modeling/requirements.txt")

@@ -99,7 +114,7 @@ class ModelArguments:
         metadata={
             "help": (
                 "The token to use as HTTP bearer authorization for remote files. If not specified, will use the token "
-                "generated when running `huggingface-cli login` (stored in `~/.huggingface`)."
+                "generated when running `hf auth login` (stored in `~/.huggingface`)."
             )
         },
     )
@@ -12,6 +12,18 @@
 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 # See the License for the specific language governing permissions and
 # limitations under the License.

+# /// script
+# dependencies = [
+#     "transformers==4.54.0",
+#     "accelerate >= 0.12.0",
+#     "sentencepiece != 0.1.92",
+#     "protobuf",
+#     "torch >= 1.3",
+#     "evaluate",
+# ]
+# ///
+
 """
 Fine-tuning the library models for multiple choice.
 """
@@ -45,7 +57,7 @@ from transformers.utils import check_min_version, send_example_telemetry


 # Will error if the minimal version of Transformers is not installed. Remove at your own risks.
-check_min_version("4.54.0.dev0")
+check_min_version("4.54.0")

 logger = logging.getLogger(__name__)

@@ -82,7 +94,7 @@ class ModelArguments:
         metadata={
             "help": (
                 "The token to use as HTTP bearer authorization for remote files. If not specified, will use the token "
-                "generated when running `huggingface-cli login` (stored in `~/.huggingface`)."
+                "generated when running `hf auth login` (stored in `~/.huggingface`)."
             )
         },
     )
@@ -12,6 +12,18 @@
 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 # See the License for the specific language governing permissions and
 # limitations under the License.

+# /// script
+# dependencies = [
+#     "transformers==4.54.0",
+#     "accelerate >= 0.12.0",
+#     "sentencepiece != 0.1.92",
+#     "protobuf",
+#     "torch >= 1.3",
+#     "evaluate",
+# ]
+# ///
+
 """
 Fine-tuning a 🤗 Transformers model on multiple choice relying on the accelerate library without using a Trainer.
 """
@@ -53,7 +65,7 @@ from transformers.utils import check_min_version, send_example_telemetry


 # Will error if the minimal version of Transformers is not installed. Remove at your own risks.
-check_min_version("4.54.0.dev0")
+check_min_version("4.54.0")

 logger = get_logger(__name__)
 # You should update this to your particular problem to have better documentation of `model_type`
@@ -217,7 +217,7 @@ dataset = load_dataset("imagefolder", data_dir="custom_dataset/")
 # ... })
 # ... })

-# Push to hub (assumes you have ran the huggingface-cli login command in a terminal/notebook)
+# Push to hub (assumes you have ran the hf auth login command in a terminal/notebook)
 dataset.push_to_hub("name of repo on the hub")

 # optionally, you can push to a private repo on the hub
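The documentation hunk above only renames the login command; the surrounding snippet builds an `imagefolder` dataset and pushes it to the Hub. A hedged sketch of the full flow, with a placeholder token and a hypothetical repo id:

```python
from datasets import load_dataset
from huggingface_hub import login

dataset = load_dataset("imagefolder", data_dir="custom_dataset/")

# Equivalent to having run `hf auth login` in a terminal/notebook beforehand.
login(token="hf_xxx")  # placeholder token

dataset.push_to_hub("username/custom-dataset")  # hypothetical repo id
# dataset.push_to_hub("username/custom-dataset", private=True)  # private variant
```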
@@ -12,6 +12,17 @@
 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 # See the License for the specific language governing permissions and

+# /// script
+# dependencies = [
+#     "transformers==4.54.0",
+#     "albumentations >= 1.4.16",
+#     "timm",
+#     "datasets>=4.0",
+#     "torchmetrics",
+#     "pycocotools",
+# ]
+# ///
+
 """Finetuning any 🤗 Transformers model supported by AutoModelForObjectDetection for object detection leveraging the Trainer API."""

 import logging
@@ -48,7 +59,7 @@ from transformers.utils.versions import require_version
 logger = logging.getLogger(__name__)

 # Will error if the minimal version of Transformers is not installed. Remove at your own risks.
-check_min_version("4.54.0.dev0")
+check_min_version("4.54.0")

 require_version("datasets>=2.0.0", "To fix: pip install -r examples/pytorch/object-detection/requirements.txt")

@@ -309,7 +320,7 @@ class ModelArguments:
         metadata={
             "help": (
                 "The token to use as HTTP bearer authorization for remote files. If not specified, will use the token "
-                "generated when running `huggingface-cli login` (stored in `~/.huggingface`)."
+                "generated when running `hf auth login` (stored in `~/.huggingface`)."
             )
         },
     )
Some files were not shown because too many files have changed in this diff.