Fix typos in documentation (#41641)

Fix typos

Signed-off-by: Yuanyuan Chen <cyyever@outlook.com>
Yuanyuan Chen
2025-10-16 20:58:46 +08:00
committed by GitHub
parent 981370c038
commit 2aff20aff6
5 changed files with 5 additions and 5 deletions


@@ -61,7 +61,7 @@ message_list = [
     ]
 ]
 input_dict = processor(
-    protein_informations, messages_list, return_tensors="pt", text_max_length=512, protein_max_length=1024
+    protein_inputs, messages_list, return_tensors="pt", text_max_length=512, protein_max_length=1024
 )
 with torch.no_grad():
     generated_ids = hf_model.generate(**input_dict)
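The Evolla snippet above ends at generation. As a minimal, hedged follow-up (assuming the processor exposes batch_decode, as transformers processors generally do; variable names come from the snippet), the ids can be decoded back to text:

# Hedged sketch: decode the generated ids from the snippet above.
# Assumes the Evolla processor exposes `batch_decode` like most
# transformers processors do.
generated_texts = processor.batch_decode(generated_ids, skip_special_tokens=True)
print(generated_texts[0])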


@@ -55,7 +55,7 @@ pipeline("UN Chief says there is no military solution in Syria")
 from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
 tokenizer = AutoTokenizer.from_pretrained("facebook/nllb-200-distilled-600M")
-model = AutoModelForSeq2SeqLM.from_pretrained("facebook/nllb-200-distilled-600M", dtype="auto", attn_implementaiton="sdpa")
+model = AutoModelForSeq2SeqLM.from_pretrained("facebook/nllb-200-distilled-600M", dtype="auto", attn_implementation="sdpa")
 article = "UN Chief says there is no military solution in Syria"
 inputs = tokenizer(article, return_tensors="pt")
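The NLLB snippet stops at tokenization. A sketch of the generation step that follows, with fra_Latn as an illustrative target language (NLLB language codes are tokenizer special tokens, so convert_tokens_to_ids resolves them):

# Sketch of the generation step, continuing the snippet above;
# French (fra_Latn) is just an example target language.
translated = model.generate(
    **inputs,
    forced_bos_token_id=tokenizer.convert_tokens_to_ids("fra_Latn"),
    max_length=50,
)
print(tokenizer.batch_decode(translated, skip_special_tokens=True)[0])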


@@ -19,7 +19,7 @@ rendered properly in your Markdown viewer.
 Voxtral is an upgrade of [Ministral 3B and Mistral Small 3B](https://mistral.ai/news/ministraux), extending its language capabilities with audio input support. It is designed to handle tasks such as speech transcription, translation, and audio understanding.
-You can read more in Mistral's [realease blog post](https://mistral.ai/news/voxtral).
+You can read more in Mistral's [release blog post](https://mistral.ai/news/voxtral).
 The model is available in two checkpoints:


@@ -33,7 +33,7 @@ This guide will show you how [`Trainer`] works and how to customize it for your
 3. update the weights based on the gradients
 4. repeat until the predetermined number of epochs is reached
-Manually coding this training loop everytime can be inconvenient or a barrier if you're just getting started with machine learning. [`Trainer`] abstracts this process, allowing you to focus on the model, dataset, and training design choices.
+Manually coding this training loop every time can be inconvenient or a barrier if you're just getting started with machine learning. [`Trainer`] abstracts this process, allowing you to focus on the model, dataset, and training design choices.
 Configure your training with hyperparameters and options from [`TrainingArguments`] which supports many features such as distributed training, torch.compile, mixed precision training, and saving the model to the Hub.
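A minimal sketch of what that configuration looks like in practice; model and train_dataset are assumed to be defined earlier in the guide, and the hyperparameter values are illustrative only:

from transformers import Trainer, TrainingArguments

# Illustrative values; `model` and `train_dataset` are assumed to be
# defined elsewhere in the guide.
training_args = TrainingArguments(
    output_dir="my-model",
    num_train_epochs=3,
    per_device_train_batch_size=16,
    bf16=True,           # mixed precision training
    push_to_hub=True,    # save the model to the Hub
)
trainer = Trainer(model=model, args=training_args, train_dataset=train_dataset)
trainer.train()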


@@ -100,7 +100,7 @@ You can open any page of the documentation as a notebook in Colab (there is a bu
 ### Optimum notebooks
-🤗 [Optimum](https://github.com/huggingface/optimum) is an extension of 🤗 Transformers, providing a set of performance optimization tools enabling maximum efficiency to train and run models on targeted hardwares.
+🤗 [Optimum](https://github.com/huggingface/optimum) is an extension of 🤗 Transformers, providing a set of performance optimization tools enabling maximum efficiency to train and run models on targeted hardware.
 | Notebook | Description | | |
 |:----------|:-------------|:-------------|------:|
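For a flavor of what Optimum provides beyond the notebooks, a hedged sketch of its ONNX Runtime integration (assumes pip install optimum[onnxruntime]; the checkpoint is only an example):

# Hedged sketch of Optimum's ONNX Runtime integration; the sentiment
# checkpoint is just an example model.
from optimum.onnxruntime import ORTModelForSequenceClassification
from transformers import AutoTokenizer, pipeline

model_id = "distilbert-base-uncased-finetuned-sst-2-english"
# export=True converts the PyTorch weights to ONNX on the fly
model = ORTModelForSequenceClassification.from_pretrained(model_id, export=True)
tokenizer = AutoTokenizer.from_pretrained(model_id)

classifier = pipeline("text-classification", model=model, tokenizer=tokenizer)
print(classifier("UN Chief says there is no military solution in Syria"))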