
This model was released on {release_date} and added to Hugging Face Transformers on 2025-09-11.

Supported: PyTorch, FlashAttention, SDPA, Tensor parallelism

Ministral

Ministral is an 8B-parameter language model that extends the Mistral architecture with an alternating attention pattern. Unlike Mistral, which uses either full attention or sliding window attention consistently across layers, Ministral alternates between the two, in a repeating pattern of one full attention layer followed by three sliding window attention layers. This alternation enables support for a 128K token context length.
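
Concretely, the 1:3 alternation can be sketched in plain Python (an illustrative helper, not part of the library; the layer-type names are assumptions used only for this sketch):

>>> def ministral_layer_types(num_layers):
...     # One full attention layer followed by three sliding window attention layers, repeated
...     return ["full_attention" if i % 4 == 0 else "sliding_attention" for i in range(num_layers)]
>>> ministral_layer_types(8)
['full_attention', 'sliding_attention', 'sliding_attention', 'sliding_attention', 'full_attention', 'sliding_attention', 'sliding_attention', 'sliding_attention']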

This architecture turns out to coincide with that of Qwen2, the main difference being that Qwen2 uses biases in its attention projections while Ministral does not.

You can find the Ministral checkpoints under the Mistral AI organization.

Usage

The example below demonstrates how to use Ministral for text generation:

>>> import torch
>>> from transformers import AutoModelForCausalLM, AutoTokenizer

>>> model = AutoModelForCausalLM.from_pretrained("mistralai/Ministral-8B-Instruct-2410", torch_dtype=torch.bfloat16, attn_implementation="sdpa", device_map="auto")
>>> tokenizer = AutoTokenizer.from_pretrained("mistralai/Ministral-8B-Instruct-2410")

>>> messages = [
...     {"role": "user", "content": "What is your favourite condiment?"},
...     {"role": "assistant", "content": "Well, I'm quite partial to a good squeeze of fresh lemon juice. It adds just the right amount of zesty flavour to whatever I'm cooking up in the kitchen!"},
...     {"role": "user", "content": "Do you have mayonnaise recipes?"}
... ]

>>> model_inputs = tokenizer.apply_chat_template(messages, return_tensors="pt").to("cuda")

>>> generated_ids = model.generate(model_inputs, max_new_tokens=100, do_sample=True)
>>> tokenizer.batch_decode(generated_ids)[0]
"Mayonnaise can be made as follows: (...)"

MinistralConfig

autodoc MinistralConfig
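
A configuration can also be instantiated directly to build a randomly initialized model, following the standard Transformers pattern (default configuration values, no pretrained weights; shown only as a minimal sketch):

>>> from transformers import MinistralConfig, MinistralForCausalLM
>>> config = MinistralConfig()
>>> model = MinistralForCausalLM(config)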

MinistralModel

autodoc MinistralModel - forward

MinistralForCausalLM

autodoc MinistralForCausalLM - forward

MinistralForSequenceClassification

autodoc MinistralForSequenceClassification - forward

MinistralForTokenClassification

autodoc MinistralForTokenClassification - forward

MinistralForQuestionAnswering

autodoc MinistralForQuestionAnswering - forward