
This model was released on 2025-06-18 and added to Hugging Face Transformers on 2025-06-24.

PyTorch FlashAttention SDPA

# Arcee

Arcee is a decoder-only transformer model based on the Llama architecture with one key modification: it uses a ReLU² (ReLU-squared) activation in its MLP blocks instead of SiLU, following recent research showing improved training efficiency with squared activations. The architecture keeps the proven stability of the Llama design while targeting efficient training and inference.

Concretely, the MLP layers compute the activation as `x * relu(x)`, which is elementwise equal to `relu(x)²`, for improved gradient flow.
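As a minimal illustration of the activation itself (a standalone sketch, not the code in `modeling_arcee.py`):

```python
import torch

def relu2(x: torch.Tensor) -> torch.Tensor:
    # ReLU-squared: x * relu(x) is x**2 for x > 0 and 0 otherwise,
    # so it is elementwise identical to relu(x) ** 2.
    return x * torch.relu(x)

x = torch.tensor([-2.0, -0.5, 0.0, 1.5])
print(relu2(x))  # tensor([0.0000, 0.0000, 0.0000, 2.2500])
```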

> [!TIP]
> The Arcee model supports extended context with RoPE scaling and all standard transformers features, including Flash Attention 2, SDPA, gradient checkpointing, and quantization.
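For example, the attention backend and quantization can both be selected at load time through the standard `from_pretrained` arguments. The snippet below is a sketch rather than part of the Arcee docs, and `"flash_attention_2"` requires the `flash-attn` package to be installed:

```python
import torch
from transformers import ArceeForCausalLM, BitsAndBytesConfig

# Pick the attention backend at load time; "sdpa" works out of the box,
# while "flash_attention_2" needs the flash-attn package.
model = ArceeForCausalLM.from_pretrained(
    "arcee-ai/AFM-4.5B",
    dtype=torch.float16,
    attn_implementation="flash_attention_2",
    device_map="auto",
)
model.gradient_checkpointing_enable()  # trade compute for memory during training

# 4-bit quantization through bitsandbytes is also supported.
quantized = ArceeForCausalLM.from_pretrained(
    "arcee-ai/AFM-4.5B",
    quantization_config=BitsAndBytesConfig(load_in_4bit=True),
    device_map="auto",
)
```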

The examples below demonstrate how to generate text with Arcee using [`Pipeline`] or [`AutoModel`].

```python
import torch
from transformers import pipeline

# Build a text-generation pipeline; device=0 places the model on the first GPU.
pipeline = pipeline(
    task="text-generation",
    model="arcee-ai/AFM-4.5B",
    dtype=torch.float16,
    device=0
)

output = pipeline("The key innovation in Arcee is")
print(output[0]["generated_text"])
```
To load the model and tokenizer directly instead:

```python
import torch
from transformers import AutoTokenizer, ArceeForCausalLM

tokenizer = AutoTokenizer.from_pretrained("arcee-ai/AFM-4.5B")
# device_map="auto" spreads the weights across the available devices.
model = ArceeForCausalLM.from_pretrained(
    "arcee-ai/AFM-4.5B",
    dtype=torch.float16,
    device_map="auto"
)

# Move the inputs to the model's device so they match the loaded weights.
inputs = tokenizer("The key innovation in Arcee is", return_tensors="pt").to(model.device)
with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
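For extended context via RoPE scaling, one option is to override the config before loading. The scaling type and factor below are illustrative values for this sketch, not values shipped with the checkpoint:

```python
from transformers import ArceeConfig, ArceeForCausalLM

# Illustrative values: linear RoPE scaling with a 2x factor roughly doubles
# the usable context length of the pretrained positional embeddings.
config = ArceeConfig.from_pretrained("arcee-ai/AFM-4.5B")
config.rope_scaling = {"rope_type": "linear", "factor": 2.0}

model = ArceeForCausalLM.from_pretrained("arcee-ai/AFM-4.5B", config=config)
```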

## ArceeConfig

[[autodoc]] ArceeConfig

## ArceeModel

[[autodoc]] ArceeModel
    - forward

## ArceeForCausalLM

[[autodoc]] ArceeForCausalLM
    - forward

## ArceeForSequenceClassification

[[autodoc]] ArceeForSequenceClassification
    - forward

## ArceeForQuestionAnswering

[[autodoc]] ArceeForQuestionAnswering
    - forward

## ArceeForTokenClassification

[[autodoc]] ArceeForTokenClassification
    - forward