
This model was released on 2020-06-05 and added to Hugging Face Transformers on 2020-11-16.

PyTorch

# DeBERTa

DeBERTa improves the pretraining efficiency of BERT and RoBERTa with two key ideas: disentangled attention and an enhanced mask decoder. Instead of mixing everything into a single embedding like BERT, DeBERTa represents a word's content and its position with separate vectors and attends over them separately. This gives it a clearer sense of both what is being said and where in the sentence it happens.
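Concretely, the attention score between tokens *i* and *j* decomposes into content-to-content, content-to-position, and position-to-content terms (the position-to-position term is dropped). The snippet below is a minimal, single-head sketch of that decomposition with made-up dimensions, not the library's actual implementation:

```python
import torch

torch.manual_seed(0)
seq_len, d, k = 6, 8, 4                      # k = maximum relative distance (toy value)

# Each token gets a content vector; relative positions share one embedding table
# with one row per clipped relative distance in [-k, k).
H = torch.randn(seq_len, d)                  # content vectors
P = torch.randn(2 * k, d)                    # relative position embeddings

# Separate projections for content and position ("disentangled" matrices).
Wq_c, Wk_c = torch.randn(d, d), torch.randn(d, d)
Wq_r, Wk_r = torch.randn(d, d), torch.randn(d, d)

Qc, Kc = H @ Wq_c, H @ Wk_c
Qr, Kr = P @ Wq_r, P @ Wk_r

# Bucketed relative distance delta[i, j], shifted into [0, 2k).
idx = torch.arange(seq_len)
delta = (idx[None, :] - idx[:, None]).clamp(-k, k - 1) + k

c2c = Qc @ Kc.T                              # content-to-content
c2p = torch.gather(Qc @ Kr.T, 1, delta)      # content-to-position, indexed by delta[i, j]
p2c = torch.gather(Kc @ Qr.T, 1, delta).T    # position-to-content, indexed by delta[j, i]

scores = (c2c + c2p + p2c) / (3 * d) ** 0.5  # scaled sum of the three terms
attn = scores.softmax(dim=-1)                # (seq_len, seq_len) attention weights
print(attn.shape)
```

In the library this happens per attention head inside [`DebertaModel`]; the sketch only illustrates the three-way score decomposition.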

The enhanced mask decoder replaces the output softmax layer and reintroduces absolute position information just before the masked tokens are predicted, which improves those predictions during pretraining.

Even with less training data than RoBERTa, DeBERTa manages to outperform it on several benchmarks.

You can find all the original DeBERTa checkpoints under the Microsoft organization.

> [!TIP]
> Click on the DeBERTa models in the right sidebar for more examples of how to apply DeBERTa to different language tasks.

The example below demonstrates how to classify text with [`Pipeline`], [`AutoModel`], and from the command line.

```python
import torch
from transformers import pipeline

classifier = pipeline(
    task="text-classification",
    model="microsoft/deberta-base-mnli",
    device=0,
)

# Premise/hypothesis pair for natural language inference (MNLI)
classifier({
    "text": "A soccer game with multiple people playing.",
    "text_pair": "Some people are playing a sport."
})
```
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "microsoft/deberta-base-mnli"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, device_map="auto")

# Encode the premise/hypothesis pair as a single sequence pair
inputs = tokenizer(
    "A soccer game with multiple people playing.",
    "Some people are playing a sport.",
    return_tensors="pt"
).to(model.device)

with torch.no_grad():
    logits = model(**inputs).logits
    predicted_class = logits.argmax().item()

labels = ["contradiction", "neutral", "entailment"]
print(f"The predicted relation is: {labels[predicted_class]}")
```

```bash
echo -e '{"text": "A soccer game with multiple people playing.", "text_pair": "Some people are playing a sport."}' | transformers run --task text-classification --model microsoft/deberta-base-mnli --device 0
```
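Rather than hard-coding the label names in the [`AutoModel`] example, you can also map the predicted index through the checkpoint's own label mapping; this relies only on the standard `config.id2label` attribute:

```python
# Continuing the AutoModel example above
print(model.config.id2label[predicted_class])
```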

## Notes

- DeBERTa uses relative position embeddings, so it does not require right-padding like BERT.
- For best results, use DeBERTa on sentence-level or sentence-pair classification tasks like MNLI, RTE, or SST-2.
- If you're using DeBERTa for token-level tasks like masked language modeling, make sure to load a checkpoint specifically pretrained or fine-tuned for token-level tasks (see the sketch after this list).
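As a minimal sketch of that last point, the snippet below runs a fill-mask pipeline; the checkpoint name here is a placeholder and should be replaced with a DeBERTa checkpoint that actually ships a trained masked language modeling head:

```python
from transformers import pipeline

# Placeholder checkpoint: swap in a DeBERTa checkpoint with a trained
# masked language modeling head before relying on the predictions.
fill_mask = pipeline(task="fill-mask", model="microsoft/deberta-base")

masked = f"Paris is the {fill_mask.tokenizer.mask_token} of France."
print(fill_mask(masked))
```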

## DebertaConfig

[[autodoc]] DebertaConfig

## DebertaTokenizer

[[autodoc]] DebertaTokenizer
    - build_inputs_with_special_tokens
    - get_special_tokens_mask
    - create_token_type_ids_from_sequences
    - save_vocabulary

## DebertaTokenizerFast

[[autodoc]] DebertaTokenizerFast
    - build_inputs_with_special_tokens
    - create_token_type_ids_from_sequences

## DebertaModel

[[autodoc]] DebertaModel
    - forward

## DebertaPreTrainedModel

[[autodoc]] DebertaPreTrainedModel

## DebertaForMaskedLM

[[autodoc]] DebertaForMaskedLM
    - forward

## DebertaForSequenceClassification

[[autodoc]] DebertaForSequenceClassification
    - forward

## DebertaForTokenClassification

[[autodoc]] DebertaForTokenClassification
    - forward

## DebertaForQuestionAnswering

[[autodoc]] DebertaForQuestionAnswering
    - forward