This model was released on 2020-06-05 and added to Hugging Face Transformers on 2021-02-19.

PyTorch

# DeBERTa-v2

DeBERTa-v2 improves on the original DeBERTa architecture by using a SentencePiece-based tokenizer with a larger 128K vocabulary. It also adds a convolutional layer within the first transformer layer to better learn the local dependencies of input tokens. Finally, the position projection and content projection matrices are shared in the attention layer to reduce the number of parameters.
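
These design choices are exposed on the checkpoint configuration. The sketch below inspects them on the `microsoft/deberta-v2-xlarge` checkpoint; the attribute names (`conv_kernel_size`, `share_att_key`) reflect that checkpoint's `config.json` and are read defensively in case another checkpoint omits them.

```python
from transformers import AutoConfig

# Inspect the architecture choices described above on a public checkpoint.
config = AutoConfig.from_pretrained("microsoft/deberta-v2-xlarge")

print(config.vocab_size)                          # 128100, the larger SentencePiece vocabulary
print(getattr(config, "conv_kernel_size", None))  # kernel size of the convolutional layer
print(getattr(config, "share_att_key", None))     # True when position/content projections are shared
```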

You can find all the original DeBERTa-v2 checkpoints under the [Microsoft](https://huggingface.co/microsoft) organization.

> [!TIP]
> This model was contributed by Pengcheng He.
>
> Click on the DeBERTa-v2 models in the right sidebar for more examples of how to apply DeBERTa-v2 to different language tasks.

The example below demonstrates how to classify text with the [`Pipeline`] or [`AutoModel`] class.

```python
import torch
from transformers import pipeline

pipeline = pipeline(
    task="text-classification",
    model="microsoft/deberta-v2-xlarge-mnli",
    device=0,
    dtype=torch.float16
)
result = pipeline("DeBERTa-v2 is great at understanding context!")
print(result)
```

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained(
    "microsoft/deberta-v2-xlarge-mnli"
)
model = AutoModelForSequenceClassification.from_pretrained(
    "microsoft/deberta-v2-xlarge-mnli",
    dtype=torch.float16,
    device_map="auto"
)

inputs = tokenizer("DeBERTa-v2 is great at understanding context!", return_tensors="pt").to(model.device)
outputs = model(**inputs)

logits = outputs.logits
predicted_class_id = logits.argmax().item()
predicted_label = model.config.id2label[predicted_class_id]
print(f"Predicted label: {predicted_label}")
```

```bash
echo -e "DeBERTa-v2 is great at understanding context!" | transformers run --task text-classification --model microsoft/deberta-v2-xlarge-mnli --device 0
```

Quantization reduces the memory burden of large models by representing the weights in a lower precision. Refer to the Quantization overview for more available quantization backends.

The example below uses bitsandbytes to quantize only the weights to 4-bit.

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer, BitsAndBytesConfig

model_id = "microsoft/deberta-v2-xlarge-mnli"
quantization_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype="float16",
    bnb_4bit_use_double_quant=True,
)
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(
    model_id,
    quantization_config=quantization_config,
    dtype="float16"
)

inputs = tokenizer("DeBERTa-v2 is great at understanding context!", return_tensors="pt").to(model.device)
outputs = model(**inputs)
logits = outputs.logits
predicted_class_id = logits.argmax().item()
predicted_label = model.config.id2label[predicted_class_id]
print(f"Predicted label: {predicted_label}")
```

## DebertaV2Config

[[autodoc]] DebertaV2Config

## DebertaV2Tokenizer

[[autodoc]] DebertaV2Tokenizer
    - build_inputs_with_special_tokens
    - get_special_tokens_mask
    - create_token_type_ids_from_sequences
    - save_vocabulary
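
For a quick feel for the special-token handling these methods implement, here is a minimal sketch (the checkpoint choice and example strings are illustrative; loading the tokenizer requires the `sentencepiece` package):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-v2-xlarge")

# Encode a sequence pair and show where the special tokens land.
encoded = tokenizer("A premise.", "A hypothesis.")
print(tokenizer.convert_ids_to_tokens(encoded["input_ids"]))
# Expected layout: [CLS] <premise tokens> [SEP] <hypothesis tokens> [SEP]
```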

## DebertaV2TokenizerFast

[[autodoc]] DebertaV2TokenizerFast
    - build_inputs_with_special_tokens
    - create_token_type_ids_from_sequences

## DebertaV2Model

[[autodoc]] DebertaV2Model
    - forward

## DebertaV2PreTrainedModel

[[autodoc]] DebertaV2PreTrainedModel
    - forward

## DebertaV2ForMaskedLM

[[autodoc]] DebertaV2ForMaskedLM
    - forward

## DebertaV2ForSequenceClassification

[[autodoc]] DebertaV2ForSequenceClassification
    - forward

## DebertaV2ForTokenClassification

[[autodoc]] DebertaV2ForTokenClassification
    - forward

## DebertaV2ForQuestionAnswering

[[autodoc]] DebertaV2ForQuestionAnswering
    - forward

## DebertaV2ForMultipleChoice

[[autodoc]] DebertaV2ForMultipleChoice
    - forward