This model was released on 2019-07-26, added to Hugging Face Transformers on 2020-11-16, and contributed by julien-c.
RoBERTa
RoBERTa: A Robustly Optimized BERT Pretraining Approach builds on Google's BERT by modifying key hyperparameters: it removes the next-sentence pretraining objective and trains with larger mini-batches and learning rates. The study shows that BERT was significantly undertrained and demonstrates that, with these adjustments, RoBERTa can match or exceed the performance of models published after it on benchmarks like GLUE, RACE, and SQuAD, underscoring the importance of previously overlooked design choices in language model pretraining.
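The quickest way to try RoBERTa for masked language modeling is the fill-mask pipeline: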
import torch
from transformers import pipeline

# Load a fill-mask pipeline with a pretrained RoBERTa checkpoint
pipeline = pipeline(task="fill-mask", model="FacebookAI/roberta-base", dtype="auto")

# The pipeline returns the top predictions for the <mask> position
pipeline("Plants create <mask> through a process known as photosynthesis.")
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

model = AutoModelForMaskedLM.from_pretrained("FacebookAI/roberta-base", dtype="auto")
tokenizer = AutoTokenizer.from_pretrained("FacebookAI/roberta-base")

inputs = tokenizer("Plants create <mask> through a process known as photosynthesis.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Find the position of the <mask> token and decode its highest-scoring prediction
mask_position = (inputs.input_ids == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
predicted_word = tokenizer.decode(outputs.logits[0, mask_position].argmax(dim=-1))
print(f"Predicted word: {predicted_word}")
Usage tips
- RoBERTa doesn't have `token_type_ids`, so you don't need to indicate which token belongs to which segment.
- Separate segments with the separation token `tokenizer.sep_token` (or `</s>`); both behaviors are shown in the example below.
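The snippet below is a minimal sketch of both tips (the sentence pair is arbitrary): encoding two segments yields no `token_type_ids`, and the tokenizer inserts the `</s>` separator between them automatically.

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("FacebookAI/roberta-base")

# Encoding a sentence pair returns only input_ids and attention_mask
encoded = tokenizer("How are you?", "I am fine, thanks.")
print(encoded.keys())  # dict_keys(['input_ids', 'attention_mask'])

# The segments are joined with </s></s> instead of segment ids
print(tokenizer.decode(encoded["input_ids"]))
# <s>How are you?</s></s>I am fine, thanks.</s>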
RobertaConfig
autodoc RobertaConfig
RobertaTokenizer
autodoc RobertaTokenizer - build_inputs_with_special_tokens - get_special_tokens_mask - create_token_type_ids_from_sequences - save_vocabulary
RobertaTokenizerFast
autodoc RobertaTokenizerFast - build_inputs_with_special_tokens
RobertaModel
autodoc RobertaModel - forward
RobertaForCausalLM
autodoc RobertaForCausalLM - forward
RobertaForMaskedLM
autodoc RobertaForMaskedLM - forward
RobertaForSequenceClassification
autodoc RobertaForSequenceClassification - forward
RobertaForMultipleChoice
autodoc RobertaForMultipleChoice - forward
RobertaForTokenClassification
autodoc RobertaForTokenClassification - forward
RobertaForQuestionAnswering
autodoc RobertaForQuestionAnswering - forward