transformers/docs/source/en/model_doc/openai-gpt.md

This model was released on 2018-06-11 and added to Hugging Face Transformers on 2023-06-20.

PyTorch SDPA FlashAttention

GPT

GPT (Generative Pre-trained Transformer) (blog post) focuses on effectively learning text representations and transferring them to downstream tasks. The model pretrains a Transformer decoder to predict the next word, and is then fine-tuned on labeled data.

GPT can generate high-quality text, making it well-suited for a variety of natural language understanding tasks such as textual entailment, question answering, semantic similarity, and document classification.

You can find all the original GPT checkpoints under the OpenAI community organization.

Tip

Click on the GPT models in the right sidebar for more examples of how to apply GPT to different language tasks.

The example below demonstrates how to generate text with [Pipeline], [AutoModel], and from the command line.

```python
import torch
from transformers import pipeline

generator = pipeline(task="text-generation", model="openai-community/openai-gpt", dtype=torch.float16, device=0)
output = generator("The future of AI is", max_length=50, do_sample=True)
print(output[0]["generated_text"])
```
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("openai-community/openai-gpt")
model = AutoModelForCausalLM.from_pretrained("openai-community/openai-gpt", dtype=torch.float16)

inputs = tokenizer("The future of AI is", return_tensors="pt")
outputs = model.generate(**inputs, max_length=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
```bash
echo -e "The future of AI is" | transformers run --task text-generation --model openai-community/openai-gpt --device 0
```

Notes

  • Inputs should be padded on the right because GPT uses absolute position embeddings.
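    A minimal sketch (plain Python, no model download needed; the helper function is hypothetical) of why right padding matters with absolute position embeddings: positions are assigned left to right, so right padding leaves the real tokens at the same positions they would have without padding, while left padding shifts every real token.

    ```python
    def positions_of_real_tokens(tokens, pad, side):
        """Return the position indices the non-pad tokens receive after padding
        with two pad tokens on the given side ("left" or "right")."""
        padded = tokens + [pad] * 2 if side == "right" else [pad] * 2 + tokens
        return [i for i, t in enumerate(padded) if t != pad]

    tokens = ["the", "future", "of", "AI"]
    # Right padding: real tokens keep positions [0, 1, 2, 3].
    print(positions_of_real_tokens(tokens, "<pad>", "right"))
    # Left padding: real tokens are shifted to [2, 3, 4, 5], so the absolute
    # position embeddings no longer match what the model saw in pretraining.
    print(positions_of_real_tokens(tokens, "<pad>", "left"))
    ```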

OpenAIGPTConfig

autodoc OpenAIGPTConfig

OpenAIGPTModel

autodoc OpenAIGPTModel - forward

OpenAIGPTLMHeadModel

autodoc OpenAIGPTLMHeadModel - forward

OpenAIGPTDoubleHeadsModel

autodoc OpenAIGPTDoubleHeadsModel - forward

OpenAIGPTForSequenceClassification

autodoc OpenAIGPTForSequenceClassification - forward

OpenAIGPTTokenizer

autodoc OpenAIGPTTokenizer

OpenAIGPTTokenizerFast

autodoc OpenAIGPTTokenizerFast