# DINOv3

This model was released on 2025-08-13 and added to Hugging Face Transformers on 2025-08-14.
DINOv3 is a family of versatile vision foundation models that outperforms the specialized state of the art across a broad range of settings, without fine-tuning. DINOv3 produces high-quality dense features that achieve outstanding performance on various vision tasks, significantly surpassing previous self- and weakly-supervised foundation models.
You can find all the original DINOv3 checkpoints under the DINOv3 collection.
> [!TIP]
> Click on the DINOv3 models in the right sidebar for more examples of how to apply DINOv3 to different vision tasks.
The example below demonstrates how to obtain an image embedding with [`Pipeline`] or the [`AutoModel`] class.
```py
import torch
from transformers import pipeline

pipe = pipeline(
    task="image-feature-extraction",
    model="facebook/dinov3-vits16-pretrain-lvd1689m",
    dtype=torch.bfloat16,
)
pipe("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg")
```
The same embedding can be obtained with [`AutoModel`]:

```py
import torch
from transformers import AutoImageProcessor, AutoModel
from transformers.image_utils import load_image

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = load_image(url)

processor = AutoImageProcessor.from_pretrained("facebook/dinov3-vits16-pretrain-lvd1689m")
model = AutoModel.from_pretrained(
    "facebook/dinov3-vits16-pretrain-lvd1689m",
    dtype=torch.float16,
    device_map="auto",
    attn_implementation="sdpa",
)

inputs = processor(images=image, return_tensors="pt").to(model.device)
with torch.inference_mode():
    outputs = model(**inputs)

pooled_output = outputs.pooler_output
print("Pooled output shape:", pooled_output.shape)
```
Quantization reduces the memory burden of large models by representing the weights in a lower precision. Refer to the Quantization overview for the available quantization backends.

The example below uses torchao to quantize only the weights to int4.
```py
# pip install torchao
import torch
from transformers import TorchAoConfig, AutoImageProcessor, AutoModel
from torchao.quantization import Int4WeightOnlyConfig
from transformers.image_utils import load_image

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = load_image(url)

# Use the processor that matches the quantized checkpoint.
processor = AutoImageProcessor.from_pretrained("facebook/dinov3-vit7b16-pretrain-lvd1689m")

quant_type = Int4WeightOnlyConfig(group_size=128)
quantization_config = TorchAoConfig(quant_type=quant_type)

model = AutoModel.from_pretrained(
    "facebook/dinov3-vit7b16-pretrain-lvd1689m",
    dtype=torch.bfloat16,
    device_map="auto",
    quantization_config=quantization_config,
)

inputs = processor(images=image, return_tensors="pt").to(model.device)
with torch.inference_mode():
    outputs = model(**inputs)

pooled_output = outputs.pooler_output
print("Pooled output shape:", pooled_output.shape)
```
## Notes
- The example below shows how to split the output tensor into:

    - one embedding for the whole image, commonly referred to as a `CLS` token, useful for classification and retrieval
    - register tokens, learnable embeddings that act as dedicated "memory slots" for global information; they reduce high-norm artifacts in patch tokens, yielding cleaner attention maps and better performance on dense prediction tasks
    - a set of local embeddings, one for each `16x16` patch of the input image, useful for dense tasks such as semantic segmentation
    ```py
    import torch
    from transformers import AutoImageProcessor, AutoModel
    from transformers.image_utils import load_image

    url = "http://images.cocodataset.org/val2017/000000039769.jpg"
    image = load_image(url)
    print("Image size:", image.height, image.width)  # [480, 640]

    processor = AutoImageProcessor.from_pretrained("facebook/dinov3-vits16-pretrain-lvd1689m")
    model = AutoModel.from_pretrained("facebook/dinov3-vits16-pretrain-lvd1689m")

    patch_size = model.config.patch_size
    print("Patch size:", patch_size)  # 16
    print("Num register tokens:", model.config.num_register_tokens)  # 4

    inputs = processor(images=image, return_tensors="pt")
    print("Preprocessed image size:", inputs.pixel_values.shape)  # [1, 3, 224, 224]

    batch_size, _, img_height, img_width = inputs.pixel_values.shape
    num_patches_height, num_patches_width = img_height // patch_size, img_width // patch_size
    num_patches_flat = num_patches_height * num_patches_width

    with torch.inference_mode():
        outputs = model(**inputs)

    last_hidden_states = outputs.last_hidden_state
    print(last_hidden_states.shape)  # [1, 1 + 4 + 256, 384]
    assert last_hidden_states.shape == (batch_size, 1 + model.config.num_register_tokens + num_patches_flat, model.config.hidden_size)

    cls_token = last_hidden_states[:, 0, :]
    patch_features_flat = last_hidden_states[:, 1 + model.config.num_register_tokens:, :]
    patch_features = patch_features_flat.unflatten(1, (num_patches_height, num_patches_width))
    ```
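    Once `patch_features` is arranged on the `(num_patches_height, num_patches_width)` grid, it can drive simple dense analyses directly. As an illustrative sketch (an addition, not part of the original notes), the snippet below computes a cosine-similarity map between the center patch and every other patch, a common way to visualize dense correspondences:

    ```py
    # Sketch: cosine-similarity map between the center patch and all patches.
    # Reuses `patch_features` from the example above, with shape
    # (batch_size, num_patches_height, num_patches_width, hidden_size).
    reference = patch_features[:, num_patches_height // 2, num_patches_width // 2, :]
    similarity_map = torch.nn.functional.cosine_similarity(
        patch_features, reference[:, None, None, :], dim=-1
    )
    print(similarity_map.shape)  # (batch_size, num_patches_height, num_patches_width)
    ```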
## DINOv3ViTConfig

[[autodoc]] DINOv3ViTConfig

## DINOv3ConvNextConfig

[[autodoc]] DINOv3ConvNextConfig

## DINOv3ViTModel

[[autodoc]] DINOv3ViTModel
    - forward

## DINOv3ConvNextModel

[[autodoc]] DINOv3ConvNextModel
    - forward

## DINOv3ViTImageProcessorFast

[[autodoc]] DINOv3ViTImageProcessorFast
    - preprocess