* added dates to the models with a single hf papers link
* added the dates for models with multiple papers
* half of no_papers models done
* rest of no_papers models also done, only the exceptions left
* added copyright disclaimer to sam_hw, cohere, cohere2 + dates
* some more fixes, hf links + typo
* some new models + a rough script
* the script looks robust, changed all paper links to hf
* minor change to handle technical reports along with blogs
* ran make fixup to remove the white space
* refactor
* build: add TvpImageProcessorFast
- Introduced TvpImageProcessorFast to enhance image processing capabilities.
- Updated image processing auto registration to include the new fast processor.
- Modified tests to accommodate both TvpImageProcessor and TvpImageProcessorFast, ensuring comprehensive coverage for both classes.
* fix: TvpImageProcessorFast with new resize method and update processing logic
* build: add TvpImageProcessorFast
* refactor: clean up whitespace and formatting in TvpImageProcessorFast and related tests
- Removed unnecessary whitespace and ensured consistent formatting in image_processing_tvp_fast.py.
- Updated import order in test_image_processing_tvp.py for clarity.
- Minor adjustments to maintain code readability and consistency.
* fix: Enhance TvpFastImageProcessorKwargs and update documentation
- Added TvpFastImageProcessorKwargs class to define valid kwargs for TvpImageProcessorFast.
- Updated the documentation in tvp.md to include the new class and its parameters.
- Refined the image processing logic in image_processing_tvp_fast.py for better handling of padding and resizing.
- Improved test cases in test_image_processing_tvp.py to ensure compatibility with the new processing logic and tensor inputs.
* fix: tested now with python 3.9
* fix: remove tvp kwargs from docs
* simplify processing
* remove import and fix tests
---------
Co-authored-by: yonigozlan <yoni.gozlan@huggingface.co>
* fix: changed is_causal to be False
* fix: Added original cross attention bug
* fix: fixed the way border removal is computed
* fix: added missing normalization on coarse features
* test: fixed integration tests
---------
Co-authored-by: Pavel Iakubovskii <qubvel@gmail.com>
* initial comment
* test
* initial conversion for outline
* intermediate commit for configuration
* chore:init files for sam2
* adding arbitrary undefined config
* check
* add vision
* make style
* init sam2 base model
* Fix imports
* Linting
* chore:sam to sam2 classes
* Linting
* Add sam2 to models.__init__
* chore:match prompt encoder with sam2 code
* chore:prepare kwargs for mask decoder
* Add image/video predictors
* Add CUDA kernel
* Add output classes
* linting
* Add logging info
* tmp commit
* docs for sam2
* enable image processing
* check difference of original SAM2
- difference is the order of ToTensor()
- please see https://pytorch.org/vision/main/_modules/torchvision/transforms/functional.html#resize
* enable promptencoder of sam2
* fix promptencoder
* Confirmed that PromptEncoder is exactly same (Be aware of bfloat16 and float32 difference)
* Confirmed that ImageEncoder is exactly same (Be aware the linting of init)
* Confirmed that MaskDecoder is exactly same (TO DO: lint variable name)
* SamModel is now available (Need more chore for name)
* make fix-copies
* make style
* make CI happy
* Refactor VisionEncoder and PositionEmbedding
* TO DO : fix the image_embeddings and sparse_embeddings part
* pure image inference done
* reusable features fix and make style
* styling
* refactor memoryattention
* tmp
* tmp
* refactor memoryencoder
TO DO : convert and inference the video pipeline
* TO DO : fix the image_encoder shape
* conversion finish
TO DO: need to check video inference
* make style
* remove video model
* lint
* change
* python utils/check_docstrings.py --check_all
* python utils/check_config_attributes.py
* remove copies for sam2promptencoder due to configuration
* change __init__.py
* remove tensorflow version
* fix that to not use direct comparison
* make style
* add missing import
* fix image_embedding_size
* refactor Sam2 Attention
* add fully working video inference (refactoring todo)
* clarify _prepare_memory_conditioned_features
* simplify modeling code, remove unused paths
* use one model
* use auto_docstring
* refactor rope embeddings
* nit
* not using multimask when several points given
* add all sam2.1
* add video tmp
* add Sam2VideoSessionState + fast image proc + video proc
* remove init_states from model
* fix batch inference
* add image integration tests
* uniformize modeling code with other sam models and use modular
* pass vision tests and most model tests
* All tests passing
* add offloading inference state and video to cpu
* fix inference from image embedding and existing mask
* fix multi_boxes mask inference
* Fix batch images + batch boxes inference
* improve processing for image inference
* add support for mask generation pipeline
* add support for get_connected_components post processing in mask generation
* add fast image processor sam, image processor tests and use modular for sam2 image processor
* fix mistake in sam after #39120
* fix init weights
* refactor convert
* add integration tests for video + other improvements
* add needed missing docstrings
* Improve docstrings and
* improve inference speed by avoiding cuda sync
* add test
* skip test for vision_model
* minor fix for vision_model
* fix vision_model by adding sam2model and change the torch dependencies
* remove patch_size
* remove image_embedding_size
* fix patch_size
* fix test
* make style
* Separate hieradet and vision encoder in sam2
* fixup
* review changes part 1
* remove MemoryEncoderConfig and MemoryAttentionConfig
* pass q_stride instead of q_pool module
* add inference on streamed videos
* explicitly process streamed frames
* nit
* Improve docstrings in Sam2Model
* update sam2 modeling with better management of inference state and cache, and separate Sam2Model and Sam2VideoModel
* improve video inference api
* change inference_state to inference_session
* use modular for Sam2Model
* fix convert sam2 hf
* modular
* Update src/transformers/models/sam2/video_processing_sam2.py
Co-authored-by: Pavel Iakubovskii <qubvel@gmail.com>
* fix minor config
* fix attention loading error
* update modeling tests to use hub checkpoints
* Use CI A10 runner for integration tests values + higher tolerance for video integration tests
* PR review part 1
* fix doc
* nit improvements
* enforce one input format for points, labels and boxes
* nit
* last few nits from PR review
* fix style
* fix the input type
* fix docs
* add sam2 model as conversion script
* improve sam2 doc
* nit fixes + optimization
* split sam2 and sam2_video in two models
* PR review part 1
* fix None for default slow processor of sam2
* remove unnecessary code path in sam2_video
* refactor/simplify RoPE
* replace embedding module list with embedding matrix
* fix tests
* remove kernel
* nit
* use lru_cache for sine_pos_embeddings
* reorder sam2_video methods
* simplify sam2_video
* PR review part 1
* simplify sam2 video a lot
* more simplification
* update integration tests with updated conftest
* more explicit config for hieradet
* do post_processing outside of sam2 video model
* Improve Sam2VideoVisionRotaryEmbedding
* fix tests
* update docs and fix mask2former/oneformer
* avoid unnecessary reshapes/permute
* fix device concatenating points
* small dtype fix
* PR review
* nit
* fix style and finish up doc
* fix style
* fix docstrings
* fix modular
---------
Co-authored-by: RUFFY-369 <prakarshkaushik369@gmail.com>
Co-authored-by: Haitham Khedr <haithamkhedr@meta.com>
Co-authored-by: sangbum choi <sangbumchoi@sangbumui-MacBookAir.local>
Co-authored-by: yonigozlan <yoni.gozlan@huggingface.co>
Co-authored-by: Pavel Iakubovskii <qubvel@gmail.com>
* docs: ko: main_classes/optimizer_schedules
* feat: nmt draft
* fix: improve TOC anchors and expressions in optimizer_schedules
- Add TOC anchors to all section headers
- Fix terminology and improve Korean expressions
* fix: Correct translation of 'weight decay fixed' to '가중치 감쇠가 적용된'
Changed '가중치 감쇠가 수정된' to '가중치 감쇠가 적용된' for more accurate translation of 'weight decay fixed' in the context of optimization.
* fix: Use more natural Korean inheritance expression
Changed '에서 상속받는' to '을 상속받는' to follow natural Korean grammar patterns for inheritance terminology.
* fix: Use consistent '미세 조정' translation for 'finetuned models'
Changed '파인튜닝된' to '미세 조정된 모델' to follow the established translation glossary for 'finetuned models' terminology.
* use pil_torch_interpolation_mapping for NEAREST/NEAREST_EXACT
* fix min torchvision version
* use InterpolationMode directly
* remove unused is_torchvision_greater_or_equal,
* nit
* Add initial collated reports script and job definition
* provide commit hash for this run. Also use hash in generated artifact name. Json formatting
* tidy
* Add option to upload collated reports to hf hub
* Add glob pattern for test report folders
* Fix glob
* Use machine_type as path filter instead of glob. Include machine_type in collated report
* fix flash attention
* i got a stroke reading that comment
* change dropout kwarg back to before
* rename _fa3... as it's used for multiple variants and should work as fallback instead
* simplify imports and support kwargs for fa
* style
* fix comments order
* small fix
* skip kernels test (causes cuda illegal memories w/o cleanup), fix fa test in general esp for models like bart
* style
* allow fullgraph by preloading on init
* make globals "private"
* ci pls be happy
* change skip conditions based on backend flag (indicating missing mask interface)
* move globals support to a function to prepare kwargs
* style
* generalize supported kwargs
* small change to doc
* fix
* add comments
* style
* revert prep during generate
* style
* revert weird style changes
* add fa kwarg prep during generate with fixes back
* how did this even happen
* how
* add comment
Currently model_debugging_utils.py would have an unguarded `import torch.distributed.tensor`. This PR ensures that the distributed module is available before including its tensor module.
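A minimal sketch of the kind of guard described above (the exact placement and check are assumptions, not the PR's literal diff):
```python
# Sketch only: gate the tensor submodule import behind the distributed-availability
# check so torch builds without distributed support can still import the module.
import torch

if torch.distributed.is_available():
    import torch.distributed.tensor  # noqa: F401  (only used when distributed is available)
```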
* Fix PerceptionLM image preprocessing for non-tiled image input.
* Add test for single tile vanilla image processing.
* ruff format
* recover missing test skip
* Simplify test.
* minor test name fix
* Update HuBERT model card according to template
Standardized HuBERT doc, added ASR examples, Flash Attention 2 support, and quantization section.
* Address review comments and changes requested to hubert.md
* Update hubert.md
---------
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* init
* update
* update
* ruff
* t patch is 2 by default, not 1
* draft
* back
* back1
* update
* config update
* update using glm-41 format
* add self.rope_scaling = config.rope_scaling
* update config
* update
* remove the processor
* update
* fix tests
* update
* for test
* update
* update 2126
* self.rope_scaling is missing in GLM4MOE, let's add it
* update
* update
* Update modular_glm4v_moe.py
* change config
* update apply_multimodal_rotary_pos_emb
* format
* update
* Delete 3-rollout_qas_thinking_answers.py
* use right name
* update with place holder
* update
* use right rotary
* Update image_processing_glm4v_fast.py
* rope_config_validation needs to rewrite the entire config file in modular
* update
* changed name
* update
* Update modeling_glm4v_moe.py
* _init_weights should be added in Glm4vMoePreTrainedModel
* remove use_qk_norm
* Update modular_glm4v_moe.py
* remove use_qk_norm as it is not used
* fix style
* deprecations are not needed on new models
* fix merge issues
---------
Co-authored-by: raushan <raushan@huggingface.co>
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
Co-authored-by: Arthur <arthur.zucker@gmail.com>
* all modulars and llama
* apply modular
* bert and gpt2 copies
* fix imports
* do it everywhere
* fix import
* finalize it
* fix
* oups set it in modular
* style
* fix
* Add 1 version to deprecation cycle
* Update modeling_layers.py
* Fix missing video inputs for PerceptionLM.
* Minor fix for vanilla input image (only C,H,W, no tiles dim).
* Revert "Minor fix for vanilla input image (only C,H,W, no tiles dim)."
This reverts commit 181d87b964e59c4118035a9fd4f530c6e551ba9f.
* Add amd expectation in internvl
* Add amd expectation to llama
* Added bnb decorator for a llava test that requires bnb
* Added amd expectation for mistral3
* Style
* Support input_embeds in torch exportable decoders
* Hybrid cache update
* Manually change some callsites
* AI changes the rest of the call sites
* Make either input_ids/inputs_embeds mandatory
* Clean up
* Ruff check --fix
* Fix test
* pr review
* Revert config/generation_config changes
* Ruff check
* chore: update Deformable_Detr model card
* fix: added pipeline, automodel examples and checkpoints link
* Update deformable_detr.md
---------
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* Fix MXFP4 quantizer validation to enable CPU dequantization
Move dequantize check before CUDA availability check to allow
CPU inference when quantization_config.dequantize is True.
This enables users to run MXFP4 models on CPU by automatically
converting them to BF16 format (a rough sketch follows these notes).
* Add tests for MXFP4 quantizer CPU dequantization validation
* fix: format mxfp4 test file with ruff
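A rough sketch of the reordered validation described above (method and attribute names are assumptions, not the actual quantizer code):
```python
# Assumed shape of the check: look at quantization_config.dequantize *before*
# requiring CUDA, so dequantized (BF16) CPU inference is allowed.
import torch

def validate_environment(quantization_config):
    if getattr(quantization_config, "dequantize", False):
        return  # dequantizing to BF16 works on CPU; no CUDA requirement
    if not torch.cuda.is_available():
        raise RuntimeError(
            "MXFP4 quantized inference needs a CUDA device; "
            "set quantization_config.dequantize=True to fall back to BF16 on CPU."
        )
```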
* fix
* nice
* where i am at
* Bro this works
* Update src/transformers/integrations/tensor_parallel.py
* cleanups
* yups that was breaking
* Update src/transformers/models/openai_moe/modeling_openai_moe.py
* gather on experts and not mlp
* add changes for latest convert branch
* adds options to get output_router_logits from config
* bring chat template + special tokens back into the script.
* initial commit
* update
* working with shards
* add model.safetensors.index.json
* fix
* fix
* mxfp4 flag
* rm print
* Fix PAD/EOS/BOS (#18)
* fix pad/eos/bos
* base model maybe one day
* add some doc
* special tokens based on harmony.
* add in tokenizer config as well.
* prepare for rebase with main
* Fix for initialize_tensor_parallelism now returning 4-tuple
```
[rank0]: File "/fsx/edward/work/openai-tsm-examples/examples/generate.py", line 17, in <module>
[rank0]: model = AutoModelForCausalLM.from_pretrained(
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/fsx/edward/work/new-model-addition-openai/src/transformers/models/auto/auto_factory.py", line 600, in from_pretrained
[rank0]: return model_class.from_pretrained(
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/fsx/edward/work/new-model-addition-openai/src/transformers/modeling_utils.py", line 316, in _wrapper
[rank0]: return func(*args, **kwargs)
[rank0]: ^^^^^^^^^^^^^^^^^^^^^
[rank0]: File "/fsx/edward/work/new-model-addition-openai/src/transformers/modeling_utils.py", line 4748, in from_pretrained
[rank0]: tp_plan, device_map, device_mesh = initialize_tensor_parallelism(tp_plan, tp_size=None)
[rank0]: ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank0]: ValueError: too many values to unpack (expected 3)
```
* mxfp4
* mxfp4 draft
* fix
* fix import
* draft
* draft impl
* finally working !
* simplify
* add import
* working version
* consider blocks and scales
* device mesh fix
* initial commit
* add working dequant + quant logic
* update
* non nan, gibberish output
* working EP + quantization finally !
* start cleaning
* remove reversing process
* style
* some cleaning
* initial commit
* more cleaning
* more cleaning
* simplify
* more cleaning
* rm duplicated function
* changing tp_plan
* update tp plan check
* add loading attribute
* dequantizing logic
* use subfunctions
* import cleaning
* update_param_name
* adds clamped swiglu
* add clamping to training path
* simplify dequant logic
* update
* Bad merge
* more simplifications & tests
* fix !
* fix registering custom attention
* fix order
* fixes
* some test nits
* nits
* nit
* fix
* Clamp sink logits
* Clean
* Soft-max trick
* Clean up
* p
* fix deepspeed
* update both modeling and modular for cleanup
* contiguous
* update tests
* fix top_k router call
* revert renaming
* test nits
* small fixes for EP
* fix path for our local tests
* update as I should not have broken that!
* fix the loss of mixtral
* revert part of the changes related to router_scores, kernel probably not ready for that!
* deleting a small nit
* update arch
* fix post processing
* update
* running version but not expected output
* moving to cuda
* initial commit
* revert
* erroring when loading on cpu
* updates
* del blocks, scales
* fix
* style
* rm comm
* comment
* add comment
* style
* remove duplicated lines
* Fix minor issue with weight_map conversion script
* fix sampling params
* rename to final name
* update pre-final version of template
* Update src/transformers/models/gpt_oss/convert_gpt_oss_weights_to_hf.py
* fix batched inference
* serve fixes
* swizzle !
* update final chat template by Matt.
* fix responses; pin oai
* simplify
* Thanks Matt for his tireless efforts!
Co-authored-by: Rocketknight1 <Rocketknight1@users.noreply.github.com>
* Update src/transformers/models/gpt_oss/convert_gpt_oss_weights_to_hf.py
Co-authored-by: Matt <Rocketknight1@users.noreply.github.com>
* fix
* Use ROCm kernels from HUB
* Make kernel modes explicit
* update final chat template by Matt. x2
* Thanks Matt for his tireless efforts!
Co-authored-by: Rocketknight1 <Rocketknight1@users.noreply.github.com>
* Fix installation
* Update setup.py
Co-authored-by: Ákos Hadnagy <akos.hadnagy@gmail.com>
* allow no content
* fix: update message handling in write_tokenizer function
* Fix template logic for user message role
* last nits for CB and flash_paged!
* there was one bad merge
* fix CB (hardcode for now, its just using kv groups instead)
* fix
* better fix for device_map
* minor device fix
* Fix flash paged
* updates
* Revert "remove dtensors, not explicit (#39840)"
This reverts commit 6dfd561d9cd722dfc09f702355518c6d09b9b4e3.
* update
* Revert "remove dtensors, not explicit (#39840)"
This reverts commit 6dfd561d9cd722dfc09f702355518c6d09b9b4e3.
* fix merge
* fix
* Fix line break with custom model identity
* nits testing
* to locals first and pass sliding window to flash paged
* register modes for MegaBlocksMoeMlp
* add integration test in fixtures -> now update the tests to use it!
* update integration tests
* initial fix
* style and update tests
* fix
* chore(gpt oss): remove mlp_bias from configuration
It was just a leftover.
* stats
* Integration tests
* whoops
* Shouldn't move model
* Ensure assistant messages without thinking always go to "final" channel
* More checks to ensure expected format
* Add pad_token_id to model configuration in write_model function (#51)
* Add oai fix fast tests (#59)
* Fix some fast tests
* Force some updates
* Remove unnecessary fixes
* Update src/transformers/models/gpt_oss/convert_gpt_oss_weights_to_hf.py
Co-authored-by: Quentin Gallouédec <45557362+qgallouedec@users.noreply.github.com>
* Update src/transformers/models/gpt_oss/convert_gpt_oss_weights_to_hf.py
Co-authored-by: Quentin Gallouédec <45557362+qgallouedec@users.noreply.github.com>
* Update src/transformers/models/gpt_oss/convert_gpt_oss_weights_to_hf.py
* reasoning -> Reasoning
* Add additional integration tests
* fixup
* Slight fixes
* align chat template with harmony
* simplify
* Add comment
* torch testing assert close
* torch testing assert close
* torch testing assert close
* torch testing assert close
* torch testing assert close
* torch testing assert close
* Revert fixup
* skip 2 test remove todo
* merge
* padding side should be left for integration tests
* fix modular wrt to changes made to modeling
* style
* isort
* fix copies for the loss
* mmmm
---------
Co-authored-by: Quentin Gallouédec <gallouedec.quentin@gmail.com>
Co-authored-by: Quentin Gallouédec <45557362+qgallouedec@users.noreply.github.com>
Co-authored-by: Marc Sun <marc@huggingface.co>
Co-authored-by: edbeeching <edbeeching@gmail.com>
Co-authored-by: Vaibhavs10 <vaibhavs10@gmail.com>
Co-authored-by: MekkCyber <mekk.cyber@gmail.com>
Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>
Co-authored-by: Edward Beeching <edbeeching@users.noreply.github.com>
Co-authored-by: Mohamed Mekkouri <93391238+MekkCyber@users.noreply.github.com>
Co-authored-by: Lewis Tunstall <lewis.c.tunstall@gmail.com>
Co-authored-by: Zhuohan Li <zhuohan@openai.com>
Co-authored-by: Pedro Cuenca <pedro@huggingface.co>
Co-authored-by: joao@huggingface.co <joao@ip-10-53-88-32.ec2.internal>
Co-authored-by: Rocketknight1 <Rocketknight1@users.noreply.github.com>
Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>
Co-authored-by: Akos Hadnagy <akos@ahadnagy.com>
Co-authored-by: Ákos Hadnagy <akos.hadnagy@gmail.com>
Co-authored-by: Alvaro Moran <alvaro.moran@huggingface.co>
Co-authored-by: Lysandre <hi@lysand.re>
Co-authored-by: Matt <rocketknight1@gmail.com>
* Revert "remove dtensors, not explicit (#39840)"
This did not work with generation (lm_head needs extra care!)
This reverts commit 6dfd561d9cd722dfc09f702355518c6d09b9b4e3.
* update
* style?
When users set `report_to="wandb"` but also have `WANDB_DISABLED=true` in their environment,
the previous error message was misleading: "WandbCallback requires wandb to be installed. Run pip install wandb."
This was confusing because wandb was actually installed, just disabled via the environment variable.
The fix detects this specific case and provides a clear, actionable error message explaining
the conflict and how to resolve it.
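A sketch of the kind of check described above (illustrative only, not the exact WandbCallback code):
```python
# Distinguish "wandb is disabled via WANDB_DISABLED" from "wandb is not installed"
# and raise an actionable error for each case.
import importlib.util
import os

def ensure_wandb_usable():
    if os.getenv("WANDB_DISABLED", "").lower() in {"1", "true", "yes"}:
        raise RuntimeError(
            "report_to='wandb' was requested but WANDB_DISABLED=true is set in the "
            "environment. Unset WANDB_DISABLED or remove 'wandb' from report_to."
        )
    if importlib.util.find_spec("wandb") is None:
        raise RuntimeError("WandbCallback requires wandb to be installed. Run `pip install wandb`.")
```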
* Update model card for DETR
* fix: applied suggested changes
* fix: simplified pipeline and modified notes and resources
* Update detr.md
---------
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* added code for handling video object, as dictionary of frames and metadata, in chat template
* added new test where videos are passed as objects (dict of frames, metadata) in the chat template
* modified hardcoded video_len check that does not match with increased number of test cases.
* Modify hardcoded video_len check that fails with increased number of tests
* update documentation of multi-modal chat templating with extra information about including video object in chat template.
* add array handling in load_video()
* temporary test video included
* skip testing smolvlm with videos that are list of frames
* update documentation & make fixup
* Address review comments
* fix: deprecate plot_keypoint_matching and make visualize_keypoint_matching for all Keypoint Matching models
* refactor: added copied from
* fix: make style
* fix: repo consistency
* fix: make style
* docs: added missing method in SuperGlue docs
* first commit
Added modular implementation for MM Grounding DINO from starting point created by add-new-model-like. Added conversion script from mmdetection to huggingface.
TODO: Some tests are failing so that needs to be fixed.
* fixed a bug with modular definition of MMGroundingDinoForObjectDetection where box and class heads were not correctly assigned to inner model
* cleaned up a hack in the conversion script
* Fixed the expected values in integration tests
Cross att masking and cpu-gpu consistency tests are still failing however.
* changes for make style and quality
* add documentation
* clean up contrastive embedding
* add mm grounding dino to loss mapping
* add model link to config docstring
* hack fix for mm grounding dino consistency tests
* add special cases for unused config attr check
* add all models and update docs
* update model doc to the new style
* Use super_kwargs for modular config
* Move init to the _init_weights function
* Add copied from for tests
* fixup
* update typehints
* Fix-copies for tests
* fix-copies
* Fix init test
* fix snippets in docs
* fix consistency
* fix consistency
* update conversion script
* fix nits in readme and remove old comments from conversion script
* add license
* remove unused config args
* remove unnecessary if/else in model init
* fix quality
* Update references
* fix test
* fixup
---------
Co-authored-by: qubvel <qubvel@gmail.com>
* fix?
* fixme and style
* Update src/transformers/modeling_utils.py
* update
* update
* fix
* small fixees
* nit
* nits
* fix init check?
* fix
* fix default
* or fucks me
* nits
* include a small nit
* does this make it happy?
* fixup
* fix the remaining ones
* Add cohere2_vision to support CohereLabs/command-a-vision-07-2025
* update and add modualr file
* update processors and check with orig impl later
* delete unused files
* image processor reduce LOC and re-use GotOCR2
* update the config to use modular
* model tests pass
* processor fixes
* check model outputs decorator
* address one more comment
* Update tokens. Temp - need to read from tokenizer
* fix for multi-gpu
* Fix image token handling
* update image token expansion logic
* fix a few issues with remote code loading
* not related but modular forces us to change all files now
* Add overview and code sample to cohere vision docs
* add scripts. TMP.
* Update inference script
* Create script
* set dtype in export script
* TO revert: modular export fix
* Fix scripts
* Revert "TO revert: modular export fix"
This reverts commit bdb2f305b61027a05f0032ce70d6ca698879191c.
* Use modular weights
* Upload to hub
Removed OOD weights and script
* Updated docs
* fix import error
Update docs
Added pipeline test
* Updated docs
* Run modular script
remove modular for config
Added patch_size
Added docstrings in modular
Fix OOM
Add docs, fixup integration tests. 8-gpu passing
* tiny updates
* address comments + fixup
* add test for chat template
* check model outputs workaround
* aya vision fix check model inputs
* Revert "add test for chat template"
This reverts commit 42c756e397f588d76b449ff1f93292d8ee0202d8.
* revert more changes
* last revert
* skip and merge
* faulty copy from
---------
Co-authored-by: Julian Mack <julian.mack@cohere.com>
Co-authored-by: kyle-cohere <kyle@cohere.com>
* feat(tokenization): add encode_message to tokenize messages one by one
* Fix the `encode_message` method, remove the `add_generation_prompt` parameter and add the corresponding error handling. Update the document to reflect this change and verify the error handling in the test.
* Optimize the `encode_message` method, improve the processing logic of the empty dialogue history, and ensure that the chat template can be applied correctly when the dialogue history is empty. Update the document to reflect these changes.
* The `_encode_message` method is deleted, the message coding logic is simplified, and the functional integrity of the `encode_message` method is ensured. Update the document to reflect these changes.
* Docs fix
* Revert changes in docstring of pad()
* Revert changes in docstring
* Update src/transformers/tokenization_utils_base.py
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
* Repair the call of the `encode_message` method, update it to `encode_message_with_chat_template` to support the chat template, and adjust the relevant test cases to reflect this change.
* Optimize the call format of the `apply_chat_template` method, and merge multi-line calls into a single line to improve code readability.
---------
Co-authored-by: pco111 <15262555+pco111@user.noreply.gitee.com>
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
* fix: cache_position: RuntimeError: Boolean value of Tensor with more than one value is ambiguous
* test cache_position
* move test
* propagate changes
---------
Co-authored-by: Masataro Asai <guicho2.71828@gmail.com>
* Add callback to monitor progress in whisper transcription
* Added `` around variables, rewording
* Add example of `monitor_progress`.
---------
Co-authored-by: Eric B <ebezzam@gmail.com>
* docs: ko: main_classes/peft.md
* feat: nmt draft
* docs: add missing TOC to documentation for `PeftAdapterMixin` section
Added a table of contents (TOC) to the documentation, specifically for the `transformers.integrations.PeftAdapterMixin` section, following the structure and content outlined in [this link](https://huggingface.co/docs/transformers/main/en/main_classes/peft#transformers.integrations.PeftAdapterMixin).
* fix: Improve naturalness of purpose expression in Korean
Changed '관리하기 위한' to '관리할 수 있도록' for more natural Korean expression when describing the purpose of providing functions.
* fix: Simplify plural form and make expression more concise
Changed '~할 수 없기 때문에' to '~할 수 없어' for more concise expression while maintaining clarity.
* fix: Replace technical term '주입' with more natural '적용'
Changed '주입할 수 없어' to '적용할 수 없어' for better readability.
Considered alternatives:
'삽입': Too literal translation of 'inject'
'입력': Could be misunderstood as data input
'통합': Implies merging two systems
'추가': Simple but less precise
'적용' was chosen as it's the most natural and widely used term in Korean technical documentation for this context.
* fix: update toctree path for PEFT to lowercase
Changed the toctree path from 'PEFT' (uppercase) to 'peft' (lowercase) to match the correct directory naming convention and prevent broken links.
* docs: update as per reviewer feedback after rebase
* Add Fast Segformer Processor
* Modified the params according to segformer model
* modified test_image_processing_Segformer_fast args
- removed redundant params like do_center_crop, center_crop which aren't present in the original segformer class
* added segmentation_maps processing logic from the slow segformer processing module with references from beitimageprocessing fast
* fixed code_quality
* added recommended fixes and tests to make sure everything processess smoothly
* Fixed SegmentationMapsLogic
- modified the preprocessing of segmentation maps to use tensors
- added batch support
* fixed some mismatched files
* modified the tolerance for tests
* use modular
* fix ci
---------
Co-authored-by: yonigozlan <yoni.gozlan@huggingface.co>
* feat: superpoint fast image processor
* fix: reran fast cli command to generate fast config
* feat: updated test cases
* fix: removed old model add
* fix: format fix
* Update src/transformers/models/superpoint/image_processing_superpoint_fast.py
Co-authored-by: Yoni Gozlan <74535834+yonigozlan@users.noreply.github.com>
* fix: ported to torch and made requested changes
* fix: removed changes to init
* fix: init fix
* fix: init format fix
* fixed testcases and ported to torch
* fix: format fixes
* failed test case fix
* fix superpoint fast
* fix docstring
---------
Co-authored-by: Yoni Gozlan <74535834+yonigozlan@users.noreply.github.com>
Co-authored-by: yonigozlan <yoni.gozlan@huggingface.co>
* Add missing cache_position argument.
* Pass cache_position to language model.
* Overwrite prepare_inputs_for_generation.
* Set model to half precision for Flash Attention test.
* Cast model to bfloat16.
* add tests for helpers
* duplicate test for each model
* why llava next video has no helper
* oops must have been in the commit
* fix test after rebase
* add copy from
* support `typing.Literal` as the type of tool parameters (sketched below)
* validate the `args` of `typing.Literal` roughly
* add test to get json schema for `typing.Literal` type hint
* fix: add `"type"` attribute to the parsed result of `typing.Literal`
* test: add argument `booleanish` to test multi-type literal
* style: auto fixup
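A minimal sketch of how a `typing.Literal` hint can be mapped to a JSON-schema fragment with an `enum` and a rough `"type"` attribute (the helper name and exact output format are illustrative, not the library's parser):
```python
from typing import Literal, get_args

def literal_to_schema(hint) -> dict:
    values = list(get_args(hint))  # e.g. ("celsius", "fahrenheit") or (True, "maybe")
    json_types = {str: "string", bool: "boolean", int: "integer", float: "number"}
    types = sorted({json_types[type(v)] for v in values})
    # single "type" if the literal values are homogeneous, else a list (multi-type literal)
    return {"enum": values, "type": types[0] if len(types) == 1 else types}

print(literal_to_schema(Literal["celsius", "fahrenheit"]))
# {'enum': ['celsius', 'fahrenheit'], 'type': 'string'}
```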
* EP + updates
Co-authored-by: Nouamane Tazi <NouamaneTazi@users.noreply.github.com>
Co-authored-by: drbh <drbh@users.noreply.github.com>
* remove unrelated change
* not working yet but let's see where it goes!
* update the api a bit
* udpate
* where I am at for now
* fix ep
* refactor the API
* yups
* fix
* fixup
* clean modeling
* just support llama4 for now!
* properly avoid
* fix
* nits
* Update src/transformers/models/llama4/modeling_llama4.py
* Update src/transformers/integrations/tensor_parallel.py
* style
* ,,,,
* update
---------
Co-authored-by: Nouamane Tazi <NouamaneTazi@users.noreply.github.com>
Co-authored-by: drbh <drbh@users.noreply.github.com>
* upload initial code
* update deepseek-vl adaptor
* update hierarchy of vision model classes
* update aligner model
* add text model
* Added Image Processor
* Added Image Processor
* Added Image Processor
* apply masks
* remove projection; add aligner
* remove interpolate_pos_encoding
* remove unused params in config
* cleaning
* Add the __init__ file
* added processing deepseek_vl class
* modified the deepseek-vl processor
* modified the deepseek-vl processor
* update __init__
* Update the image processor class name
* Added Deepseek to src/transformers/__init__.py file
* Added Deepseek to image_processing_auto.py
* update the __init__ file
* update deepseek_vl image processor
* Update Deepseek Processor
* upload fast image processor
* Revert "upload fast image processor"
This reverts commit 68c8fd50bafbb9770ac70c9de02448e2519219b4.
* update image processor
* flatten hierarchy
* remove DeepseekVLModel
* major update (complete modeling)
* auto modeling and other files
* formatting
* fix quality
* replace torchvision in modeling
* set default do_normalize to False
* add fast image processor template using tool
* update image processors
* add fast image processor to other files
* update license
* Added deepseek image testcases
* update image test
* update processor
* write CHAT_TEMPLATE
* update model for processor
* fix processor
* minor fixes and formatting
* fix image processing and tests
* fix interpolation in sam
* fix output_attentions in DeepseekVLModel
* upload test_modeling
* fix tests because of vocab size
* set use_high_res_vision=False in tests
* fix all modeling tests
* fix styling
* remove explicit background_color from image processors
* added test_processor
* added test_processor
* fix processor tests
* update docs
* update docs
* update docs
* update conversion script
* Fixed typos
* minor fixes from review
- remove model_id comments in examples
- remove from pre-trained auto mapping
- move to image-text-to-text from vision-to-seq in auto mapping
- add image_token_index to __init__ for config
- remove outdated temporary config in conversion script
- update example to use chat_template in docstring example
- update license 2021->2025
* fix type in config docstring
Co-authored-by: Raushan Turganbay <raushan.turganbay@alumni.nu.edu.kz>
* update get_image_features
* fix config
* improve DeepseekVLImageProcessor.preprocess
* return image_hidden_states
* use AutoTokenizer and AutoImageProcessor in Processor
* fix model outputs
* make num_image_tokens configurable
* fix docstring of processor
* move system prompt to chat template
* fix repo consistency
* fix return_dict
* replace SamVisionEncoder with SamVisionModel
* update to remove deepcopy
* 🛠️ Major Architectural Changes (Adds DeepseekVLHybrid)
* fix quality checks
* add missing hybrid in auto modeling
* run make style
* update sam_hq
* update high_res_size in test
* update docs following #36979
* update code with auto_docstring
* update conversion scripts
* fix style
* fix failing test because of tuple
* set weights_only=True in conversion script
* use safetensors.torch.load_file instead of torch.load in conversion script
* make output_dir optional in conversion script
* fix code snippets in docs (now the examples work fine)
* integration tests for DeepseekVL
* update expected texts
* make style
* integration tests for DeepseekVLHybrid
* fix class name
* update expected texts for hybrid
* run "make style"
* update since changes in main
* run make-style
* nits since changes in main
* undo changes in sam
* fix tests
* fix tests; update with main
* update with main: output_attention/output_hidden_states
* fix copied part in deepseek_vl
* run fix-copies
* fix output_hidden_states
* sam: fix _init_weigths
* use modular for DeepseekVL
* make image processor more modular
* modular: use JanusPreTrainedModel
* janus: provide kwargs in loss
* update processors in conversion script
* Revert "sam: fix _init_weigths"
This reverts commit db625d0c68956c0dad45edd7a469b6a074905c27.
* run fix-copies
---------
Co-authored-by: Shakib-IO <shakib.khan17@northsouth.edu>
Co-authored-by: Raushan Turganbay <raushan.turganbay@alumni.nu.edu.kz>
* init
* Force qwen2VL image proc to fast
* refactor qwen2 vl fast
* fix copies
* Update after PR review and update tests to use return_tensors="pt"
* fix processor tests
* add BC for min pixels/max pixels
* fix most tests
* skip a few more tests
* address comments
* fix chameleon tests
* forgot to uncomment
* qwen has its own tests with images, rename it as well
* add owlv2 fast image processor
* add Owlv2ImageProcessorFast to Owlv2Processor image_processor_class
* add Owlv2ImageProcessorFast to Owlv2Processor image_processor_class
* change references to owlVit to owlv2 in docstrings for post process methods
* change type hints from List, Dict, Tuple to list, dict, tuple
* remove unused typing imports
* add disable grouping argument to group images by shape
* run make quality and repo-consistency
* use modular
* fix auto_docstring
---------
Co-authored-by: Lewis Marshall <lewism@elderda.co.uk>
Co-authored-by: yonigozlan <yoni.gozlan@huggingface.co>
* docs: Standardize OPT model card with enhanced details
* Remove incorrect link from OPT model card
* Address review feedback on OPT model card
* Update opt.md
---------
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
- Fix Cyrillic 'Р' to Latin 'P' in Portuguese language link (README.md)
- Fix 'meanginful' to 'meaningful' in training documentation
- Fix duplicate 'Cohere' reference in modular transformers documentation
- Fix duplicate 'the the' in trainer and chat command comments
🤖 Generated with [Claude Code](https://claude.ai/code)
Co-authored-by: Claude <claude@anthropic.com>
Co-authored-by: Claude <noreply@anthropic.com>
* First attempt
* fix
* fix
* Enhance TrackioCallback to log GPU memory usage and allocation
* Enhance Trackio integration in callbacks and training arguments documentation
* re order
* remove unused lines
* fix torch optional
* use partial to wrap around `transformers` utils!
* try to refactor?
* revert one wrong change
* just a nit
* push
* revert whatever was wrong!
* some nits
* fixes when there is no attention mask
* bring the licence back
* some fixes
* nit
* style
* remove prints
* correct dtype
* fa flags for testing
* update
* use paged attention if requested!
* updates
* a clone was needed, not sure why
* automatically create cu_seqlens when the input is for flash attention; this at least makes sure layers don't recompute them (see the sketch after these commit notes)
* simplify and improve?
* flash attention is kinda broken on recent cuda version so allow the opportunity to use something else
* fix!
* protect kernels import
* update
* properly parse generation config being passed
* revert and update
* add two tests
* some fixes
* fix test FA2
* takes comment into account
* fixup
* revert changes
* revert the clone, it is only needed because the metal kernel is not doing it?
* [docs] update attention implementation and cache docs (#39547)
* update docs
* Apply suggestions from code review
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* apply suggestions
---------
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* fix mps on our side for now
* Update src/transformers/integrations/flash_paged.py
* no qa
---------
Co-authored-by: Vasqu <antonprogamer@gmail.com>
Co-authored-by: Raushan Turganbay <raushan@huggingface.co>
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
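For the cu_seqlens point above, a small illustrative sketch (not the PR's exact code) of building cumulative sequence lengths once from packed per-sequence lengths so attention layers can reuse them:
```python
import torch

seq_lens = torch.tensor([5, 3, 7], dtype=torch.int32)   # lengths of the packed sequences
cu_seqlens = torch.cat(
    [torch.zeros(1, dtype=torch.int32), seq_lens.cumsum(0, dtype=torch.int32)]
)
max_seqlen = int(seq_lens.max())
print(cu_seqlens)  # tensor([ 0,  5,  8, 15], dtype=torch.int32)
```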
* feat: add support for gradient checkpointing in TimmWrapperModel and TimmWrapperForImageClassification
* ruff fix
* refactor + add test for not supported model
* ruff
* Update src/transformers/models/timm_wrapper/modeling_timm_wrapper.py
Co-authored-by: Pavel Iakubovskii <qubvel@gmail.com>
* Update src/transformers/models/timm_wrapper/modeling_timm_wrapper.py
Co-authored-by: Pavel Iakubovskii <qubvel@gmail.com>
* Update src/transformers/models/timm_wrapper/modeling_timm_wrapper.py
Co-authored-by: Pavel Iakubovskii <qubvel@gmail.com>
* Update src/transformers/models/timm_wrapper/modeling_timm_wrapper.py
Co-authored-by: Pavel Iakubovskii <qubvel@gmail.com>
---------
Co-authored-by: Pavel Iakubovskii <qubvel@gmail.com>
* initial commit
* Apply suggestions from code review
Co-authored-by: Pavel Iakubovskii <qubvel@gmail.com>
* fix: various typos, typehints, refactors from suggestions
* fix: fine_matching method
* Added EfficientLoFTRModel and AutoModelForKeypointMatching class
* fix: got rid of compilation breaking instructions
* docs: added todo for plot
* fix: used correct hub repo
* docs: added comments
* fix: run modular
* doc: added PyTorch badge
* fix: model repo typo in config
* fix: make modular
* fix: removed mask values from outputs
* feat: added plot_keypoint_matching to EfficientLoFTRImageProcessor
* feat: added SuperGlueForKeypointMatching to AutoModelForKeypointMatching list
* fix: reformat
* refactor: renamed aggregation_sizes config parameter into q, kv aggregation kernel size and stride
* doc: added q, kv aggregation kernel size and stride doc to config
* refactor: converted efficientloftr implementation from modular to copied from mechanism
* tests: overwrote batching_equivalence for "keypoints" specific tests
* fix: changed EfficientLoFTRConfig import in test_modeling_rope_utils
* fix: make fix-copies
* fix: make style
* fix: update rope function to make meta tests pass
* fix: rename plot_keypoint_matching to visualize_output for clarity
* refactor: optimize image pair processing by removing redundant target size calculations
* feat: add EfficientLoFTRImageProcessor to image processor mapping
* refactor: removed logger and updated attention forward
* refactor: added auto_docstring and can_return_tuple decorators
* refactor: update type imports
* refactor: update type hints from List/Dict to list/dict for consistency
* refactor: update MODEL_MAPPING_NAMES and __all__ to include LightGlue and AutoModelForKeypointMatching
* fix: change type hint for size parameter in EfficientLoFTRImageProcessor to Optional[dict]
* fix typing
* fix some typing issues
* nit
* a few more typehint fixes
* Remove output_attentions and output_hidden_states from modeling code
* else -> elif to support efficientloftr
* nit
* tests: added EfficientLoFTR image processor tests
* refactor: reorder functions
* chore: update copyright year in EfficientLoFTR test file
* Use default rope
* Add docs
* Update visualization method
* fix doc order
* remove 2d rope test
* Update src/transformers/models/efficientloftr/modeling_efficientloftr.py
* fix docs
* Update src/transformers/models/efficientloftr/image_processing_efficientloftr.py
* update gradient
* refactor: removed unused codepath
* Add motivation to keep postprocessing in modeling code
* refactor: removed unnecessary variable declarations
* docs: use load_image from image_utils
* refactor: moved stage in and out channels computation to configuration
* refactor: set an intermediate_size parameter to be more explicit
* refactor: removed all mentions of attention masks as they are not used
* refactor: moved position_embeddings to be computed once in the model instead of every layer
* refactor: removed unnecessary hidden expansion parameter from config
* refactor: removed completely hidden expansions
* refactor: removed position embeddings slice function
* tests: fixed broken tests because of previous commit
* fix is_grayscale typehint
* not refactoring
* not renaming
* move h/w to embeddings class
* Precompute embeddings in init
* fix: replaced cuda device in convert script to accelerate device
* fix: replaced stevenbucaille repo to zju-community
* Remove accelerator.device from conversion script
* refactor: moved parameter computation in configuration instead of figuring it out when instantiating a Module
* fix: removed unused attributes in configuration
* fix: missing self
* fix: refactoring and tests
* fix: make style
---------
Co-authored-by: steven <steven.bucaille@buawei.com>
Co-authored-by: Pavel Iakubovskii <qubvel@gmail.com>
* improve handlike of other image-like inputs in fast image processors
* fix issues with _prepare_images_structure
* update sam image processor fast
* use dict update
* init
* copied from remote
* add proper structure and llama like structure
* fixup
* revert to state that works
* get closer to llama
* slow and steady
* some removal
* masks work
* it is indeed the rope implementation, how dafuq does it mesh with the cache now hmm
* nice
* getting closer
* closer to transformers style
* let's simplify this, batching works now
* simplified
* working version with modular
* it is indeed the rotation per weights, make it complete llama style
* cleanup conversion, next to look at -> tokenizer
* remove llama artefacts
* fix modeling tests (common ones)
* style
* integration test + first look into tokenization (will need more work, focussing on modeling other models first)
* style
* working moe version, based on remote
* lets keep it simple and go step by step - transformers annotations for modular and transformers style rope (complex view)
* more cleanup
* refactor namings and remove addition forXXX classes
* our moe won't cut it it seems, correction bias seems to be missing in remote code version
* tokenization change (remote)
* our moe version works when adding normalization :D
* cleanup moe
* nits
* cleanup modeling -> let's get to modular next
* style
* modular v1
* minor things + attempt at conversion (which doesn't work)
* no conversion follow glm, fixup modular and other nits
* modular cleanup
* fixes
* tests, tests, tests + some moe dtype forcing
* simplify modular, fix fatal fa2 bug, remaining tests
* fix import issue?
* some initial docs, fix bnb faulty behavior --> needs to fix some tests because of gate needing to be float
* fix sdpa test, load on init dtype only
* fixup post merge
* style
* fix doc links
* tokenization cleanup beginnings
* simplify tokenizer by a lot as its basically llama
* tokenizer is full llama with different defaults + extra special tokens
* sync og special tokens of ernie
* fix decoding with numbers (also in remote done what a timing), begin of tok tests
* align with remote and preserve special tokens, adjust tests to ernie legacy behavior, warning for questionable behavior (also in llama)
* nits
* docs
* my daily post merge it is
* check
* tokenization update with explanations and conversion script
* review on modular (til), revert some tokenizer things i did prior, remove mtp comment (low prio)
* post merge fixes
* fixup tokenization, llama fast is the way to go
* more fixups
* check
* import fixes
* correction bias following the paddle code
* fix
* fix TP plan, fix correction bias sharding during forward
* style
* whoops
* fix tied weights
* docs and last nit
* license
* flaky tests
* move repo id, update when merged on the hub
* simplify common get/set
* remove some noise
* change some 5 years old modeling utils
* update examples
* fix copies
* revert some changes
* fixes, gah
* format
* move to Mixin
* remove smolvlm specific require grad
* skip
* force defaults
* remodularise some stuff
* remodularise more stuff
* add safety for audio models
* style
* have a correct fallback, you daft donkey
* remove this argh
* change heuristic for audio models
* fixup
* revert
* this works
* revert again
* 🧠
* aaah ESM has two modelings aaah
* add informative but short comment
* add `input_embed_layer` mixin attribute
* style
* walrus has low precedence
* modular fix
* this was breaking parser
Enable average_tokens_across_devices by default in TrainingArguments
Fixes #39392
This change improves loss calculation correctness for multi-GPU training by enabling proper token averaging across devices by default (usage sketched below).
Co-authored-by: Krishnan Vignesh <krishnanvignesh@Krishnans-MacBook-Air.local>
Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>
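An illustrative usage sketch; with the new default the flag no longer needs to be passed explicitly, it is shown here only to make the changed default visible:
```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="out",
    average_tokens_across_devices=True,  # now on by default for multi-GPU loss averaging
)
```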
* fix qwen2 vl packing in FA2
* why? delete!
* qwen2-5-vl seems to work now
* update
* fix tests
* start by adapting FA2 tests
* add similar tests for sdpa/eager
* address comments
* why is this even in conditional model and not base model?
* fix type order
* change all Union[str, dict] to Union[dict, str]
* add hf_parser test && fix test order
* add deepspeed dependency
* replace deepspeed with accelerator
* Scaffolding
* Explicit content
* Naïve Responses API streaming implementation
* Cleanup
* Scaffolding
* Explicit content
* Naïve Responses API streaming implementation
* Cleanup
* use openai
* validate request, including detecting unused fields
* dict indexing
* dict var access
* tmp commit (tests failing)
* add slow
* use oai output type in completions
* (little rebase errors)
* working spec?
* guard type hint
* type hints. fix state (CB can now load different models)
* type hints; fn names; error type
* add docstrings
* responses + kv cache
* metadata support; fix kv cache; error event
* add output_index and content_index
* docstrings
* add test_build_response_event
* docs/comments
* gate test requirements; terminate cb manager on model switch
* nasty type hints
* more type hints
* disable validation by default; enable force models
* todo
* experiment: base model from typed dict
* audio working
* fix bad rebase
* load audio with librosa
* implement timed models
* almost working
* make fixup
* fix tests
* transcription request type
* tokenizer -> processor
* add example in docs
---------
Co-authored-by: Lysandre <hi@lysand.re>
* Add the `device` option for `generate()`
* Add device for default tensors to avoid tensor mismatch
* [test] Enable test_static_cache_exportability for torch_device
* infer device from the prompt_token_ids
* Add device for generated tensor
* [Test] Make `test_export_static_cache` tests to run on devices rather than only CPU
* fix format
* infer device from the model
* wip: adding first version of the IJEPA model card.
* refactor based on the @stevhliu feedbacks
* refactor:
- revert the accidental removal of the autodoc api description and the image reference architecture
- general context update.
* - changes of model for example quantization.
- merging the quantization content.
Fix indentation bug in Idefics3 image processor
- Fix KeyError when do_image_splitting=False
- Move split_images_grouped assignment inside loop
- Ensures all image shapes are stored, not just the last one
- This fixes the bug in both Idefics3 and the generated SmolVLM processors (sketched below)
cc @yonigozlan
Co-authored-by: Krishnan Vignesh <krishnanvignesh@Krishnans-MacBook-Air.local>
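A hypothetical sketch of the indentation bug described above (names are illustrative, not the exact Idefics3 code): the per-shape result has to be stored inside the loop, otherwise only the last shape survives.
```python
def group_split_results(grouped_images):
    split_images_grouped = {}
    for shape, images in grouped_images.items():
        split_images_grouped[shape] = [f"split({img})" for img in images]
        # ^ assignment inside the loop: every shape's result is kept
    return split_images_grouped

print(group_split_results({(224, 224): ["a.png"], (448, 448): ["b.png"]}))
```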
* Fix typo in generation configuration for Janus model weight conversion
* Fix typo
* Update Janus model generation configuration
* Update Janus model to use generation_kwargs
* dump
* push other models
* fix simple greedy generation
* xmod
* add fmst and clean up some mentions of old cache format
* gpt-bigcode now follows standards
* delete tuple cache reference in generation
* fix some models
* fix some models
* fix mambas and support cache in tapas
* fix some more tests
* fix copies
* delete `_reorder_cache`
* another fix copies
* fix typos and delete unnecessary test
* fix rag generate, needs special cache reordering
* fix tapas and superglue
* reformer create special cache
* recurrent gemma `reorder_cache` was a no-op, delete
* fix-copies
* fix blip and musicgen pipeline tests
* fix reformer
* fix reformer, again...
* delete `_supports_cache_class`
* delete `supports_quantized_cache`
* fix failing tests
* fix copies
* some minor clean up
* style
* style
* fix copies
* fix tests
* fix copies
* create causal mask now needs positions?
* fix copies
* style
* Update tests/test_modeling_common.py
Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>
* clean-up of non-generative model after merging main
* check `is_decoder` for cache
* delete transpose for scores
* remove tuple cache from docs everywhere
* fix tests
* fix copies
* fix copies once more
* properly deprecate `encoder_attention_mask` in Bert-like models
* import `deprecate_kwarg` where needed
* fix copies again
* fix copies
* delete `next_decoder_cache`
* fix copies asks to update for PLM
* fix copies
* rebasing had a few new models, fix them and merge asap!
* fix copies once more
* fix slow tests
* fix tests and update PLM checkpoint
* add read token and revert accidentally removed line
* oh come on, style
* just skip it, read token has no access to PLM yet
---------
Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>
* Added StableAdamW as an optimizer option for Trainer. Also wrote tests to verify its behaviour.
* Fixed issue with
* Added docs for StableAdamW. Also fixed a typo in schedule free optimizers
---------
Co-authored-by: Gautham Krithiwas <gauthamkrithiwas2003@gmail.com>
* add test scanner
* add doc + license
* refactor for only 1 tree traversal
* add back test of only one method
* document single method scan
* format
* fixup generate tests
* minor fix
* fixup
* fixup doc
* add cosine_with_min_lr_schedule_with_warmup_lr_rate scheduler in trainer (a generic sketch follows these notes)
* Update src/transformers/optimization.py
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
* Update optimization.py
fix the error of the unclosed "("
* Update optimization.py
remove whitespace in line 402 in order to pass the quality test
* Update src/transformers/optimization.py
* Update src/transformers/optimization.py
* Apply style fixes
---------
Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
Co-authored-by: github-actions[bot] <github-actions[bot]@users.noreply.github.com>
Co-authored-by: Marc Sun <57196510+SunMarc@users.noreply.github.com>
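A generic sketch of a warmup + cosine schedule that decays to a minimum LR fraction (parameter names like `min_lr_rate` are illustrative, not the exact Trainer implementation):
```python
import math

def lr_lambda(step, *, warmup_steps, total_steps, min_lr_rate=0.1):
    if step < warmup_steps:
        return step / max(1, warmup_steps)           # linear warmup
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    cosine = 0.5 * (1.0 + math.cos(math.pi * progress))
    return min_lr_rate + (1.0 - min_lr_rate) * cosine  # never drops below min_lr_rate

print([round(lr_lambda(s, warmup_steps=2, total_steps=10), 3) for s in range(11)])
```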
fix: 🐛 Fixed a bug in calculating Cross Entropy loss in JetMoeForCausalLM
In the original code, we shift the logits and pass shift_logits into self.loss_function, but self.loss_function shifts them again, so we are actually doing "next-next-token prediction", which is incorrect. I have removed the logits shifting before calling self.loss_function (a generic sketch follows).
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
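A minimal sketch of the single-shift convention described above (generic causal-LM loss, not the exact JetMoe code): because the loss helper shifts internally, the caller must pass the unshifted logits and labels.
```python
import torch
import torch.nn.functional as F

def causal_lm_loss(logits, labels):
    # The helper shifts once: position t predicts token t+1.
    shift_logits = logits[:, :-1, :].contiguous()
    shift_labels = labels[:, 1:].contiguous()
    return F.cross_entropy(shift_logits.view(-1, shift_logits.size(-1)), shift_labels.view(-1))

logits = torch.randn(2, 8, 32)           # (batch, seq_len, vocab_size)
labels = torch.randint(0, 32, (2, 8))
loss = causal_lm_loss(logits, labels)    # shifting happens exactly once
```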
* fix vlm with retrieval
* we can't use AutoModel because new ColQwen was released after refactor
* no need for colqwen
* tied weight keys are necessary, if using IMageTextToText
* need to apply renaming in tied weights, only for ColPali
* overwrite tied keys in ColPali
* fix copies, modular can't handle if-statements
* working locally; need to style and test
* added docs and initial tests; need to debug and flesh out
* fixed tests
* working long context; batches
* working fa2 and eager
* update tests
* add missing confnigs
* remove default autoset
* fix spacing
* fix most tests
* fixed tests
* fix to init
* refactor to match new transformers updates
* remove static cache option
* fa2 fix
* fix docs
* in progress
* working on tests
* fixed issue with attn outputs
* remove debug
* fix local config attr
* update doc string
* fix docstring
* add docs to toc
* correct typo in toc
* add new updates from main w.r.t. ModernBERT RoPE
* fix local param
---------
Co-authored-by: oweller2 <oweller2@dsailogin.mgmt.ai.cluster>
Co-authored-by: oweller2 <oweller2@l07.mgmt.ai.cluster>
Co-authored-by: oweller2 <oweller2@n02.mgmt.ai.cluster>
Co-authored-by: oweller2 <oweller2@l08.mgmt.ai.cluster>
Co-authored-by: oweller2 <oweller2@l01.mgmt.ai.cluster>
Co-authored-by: oweller2 <oweller2@l02.mgmt.ai.cluster>
* Update modeling_qwen2_5_vl.py
### 🐛 Bug Description
When using Unsloth’s Qwen2.5-VL vision models (both 3B and 7B) with the latest HuggingFace Transformers (commit: 520b9dcb42cef21662c304583368ff6645116a45), the model crashes due to a type mismatch in the attention mask handling.
---
### 🔥 Error Traceback
* Fix dtype compatibility in attention mask processing
Replace hardcoded torch.finfo() usage with dtype-aware function selection to handle both integer and floating-point attention mask tensors.
Technical Details:
Problem: Line 1292 assumes floating-point dtype for attention_mask_tensor
Solution: Add dtype check to use torch.iinfo() for integer types and torch.finfo() for float types (sketched after this group of commits)
Files Modified: transformers/models/qwen2_5_vl/modeling_qwen2_5_vl.py
* Update modeling_qwen2_5_vl.py
* Update modeling_qwen2_5_vl.py
* Fix: Cast to float before applying torch.finfo
* # Fix: Use appropriate function based on dtype
* Update modular_qwen2_5_vl.py
* Fix: Cast to float before applying torch.finfo
* Fix: Use appropriate function based on dtype
* Fix: Use appropriate function based on dtype
* Updated modeling_glm4v.py
* Only apply conversion for floating point tensors (inverted masks)
* corrected the format issue
reformatted modeling_glm4v.py
All done! ✨🍰✨
1 file reformatted
* Fix: Cast to float before applying torch.finfo
Corrected the format issue
* Fix torch.finfo() for integer attention mask
#39333
* Run make fix-copies and make style for CI compliance
- Updated dependency versions table
- Fixed code formatting and style issues
- Sorted auto mappings
- Updated documentation TOC
* Fix torch.finfo() TypeError for integer attention_mask_tensor
Fix torch.finfo() TypeError for integer attention_mask_tensor #39333
* Fix torch.finfo() TypeError for integer attention_mask_tensor
* Updated CamemBERT model card to new standardized format
* Applied review suggestions for CamemBERT: restored API refs, added examples, badges, and attribution
* Updated CamemBERT usage examples, quantization, badges, and format
* Updated CamemBERT badges
* Fixed CLI Section
* fix ast deprecations for python 3.14: replace node.n by node.value and use `ast.Constant`
More verbose exceptions in `fix_docstring` on docstring formatting issues.
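A quick sketch of the migration (the deprecated `ast.Num`/`.n` aliases are removed in Python 3.14; `ast.Constant`/`.value` is the replacement):

```python
import ast

tree = ast.parse("x = 42")
node = tree.body[0].value

# old, removed in Python 3.14: isinstance(node, ast.Num) and node.n
# new:
if isinstance(node, ast.Constant):
    print(node.value)  # 42
```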
* plm template
* A working plm with fixed image features
* hacked processor
* First version that reproduced PLM output using PE from timm.
* Simplify and fix tie_word_embeddings
* Use PIL resize. Simplify conversion.
* First version that works with video input.
* simplified image preprocessing (not batched)
* Minor fixes after rebasing on main.
* Video processor based on new API.
* Revert to use _preprocess for image processor.
* refactor with modular
* fix tie_word_embedding
* Testing with timm PE
* check in missed conversion from modular to model.py
* First working version of PLM with Eva PE. PLM-1B and 3B outputs are exactly the same as before. PLM-8B output has some differences.
* address review comments
* Fixed batching if video and image examples mixed.
* Simplify PE configuration.
* Enable AutoModel for PerceptionEncoder.
* Update PE config style.
* update all headers
* Minor fixes.
* Move lm_head to PerceptionLMForConditionalGeneration.
Fix vit_G model specification.
* Fix for testing_modeling_perception_lm.py
* Image processing refactoring to use more common parts.
* Fix processor test.
* update tests to use model from hub
* More test fixes.
* integration test GT update after rebasing; probably due to video preprocessing
* update test media path to hub
* Stop tracking local scripts
* address some review comments
* refactor image processing.
* small fixes
* update documentation and minor fixes
* remove scripts
* Minor fix for CI
* Fix image processing
* CI and doc fix
* CI formatting fix
* ruff fix
* ruff formatting
* ran utils/sort_auto_mappings.py
* update docstring
* more docstring updates
* add vision_input_type default fallback for image processing
* more verbose variable naming
* test update
* Remove PE and PEConfig use AutoModel(TimmWrapper) instead
* Minor cleanup.
* Minor Fix: remove any ref to PE. Ruff format and check.
* fix docstring
* Fix modular/model consistency. Improve docstring.
* Fix PerceptionLMForConditionalGenerationModelTest
* ruff fix
* fix for check_repo
* minor formatting
* dummy size arg to fix for processor test.
* Update docstring for PerceptionLMConfig
* Minor fixes from review feedback.
* Revert some minor changes per reviewer feedback.
* update base_model_prefix
* address reviewer feedback
* fix comment in modeling file
* address reviewer feedback
* ruff format
* Pre-merge test update.
* reapply modular and fix checkpoint name
* processor test path
* use modular a bit more
* remove dead code
* add token decorator
---------
Co-authored-by: Cyril Vallez <cyril.vallez@huggingface.co>
Co-authored-by: Cyril Vallez <cyril.vallez@gmail.com>
* Updated Switch Transformers model card with standardized format (Issue #36979)
* Apply reviewer suggestions to the new standardised Switch Transformer's model card
* Update switch_transformers.md
---------
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* changes for video
* update modular
* change get_video_features
* update video token replacement
* update modular
* add test and fix typo
* lint
* fix order
* lint
* fix
* remove dependency
* lint
* lint
* remove todo
* resize video for test
* lint..
* fix test
* add a new processor for video_test
* fix test
Also add notes asking users to set `TORCHDYNAMO_CAPTURE_SCALAR_OUTPUTS=1`
or set `torch._dynamo.config.capture_scalar_outputs = True`, since this code path
currently causes a graph break otherwise.
Signed-off-by: Hollow Man <hollowman@opensuse.org>
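A minimal sketch of the two workarounds mentioned in the note above (assumes a recent PyTorch 2.x):

```python
import torch

# option 1: export TORCHDYNAMO_CAPTURE_SCALAR_OUTPUTS=1 in the shell before launching
# option 2: flip the dynamo config flag at runtime, before torch.compile runs
torch._dynamo.config.capture_scalar_outputs = True
```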
* ensure the query is updated during training
avoid unused parameters that DDP does not like
* avoid a crash when `kwargs` contain `padding=True`
trainers often pass this argument automatically
* minor
* Remove mel_spec lazy init, and rename to mel_filters.
this ensures save_pretrained will not crash when saving the processor during training
d5d007a1a0/src/transformers/feature_extraction_utils.py (L595)
* minor - most feature extractors have a `sampling_rate` property
* speedup relative position embeddings
* fix several issues in model saving/loading:
- avoid modifying `self._hf_peft_config_loaded` when saving
- adapter_config automatically points to the original base model - a finetuned version should point to the model save dir.
- fixing model weights names, that are changed by adding an adapter.
* minor
* minor
* minor
* fixing a crash without peft active
* add todo to replace einsum
* granite speech speedups:
1. register attention_dist to avoid cpu-to-gpu transfer every layer.
2. pad_sequence is much faster than per-sample-padding + concat.
3. avoid returning audio back to cpu when using a compute device.
* support audio.shape=(1,L)
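A small sketch of speedup (2) from the list above, padding a batch with one `pad_sequence` call instead of padding each sample and concatenating (shapes are illustrative):

```python
import torch
from torch.nn.utils.rnn import pad_sequence

feats = [torch.randn(n, 80) for n in (93, 120, 75)]  # variable-length audio features

# slower pattern: pad each sample to the max length, then concatenate
max_len = max(f.shape[0] for f in feats)
padded_slow = torch.cat(
    [torch.nn.functional.pad(f, (0, 0, 0, max_len - f.shape[0])).unsqueeze(0) for f in feats]
)

# faster: a single call producing the same (batch, max_len, feat) tensor
padded_fast = pad_sequence(feats, batch_first=True)
assert torch.equal(padded_slow, padded_fast)
```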
* add initial structure
* doc fixes, add model base logic
* update init files
* some fixes to config and modular
* some improvements for attention
* format
* remove unused attn
* some fixes for moe layer and for decoder
* adapt _compute_yarn_parameters for deepseek
* format
* small fix
* fix for decoder forward
* add tests, small refactoring
* fix dummies
* fix init
* fix doc
* fix config docs
* add sequence doc, fix init for gate
* fix issues in tests
* fix config doc
* remove unused args
* some fixes and refactoring after review
* fix doc for config
* small fixes for config args
* revert config refactoring
* small refactoring
* minor fixes after rebase
* small fix after merge
* fix modular
* remove rotaryembd from public init
* small test fix
* some rotary pos calculation improvement
* fix format
* some improvements and fixes
* fix config
* some refactoring
* adjust some unit tests
* skip test
* small fixes and tests adjustment
* reapply modular
* fix all tests except Integration
* fix integration tests
* cleanup BC stuff
* rope
* fix integration tests based on a10
* style
---------
Co-authored-by: Cyril Vallez <cyril.vallez@huggingface.co>
Co-authored-by: Cyril Vallez <cyril.vallez@gmail.com>
* Add Doge Model
* Fix code quality
* Rollback an error commit
* Fix config for open-source weights
* Revert "Fix config for open-source weights"
This reverts commit 229cdcac10a6a4274d1dd13b729bc14c98eb0c76.
* Add modular_doge
* Update Doge inherits from Llama
* Fix import bug
* [docs] Add usage of doge model
* Fix Doge import pretrainedconfig from modeling_utils to configuration_utils
* [docs] remove trust remote code from doge
* Fix dynamo bug in doge model
* Update docstrings
* Import apply_rotary_pos_emb and repeat_kv from Llama
* Fix all nits
* Fix code quality
* Fix some bugs
* Fix code quality
* Remove inherited `_update_causal_mask` from Llama
This leads to incorrect weight initialization.
* Fix the wrong tensor orderings in DogeCDMoE
* Fix attention mask bug
We have to provide attention_mask for dynamic mask computation
* Modify most implementations to inherit from Llama
But there are two problems:
1. `flex_attention_forward` is not updated properly
2. `Example` error in the forward method of DogeForCausalLM
* Modify CDMoE for batch efficient implementation
* Uniform MoE configuration names, just like QwenMoE
* Fix code quality
* Fix code quality
* Fix code quality
* Add tp plan of CDMoE Module
* Hybrid DMA with sliding window
* Update valid tokens greater than window size
* Fix code quality
* Add `convert_doge_weights_to_hf`
* Fix STATE_DICT_MAPPING in convert_doge_weights_to_hf.py
* Fix nits in modular_doge
* Fix code quality
* Fix all nits
* Fix all nits
* Make sure the attention function is updated inside the class
* Fix code quality issues in the Doge model and add a test for it
* Fix `test_generate`
* Fix code quality
* Fix nits following suggestions
* Fix code quality
* Fix code quality issues
* Fix nits
* Fix code quality nits
* Fix the missing parameters in the configuration.
* Fix the missing parameters in the configuration.
* Fix nits
* Add initialization of attention
* Fix last nits
* Simplify dynamic mask generation logic
* Rename router_logits to gate_logits for matching latest changes of MixtralModel
* Rename typings for matching latest changes of MixtralModel
* Fixes typo in comment
* Update src/transformers/models/doge/modular_doge.py
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
* Fix code quality issues to match other modular
* Fix code quality issues to match other modular
* Fix the static compilation errors
* Update model weights link
* Fix code quality issues to match other modular
* reapply modular and support for new outputs
* style
* simplify a lot
* fix import location
* reapply modular
* fix
* fix integration test
---------
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
Co-authored-by: Cyril Vallez <cyril.vallez@huggingface.co>
Co-authored-by: Cyril Vallez <cyril.vallez@gmail.com>
* Fix errors when use verl to train GLM4.1v model
* Support glm4v load from AutoModelForVision2Seq
* Set glm4v model _checkpoint_conversion_mapping attr from None to {}
* Update modeling_auto.py
* fix(decoding): stop beam search per-instance when heuristic satisfied
Previously, when early_stopping was set to `False`, the early-stopping heuristic only halted generation once **all** batch instances reached the criterion. Instances that the heuristic already deemed impossible to improve therefore kept generating, leading to inconsistent and overlong outputs across the batch.
Now the heuristic is applied **per-instance**: once all beams of a given batch instance can no longer improve, that instance is marked finished while the others continue, as sketched below. This restores the expected behavior and ensures consistency in batched generation.
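An illustrative-only sketch of the per-instance idea (not the actual `generation/utils.py` code; the scores are made up):

```python
import torch

# best score a still-running beam could possibly reach, per batch instance
best_possible = torch.tensor([-1.2, -0.4, -2.0])
# worst score among already-finished hypotheses, per batch instance
worst_finished = torch.tensor([-1.0, -1.0, -1.0])

# old behavior: stop only when *every* instance satisfies the heuristic
stop_all = bool((best_possible < worst_finished).all())   # False here

# new behavior: mark each instance done independently
done_per_instance = best_possible < worst_finished        # tensor([ True, False,  True])
```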
* Add test case GenerationIntegrationTests.test_beam_search_early_stop_heuristic
* Update naming improvement_possibility -> is_early_stop_heuristic_unsatisfied
* Add comments for early stop heuristic
* Update src/transformers/generation/utils.py
---------
Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>
- Complete Apache License text in Italian documentation
- Remove duplicate variable assignment in Perceiver converter
- Fix typo in MODEL_FOR_VISION_2_SEQ_MAPPING_NAMES constant
* chameleon xpu bnb groundtruth update on bnb triton backend since we are
deprecating ipex backend
Signed-off-by: YAO Matrix <matrix.yao@intel.com>
* enable hqq uts on XPU, all passed
Signed-off-by: YAO Matrix <matrix.yao@intel.com>
* fix style
Signed-off-by: YAO Matrix <matrix.yao@intel.com>
* fix comment
Signed-off-by: YAO Matrix <matrix.yao@intel.com>
---------
Signed-off-by: YAO Matrix <matrix.yao@intel.com>
* update the glm4 model readme
* update test
* update GLM-4.1V model
* update as format
* update
* fix some tests
* fix the rest
* fix on a10, not t4
* nit: dummy import
---------
Co-authored-by: raushan <raushan@huggingface.co>
* [video processors] Support float fps for precise frame sampling
Enable fractional fps values (e.g., 1.5, 29.97) in video processors
for more precise frame sampling control.
- Change fps type from int to float across all video processors
- Maintain backward compatibility with integer values
Extends: #38105
* [video processors] Refine fps typing to Union[int, float]
Change fps type from Optional[float] to Optional[Union[int, float]]
for more explicit type information about supporting both integer
and floating-point frame rates.
- Update type hints and docstrings across 8 files
- Maintain backward compatibility
- Clarify support for both int and float values
Extends: #38105
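A sketch of the refined annotation described above (the function signature is illustrative, not a specific video processor):

```python
from typing import Optional, Union

def sample_frames(video, fps: Optional[Union[int, float]] = None, num_frames: Optional[int] = None):
    """Accepts integer fps (e.g. 30) as before, plus fractional values such as 29.97 or 1.5."""
    ...
```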
* Revert "[video processors] Support float fps for precise frame sampling"
This reverts commit 7360d6e661b413ca0239e5ef61f9b1abbeab8e65.
* just update 2 files
* update other models as well just making fix-copies
* also add the changes needed to modeling utils
* put this on the pretrained model instead
* nits and fixes
* update generic, fix to use config value
* update other modelings
* use transformers kwargs instead
* update
* update
* update other models
* update
* updates
* update
* update
* update
* fix
* finally
* very small nits
* this fixes more tests
* fix other models as well!
* update modularqwen2
* update models based on qwen2
* update
* update
* remove the **flash stuff in favor of normal kwargs
* update
* propagate gemma?
* remove output attentions
* propagate
* support cross attention edge case
* same
* test this
* fixes
* more fix
* update
* update
* fix conflicts
* update
* fix emu3
* fix emu3
* move the fix a bit
* what a nightmare
* some fixes, loss_kwargs should never have been
* finish fixing gemma3n
* fix smollm3
* fix another one
* fix csm now
* fix csm and mistral
* fix mistral now
* small fixes
* fix janus
* only for some models
* fixup
* phix phi3
* more fixes?
* does this fix it?
* update
* holy shit it was just graph breaks
* protect torch
* updates
* fix samhq?
* fix moonshine
* more moonshine fixes, 3 failures left!
* nits
* generic needs to support more
* more fixes to moonshine!
* fix cross attention outputs!
* fix csm!
* nits
* fix stupid kosmos2
* current updates
* fixes
* use output recorder?
* nicer!
* a little bit of magic
* update
* fix protect
* fix
* small fixes
* protect import
* fix a bunch of more models
* fix fixups
* fix some of the last ones
* nit
* partly fix phi
* update
* fix import path
* make something that is fullgraph compatible just to be sure
* typing was wrong on llama so the rest was wrong as well
* fucking ugly but at least it is still exportable
* style
* supposed to fix moonshine, it still breaks
* fix some default
* fix the last bits of sam
* update samhq
* more fixes to sam hq
* nit
* fix all output+hidden states and output_attentions!
* fix?
* fix diffllama
* updates to fix initialization on the sam pips
* oops there was a bug
* fix the last sam hq test
* fix gotocr
* fix gotocr2!
* fixes
* skip stupid tests
* there was one left :)
* fixup
* fix fix copies issues with this test file
* fix copies for sam_hq
* rm some comments
* skip 2 more failing tests
* fix
* fix everything
* Apply suggestions from code review
Co-authored-by: Anton Vlasjuk <73884904+vasqu@users.noreply.github.com>
Co-authored-by: Pablo Montalvo <39954772+molbap@users.noreply.github.com>
* add more doc!
* fix public init
* fix modular qwen3
---------
Co-authored-by: Anton Vlasjuk <73884904+vasqu@users.noreply.github.com>
Co-authored-by: Pablo Montalvo <39954772+molbap@users.noreply.github.com>
* more torch.hpu patches
* increase top_k because combining Temperature, TopP and TopK results in flaky behavior that ends up killing beams early.
* remove temporal fix
* fix scatter operation when input and src are the same
* trigger
* fix and reduce
* skip finding batch size as it makes the hpu go loco
* fix fsdp (yay all are passing)
* fix checking equal nan values
* style
* remove models list
* order
* rename to cuda_extensions
* Update src/transformers/trainer.py
* Expectations for llava_next_video
* Updated image src in aria
* Fix test_small_model_integration_test
* Fix small model integration llama
* Fix a bunch of tests
* Style
* Shortened generation in test from 900 to 90
* Fix index out of bounds exception on wrong kv reuse
* Prevent loading same model twice
---------
Co-authored-by: Joao Gante <joaofranciscocardosogante@gmail.com>
Co-authored-by: Lysandre Debut <hi@lysand.re>
* Fixed some devices errors
* Fixed other device issues and more expectations
* Reverted support flags
* style
* More granular support
* Fixed some rebase stuff
* add a not None check before .to
* fix FA2
* update is causal flag and remove mask for FA2
* update for FA2 with varlen path
* how the tests were passing with different devices?
* add comment and ref to the PR
* move mask preparation to base pretrained model
* seq len is the first dim, not second
* fix copies to fix GLM4V
* deprecate for 1 version
* style
* fix some tests
* fix esm
* skip for now, GC requires positional args but we have keyword args
* remove transpose for scores in modified models only
* skip fx trace tests
* remove the skips
* fix the epsilon to a small value (does not make sense otherwise)
* safeguard
* overload test_eager_matches_sdpa
* Update test_modeling_common.py
* skip appropriate tests
* correct no_split_layer
* fix all devices issue
* fix backward
* fix
Updating Gemma 3n docs and docstrings to clarify the relationship
between the newly trained audio encoder used in Gemma 3n and the USM
model from the original paper.
TST Fix PEFT integration test bitsandbytes config
The PEFT integration tests still used load_in_{4,8}_bit, which is
deprecated; this moves them to properly setting BitsAndBytesConfig. For 4bit,
also ensure that nf4 is being used to prevent
> RuntimeError: quant_type must be nf4 on CPU, got fp4
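A hedged sketch of the updated pattern (requires `bitsandbytes`; the checkpoint is chosen only for illustration):

```python
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# replaces the deprecated load_in_4bit=True kwarg and forces nf4 so the CPU path
# does not raise "quant_type must be nf4 on CPU, got fp4"
bnb_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_quant_type="nf4")
model = AutoModelForCausalLM.from_pretrained(
    "facebook/opt-125m",
    quantization_config=bnb_config,
)
```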
* Add Fast Image Processor for Chameleon
* add warning to resize and move blend_rgba to convert_to_rgb
* Remove unrelated files
* Update image_processing_chameleon_fast to use auto_docstring
* fix equivalence test
---------
Co-authored-by: Yoni Gozlan <74535834+yonigozlan@users.noreply.github.com>
Co-authored-by: yonigozlan <yoni.gozlan@huggingface.co>
* add fast image processor nougat
* test fixes
* docstring white space
* last fixes
* docstring_type
* tolerance unit test
* fix tolerance
* fix rtol
* remove trailing white space
* remove white space
* note for tolerance unit test
* fix tests
* remove print
---------
Co-authored-by: yonigozlan <yoni.gozlan@huggingface.co>
Co-authored-by: Yoni Gozlan <74535834+yonigozlan@users.noreply.github.com>
Some PEFT integration tests involving text generation pipelines were
failing since #38129 because the base model is too small to generate
longer sequences. Setting max_new_tokens fixes this.
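A minimal sketch of the fix, assuming a tiny test checkpoint; the point is only that `max_new_tokens` bounds the generation length explicitly:

```python
from transformers import pipeline

generator = pipeline("text-generation", model="hf-internal-testing/tiny-random-gpt2")
out = generator("Hello", max_new_tokens=10)  # caps the generated length for the test
```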
* timestamp token is end of token time !!!
* ensure correct alignment between tokens and timestamp tokens
* ignore input tokens for DTW computation
* use num_frames to avoid token timestamp hallucinations
* token timestamps test updates !
* num_frames: deprecate and use attention_mask instead
* avoid breaking change
* fix the pipeline usage for chunk approach
* make style
* better logging
* better logging
* make style
* update tests with correct values
* Update PEGASUS-X model card
* Add cache_implementation argument in quantization code example
* Update CLI example
* Apply suggestions from code review
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* Remove TensorFlow and Flax badges
---------
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* docs: first draft to more standard SuperPoint documentation
* Apply suggestions from code review
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* docs: reverted changes on Auto classes
* docs: addressed the rest of the comments
* docs: remove outdated reference to keypoint detection task guide in SuperPoint documentation
* Update superpoint.md
---------
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* remove compile on mask creation, ensure kv blocks do not explode on indices
* trigger ci
* switch dynamic compilation to false
* patch new masking functions as well
* add len check
* i was wrong
* last comment
* Gemma 3n
* initial commit of Gemma 3n scaffold
* Fixing param pass through on Gemm3p5RMSNorm
* Adds Einsum layer to Gemma 3n
* Updating EinsumLayer API
* Undoing erroneous force push
* Reverting RMSNorm to with_scale by default
* Adds LAuReL to Gemma 3n
* Adds AltUp to Gemma 3n
* Adding Gemma3p5 overall and text config with vision and audio config placeholders (#3)
* Adding gemma3p5 text configs
* Adding audio config placeholders
* Adding a placeholder for vision configs
* Updating MobileNetVisionConfig, inheriting TimmWrapperConfig
* Updating text configs
* Update src/transformers/models/gemma3p5/modular_gemma3p5.py
Co-authored-by: Ryan Mullins <ryanmullins@google.com>
* Removing altup configs to accept the suggested configs
* Update src/transformers/models/gemma3p5/modular_gemma3p5.py
Co-authored-by: Ryan Mullins <ryanmullins@google.com>
* Updating altup config
* Update modular
Co-authored-by: Ryan Mullins <ryanmullins@google.com>
* Update modular
Co-authored-by: Ryan Mullins <ryanmullins@google.com>
* Update modular
Co-authored-by: Ryan Mullins <ryanmullins@google.com>
* Update modular
Co-authored-by: Ryan Mullins <ryanmullins@google.com>
* Addressing review comments and updating text configs
* Adding a config for activation sparsity
* Updating configs to pass through options to super class init and adjust some name prefixes
* Updating laurel and altup with corrected config values
* Normalizing sub_config initializers
---------
Co-authored-by: Ryan Mullins <ryanmullins@google.com>
* Updating MLP with activation sparsity (#2)
* Updating DecoderBlock for Gemma 3n (#3)
* Initial Gemm3nTextModel (#4)
NOTE: This implementation WILL CHANGE in the coming weeks, however, changes will be strictly additive and this will remain a suitable baseline for downstream implementations to reference.
* Adding KV Cache Sharing
* Adds Einsum layer to Gemma 3n
* Updating EinsumLayer API
* Refactored kv cache sharing in attention
* Adding KVStore for cache sharing
* Update modular
Co-authored-by: Ryan Mullins <ryanmullins@google.com>
* Update modular
Co-authored-by: Ryan Mullins <ryanmullins@google.com>
* Update modular
Co-authored-by: Ryan Mullins <ryanmullins@google.com>
* Update src/transformers/cache_utils.py
Co-authored-by: Ryan Mullins <ryanmullins@google.com>
* Undoing erroneous force push
* Reverting RMSNorm to with_scale by default
* Adds LAuReL to Gemma 3n
* Updating KV Cache Sharing implementation
* Updating the q and k norm definitions in the attention module
* Fixing name error for q,k,v RMS norm to use the right 3n module
* Updating MLP with activation sparsity
* Updating DecoderBlock for Gemma 3.5
* Updating kv cache sharing implementation with the use of a cache buffer and refactoring some lines of code
* Isolating KV Cache logic to relevant components
* Fixing logic error in Gemma3nAttention.forward
* Refactoring caching contributions and fixing kv_store initialization
* Simplifying Configs
* Remove errant self from super init call
* Bug fix in the Attention module - changing self.head_dim to config.head_dim
* Bug fixes in the LaurelBlock and RMS Norm super init call
* removing redundant code from a merge
* Adding per_layer_inputs to TextModel
* Adding preprocess embeddings with altup
* Adds per-layer-to-single output and a host of TODOs
* Integrating altup predict with the model workflow and other minor bug fixes
* Using nn.Embedding temporarily for text model
* It goes forward
* Minor refactor of attention sparsity and RoPE initialization
* Fixing duplicate rope_scaling param bug when loading from pretrained
---------
Co-authored-by: Sindhu Raghuram <sindhuraghuram@google.com>
Co-authored-by: SindhuRaghuram97 <114270661+SindhuRaghuram97@users.noreply.github.com>
* Normalizing on altup_num_inputs config option
* regenerating modeling file after syncing to HEAD
* Use torch.std(..., unbiased=False) for activation sparsity (#8)
* Refactoring to a single QVK Norm (#13)
* AltUp: support scale_corrected_output (#14)
* Converts einsums to nn.Linear (#7)
* Converts einsums to nn.Linear
* Removing unused variables
* Aligning SharedKVCache with HybridCache (#11)
* Aligning SharedKVStore with HybridCache
* Remove KVStore. Refactor apply_rotary_pos_emb for sharing
* Addressing review comments
* Supporting split modality embeddings in Gemma3n (#10)
* Adding the Embedder class
* Update modular
Co-authored-by: Ryan Mullins <ryan@ryanmullins.org>
* Update modular
Co-authored-by: Ryan Mullins <ryan@ryanmullins.org>
* Update modular
Co-authored-by: Ryan Mullins <ryan@ryanmullins.org>
* Update modular
Co-authored-by: Ryan Mullins <ryan@ryanmullins.org>
* Update modular
Co-authored-by: Ryan Mullins <ryan@ryanmullins.org>
* Update modular
Co-authored-by: Ryan Mullins <ryan@ryanmullins.org>
* Addressing review comments, adding audio embedding layers, integrating embedder with the remaining architecture, adding a forward method for conditional generation
* Apply suggestions from code review
Co-authored-by: Ryan Mullins <ryan@ryanmullins.org>
* Update modular
Co-authored-by: Ryan Mullins <ryan@ryanmullins.org>
* Addressing review comments, prop drilling audio and vision configs to the text config
* Removing TODO's that have been addressed
* Simplify Embedder init and add audio embeddings
* Embeddings refactor. Adds Gemma3nAudioEmbedder and Gemma3nVisionEmbedder
* Refactoring vision and audio embeddings into ConditionalGeneration model
---------
Co-authored-by: Ryan Mullins <ryan@ryanmullins.org>
Co-authored-by: Ryan Mullins <ryanmullins@google.com>
* Updating attention mask for Gemma 3.5 (#15)
* xxx_token_index to xxx_token_id
* removing deprecated last_cache_position
* Removing references to SigLIP
* Always init per-layer inputs
* Using torch.finfo().min for epsilon_tensor
* Gemma3nDecoderLayer inherits from Gemma3DecoderLayer. Remove gating lambdas
* fix modular GEMMA3N_INPUTS_DOCSTRING
* Gemma3nAttention inherits from Gemma3Attention
* Modular inheritance fixes
* CausalLM conversion script for 4B model (#16)
* Add Gemma3n Audio Encoder (#6)
* initial commit of Gemma 3.5 scaffold
* Fixing param pass through on Gemm3nRMSNorm
* Adds Einsum layer to Gemma 3.5
* Updating EinsumLayer API
* Undoing erroneous force push
* Reverting RMSNorm to with_scale by default
* Adds LAuReL to Gemma 3n
* Adds AltUp to Gemma 3n
* Adding Gemma3n overall and text config with vision and audio config placeholders (#3)
* Adding gemma3n text configs
* Adding audio config placeholders
* Adding a placeholder for vision configs
* Updating MobileNetVisionConfig, inheriting TimmWrapperConfig
* Updating text configs
* Update modular
Co-authored-by: Ryan Mullins <ryanmullins@google.com>
* Removing altup configs to accept the suggested configs
* Update modular
Co-authored-by: Ryan Mullins <ryanmullins@google.com>
* Updating altup config
* Update modular
Co-authored-by: Ryan Mullins <ryanmullins@google.com>
* Update modular
Co-authored-by: Ryan Mullins <ryanmullins@google.com>
* Update modular
Co-authored-by: Ryan Mullins <ryanmullins@google.com>
* Update modular
Co-authored-by: Ryan Mullins <ryanmullins@google.com>
* Addressing review comments and updating text configs
* Adding a config for activation sparsity
* Updating configs to pass through options to super class init and adjust some name prefixes
* Updating laurel and altup with corrected config values
* Normalizing sub_config initializers
---------
Co-authored-by: Ryan Mullins <ryanmullins@google.com>
* Updating MLP with activation sparsity (#2)
* Updating DecoderBlock for Gemma 3.5 (#3)
* Initial Gemm3nTextModel (#4)
NOTE: This implementation WILL CHANGE in the coming weeks, however, changes will be strictly additive and this will remain a suitable baseline for downstream implementations to reference.
* Adding KV Cache Sharing
* Adds Einsum layer to Gemma 3.5
* Updating EinsumLayer API
* Refactored kv cache sharing in attention
* Adding KVStore for cache sharing
* Update modular
Co-authored-by: Ryan Mullins <ryanmullins@google.com>
* Update modular
Co-authored-by: Ryan Mullins <ryanmullins@google.com>
* Update modular
Co-authored-by: Ryan Mullins <ryanmullins@google.com>
* Update src/transformers/cache_utils.py
Co-authored-by: Ryan Mullins <ryanmullins@google.com>
* Undoing erroneous force push
* Reverting RMSNorm to with_scale by default
* Adds LAuReL to Gemma 3n
* Updating KV Cache Sharing implementation
* Updating the q and k norm definitions in the attention module
* Fixing name error for q,k,v RMS norm to use the right Gemma 3n module
* Updating MLP with activation sparsity
* Updating DecoderBlock for Gemma 3.5
* Updating kv cache sharing implementation with the use of a cache buffer and refactoring some lines of code
* Isolating KV Cache logic to relevant components
* Fixing logic error in Gemma3nAttention.forward
* Refactoring caching contributions and fixing kv_store initialization
* Simplifying Configs
* Remove errant self from super init call
* Bug fix in the Attention module - changing self.head_dim to config.head_dim
* Bug fixes in the LaurelBlock and RMS Norm super init call
* removing redundant code from a merge
* Adding per_layer_inputs to TextModel
* Adding preprocess embeddings with altup
* Adds per-layer-to-single output and a host of TODOs
* Integrating altup predict with the model workflow and other minor bug fixes
* Using nn.Embedding temporarily for text model
* It goes forward
* Minor refactor of attention sparsity and RoPE initialization
* Fixing duplicate rope_scaling param bug when loading from pretrained
---------
Co-authored-by: Sindhu Raghuram <sindhuraghuram@google.com>
Co-authored-by: SindhuRaghuram97 <114270661+SindhuRaghuram97@users.noreply.github.com>
* Normalizing on altup_num_inputs config option
* Adding audio encoder config
* Adds high-level components for Audio Encoder
* Implement uniform reducer for Audio Encoder
* Adding placeholders for Conformer components in Audio Encoder
* Adding placeholders for SubSampleConvProjection components in Audio Encoder
* Adding SequenceLayer component placeholders
* Implementing Gemma3nAudioEncoder with nn.Sequential
* Implementing Gemma3nAudioSubSampleConvProjection with nn.Sequential
* Implementing Conformer model with SequenceLayers
* Use OrderedDict in nn.Sequential initializers
* Implements sl.Residual in Torch with nn.Sequential and OrderedDict
* Adopting a base SequenceLayer class with default forward() method
* Implementing sl.GatedLinearUnit in Torch
* Implementing sl.Swish in Torch
* Implementing sl.ReLU in Torch
* Implementing sl.Scale in Torch
* Removing sl.Dropout after tree-shaking
* Implementing sl.RMSNorm in Torch with fake shape
* Implementing sl.GroupNorm in Torch
* Implementing sl.Conv2d in Torch
* Implementing sl.Dense in Torch
* Removing sl.Delay layers, which act as pass-throughs
* Connecting shapes to configs in initializers
* Removing sl.Emit
* Implementing sl.ExpandDims in Torch
* Adding sl.GradientClipping to Torch
* Implementing sl.DenseShaped in Torch
* Implementing sl.LDPA in Torch
* Removing unused sl.CombinedQKVProj class
* Fixing erroneous type hint
* Implementing sl.DepthwiseConv1D in Torch
* Implementing sl.MaskInvalid in Torch
* Fixes for initialization
* Fixes for saving weights
* Removing einsums per feedback from HF staff
* Removing Sequence Layers idioms from audio encoder
* Fixes for reviewer comments
* CausalLM conversion script for 4B model
* inv_timescales to non-persistent buffer
* Addressing audio encoder Attention feedback
* Addressing Gemma3nAudioSSCPConvBlock feedback
* Addressing Gemma3nAudioConformerAttention feedback
* Addressing padding feedback
* Weights conversion loads audio state dict
* Always use vision_config so saving works
* Token id updates for configs
* Stubs for interleaving audio embs
* Addressing reviewer feedback
---------
Co-authored-by: SindhuRaghuram97 <114270661+SindhuRaghuram97@users.noreply.github.com>
Co-authored-by: Sindhu Raghuram <sindhuraghuram@google.com>
* Fixing cache access error
* Removing duplicate code from a bad merge
* Gemma 3n Text + Vision Part 1 (#17)
* testing utilities for numerics comparisons
* Corrected einsum to nn.Linear weights conversion
* Inherit scaled word embs from Gemma3 not Bart
* Fixing transposes for collapsed linears
* More transpose fixes
* numpy api fix
* RMSNorm: Explicit kwargs, scale_shift=0.0 when with_scale=True
* Force AltUp to float32
* Updating debugging script for AudioEncoder debugging
* Support divide_weight_by_sqrt_fan_in from JAX for per-layer inputs
* Correcting attention einsum conversions
* RMSNorm in dtype of x
* Fixing duplicate laurel norm/gating
* KV sharing using the right previous indices
* Refactor kv shared index computation. Correct frac_shared_layers
* Use num_shared_layers instead of inferring from a fraction
* fixing a bug for logging
* Fix shared data_ptrs in altup inits
* rope: adjust proj -> norm -> rope to preserve computation (#20)
* rope: adjust proj -> norm -> rope to preserve computation
* Removing some breaking language model fluff in ConditionalGeneration
* Consolidate query_states transforms
---------
Co-authored-by: Douglas Reid <21148125+douglas-reid@users.noreply.github.com>
Co-authored-by: Ryan Mullins <ryanmullins@google.com>
* Vectorize the loops in AltUp (#19)
* Vectorize the loops in AltUp
* fix typo
* Expanding to support batched inputs
* remove extra debug script
* Fix AltUp.forward
---------
Co-authored-by: Ryan Mullins <ryanmullins@google.com>
* Add 'scale_shift=0.0, with_scale=True' to the final norm in TextModel
* Convert norm to 1/sqrt (#21)
* Convert norm to 1/sqrt
* Scale shift change per Phil's rec
* Adding default activation sparsity
* Fixing 2B config in weights conversion script
* Fixing RMSNorm parameters - adding scale_shift and with_scale
* Correcting query pre-attention scaling
* Adding query_rescale_scalar to text config
* Adding layer_idx to MLP
* Permafix for input_layernorm
* Use 1/sqrt instead of rsqrt in DecoderLayer
* Fix o_proj conversion
* Conversion script update for vision encoder
* Removing logging for debugging timm model
* Fixing bugs in Gemma3nForConditionalGeneration for text generation
* Generating the modeling_gemma3n.py file
* Removing the addition of an erroneous line in the modeling file
* Adding gemma3n text model to modeling_auto
* Bugfix: Updating the interleaving of inputs_embeds and vision_embeds
* Updating the modeling file with the latest bugfix changes
* Updating models/auto for Gemma 3n
* using AutoTokenizer in forward test
* Adding processing_gemma3n.py
* Gemma 3n configured for AutoModel. Conversion script updated.
* Removing errant merge artifacts
---------
Co-authored-by: Mayank Chaturvedi <imayank@google.com>
Co-authored-by: Douglas Reid <douglas-reid@users.noreply.github.com>
Co-authored-by: Douglas Reid <21148125+douglas-reid@users.noreply.github.com>
Co-authored-by: Xuan-Son Nguyen <thichthat@gmail.com>
Co-authored-by: Sindhu Raghuram <sindhuraghuram@google.com>
* Removing errant debugging statements from Gemma 3
* Gemma3n audio model (#18)
* testing utilities for numerics comparisons
* Implement CumulativeGroupNorm and add to SubSampleConvProjection and SSCPConvBlock
* Add audio version of forward script based on RyanMullins' implementation
* Updating to match encoder tests. WIP: config question needs resolving
* Updates to audio classes to enable end-to-end running
* Removing vestigial classes, cleaning up print statements
* Adding SiLU / Swish to audio conformer feed forward block
* Shifted Gemma3p5Audio naming prefix to Gemma3NanoAudio
* Adding outputs to audio test
* Fixes to padding in SSCP and 1D convolution, align RMS Norm with wider model
* Update forward test to load from local weights
* Update conversion to process / output audio layers
* Update __all__ to export audio encoder
* AutoModel registration for Gemma 3n Audio
* Use AutoModel for ConditionalGeneration.audio_tower
* Fixing input_proj_linear transpose
* Fixing Gemma3NanoAudioConformerAttention.post conversion
* Fixing Gemma3NanoAudioSSCPConvBlock.conv weights conversion
* Correcting indentation issue on Gemma3p5RMSNorm
---------
Co-authored-by: Ryan Mullins <ryanmullins@google.com>
* Text + Vision Part 2 (#23)
* Updates for ConditionalGeneration.get_image_features
* Adding a WIP draft of image_processing_gemma3p5.py
* Update src/transformers/models/gemma3p5/modular_gemma3p5.py
Co-authored-by: SindhuRaghuram97 <114270661+SindhuRaghuram97@users.noreply.github.com>
* Modular conversion after github suggested change
* Text + image gives good results
* Fixing image size preset
* Updating configs for the 2B variant in the conversion script
* Using final generation config in conversion script
---------
Co-authored-by: Sindhu Raghuram <sindhuraghuram@google.com>
Co-authored-by: SindhuRaghuram97 <114270661+SindhuRaghuram97@users.noreply.github.com>
* Audio Integration (#12)
* initial commit of Gemma 3n scaffold
* Fixing param pass through on Gemm3nRMSNorm
* Adds Einsum layer to Gemma 3n
* Updating EinsumLayer API
* Undoing erroneous force push
* Reverting RMSNorm to with_scale by default
* Adds LAuReL to Gemma 3n
* Adds AltUp to Gemma 3n
* Adding Gemma 3n overall and text config with vision and audio config placeholders (#3)
* Adding Gemma 3n text configs
* Adding audio config placeholders
* Adding a placeholder for vision configs
* Updating MobileNetVisionConfig, inheriting TimmWrapperConfig
* Updating text configs
* Update modular
Co-authored-by: Ryan Mullins <ryanmullins@google.com>
* Removing altup configs to accept the suggested configs
* Update modular
Co-authored-by: Ryan Mullins <ryanmullins@google.com>
* Updating altup config
* Update modular
Co-authored-by: Ryan Mullins <ryanmullins@google.com>
* Update modular
Co-authored-by: Ryan Mullins <ryanmullins@google.com>
* Update modular
Co-authored-by: Ryan Mullins <ryanmullins@google.com>
* Update modular
Co-authored-by: Ryan Mullins <ryanmullins@google.com>
* Addressing review comments and updating text configs
* Adding a config for activation sparsity
* Updating configs to pass through options to super class init and adjust some name prefixes
* Updating laurel and altup with corrected config values
* Normalizing sub_config initializers
---------
Co-authored-by: Ryan Mullins <ryanmullins@google.com>
* Updating MLP with activation sparsity (#2)
* Updating DecoderBlock for Gemma 3n (#3)
* Initial Gemma3nTextModel (#4)
NOTE: This implementation WILL CHANGE in the coming weeks, however, changes will be strictly additive and this will remain a suitable baseline for downstream implementations to reference.
* Adding KV Cache Sharing
* Adds Einsum layer to Gemma 3n
* Updating EinsumLayer API
* Refactored kv cache sharing in attention
* Adding KVStore for cache sharing
* Update modular
Co-authored-by: Ryan Mullins <ryanmullins@google.com>
* Update modular
Co-authored-by: Ryan Mullins <ryanmullins@google.com>
* Update modular
Co-authored-by: Ryan Mullins <ryanmullins@google.com>
* Update src/transformers/cache_utils.py
Co-authored-by: Ryan Mullins <ryanmullins@google.com>
* Undoing erroneous force push
* Reverting RMSNorm to with_scale by default
* Adds LAuReL to Gemma 3n
* Updating KV Cache Sharing implementation
* Updating the q and k norm definitions in the attention module
* Fixing name error for q,k,v RMS norm to use the right 3n module
* Updating MLP with activation sparsity
* Updating DecoderBlock for Gemma 3n
* Updating kv cache sharing implementation with the use of a cache buffer and refactoring some lines of code
* Isolating KV Cache logic to relevant components
* Fixing logic error in Gemma3nAttention.forward
* Refactoring caching contributions and fixing kv_store initialization
* Simplifying Configs
* Remove errant self from super init call
* Bug fix in the Attention module - changing self.head_dim to config.head_dim
* Bug fixes in the LaurelBlock and RMS Norm super init call
* removing redundant code from a merge
* Adding per_layer_inputs to TextModel
* Adding preprocess embeddings with altup
* Adds per-layer-to-single output and a host of TODOs
* Integrating altup predict with the model workflow and other minor bug fixes
* Using nn.Embedding temporarily for text model
* It goes forward
* Minor refactor of attention sparsity and RoPE initialization
* Fixing duplicate rope_scaling param bug when loading from pretrained
---------
Co-authored-by: Sindhu Raghuram <sindhuraghuram@google.com>
Co-authored-by: SindhuRaghuram97 <114270661+SindhuRaghuram97@users.noreply.github.com>
* Normalizing on altup_num_inputs config option
* Adding audio encoder config
* Adds high-level components for Audio Encoder
* Implement uniform reducer for Audio Encoder
* Adding placeholders for Conformer components in Audio Encoder
* Adding placeholders for SubSampleConvProjection components in Audio Encoder
* Adding SequenceLayer component placeholders
* Implementing Gemma3nAudioEncoder with nn.Sequential
* Implementing Gemma3nAudioSubSampleConvProjection with nn.Sequential
* Implementing Conformer model with SequenceLayers
* Use OrderedDict in nn.Sequential initializers
* Implements sl.Residual in Torch with nn.Sequential and OrderedDict
* Adopting a base SequenceLayer class with default forward() method
* Implementing sl.GatedLinearUnit in Torch
* Implementing sl.Swish in Torch
* Implementing sl.ReLU in Torch
* Implementing sl.Scale in Torch
* Removing sl.Dropout after tree-shaking
* Implementing sl.RMSNorm in Torch with fake shape
* Implementing sl.GroupNorm in Torch
* Implementing sl.Conv2d in Torch
* Implementing sl.Dense in Torch
* Removing sl.Delay layers, which act as pass-throughs
* Connecting shapes to configs in initializers
* Removing sl.Emit
* Implementing sl.ExpandDims in Torch
* Adding sl.GradientClipping to Torch
* Implementing sl.DenseShaped in Torch
* Implementing sl.LDPA in Torch
* Removing unused sl.CombinedQKVProj class
* Fixing erroneous type hint
* Implementing sl.DepthwiseConv1D in Torch
* Implementing sl.MaskInvalid in Torch
* Fixes for initialization
* Fixes for saving weights
* Removing einsums per feedback from HF staff
* Removing Sequence Layers idioms from audio encoder
* Fixes for reviewer comments
* Converting sl.Frontend to FeatureExtractor
* Updates for ConditionalGeneration.get_image_features
* Adding a WIP draft of image_processing_gemma3n.py
* Update modular
Co-authored-by: SindhuRaghuram97 <114270661+SindhuRaghuram97@users.noreply.github.com>
* Modular conversion after github suggested change
* Text + image gives good results
* Fixing image size preset
* Draft of audio data in chat template
* Removing image processing. Using SigLIP instead.
* Audio input going end-to-end
* Fixing dtype issues in audio encoder
* x-lib formatting consistency
* Adding example data
* Save preprocessor_config.json from conversion script
* Instrumentation for debugging
* Additional instrumentation for preprocessing debugging
* Updates to preprocessor, padding; produces correct end-to-end results on sample
* Tackling configuration TODOs
* Start of feature extractor refactor
* Adds Numpy version of USM extractor, removes Torch version and dependencies
* Fixing AltUp.correct coef permute
* Supporting batches of single audio segment inputs
* Docstrings updates for config
* In-lining audio feature extraction
* Adjustments to conversion script and smoke test script
---------
Co-authored-by: SindhuRaghuram97 <114270661+SindhuRaghuram97@users.noreply.github.com>
Co-authored-by: Sindhu Raghuram <sindhuraghuram@google.com>
Co-authored-by: pculliton <phillipculliton@gmail.com>
* Gemma 3n renaming
* Removing test data and utilities
* Renaming test files
* Gemma 3n refactor
* Fix tokenizer config in conversion script
* Address reviewer feedback
* FeatureExtractor returns float32 by default
* Adding basic tests for audio, and input name for audio encoder
* Audio integration test, updates to model_id for other integration tests
* Use scales for q and k norms (#26)
* Update audio integration test to use HF dataset
* Reviewer feedback
* Expand embedding table to full vocab size in weights conversion
* Mix-n-match MatFormers for Gemma 3n (#25)
* Remove in-place operations (#30)
* chore: removing inplace ops
* remove [tensor] * n pattern
* chore: reviewer feedback in AudioEncoder and AltUp
* More grad clipping
* Dynamo compatibility
* fix: cache slicing error
* chore: simplify shared kv cache slicing
* chore: vision encoder rename in timm
* fix: image processor do_normalize=False
* fixup: style
* chore: model_doc
* fix: docs for code quality
* chore: repo consistency
* fix: RMSNorm in float as in prior Gemmas
* fix: per_layer_inputs = None
* chore: Gemma3nForCausalLM from Gemma3nForConditionalGeneration checkpoint
* chore: repo consistency
* Add initial unit tests for Gemma3nAudioFeatureExtractor (#27)
* Add initial unit tests for Gemma3nAudioFeatureExtractor
* Add basic unit tests for Gemma3nProcessor (#28)
Co-authored-by: Douglas Reid <21148125+douglas-reid@users.noreply.github.com>
* parameterize tests
---------
Co-authored-by: Douglas Reid <21148125+douglas-reid@users.noreply.github.com>
* chore: code style
* fix: test cases
* style and consistency
* fix config in the test to be coherent with layer cache sharing
* fix hidden states in tests and code
* inits and mappings
* fix modality prefixes
* test order and prefixes
* fix test exception
* fix class order and reduce model size for faster tests
* restore _checkpoint_conversion_mapping to load Causal from Conditional
* fix config mapping!
* fix: reviewer feedback
---------
Co-authored-by: SindhuRaghuram97 <114270661+SindhuRaghuram97@users.noreply.github.com>
Co-authored-by: Sindhu Raghuram <sindhuraghuram@google.com>
Co-authored-by: raushan <raushan@huggingface.co>
Co-authored-by: Mayank Chaturvedi <imayank@google.com>
Co-authored-by: Douglas Reid <douglas-reid@users.noreply.github.com>
Co-authored-by: Douglas Reid <21148125+douglas-reid@users.noreply.github.com>
Co-authored-by: Xuan-Son Nguyen <thichthat@gmail.com>
Co-authored-by: pculliton <phillipculliton@gmail.com>
Co-authored-by: Aritra Roy Gosthipaty <aritra.born2fly@gmail.com>
Co-authored-by: Cyril Vallez <cyril.vallez@gmail.com>
* fix import test
* add model args
* auto_docstring
* replace test path
* consistency
* skip tests for now
* fix docstring for doc builder
* skip unused attr
---------
Co-authored-by: SindhuRaghuram97 <114270661+SindhuRaghuram97@users.noreply.github.com>
Co-authored-by: Sindhu Raghuram <sindhuraghuram@google.com>
Co-authored-by: raushan <raushan@huggingface.co>
Co-authored-by: Mayank Chaturvedi <imayank@google.com>
Co-authored-by: Douglas Reid <douglas-reid@users.noreply.github.com>
Co-authored-by: Douglas Reid <21148125+douglas-reid@users.noreply.github.com>
Co-authored-by: Xuan-Son Nguyen <thichthat@gmail.com>
Co-authored-by: pculliton <phillipculliton@gmail.com>
Co-authored-by: Aritra Roy Gosthipaty <aritra.born2fly@gmail.com>
Co-authored-by: Cyril Vallez <cyril.vallez@gmail.com>
Co-authored-by: Arthur <arthur.zucker@gmail.com>
* rm tf/flax tests
* more flax deletions
* revert fixture change
* reverted test that should not be deleted; rm tf/flax test
* revert
* fix a few add-model-like tests
* fix add-model-like checkpoint source
* a few more
* test_get_model_files_only_pt fix
* fix test_retrieve_info_for_model_with_xxx
* fix test_retrieve_model_classes
* relative paths are the devil
* add todo
* handle long form generation
* add warning
* correct incorrect in place token change
* update test to catch edge case
* make style
* update warning
* add doc
* Image processor compile fix (#38540)
* Added a compile-friendly version of resize to BaseImgProcessorFast
* Changed qwen2 processor to use its parent class .resize
* Style
* underlined that the issue only happens on AMD, with a comment and a bool check
* Fixed some utils functions
* Fixed the same issue for bridgetower
* Fixed the same issue for llava_next
* Repo consistency for llava onevision
* Update src/transformers/image_processing_utils_fast.py
Co-authored-by: Mohit Sharma <mohit21sharma.ms@gmail.com>
---------
Co-authored-by: Mohit Sharma <mohit21sharma.ms@gmail.com>
* Added an Expectation to an internvl test
* Made qwen2_vl use the resize method of its parent class
* Changed to torch.where
---------
Co-authored-by: Mohit Sharma <mohit21sharma.ms@gmail.com>
* add dia model
* add tokenizer files
* cleanup some stuff
* brut copy paste code
* rough cleanup of the modeling code
* nuke some stuff
* more nuking
* more cleanups
* updates
* add multiLayerEmbedding vectorization
* nits
* more modeling simplifications
* updates
* update rope
* update rope
* just fixup
* update configuration files
* more cleanup!
* default config values
* update
* forgotten comma
* another comma!
* update, more cleanups
* just more nits
* more config cleanups
* time for the encoder
* fix
* small nit
* nits
* n
* refacto a bit
* cleanup
* update cv script
* fix last issues
* fix last nits
* styling
* small fixes
* just run 1 generation
* fixes
* nits
* fix conversion
* fix
* more fixes
* full generate
* phew!
* fixes!
* updates
* fix
* fix cvrt
* fixup
* nits
* delete wrong test
* update
* update
* test tokenization
* let's start changing things bit by bit - fix encoder step
* removing custom generation, moving to GenerationMixin
* add encoder decoder attention masks for generation
* mask changes, correctness checked against ad29837 in dia repo
* refactor a bit already --> next cache
* too important not to push :)
* minimal cleanup + more todos
* make main overwrite modeling utils
* add cfg filter & eos filter
* add eos countdown & delay pattern
* update eos countdown
* add max step eos countdown
* fix tests
* fix some things
* fix generation with testing
* move cfg & eos stuff to logits processor
* make RepetitionPenaltyLogitsProcessor flexible
- can accept 3D scores like (batch_size, channel, vocab)
* fix input_ids concatenation dimension in GenerationMixin for flexibility
* Add DiaHangoverLogitsProcessor and DiaExponentialDecayLengthPenalty classes; refactor logits processing in DiaForConditionalGeneration to utilize new configurations and improve flexibility.
* Add stopping criteria
* refactor
* move delay pattern from processor to modeling like musicgen.
- add docs
- change eos countdown to eos delay pattern
* fix processor & fix tests
* refactor types
* refactor imports
* format code
* fix docstring to pass ci
* add docstring to DiaConfig & add DiaModel to test
* fix docstring
* add docstring
* fix some bugs
* check
* porting / merging results from other branch - IMPORTANT: it very likely breaks generation, the goal is to have a proper forward path first
* experimental testing of left padding for first channel
* whoops
* Fix merge to make generation work
* fix cfg filter
* add position ids
* add todos, break things
* revert changes to generation --> we will force 2d but go 3d on custom stuff
* refactor a lot, change prepare decoder ids to work with left padding (needs testing), add todos
* some first fixes to get to 10. in generation
* some more generation fixes / adjustment
* style + rope fixes
* move cfg out, simplify a few things, more todos
* nit
* start working on custom logit processors
* nit
* quick fixes
* cfg top k
* more refactor of logits processing, needs a decision if gen config gets the new attributes or if we move it to config or similar
* lets keep changes to core code minimal, only eos scaling is questionable atm
* simpler eos delay logits processor
* that was for debugging :D
* proof of concept rope
* small fix on device mismatch
* cfg fixes + delay logits max len
* transformers rope
* modular dia
* more cleanup
* keep modeling consistently 3D, generate handles 2D internally
* decoder starts with bos if nothing
* post processing prototype
* style
* lol
* force sample / greedy + fixes on padding
* style
* fixup tokenization
* nits
* revert
* start working on dia tests
* fix a lot of tests
* more test fixes
* nit
* more test fixes + some features to simplify code more
* more cleanup
* forgot that one
* autodocs
* small consistency fixes
* fix regression
* small fixes
* dia feature extraction
* docs
* wip processor
* fix processor order
* processing goes brrr
* transpose before
* small fix
* fix major bug, but now needs a closer look into the custom processors, esp. cfg
* small thing on logits
* nits
* simplify indices and shifts
* add simpler version of padding tests back (temporarily)
* add logit processor tests
* starting tests on processor
* fix mask application during generation
* some fixes on the weights conversion
* style + fixup logits order
* simplify conversion
* nit
* remove padding tests
* nits on modeling
* hmm
* fix tests
* trigger
* probably gonna be reverted, just a quick design around audio tokenizer
* fixup typing
* post merge + more typing
* initial design for audio tokenizer
* more design changes
* nit
* more processor tests and style related things
* add to init
* protect import
* not sure why tbh
* add another protect
* more fixes
* wow
* it aint stopping :D
* another missed type issue
* ...
* change design around audio tokenizer to prioritize init and go for auto - in regards to the review
* change to new causal mask function + docstrings
* change ternary
* docs
* remove todo, i dont think its essential tbh
* remove pipeline as current pipelines do not fit in the current scheme, same as csm
* closer to wrapping up the processor
* text to audio, just for demo purposes (will likely be reverted)
* check if it's this
* save audio function
* ensure no grad
* fixes on prefixed audio, hop length is used via preprocess dac, device fixes
* integration tests (tested locally on a100) + some processor utils / fixes
* style
* nits
* another round of smaller things
* docs + some fixes (generate one might be big)
* mystery solved
* small fix on conversion
* add abstract audio tokenizer, change init check to abstract class
* nits
* update docs + fix some processing :D
* change inheritance scheme for audio tokenizer
* delete dead / unnecessary code in copied generate loop
* last nits on new pipeline behavior (+ todo on tests) + style
* trigger
---------
Co-authored-by: Arthur Zucker <arthur.zucker@gmail.com>
Co-authored-by: Arthur <48595927+ArthurZucker@users.noreply.github.com>
Co-authored-by: Vasqu <antonprogamer@gmail.com>
* remove trust_remote_code
* again
* Revert "Skip some tests for now (#38931)"
This reverts commit 31d30b72245aacfdf70249165964b53790d9c4d8.
* again
* style
* again
* again
* style
* fix integration test
* fix tests
* style
* fix
* fix
* fix the last ones
* style
* last one
* fix last
* fix
---------
Co-authored-by: ydshieh <ydshieh@users.noreply.github.com>
* fix: astronomical loss with ModernBERT when using gradient checkpointing
* update the modling fix
---------
Co-authored-by: Arthur <arthur.zucker@gmail.com>
* Support `flash_attn_3`
Implements fwd and tests for Flash Attention 3 https://github.com/Dao-AILab/flash-attention/commits/main/hopper
- Includes checks for dropout>0 and ALiBi in `modeling_utils.PreTrainedModel._check_and_enable_flash_attn_3` (dropout will likely be supported soon, so this check will need to be updated, as will `modeling_flash_attention_utils._flash_attention_forward` at the `if _IS_FLASH_ATTN_3_AVAILABLE: ...` branch)
An example Llama implementation is included in `modeling_llama.py` but other models would still need to be updated
Based on https://github.com/huggingface/transformers/pull/36190 which has model implementations and examples which could be merged
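A hedged usage sketch; the `attn_implementation` string here follows the existing `flash_attention_2` convention and is an assumption, as is the checkpoint, and flash-attn 3 must be installed on a supported GPU:

```python
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-3.1-8B",                # illustrative checkpoint
    attn_implementation="flash_attention_3",  # assumed name, mirroring flash_attention_2
    torch_dtype="bfloat16",
)
```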
* Add tests for Flash Attention 2 and 3 parity
* ci fix
* FA2 compatibility
- `_prepare_flash_attention_from_position_ids` ->`prepare_fa2_from_position_ids`
- Remove bettertransformer check in Flash Attention 3
- Merge tests
- Add licensing
* ci fix
* Test naming consistency
* ci fix
* Deprecation warning for `prepare_fa2_from_position_ids`
* ci fix
* Initial submit
* Fix bugs:
1. add __init__ file
2. tied word embedding
3. support flash/flex attention
4. model saving and loading
* Code refactor:
* Rename encdecgemma to t5gemma.
* Split attention into self- and cross-attention
* Split stack into encoder and decoder
* Add test cases
* Add auto configuration
* Update configurations.
* Fix bugs related to copy and attribute checks
* Fix type union
* Fix merge errors
* run ruff format
* Run make style and update tests.
* Add t5gemma model doc.
* ruff and style formatting.
* Add missed module config.
* Add dummy checkpoint link to pass tests (needs updating when real checkpoints are uploaded).
* Update model doc.
* Minor updates following Arthur's comments:
* replace docstrings with auto_docstrings
* remove checkpoint layers
* remove deprecate_kwargs
* fix rebase errors
* Fix docstring issues.
* fix t5gemma doc issue.
* run ruff format
* Updates:
* split encoder-only model out
* make t5gemmamodel encoder-decoder only
* update token and sequence classification
* update tests
* don't move the whole video to GPU
* add torchcodec
* add tests
* make style
* instructblip as well
* consistency
* Update src/transformers/utils/import_utils.py
Co-authored-by: Pavel Iakubovskii <qubvel@gmail.com>
* Update src/transformers/utils/import_utils.py
Co-authored-by: Pavel Iakubovskii <qubvel@gmail.com>
* Update src/transformers/video_utils.py
Co-authored-by: Pavel Iakubovskii <qubvel@gmail.com>
---------
Co-authored-by: Pavel Iakubovskii <qubvel@gmail.com>
* Fix graph break in torch.compile when using FA2 with attention_mask=None and batch size > 1
* fix code format
* add test; replace position_ids with query_states because position_ids.shape[0] is always 1
* add assert loss is not nan
* Add zero dim tensor check when using flash_attention
Signed-off-by: ranzhejiang <zhejiang.ran@intel.com>
* Add zero dim tensor check when using flash_attention
Signed-off-by: ranzhejiang <zhejiang.ran@intel.com>
---------
Signed-off-by: ranzhejiang <zhejiang.ran@intel.com>
* Add Hugging Face authentication procedure for IDEs (PyCharm, VS Code, etc.)
* Update quicktour.md
---------
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
* ensure the query is updated during training
avoid unused parameters that DDP does not like
* avoid a crash when `kwargs` contain `padding=True`
trainers often pass this argument automatically
* minor
* Remove mel_spec lazy init, and rename to mel_filters.
this ensures save_pretrained will not crash when saving the processor during training
d5d007a1a0/src/transformers/feature_extraction_utils.py (L595)
* minor - most feature extractors have a `sampling_rate` property
* Add Arcee model support to transformers
- Add ArceeConfig and model mappings for all task types (CausalLM, SequenceClassification, QuestionAnswering, TokenClassification)
- Add auto-loading support through AutoModel, AutoConfig, and AutoTokenizer
- Use LlamaTokenizer for tokenization
- Add FX graph support for Arcee models
- Create lazy loading module structure for Arcee
* feat: update YARN scaling and RoPE validation for Arcee model
* feat: add auto_docstring checkpoint config to Arcee model classes
* docs: add pre-trained model weights reference to Arcee configuration files
* refactor: move RoPE utilities to dedicated modeling_rope_utils module
* Add comprehensive test suite for Arcee model
- Add test_modeling_arcee.py following standard transformers test patterns
- Include tests for all model variants (CausalLM, SequenceClassification, QuestionAnswering, TokenClassification)
- Add specific test for ReLU² activation in ArceeMLP
- Add RoPE scaling tests including YARN support
- Follow CausalLMModelTest pattern used by similar models
* Add documentation for Arcee model
- Add comprehensive model documentation with usage examples
- Include all model variants in autodoc
- Add to table of contents in proper alphabetical order
- Fixes documentation coverage for Arcee model classes
* Make style/fixup
* fix copyright year
* Sync modular conversion
* revert in legacy supported models in src/transformers/utils/fx
* cleaned redundant code in modular_arcee.py
* cleaned testing
* removed pretraining tp
* fix styles
* integration testing
---------
Co-authored-by: Pranav <veldurthipranav@gmail.com>
Co-authored-by: Pranav <56645758+pranav4501@users.noreply.github.com>
* some fixes
* some fixes
* now the pipeline can take list of tokens as input and is_split_into_words argument
* now the pipeline can take list of tokens as input and is_split_into_words argument
* now the pipeline can take list of tokens as input and is_split_into_words argument and we can handle batches of tokenized input
* now the pipeline can take list of tokens as input and is_split_into_words argument and we can handle batches of tokenized input
* solving test problems
* some fixes
* some fixes
* modify tests
* aligning start and end correctly
* adding tests
* some formatting
* some formatting
* some fixes
* some fixes
* some fixes
* resolve conflicts
* removing unimportant lines
* removing unimportant lines
* generalize to other languages
* generalize to other languages
* generalize to other languages
* generalize to other languages
* fix: add __bool__ operator to tokenizer to avoid bloated asserts
When a user does 'assert tokenizer' to ensure that the tokenizer is not None, they inadvertently set off a rather expensive process in the '__len__()' operator. This fix adds a trivial '__bool__()' that returns True, so that a None tokenizer asserts and an actual tokenizer returns True when asserted, without calling length op.
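A minimal sketch of the idea (the toy class below is purely illustrative; the actual change lives in the tokenizer base class):
```python
class MyTokenizer:
    """Toy stand-in for a tokenizer class, only meant to illustrate the fix."""

    def get_vocab(self):
        # Imagine this is expensive (e.g. materializing the full vocabulary).
        return {f"token_{i}": i for i in range(100_000)}

    def __len__(self):
        return len(self.get_vocab())

    def __bool__(self):
        # `assert tokenizer` short-circuits here instead of triggering __len__.
        return True


tokenizer = MyTokenizer()
assert tokenizer  # no longer calls the expensive __len__
```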
* typo
* add working idefics2 fast and improvements for fast nested images processing
* add fast image processors idefics 3 and smolvlm
* cleanup tests
* fix doc idefics2
* PR review and fix issues after merge
* Force providing disable_grouping to group_images_by_shape
* simplify group_images_by_shape
* fix modular
* Fix nits after review
* Fix(time_series): Correct scaler tensor shape in base model
The create_network_inputs function in TimeSeriesTransformerModel
handled the scaler's loc and scale tensors inconsistently.
When input_size=1, the tensors were not squeezed, leading to
downstream dimension errors for models like Informer.
This commit refactors the logic to unconditionally apply .squeeze(1),
which correctly handles all input_size cases and fixes the bug at its source.
Fixes #38745
* Fix(time_series): Correct scaler tensor shape in base model
The create_network_inputs function in TimeSeriesTransformerModel
handled the scaler's loc and scale tensors inconsistently.
When input_size=1, the tensors were not squeezed, leading to
downstream dimension errors for models like Informer.
This commit refactors the logic to unconditionally apply .squeeze(1),
which correctly handles all input_size cases and fixes the bug at its source.
Fixes #38745
---------
Co-authored-by: Kashif Rasul <kashif.rasul@gmail.com>
name: Self-hosted runner scale set (AMD mi325 scheduled CI caller)
# Note: For every job in this workflow, the name of the runner scale set is finalized in the runner yaml i.e. huggingface/hf-workflows/.github/workflows/transformers_amd_ci_scheduled_arc_scale_set.yaml
# For example, 1gpu scale set: amd-mi325-ci-1gpu
# 2gpu scale set: amd-mi325-ci-2gpu
on:
  workflow_run:
    workflows: ["Self-hosted runner (AMD scheduled CI caller)"]
name: Self-hosted runner scale set (AMD mi355 scheduled CI caller)
# Note: For every job in this workflow, the name of the runner scale set is finalized in the runner yaml i.e. huggingface/hf-workflows/.github/workflows/transformers_amd_ci_scheduled_arc_scale_set.yaml
# For example, 1gpu : amd-mi355-ci-1gpu
# 2gpu : amd-mi355-ci-2gpu
on:
  workflow_run:
    workflows: ["Self-hosted runner (AMD scheduled CI caller)"]
- This library is not a modular toolbox of building blocks for neural nets. The code in the model files is not refactored with additional abstractions on purpose, so that researchers can quickly iterate on each of the models without diving into additional abstractions/files.
- The training API is optimized to work with PyTorch models provided by Transformers. For generic machine learning loops, you should use another library like [Accelerate](https://huggingface.co/docs/accelerate).
- The [example scripts]((https://github.com/huggingface/transformers/tree/main/examples)) are only *examples*. They may not necessarily work out-of-the-box on your specific use case and you'll need to adapt the code for it to work.
- The [example scripts](https://github.com/huggingface/transformers/tree/main/examples) are only *examples*. They may not necessarily work out-of-the-box on your specific use case and you'll need to adapt the code for it to work.
## 100 projects using Transformers
@ -280,8 +280,8 @@ Expand each modality below to see a few example models for various use cases.
- Automatic mask generation with [SAM](https://huggingface.co/facebook/sam-vit-base)
- Depth estimation with [DepthPro](https://huggingface.co/apple/DepthPro-hf)
- Image classification with [DINO v2](https://huggingface.co/facebook/dinov2-base)
- Keypoint detection with [SuperGlue](https://huggingface.co/magic-leap-community/superglue_outdoor)
- Keypoint matching with [SuperGlue](https://huggingface.co/magic-leap-community/superglue)
- Keypoint detection with [SuperPoint](https://huggingface.co/magic-leap-community/superpoint)
- Keypoint matching with [SuperGlue](https://huggingface.co/magic-leap-community/superglue_outdoor)
- Object detection with [RT-DETRv2](https://huggingface.co/PekingU/rtdetr_v2_r50vd)
- Pose Estimation with [VitPose](https://huggingface.co/usyd-community/vitpose-base-simple)
- Universal segmentation with [OneFormer](https://huggingface.co/shi-labs/oneformer_ade20k_swin_large)
@ -14,7 +14,7 @@ Models uploaded on the Hugging Face Hub come in different formats. We heavily re
models in the [`safetensors`](https://github.com/huggingface/safetensors) format (which is the default prioritized
by the transformers library), as developed specifically to prevent arbitrary code execution on your system.
To avoid loading models from unsafe formats (e.g. [pickle](https://docs.python.org/3/library/pickle.html)), you should use the `use_safetensors` parameter. If doing so, in the event that no .safetensors file is present, transformers will error when loading the model.
DALL·E Flow is an interactive workflow for generating high-definition images from a text prompt. Itt leverages DALL·E-Mega, GLID-3 XL, and Stable Diffusion to generate image candidates, and then calls CLIP-as-service to rank the candidates w.r.t. the prompt.
DALL·E Flow is an interactive workflow for generating high-definition images from a text prompt. It leverages DALL·E-Mega, GLID-3 XL, and Stable Diffusion to generate image candidates, and then calls CLIP-as-service to rank the candidates w.r.t. the prompt.
The preferred candidate is fed to GLID-3 XL for diffusion, which often enriches the texture and background. Finally, the candidate is upscaled to 1024x1024 via SwinIR.
[underthesea](https://github.com/undertheseanlp/underthesea) is a Vietnamese NLP toolkit. Underthesea is a suite of open source Python modules data sets and tutorials supporting research and development in Vietnamese Natural Language Processing. We provides extremely easy API to quickly apply pretrained NLP models to your Vietnamese text, such as word segmentation, part-of-speech tagging (PoS), named entity recognition (NER), text classification and dependency parsing.
[underthesea](https://github.com/undertheseanlp/underthesea) is a Vietnamese NLP toolkit. Underthesea is a suite of open source Python modules data sets and tutorials supporting research and development in Vietnamese Natural Language Processing. We provide extremely easy API to quickly apply pretrained NLP models to your Vietnamese text, such as word segmentation, part-of-speech tagging (PoS), named entity recognition (NER), text classification and dependency parsing.
# `kernels` may give different outputs (within 1e-5 range) even with the same model (weights) and the same inputs
RUN python3 -m pip uninstall -y kernels
# Uninstall flash-attn installed by autoawq, it causes issues here : https://github.com/huggingface/transformers/actions/runs/15915442841/job/44892146131
RUN python3 -m pip uninstall -y flash-attn
# When installing in editable mode, `transformers` is not recognized as a package.
# this line must be added in order for python to be aware of transformers.
In this guide, we will go over the most effective techniques for improving the efficiency of deploying large language models:
1. **Lower Precision:** Research has shown that operating at reduced numerical precision, namely [8-bit and 4-bit](/main_classes/quantization.md), can achieve computational advantages without a considerable decline in model performance.
1. **Lower Precision:** Research has shown that operating at reduced numerical precision, namely [8-bit and 4-bit](/main_classes/quantization), can achieve computational advantages without a considerable decline in model performance.
2. **Flash Attention:** Flash Attention is a variation of the attention algorithm that not only provides a more memory-efficient approach, but also achieves increased efficiency thanks to optimized GPU memory utilization.
3. **Architectural Innovations:** Since large language models are always deployed in the same way during inference, namely autoregressive text generation with a long input context, specialized model architectures have been proposed that allow for more efficient inference. The most important advancements in model architectures here are [ALiBi](https://huggingface.co/papers/2108.12409), [Rotary embeddings](https://huggingface.co/papers/2104.09864), [Multi-Query Attention (MQA)](https://huggingface.co/papers/1911.02150) and [Grouped-Query Attention (GQA)]((https://huggingface.co/papers/2305.13245)).
3. **Architectural Innovations:** Since large language models are always deployed in the same way during inference, namely autoregressive text generation with a long input context, specialized model architectures have been proposed that allow for more efficient inference. The most important advancements in model architectures here are [ALiBi](https://huggingface.co/papers/2108.12409), [Rotary embeddings](https://huggingface.co/papers/2104.09864), [Multi-Query Attention (MQA)](https://huggingface.co/papers/1911.02150) and [Grouped-Query Attention (GQA)](https://huggingface.co/papers/2305.13245).
Throughout this guide, we offer an analysis of autoregressive generation from a tensor perspective. We dive into the pros and cons of adopting lower precision, present a comprehensive exploration of the latest attention algorithms, and discuss improved LLM architectures. We support the explanation with practical examples that highlight each optimization.
Before sharing a model to the Hub, you will need your Hugging Face account credentials. If you are using a terminal, run the following command in the virtual environment where 🤗 Transformers is installed. This will store your access token in your Hugging Face cache folder (`~/.cache/` by default):
```bash
huggingface-cli login
hf auth login
```
If you are using a notebook like Jupyter or Colaboratory, make sure you have the [`huggingface_hub`](https://huggingface.co/docs/hub/adding-a-library) library installed. This library allows you to interact with the Hub programmatically.
@ -56,7 +56,7 @@ Files can also be easily edited in a repository, and you can
Before you share a model to the Hub, you will need your Hugging Face credentials. If you have access to a terminal, run the following command in the virtual environment where 🤗 Transformers is installed. This will store your access credentials in your Hugging Face cache folder (`~/.cache/` by default):
```bash
huggingface-cli login
hf auth login
```
If you are using a notebook like Jupyter or Colaboratory, make sure you have the [`huggingface_hub`](https://huggingface.co/docs/hub/adding-a-library) library installed. This library allows you to interact with the Hub programmatically.
All scripts can upload your final model to the [Model Hub](https://huggingface.co/models). Make sure you are logged in to Hugging Face before you begin:
```bash
huggingface-cli login
hf auth login
```
Then add the `push_to_hub` argument to the script. This argument will create a repository with your Hugging Face username and the folder name specified in `output_dir`.
@ -13,7 +13,7 @@ rendered properly in your Markdown viewer.
-->
# Adding a new model to Transformers
# Legacy model contribution
> [!TIP]
> Try adding new models with a more [modular](./modular_transformers) approach first. This makes it significantly easier to contribute a model to Transformers!
@ -14,5 +14,9 @@ rendered properly in your Markdown viewer.
-->
# Agents
(deprecated)
> [!WARNING]
> Agents and tools were spun out into the standalone [smolagents](https://huggingface.co/docs/smolagents/index) library. They were removed from `transformers` in v4.52.
and it will stop printing the statements, as it now uses the `sdpa` attention.
This allows you to quickly change the attention function without needing to reload the model!
## Different attention per backbone in multimodal models
For multimodal models different attention functions may work better for each backbone module. For example, some vision backbones perform better in fp32, but are incompatible with FlashAttention. To continue using FlashAttention while keeping the vision encoder in fp32, create a dict and map each config to an attention implementation as shown below.
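A hedged sketch of that pattern (the checkpoint and the exact sub-config keys are assumptions based on a typical LLaVA-style configuration):
```python
from transformers import AutoModelForImageTextToText

# Map each sub-config to its own attention implementation:
# FlashAttention 2 for the language model, SDPA for the vision encoder.
model = AutoModelForImageTextToText.from_pretrained(
    "llava-hf/llava-1.5-7b-hf",
    attn_implementation={"text_config": "flash_attention_2", "vision_config": "sdpa"},
)
```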
@ -14,43 +14,26 @@ rendered properly in your Markdown viewer.
-->
# Utilizing the @auto_docstring Decorator
# Documenting a model
The `@auto_docstring` decorator in the Hugging Face Transformers library helps generate docstrings for model classes and their methods, which will be used to build the documentation for the library. It aims to improve consistency and reduce boilerplate by automatically including standard argument descriptions and allowing for targeted overrides and additions.
The `@auto_docstring` decorator in Transformers generates consistent docstrings for model classes and their methods. It reduces boilerplate by automatically including standard argument descriptions while also allowing overrides to add new or custom arguments. [Contributing a new model](./modular_transformers) is easier because you don't need to manually add the standard docstrings and can focus only on documenting new arguments.
---
This guide describes how to use the `@auto_docstring` decorator and how it works.
## 📜 How it Works
## @auto_docstring
The `@auto_docstring` decorator constructs docstrings by:
1. **Signature Inspection:** It inspects the signature (arguments, types, defaults) of the decorated class's `__init__` method or the decorated function.
2. **Centralized Docstring Fetching:** It retrieves predefined docstrings for common arguments (e.g., `input_ids`, `attention_mask`) from internal library sources (like `ModelArgs` or `ImageProcessorArgs` in `utils/args_doc.py`).
3. **Overriding or Adding Argument Descriptions:**
    * **Direct Docstring Block:** It incorporates custom docstring content from an `r""" """` (or `""" """`) block below the method signature or within the `__init__` docstring. This is for documenting new arguments or overriding standard descriptions.
    * **Decorator Arguments (`custom_args`):** A `custom_args` docstring block can be passed to the decorator to provide docstrings for specific arguments directly in the decorator call. This can be used to define the docstring block for new arguments once if they are repeated in multiple places in the modeling file.
4. **Adding Class and Function Introductions:**
    * **`custom_intro` argument:** Allows prepending a custom introductory paragraph to a class or function docstring.
    * **Automatic Introduction Generation:** For model classes with standard naming patterns (like `ModelForCausalLM`) or belonging to a pipeline, the decorator automatically generates an appropriate introductory paragraph using `ClassDocstring` in `utils/args_doc.py` as the source.
5. **Templating:** The decorator uses a templating system, allowing predefined docstrings to include dynamic information deduced from the `auto_modules` of the library, such as `{{processor_class}}` or `{{config_class}}`.
6. **Deducing Relevant Examples:** The decorator attempts to find appropriate usage examples based on the model's task or pipeline compatibility. It extracts checkpoint information from the model's configuration class to provide concrete examples with real model identifiers.
7. **Adding Return Value Documentation:** For methods like `forward`, the decorator can automatically generate the "Returns" section based on the method's return type annotation. For example, for a method returning a `ModelOutput` subclass, it will extract field descriptions from that class's docstring to create a comprehensive return value description. A custom `Returns` section can also be manually specified in the function docstring block.
8. **Unrolling Kwargs Typed With Unpack Operator:** For specific methods (defined in `UNROLL_KWARGS_METHODS`) or classes (defined in `UNROLL_KWARGS_CLASSES`), the decorator processes `**kwargs` parameters that are typed with `Unpack[KwargsTypedDict]`. It extracts the documentation from the TypedDict and adds each parameter to the function's docstring. Currently, this functionality is only supported for `FastImageProcessorKwargs`.
---
## 🚀 How to Use @auto_docstring
### 1. Importing the Decorator
Import the decorator into your modeling file:
Start by importing the decorator in the modeling file (`modular_model.py` or `modeling_model.py`).
```python
from ...utils import auto_docstring
```
### 2. Applying to Classes
Place `@auto_docstring` directly above the class definition. It uses the `__init__` method's signature and its docstring for parameter descriptions.
Select whether you'd like to apply `@auto_docstring` to a class or function below to see how to use it.
<hfoptions id="type">
<hfoption id="classes">
Place `@auto_docstring` directly above the class definition. The decorator derives parameter descriptions from the `__init__` method's signature and docstring.
@ -73,9 +56,7 @@ class MyAwesomeModel(PreTrainedModel):
# ... other methods
```
#### Advanced Class Decoration:
Arguments can be passed directly to `@auto_docstring` for more control:
Arguments can also be passed directly to `@auto_docstring` for more control. Use the `custom_intro` parameter to add a custom introduction and the `custom_args` parameter to document custom arguments.
```python
@auto_docstring(
@ -83,9 +64,9 @@ Arguments can be passed directly to `@auto_docstring` for more control:
It builds upon the standard Transformer architecture with unique modifications.""",
custom_args="""
custom_parameter (`type`, *optional*, defaults to `default_value`):
A concise description for custom_parameter if not defined or overriding the description in `args_doc.py`.
A concise description for custom_parameter if not defined or overriding the description in `auto_docstring.py`.
internal_helper_arg (`type`, *optional*, defaults to `default_value`):
A concise description for internal_helper_arg if not defined or overriding the description in `args_doc.py`.
A concise description for internal_helper_arg if not defined or overriding the description in `auto_docstring.py`.
"""
)
class MySpecialModel(PreTrainedModel):
@ -93,7 +74,7 @@ class MySpecialModel(PreTrainedModel):
# ...
```
Or:
You can also choose to only use `custom_intro` and define the custom arguments directly in the class.
```python
@auto_docstring(
@ -104,15 +85,44 @@ class MySpecialModel(PreTrainedModel):
custom_parameter (`type`, *optional*, defaults to `default_value`):
A concise description for custom_parameter if not defined or overriding the description in `args_doc.py`.
A concise description for custom_parameter if not defined or overriding the description in `auto_docstring.py`.
internal_helper_arg (`type`, *optional*, defaults to `default_value`):
A concise description for internal_helper_arg if not defined or overriding the description in `args_doc.py`.
A concise description for internal_helper_arg if not defined or overriding the description in `auto_docstring.py`.
"""
# ...
```
### 3. Applying to Functions (e.g., `forward` method)
Apply the decorator above method definitions, such as the `forward` method.
You should also use the `@auto_docstring` decorator for classes that inherit from [`~utils.ModelOutput`].
```python
@dataclass
@auto_docstring(
    custom_intro="""
    Custom model outputs with additional fields.
    """
)
class MyModelOutput(ImageClassifierOutput):
    r"""
    loss (`torch.FloatTensor`, *optional*):
        The loss of the model.
    custom_field (`torch.FloatTensor` of shape `(batch_size, hidden_size)`, *optional*):
        A custom output field specific to this model.
    """
    # Standard fields like hidden_states, logits, attentions etc. can be automatically documented if the description is the same as the standard arguments.
    # However, given that the loss docstring is often different per model, you should document it in the docstring above.
    # Custom fields need to be documented in the docstring above
    custom_field: Optional[torch.FloatTensor] = None
```
</hfoption>
<hfoption id="functions">
Place `@auto_docstring` directly above the method definition. The decorator derives parameter descriptions from the function signature.
```python
@auto_docstring
@ -131,9 +141,10 @@ Apply the decorator above method definitions, such as the `forward` method.
# ...
```
#### Advanced Function Decoration:
Arguments can also be passed directly to `@auto_docstring` for more control. Use the `custom_intro` parameter to describe the argument and the `custom_args` parameter to describe the arguments.
The `Returns` and `Examples` parts of the docstring can also be manually specified.
Arguments can be passed directly to `@auto_docstring` for more control. `Returns` and `Examples` sections can also be manually specified:
```python
MODEL_COMMON_CUSTOM_ARGS = r"""
@ -180,100 +191,117 @@ class MyModel(PreTrainedModel):
* `@auto_docstring` retrieves descriptions from a central source. Do not redefine these locally if their description and shape are the same as in `args_doc.py`.
There are some rules for documenting different types of arguments and they're listed below.
- Standard arguments (`input_ids`, `attention_mask`, `pixel_values`, etc.) are defined and retrieved from `auto_docstring.py`. It is the single source of truth for standard arguments and should not be redefined locally if an argument's description and shape is the same as an argument in `auto_docstring.py`.
If a standard argument behaves differently in your model, then you can override it locally in a `r""" """` block. This local definition has a higher priority. For example, the `labels` argument is often customized per model and typically requires overriding.
- New or custom arguments should be documented within an `r""" """` block after the signature if it is a function or in the `__init__` method's docstring if it is a class.
```py
argument_name (`type`, *optional*, defaults to `X`):
    Description of the argument.
    Explain its purpose, expected shape/type if complex, and default behavior.
    This can span multiple lines.
```
2. **New or Custom Arguments:**
   * **Primary Method:** Document these within an `r""" """` docstring block following the signature (for functions) or in the `__init__` method's docstring (for class parameters).
   * **Format:**
     ```
     argument_name (`type`, *optional*, defaults to `X`):
         Description of the argument.
         Explain its purpose, expected shape/type if complex, and default behavior.
         This can span multiple lines.
     ```
* Include `type` in backticks.
* Add "*optional*" if the argument is not required (has a default value).
* Add "defaults to `X`" if it has a default value (no need to specify "defaults to `None`" if the default value is `None`).
* Add *optional* if the argument is not required or has a default value.
* Add "defaults to X" if it has a default value. You don't need to add "defaults to `None`" if the default value is `None`.
3. **Overriding Standard Arguments:**
* If a standard argument behaves differently (e.g., different expected shape, model-specific behavior), provide its complete description in the local `r""" """` docstring. This local definition takes precedence.
* The `labels` argument is often customized per model and typically requires a specific docstring.
These arguments can also be passed to `@auto_docstring` as a `custom_args` argument. It is used to define the docstring block for new arguments once if they are repeated in multiple places in the modeling file.
4. **Using Decorator Arguments for Overrides or New Arguments (`custom_args`):**
* New or custom arguments docstrings can also be passed to `@auto_docstring` as a `custom_args` argument. This can be used to define the docstring block for new arguments once if they are repeated in multiple places in the modeling file.
```py
class MyModel(PreTrainedModel):
    # ...
    @auto_docstring(
        custom_intro="""
        This is a custom introduction for the function.
        """,
        custom_args=r"""
        common_arg_1 (`torch.Tensor`, *optional*, defaults to `default_value`):
            Description of common_arg_1
        """
    )
```
---
## Checking the docstrings
### Usage with [modular files](./modular_transformers)
Transformers includes a utility script to validate the docstrings when you open a Pull Request which triggers CI (continuous integration) checks. The script checks for the following criteria.
When working with modular files, follow these guidelines for applying the `@auto_docstring` decorator:
* Ensures `@auto_docstring` is applied to relevant model classes and public methods.
* Ensures arguments are complete and consistent. It checks that documented arguments exist in the signature and verifies whether the types and default values in the docstring match the signature. Arguments that aren't known standard arguments and lack a local description are flagged.
* Reminds you to complete placeholders like `<fill_type>` and `<fill_docstring>`.
* Ensures docstrings are formatted according to the expected docstring style.
- **For standalone models in modular files:**
Apply the `@auto_docstring` decorator just as you would in regular modeling files.
- **For models inheriting from other library models:**
- When inheriting from a parent model, decorators (including `@auto_docstring`) are automatically carried over to the generated modeling file without needing to add them in your modular file.
- If you need to modify the `@auto_docstring` behavior, apply the customized decorator in your modular file, making sure to *include all other decorators* that were present on the original function/class.
> **Warning**: When overriding any decorator in a modular file, you must include ALL decorators that were applied to that function/class in the parent model. If you only override some decorators, the others won't be included in the generated modeling file.
**Note**: The `check_auto_docstrings` tool doesn't check modular files directly, but it will check (and modify when using `--fix_and_overwrite`) the generated modeling files. If issues are found in the generated files, you'll need to update your modular files accordingly.
---
## ✅ Checking Your Docstrings with `check_auto_docstrings`
The library includes a utility script to validate docstrings. This check is typically run during Continuous Integration (CI).
#### What it Checks:
* **Decorator Presence:** Ensures `@auto_docstring` is applied to relevant model classes and public methods. (TODO)
* **Argument Completeness & Consistency:**
* Flags arguments in the signature that are not known standard arguments and lack a local description.
* Ensures documented arguments exist in the signature. (TODO)
* Verifies that types and default values in the docstring match the signature. (TODO)
* **Placeholder Detection:** Reminds you to complete placeholders like `<fill_type>` or `<fill_docstring>`.
* **Formatting:** Adherence to the expected docstring style.
#### Running the Check Locally:
Run this check locally before committing. The common command is:
You can run this check locally - before committing - by running the following command.
```bash
make fix-copies
```
Alternatively, to only perform docstrings and auto-docstring checks, you can use:
`make fix-copies` runs several other checks as well. If you don't need those checks, run the command below to only perform docstring and auto-docstring checks.
```bash
python utils/check_docstrings.py # to only check files included in the diff without fixing them
# Or: python utils/check_docstrings.py --fix_and_overwrite # to fix and overwrite the files in the diff
# Or: python utils/check_docstrings.py --fix_and_overwrite --check_all # to fix and overwrite all files
# python utils/check_docstrings.py --fix_and_overwrite # to fix and overwrite the files in the diff
# python utils/check_docstrings.py --fix_and_overwrite --check_all # to fix and overwrite all files
```
#### Workflow with the Checker:
## modular_model.py files
1. Add `@auto_docstring(...)` to the class or method.
2. For new, custom, or overridden arguments, add descriptions in an `r""" """` block.
3. Run `make fix-copies` (or the `check_docstrings.py` utility).
* For unrecognized arguments lacking documentation, the utility will create placeholder entries.
4. Manually edit these placeholders with accurate types and descriptions.
5. Re-run the check to ensure all issues are resolved.
When working with modular files (`modular_model.py`), follow the guidelines below for applying `@auto_docstring`.
---
- For standalone models in modular files, apply `@auto_docstring` like you would in a `modeling_model.py` file.
- For models that inherit from other library models, `@auto_docstring` is automatically carried over to the generated modeling file. You don't need to add `@auto_docstring` in your modular file.
## 🔑 Key Takeaways & Best Practices
If you need to modify the `@auto_docstring` behavior, apply the customized decorator in your modular file. Make sure to **include all other decorators** that are present in the original function or class.
* Use `@auto_docstring` for new PyTorch model classes (`PreTrainedModel` subclasses) and their primary methods (e.g., `forward`, `get_text_features`, etc.).
* For classes, the `__init__` method's docstring is the main source for parameter descriptions when using `@auto_docstring` on the class.
* Rely on standard docstrings; do not redefine common arguments unless their behavior is different in your specific model.
> [!WARNING]
> When overriding any decorator in a modular file, you must include **all** decorators that were applied to that function or class in the parent model. If you only override some decorators, the others won't be included in the generated modeling file.
## How it works
The `@auto_docstring` decorator automatically generates docstrings by:
1. Inspecting the signature (arguments, types, defaults) of the decorated class' `__init__` method or the decorated function.
2. Retrieving the predefined docstrings for common arguments (`input_ids`, `attention_mask`, etc.) from internal library sources like [`ModelArgs`], [`ImageProcessorArgs`], and the `auto_docstring.py` file.
3. Adding argument descriptions in one of two ways as shown below.
| method | description | usage |
|---|---|---|
| `r""" """` | add custom docstring content directly to a method signature or within the `__init__` docstring | document new arguments or override standard descriptions |
| `custom_args` | add custom docstrings for specific arguments directly in `@auto_docstring` | define docstring for new arguments once if they're repeated in multiple places in the modeling file |
4. Adding class and function descriptions. For model classes with standard naming patterns, like `ModelForCausalLM`, or if it belongs to a pipeline, `@auto_docstring` automatically generates the appropriate descriptions with `ClassDocstring` from `auto_docstring.py`.
`@auto_docstring` also accepts the `custom_intro` argument to describe a class or function.
5. Using a templating system to allow predefined docstrings to include dynamic information from Transformers' [auto_modules](https://github.com/huggingface/transformers/tree/main/src/transformers/models/auto) such as `{{processor_class}}` and `{{config_class}}`.
6. Finding appropriate usage examples based on the model's task or pipeline compatibility. It extracts checkpoint information from the model's configuration class to provide concrete examples with real model identifiers.
7. Adding return values to the docstring. For methods like `forward`, the decorator automatically generates the `Returns` field in the docstring based on the method's return type annotation.
For example, if a method returns a [`~transformers.utils.ModelOutput`] subclass, `@auto_docstring` extracts the field descriptions from the class' docstring to create a comprehensive return value description. You can also manually specify a custom `Returns` field in a function's docstring.
8. Unrolling kwargs typed with the unpack operator. For specific methods (defined in `UNROLL_KWARGS_METHODS`) or classes (defined in `UNROLL_KWARGS_CLASSES`), the decorator processes `**kwargs` parameters that are typed with `Unpack[KwargsTypedDict]`. It extracts the documentation from the `TypedDict` and adds each parameter to the function's docstring.
Currently only supported for [`FastImageProcessorKwargs`].
## Best practices
Follow the best practices below to help maintain consistent and informative documentation for Transformers!
* Use `@auto_docstring` for new PyTorch model classes ([`PreTrainedModel`] subclasses) and their primary methods like `forward` or `get_text_features`.
* For classes, `@auto_docstring` retrieves parameter descriptions from the `__init__` method's docstring.
* Rely on standard docstrings and do not redefine common arguments unless their behavior is different in your model.
* Document new or custom arguments clearly.
* Run `check_docstrings` locally and iteratively.
By following these guidelines, you help maintain consistent and informative documentation for the Hugging Face Transformers library 🤗.
@ -82,24 +82,18 @@ When you use Transformers' [`Cache`] class, the self-attention module performs s
## Cache storage implementation
The actual storage of key-value pairs varies between cache implementations. As an example, consider the [`DynamicCache`].
Caches are structured as a list of layers, where each layer contains a key and value cache. The key and value caches are tensors with the shape `[batch_size, num_heads, seq_len, head_dim]`.
Layers can be of different types (e.g. `DynamicLayer`, `StaticLayer`, `SlidingWindowLayer`), which mostly changes how sequence length is handled and how the cache is updated.
In [`DynamicCache`], the key-value pairs are stored as two lists of tensors. Each tensor in the lists has the shape `[batch_size, num_heads, seq_len, head_dim]`.
- `key_cache`: A list of tensors, one for each layer.
- `value_cache`: A list of tensors, one for each layer.
The simplest is a `DynamicLayer` that grows as more tokens are processed. The sequence length dimension (`seq_len`) increases with each new token:
When new tokens are processed:
1. For each layer, the new key and value states are concatenated with the existing cache.
2. The cache grows dynamically as more tokens are processed. The sequence length dimension (`seq_len`) increases with each new token.
3. The cache maintains a count of seen tokens through `self._seen_tokens`. This is updated when the first layer processes a new token.
Other layer types like `StaticLayer` and `SlidingWindowLayer` have a fixed sequence length that is set when the cache is created. This makes them compatible with `torch.compile`. In the case of `SlidingWindowLayer`, existing tokens are shifted out of the cache when a new token is added.
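For intuition, here is a small sketch (the checkpoint is chosen arbitrarily) that runs one forward pass with a [`DynamicCache`] and inspects how much it has cached:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, DynamicCache

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-0.5B-Instruct")
model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-0.5B-Instruct")

inputs = tokenizer("The cache stores one layer per decoder block.", return_tensors="pt")
cache = DynamicCache()
with torch.no_grad():
    model(**inputs, past_key_values=cache, use_cache=True)

print(len(cache))              # number of layers holding key/value tensors
print(cache.get_seq_length())  # tokens cached so far (grows as you generate)
```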
The example below demonstrates how to create a generation loop with [`DynamicCache`]. As discussed, the attention mask is a concatenation of past and current token values and `1` is added to the cache position for the next token.
@ -134,6 +128,34 @@ for _ in range(max_new_tokens):
"[INST] Hello, what's your name. [/INST] Hello! My name is LLaMA,"
```
## Cache position
The cache position tracks where to insert new tokens in the attention cache. It represents the *absolute* position of each token in the context, independent of padding or batch structure. Suppose you already cached `N` tokens and are now processing `K` new tokens. The cache position for the new tokens will range from `N` to `N + K - 1`. In other words, you're processing tokens at positions `[N, N + 1, N + 2, ..., N + K - 1]`.
Cache position is used internally for two purposes:
1. Selecting new tokens to process in the input sequence and ensuring only tokens that haven’t been cached yet are passed to the model's `forward`.
2. Storing key/value pairs at the correct positions in the cache. This is especially important for fixed-size caches, like [`StaticCache`], that pre-allocate a specific cache length.
The generation loop usually takes care of the cache position, but if you're writing a custom generation method, it is important that cache positions are accurate since they are used to write and read key/value states into fixed slots.
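As a tiny worked example of the indexing described above (the values are arbitrary):
```python
import torch

N, K = 10, 3  # N tokens already cached, K new tokens being processed
cache_position = torch.arange(N, N + K)
print(cache_position)  # tensor([10, 11, 12])
# These absolute positions decide which slots the new key/value states are
# written to, which matters most for fixed-size caches such as StaticCache.
```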
Before the [`Cache`] class, the cache used to be stored as a tuple of tuples of tensors. This format is dynamic because it grows as text is generated, similar to [`DynamicCache`].
@ -143,7 +165,7 @@ The legacy format is essentially the same data structure but organized different
- The tensors have the same shape `[batch_size, num_heads, seq_len, head_dim]`.
- The format is less flexible and doesn't support features like quantization or offloading.
If your project depends on this legacy format, you can convert between [`DynamicCache`] and a tuple of tuples as shown below with the [`~DynamicCache.from_legacy_cache`] and [`DynamicCache.to_legacy_cache`] functions. This is helpful if you have custom logic for manipulating a cache in a specific format.
If your project depends on this legacy format, we recommend converting to [`DynamicCache`] with [`~DynamicCache.from_legacy_cache`]. Note that the legacy cache format is deprecated and no longer used in Transformers. You can convert back to the tuple format with [`DynamicCache.to_legacy_cache`], which is helpful if you have custom logic for manipulating a cache in a specific format.
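A small self-contained sketch of the round trip (tensor shapes are arbitrary but follow `[batch_size, num_heads, seq_len, head_dim]`):
```python
import torch
from transformers import DynamicCache

# Legacy format: a tuple of per-layer (key, value) tuples.
legacy_cache = tuple(
    (torch.zeros(1, 2, 4, 8), torch.zeros(1, 2, 4, 8)) for _ in range(3)
)

cache = DynamicCache.from_legacy_cache(legacy_cache)   # legacy -> DynamicCache
legacy_again = cache.to_legacy_cache()                 # DynamicCache -> legacy
print(len(legacy_again), legacy_again[0][0].shape)     # 3 torch.Size([1, 2, 4, 8])
```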
@ -111,6 +111,7 @@ Some vision models also support video inputs. The message format is very similar
- The content `"type"` should be `"video"` to indicate the content is a video.
- For videos, it can be a link to the video (`"url"`) or it could be a file path (`"path"`). Videos loaded from a URL can only be decoded with [PyAV](https://pyav.basswood-io.com/docs/stable/) or [Decord](https://github.com/dmlc/decord).
- In addition to loading videos from a URL or file path, you can also pass decoded video data directly. This is useful if you’ve already preprocessed or decoded video frames elsewhere in memory (e.g., using OpenCV, decord, or torchvision). You don't need to save the frames to files or host them at a URL.
> [!WARNING]
> Loading a video from `"url"` is only supported by the PyAV or Decord backends.
"content":[{"type":"text","text":"You are a friendly chatbot who always responds in the style of a pirate"}],
},
{
"role":"user",
"content":[
{"type":"video","video":video_object2},
{"type":"text","text":"What do you see in this video?"}
],
},
]
```
Pass `messages` to [`~ProcessorMixin.apply_chat_template`] to tokenize the input content. There are a few extra parameters to include in [`~ProcessorMixin.apply_chat_template`] that control the sampling process.
The `video_load_backend` parameter refers to a specific framework to load a video. It supports [PyAV](https://pyav.basswood-io.com/docs/stable/), [Decord](https://github.com/dmlc/decord), [OpenCV](https://github.com/opencv/opencv), and [torchvision](https://pytorch.org/vision/stable/index.html).
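A hedged sketch of passing `video_load_backend` through [`~ProcessorMixin.apply_chat_template`] (the checkpoint and the video URL below are placeholders for any video-capable model and reachable video file):
```python
from transformers import AutoProcessor

processor = AutoProcessor.from_pretrained("llava-hf/llava-onevision-qwen2-0.5b-ov-hf")

messages = [
    {
        "role": "user",
        "content": [
            {"type": "video", "url": "https://example.com/sample_video.mp4"},  # placeholder URL
            {"type": "text", "text": "What do you see in this video?"},
        ],
    },
]

inputs = processor.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
    video_load_backend="decord",  # or "pyav", "opencv", "torchvision"
)
```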
@ -25,9 +25,9 @@ Check model leaderboards like [OpenLLM](https://hf.co/spaces/HuggingFaceH4/open_
This guide shows you how to quickly start chatting with Transformers from the command line, how to build and format a conversation, and how to chat using the [`TextGenerationPipeline`].
## transformers CLI
## chat CLI
After you've [installed Transformers](./installation.md), chat with a model directly from the command line as shown below. It launches an interactive session with a model, with a few base commands listed at the start of the session.
After you've [installed Transformers](./installation), chat with a model directly from the command line as shown below. It launches an interactive session with a model, with a few base commands listed at the start of the session.
```bash
transformers chat Qwen/Qwen2.5-0.5B-Instruct
@ -49,7 +49,8 @@ For a full list of options, run the command below.
transformers chat -h
```
The chat is implemented on top of the [AutoClass](./model_doc/auto), using tooling from [text generation](./llm_tutorial) and [chat](./chat_templating).
The chat is implemented on top of the [AutoClass](./model_doc/auto), using tooling from [text generation](./llm_tutorial) and [chat](./chat_templating). It uses the `transformers serve` CLI under the hood ([docs](./serving.md#serve-cli)).
## TextGenerationPipeline
@ -157,4 +158,4 @@ The easiest solution for improving generation speed is to either quantize a mode
You can also try techniques like [speculative decoding](./generation_strategies#speculative-decoding), where a smaller model generates candidate tokens that are verified by the larger model. If the candidate tokens are correct, the larger model can generate more than one token per `forward` pass. This significantly alleviates the bandwidth bottleneck and improves generation speed.
> [!TIP]
> Parameters may not be active for every generated token in MoE models such as [Mixtral](./model_doc/mixtral), [Qwen2MoE](./model_doc/qwen2_moe.md), and [DBRX](./model_doc/dbrx). As a result, MoE models generally have much lower memory bandwidth requirements and can be faster than a regular LLM of the same size. However, techniques like speculative decoding are ineffective with MoE models because parameters become activated with each new speculated token.
> Parameters may not be active for every generated token in MoE models such as [Mixtral](./model_doc/mixtral), [Qwen2MoE](./model_doc/qwen2_moe), and [DBRX](./model_doc/dbrx). As a result, MoE models generally have much lower memory bandwidth requirements and can be faster than a regular LLM of the same size. However, techniques like speculative decoding are ineffective with MoE models because parameters become activated with each new speculated token.
This example shows how to use `transformers serve` as a local LLM provider for [Cursor](https://cursor.com/), the popular IDE. In this particular case, requests to `transformers serve` will come from an external IP (Cursor's server IPs), which requires some additional setup. Furthermore, some of Cursor's requests require [CORS](https://developer.mozilla.org/en-US/docs/Web/HTTP/Guides/CORS), which is disabled by default for security reasons.
To launch a server with CORS enabled, run
```shell
transformers serve --enable-cors
```
You'll also need to expose your server to external IPs. A potential solution is to use [`ngrok`](https://ngrok.com/), which has a permissive free tier. After setting up your `ngrok` account and authenticating on your server machine, you run
```shell
ngrok http [port]
```
where `port` is the port used by `transformers serve` (`8000` by default). On the terminal where you launched `ngrok`, you'll see a https address in the "Forwarding" row, as in the image below. This is the address to send requests to.
You're now ready to set things up on the app side! In Cursor, while you can't set a new provider, you can change the endpoint for OpenAI requests in the model selection settings. First, navigate to "Settings" > "Cursor Settings", "Models" tab, and expand the "API Keys" collapsible. To set your `transformers serve` endpoint, follow this order:
1. Unselect ALL models in the list above (e.g. `gpt4`, ...);
2. Add and select the model you want to use (e.g. `Qwen/Qwen3-4B`)
3. Add some random text to OpenAI API Key. This field won't be used, but it can’t be empty;
4. Add the https address from `ngrok` to the "Override OpenAI Base URL" field, appending `/v1` to the address (i.e. `https://(...).ngrok-free.app/v1`);
5. Hit "Verify".
After you follow these steps, your "Models" tab should look like the image below. Your server should also have received a few requests from the verification step.
You are now ready to use your local model in Cursor! For instance, if you toggle the AI Pane, you can select the model you added and ask it questions about your local files.
Custom decoding methods enable specialized generation behavior such as the following:
Custom generation methods enable specialized behavior such as:
- have the model continue thinking if it is uncertain;
- roll back generation if the model gets stuck;
- handle special tokens with custom logic;
- enhanced input preparation for advanced models;
- use specialized KV caches;
We enable custom decoding methods through model repositories, assuming a specific model tag and file structure (see subsection below). This feature is an extension of [custom modeling code](./models.md#custom-models) and, like such, requires setting `trust_remote_code=True`.
We enable custom generation methods through model repositories, assuming a specific model tag and file structure (see subsection below). This feature is an extension of [custom modeling code](./models.md#custom-models) and, like such, requires setting `trust_remote_code=True`.
If a model repository holds a custom decoding method, the easiest way to try it out is to load the model and generate with it:
If a model repository holds a custom generation method, the easiest way to try it out is to load the model and generate with it:
'The quick brown fox jumps over a lazy dog, and the dog is a type of animal. Is'
```
Model repositories with custom decoding methods have a special property: their decoding method can be loaded from **any** model through [`~GenerationMixin.generate`]'s `custom_generate` argument. This means anyone can create and share their custom generation method to potentially work with any Transformers model, without requiring users to install additional Python packages.
Model repositories with custom generation methods have a special property: their generation method can be loaded from **any** model through [`~GenerationMixin.generate`]'s `custom_generate` argument. This means anyone can create and share their custom generation method to potentially work with any Transformers model, without requiring users to install additional Python packages.
You should read the `README.md` file of the repository containing the custom generation strategy to see what the new arguments and output type differences are, if they exist. Otherwise, you can assume it works like the base [`~GenerationMixin.generate`] method.
> [!TIP]
> You can find all custom decoding methods by [searching for their custom tag.](https://huggingface.co/models?other=custom_generate), `custom_generate`
> You can find all custom generation methods by [searching for their custom tag](https://huggingface.co/models?other=custom_generate), `custom_generate`.
Consider the Hub repository [transformers-community/custom_generate_example](https://huggingface.co/transformers-community/custom_generate_example) as an example. The `README.md` states that it has an additional input argument, `left_padding`, which adds a number of padding tokens before the prompt.
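A minimal usage sketch for that example repository (the base model below is an arbitrary choice; `left_padding` is the extra argument documented in the repository's `README.md`):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-0.5B-Instruct")
model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-0.5B-Instruct")

inputs = tokenizer("The quick brown fox", return_tensors="pt")
gen_out = model.generate(
    **inputs,
    custom_generate="transformers-community/custom_generate_example",
    trust_remote_code=True,
    left_padding=5,  # extra argument defined by the custom method
)
print(tokenizer.decode(gen_out[0], skip_special_tokens=True))
```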
Updating your Python requirements accordingly will remove this error message.
### Creating a custom decoding method
### Creating a custom generation method
To create a new decoding method, you need to create a new [**Model**](https://huggingface.co/new) repository and push a few files into it.
1. The model you've designed your decoding method with.
2. `custom_generate/generate.py`, which contains all the logic for your custom decoding method.
To create a new generation method, you need to create a new [**Model**](https://huggingface.co/new) repository and push a few files into it.
1. The model you've designed your generation method with.
2. `custom_generate/generate.py`, which contains all the logic for your custom generation method.
3. `custom_generate/requirements.txt`, used to optionally add new Python requirements and/or lock specific versions to correctly use your method.
4. `README.md`, where you should add the `custom_generate` tag and document any new arguments or output type differences of your custom method here.
@ -409,7 +409,7 @@ your_repo/
#### Adding the base model
The starting point for your custom decoding method is a model repository just like any other. The model to add to this repository should be the model you've designed your method with, and it is meant to be part of a working self-contained model-generate pair. When the model in this repository is loaded, your custom decoding method will override `generate`. Don't worry -- your decoding method can still be loaded with any other Transformers model, as explained in the section above.
The starting point for your custom generation method is a model repository just like any other. The model to add to this repository should be the model you've designed your method with, and it is meant to be part of a working self-contained model-generate pair. When the model in this repository is loaded, your custom generation method will override `generate`. Don't worry -- your generation method can still be loaded with any other Transformers model, as explained in the section above.
If you simply want to copy an existing model, you can do
@ -418,13 +418,13 @@ from transformers import AutoModelForCausalLM, AutoTokenizer
This is the core of your decoding method. It *must* contain a method named `generate`, and this method *must* contain a `model` argument as its first argument. `model` is the model instance, which means you have access to all attributes and methods in the model, including the ones defined in [`GenerationMixin`] (like the base `generate` method).
This is the core of your generation method. It *must* contain a method named `generate`, and this method *must* contain a `model` argument as its first argument. `model` is the model instance, which means you have access to all attributes and methods in the model, including the ones defined in [`GenerationMixin`] (like the base `generate` method).
> [!WARNING]
> `generate.py` must be placed in a folder named `custom_generate`, and not at the root level of the repository. The file paths for this feature are hardcoded.
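For reference, a minimal sketch of what `custom_generate/generate.py` could contain (the greedy loop and argument names below are illustrative, not a specific repository's implementation):
```python
# custom_generate/generate.py
import torch


def generate(model, input_ids, max_new_tokens=20, **kwargs):
    """Bare-bones greedy loop; `model` must be the first argument."""
    for _ in range(max_new_tokens):
        outputs = model(input_ids)
        next_token = outputs.logits[:, -1, :].argmax(dim=-1, keepdim=True)
        input_ids = torch.cat([input_ids, next_token], dim=-1)
    return input_ids
```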
Follow the recommended practices below to ensure your custom decoding method works as expected.
Follow the recommended practices below to ensure your custom generation method works as expected.
- Feel free to reuse the logic for validation and input preparation in the original [`~GenerationMixin.generate`].
- Pin the `transformers` version in the requirements if you use any private method/attribute in `model`.
- You can add other files in the `custom_generate` folder, and use relative imports.
- Consider adding model validation, input validation, or even a separate test file to help users sanity-check your code in their environment.
Your custom `generate` method can import code from the `custom_generate` folder with relative imports. For example, if you have a `utils.py` file, you can import it like this:
```py
from .utils import some_function
```
Only relative imports from the same-level `custom_generate` folder are supported. Parent/sibling folder imports are not valid. The `custom_generate` argument also works locally with any directory that contains a `custom_generate` structure. This is the recommended workflow for developing your custom generation method.
#### requirements.txt
You can optionally specify additional Python requirements in a `requirements.txt` file inside the `custom_generate` folder. These are checked at runtime and an exception will be thrown if they're missing, nudging users to update their environment accordingly.
#### README.md
The root level `README.md` in the model repository usually describes the model therein. However, since the focus of the repository is the custom decoding method, we highly recommend to shift its focus towards describing the custom decoding method. In addition to a description of the method, we recommend documenting any input and/or output differences to the original [`~GenerationMixin.generate`]. This way, users can focus on what's new, and rely on Transformers docs for generic implementation details.
The root level `README.md` in the model repository usually describes the model therein. However, since the focus of the repository is the custom generation method, we highly recommend shifting its focus towards describing the custom generation method. In addition to a description of the method, we recommend documenting any input and/or output differences to the original [`~GenerationMixin.generate`]. This way, users can focus on what's new, and rely on Transformers docs for generic implementation details.
For discoverability, we highly recommend adding the `custom_generate` tag to your repository. To do so, the top of your `README.md` file should look like the example below. After you push the file, you should see the tag in your repository!
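A minimal sketch of the metadata block at the top of `README.md` (additional fields such as `library_name` are optional):

```yaml
---
library_name: transformers
tags:
  - custom_generate
---
```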
Recommended practices:
- Add self-contained examples to enable quick experimentation.
- Describe soft-requirements such as if the method only works well with a certain family of models.
### Reusing `generate`’s input preparation
If you're adding a new decoding loop, you might want to preserve the input preparation present in `generate` (batch expansion, attention masks, logits processors, stopping criteria, etc.). You can also pass a **callable** to `custom_generate` to reuse [`~GenerationMixin.generate`]’s full preparation pipeline while overriding only the decoding loop.
> If you publish a `custom_generate` repository, your `generate` implementation can itself define a callable and pass it to `model.generate()`. This lets you customize the decoding loop while still benefiting from Transformers’ built-in input preparation logic.
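A rough sketch of the idea is shown below. The exact arguments passed to the callable depend on your `transformers` version, so treat the signature and the checkpoint name as illustrative assumptions and check the kwargs you actually receive:

```py
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def greedy_loop(model, input_ids, generation_config=None, **kwargs):
    # Decoding loop only -- batch expansion, masks, processors, etc. were already
    # prepared by `generate` before this callable is invoked.
    max_new_tokens = getattr(generation_config, "max_new_tokens", None) or 20
    for _ in range(max_new_tokens):
        next_token = model(input_ids).logits[:, -1, :].argmax(dim=-1, keepdim=True)
        input_ids = torch.cat([input_ids, next_token], dim=-1)
    return input_ids

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-0.5B-Instruct")  # illustrative checkpoint
model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-0.5B-Instruct")
inputs = tokenizer("The quick brown fox", return_tensors="pt")
out = model.generate(**inputs, custom_generate=greedy_loop, max_new_tokens=10)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```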
### Finding custom generation methods
You can find all custom generation methods by [searching for their custom tag](https://huggingface.co/models?other=custom_generate), `custom_generate`. In addition to the tag, we curate two collections of `custom_generate` methods:
- [Custom generation methods - Community](https://huggingface.co/collections/transformers-community/custom-generation-methods-community-6888fb1da0efbc592d3a8ab6) -- a collection of powerful methods contributed by the community;
- [Custom generation methods - Tutorials](https://huggingface.co/collections/transformers-community/custom-generation-methods-tutorials-6823589657a94940ea02cfec) -- a collection of reference implementations for methods that previously were part of `transformers`, as well as tutorials for `custom_generate`.
[[autodoc]] model_addition_debugger_context
## Analyzer of skipped tests
### Scan skipped tests - for model adders and maintainers
This small util is a power user tool intended for model adders and maintainers. It lists all test methods
existing in `test_modeling_common.py`, inherited by all model tester classes, and scans the repository to measure
how many tests are being skipped and for which models.
### Rationale
When porting models to transformers, tests fail as they should, and sometimes `test_modeling_common` feels irreconcilable with the peculiarities of our brand new model. But how can we be sure we're not breaking everything by adding a seemingly innocent skip?
This utility:
- scans all test_modeling_common methods
- looks for times where a method is skipped
- returns a summary json you can load as a DataFrame/inspect
**For instance, `test_inputs_embeds` is skipped for a whopping 39% of models at the time of writing this util.**
```
📄 JSON saved to /home/pablo/git/transformers/all_tests_scan_result.json
```
It generates an `all_tests_scan_result.json` file that you can inspect. The JSON is indexed by method name, and each entry follows this schema, indicating the origin as well (from `common` or `GenerationMixin`):
```json
{
  "<method_name>": {
    "origin": "<test suite>",
    "models_ran": ["<model_name>", ...],
    "models_skipped": ["<model_name>", ...],
    "skipped_proportion": <float>,
    "reasons_skipped": ["<model_name>: <reason>", ...]
  },
  ...
}
```
You can visualise the results with e.g. `pandas`.
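A minimal sketch (assumes the JSON was saved to the current directory):

```py
import json
import pandas as pd

with open("all_tests_scan_result.json") as f:
    results = json.load(f)

df = pd.DataFrame.from_dict(results, orient="index")
print(df.sort_values("skipped_proportion", ascending=False)[["origin", "skipped_proportion"]].head(10))
```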
# Jan: using the serving API as a local LLM provider
This example shows how to use `transformers serve` as a local LLM provider for the [Jan](https://jan.ai/) app. Jan is a ChatGPT-alternative graphical interface, fully running on your machine. The requests to `transformers serve` come directly from the local app -- while this section focuses on Jan, you can extrapolate some instructions to other apps that make local requests.
## Running models locally
To connect `transformers serve` with Jan, you'll need to set up a new model provider ("Settings" > "Model Providers"). Click on "Add Provider", and set a new name. In your new model provider page, all you need to set is the "Base URL" to the following pattern:
```shell
http://[host]:[port]/v1
```
where `host` and `port` are the `transformers serve` CLI parameters (`localhost:8000` by default). After setting this up, you should be able to see some models in the "Models" section after hitting "Refresh". Make sure you add some text in the "API key" text field too -- this data is not actually used, but the field can't be empty.
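For example, on the machine that serves the model (assuming the default host and port):

```shell
transformers serve
```

and in Jan set the "Base URL" to `http://localhost:8000/v1`.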
> You can add any `transformers`-compatible model to Jan through `transformers serve`. In the custom model provider you created, click on the "+" button in the "Models" section and add its Hub repository name, e.g. `Qwen/Qwen3-4B`.
## Running models on a separate machine
To conclude this example, let's look at a more advanced use case. If you have a beefy machine to serve models with, but prefer using Jan on a different device, you need to add port forwarding. If you have `ssh` access from your Jan machine into your server, this can be accomplished by typing the following into your Jan machine's terminal:
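For example, with standard SSH local port forwarding (the user and host names below are placeholders):

```shell
# forward local port 8000 to port 8000 on the server running `transformers serve`
ssh -L 8000:localhost:8000 your_user@your_server
```

Jan can then keep pointing at `http://localhost:8000/v1` while the requests are actually handled by the remote machine.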
```py
# prepare inputs and initialize a DynamicCache for generation
inputs = tokenizer("I like rock music because", return_tensors="pt").to(model.device)
past_key_values = DynamicCache()
```
The [`QuantizedCache`] reduces memory requirements by quantizing the KV values to a lower precision.
> [!WARNING]
> Quantizing the cache can harm latency if the context length is short and there is enough GPU memory available for generation without enabling cache quantization. Try to find a balance between memory efficiency and latency.
Enable [`QuantizedCache`] by configuring `cache_implementation="quantized"` in [`GenerationConfig`], and indicate the quantization backend in [`QuantizedCacheConfig`]. Any additional quantization related parameters should also be passed either as a dict or an instance of [`QuantizedCacheConfig`]. You should use the default values for these additional parameters unless you're running out-of-memory. In that case, consider decreasing the residual length.
Enable [`QuantizedCache`] by configuring `cache_implementation="quantized"` in [`GenerationConfig`], and the quantization backend, as well as any additional quantization related parameters should also be passed either as a dict. You should use the default values for these additional parameters unless you're running out-of-memory. In that case, consider decreasing the residual length.
<hfoptions id="quantized-cache">
<hfoption id="HQQQuantizedCache">
For [`HQQQuantizedCache`], we recommend setting the `axis-key` and `axis-value` parameters to `1`.
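A minimal sketch (the checkpoint is illustrative, and the exact `cache_config` keys can differ across `transformers` versions, so treat them as an assumption):

```py
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-0.5B-Instruct")
model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-0.5B-Instruct", torch_dtype=torch.bfloat16, device_map="auto")

inputs = tokenizer("I like rock music because", return_tensors="pt").to(model.device)
out = model.generate(
    **inputs,
    max_new_tokens=20,
    cache_implementation="quantized",
    cache_config={"backend": "HQQ", "nbits": 4, "axis-key": 1, "axis-value": 1},  # keys are an assumption
)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```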
A known issue with transformer models is that the self-attention mechanism grows quadratically in compute and memory with the number of input tokens.
FlashAttention and [FlashAttention-2](./perf_infer_gpu_one#flashattention-2) break up the attention computation into smaller chunks and reduce the number of intermediate read/write operations to GPU memory to speed up inference. FlashAttention-2 improves on the original FlashAttention algorithm by also parallelizing over the sequence length dimension and better partitioning work on the hardware to reduce synchronization and communication overhead.
To use FlashAttention-2, set [attn_implementation](https://hf.co/docs/transformers/main/en/main_classes/text_generation#transformers.PreTrainedModel.from_pretrained.attn_implementation) to `"flash_attention_2"` in [`~PreTrainedModel.from_pretrained`] or set with `model.set_attention_implementation("flash_attention_2")` to dynamically update the [attention interface](./attention_interface) after the model is loaded.
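For example (a sketch; the checkpoint is illustrative and `flash-attn` must be installed):

```py
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1",
    torch_dtype=torch.bfloat16,
    attn_implementation="flash_attention_2",
    device_map="auto",
)
```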
Scaled dot product attention (SDPA) is automatically enabled in PyTorch 2.0 and it supports FlashAttention, xFormers, and PyTorch's C++ implementation. SDPA chooses the most performant attention algorithm if you're using a CUDA backend. For other backends, SDPA defaults to the PyTorch C++ implementation.
> [!TIP]
> SDPA automatically supports FlashAttention-2 as long as you have the latest PyTorch version installed.
Use the [torch.nn.attention.sdpa_kernel](https://pytorch.org/docs/stable/generated/torch.nn.attention.sdpa_kernel.html) context manager to explicitly enable or disable any of the four attention algorithms. For example, use `SDPBackend.FLASH_ATTENTION` to enable FlashAttention.
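For example (a sketch; the checkpoint is illustrative):

```py
import torch
from torch.nn.attention import SDPBackend, sdpa_kernel
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-0.5B-Instruct")
model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2.5-0.5B-Instruct", torch_dtype=torch.bfloat16, attn_implementation="sdpa", device_map="auto"
)

inputs = tokenizer("Hello, my dog is", return_tensors="pt").to(model.device)
with sdpa_kernel(SDPBackend.FLASH_ATTENTION):
    outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```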
| Flag | Type | Description |
|---|---|---|
| `max_new_tokens` | `int` | Controls the maximum generation length. Be sure to define it, as it usually defaults to a small value. |
| `do_sample` | `bool` | Defines whether generation will sample the next token (`True`), or is greedy instead (`False`). Most use cases should set this flag to `True`. Check [this guide](./generation_strategies) for more information. |
| `temperature` | `float` | How unpredictable the next selected token will be. High values (`>0.8`) are good for creative tasks, low values (e.g. `<0.4`) for tasks that require "thinking". Requires `do_sample=True`. |
| `num_beams` | `int` | When set to `>1`, activates the beam search algorithm. Beam search is good on input-grounded tasks. Check [this guide](./generation_strategies) for more information. |
| `repetition_penalty` | `float` | Set it to `>1.0` if you're seeing the model repeat itself often. Larger values apply a larger penalty. |
| `eos_token_id` | `list[int]` | The token(s) that will cause generation to stop. The default value is usually good, but you can specify a different token. |
The crux of these challenges lies in augmenting the computational and memory capabilities of LLMs, especially when handling expansive input sequences.
In this guide, we will go over the effective techniques for efficient LLM deployment:
1. **Lower Precision:** Research has shown that operating at reduced numerical precision, namely [8-bit and 4-bit](./main_classes/quantization), can achieve computational advantages without a considerable decline in model performance.
2. **Flash Attention:** Flash Attention is a variation of the attention algorithm that not only provides a more memory-efficient approach but also realizes increased efficiency due to optimized GPU memory utilization.
3. **Architectural Innovations:** Considering that LLMs are always deployed in the same way during inference, namely autoregressive text generation with a long input context, specialized model architectures have been proposed that allow for more efficient inference. The most important advancements in model architectures here are [Alibi](https://huggingface.co/papers/2108.12409), [Rotary embeddings](https://huggingface.co/papers/2104.09864), [Multi-Query Attention (MQA)](https://huggingface.co/papers/1911.02150), and [Grouped-Query-Attention (GQA)](https://huggingface.co/papers/2305.13245).
Throughout this guide, we will offer an analysis of auto-regressive generation from a tensor's perspective. We delve into the pros and cons of adopting lower precision, provide a comprehensive exploration of the latest attention algorithms, and discuss improved LLM architectures. While doing so, we run practical examples showcasing each of the feature improvements.
<!--Copyright 2025 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
*This model was released on 2024-11-21 and added to Hugging Face Transformers on 2025-07-08.*
# AIMv2
## Overview
The AIMv2 model was proposed in [Multimodal Autoregressive Pre-training of Large Vision Encoders](https://huggingface.co/papers/2411.14402) by Enrico Fini, Mustafa Shukor, Xiujun Li, Philipp Dufter, Michal Klein, David Haldimann, Sai Aitharaju, Victor Guilherme Turrisi da Costa, Louis Béthune, Zhe Gan, Alexander T Toshev, Marcin Eichner, Moin Nabi, Yinfei Yang, Joshua M. Susskind, Alaaeldin El-Nouby.
The abstract from the paper is the following:
*We introduce a novel method for pre-training of large-scale vision encoders. Building on recent advancements in autoregressive pre-training of vision models, we extend this framework to a multimodal setting, i.e., images and text. In this paper, we present AIMV2, a family of generalist vision encoders characterized by a straightforward pre-training process, scalability, and remarkable performance across a range of downstream tasks. This is achieved by pairing the vision encoder with a multimodal decoder that autoregressively generates raw image patches and text tokens. Our encoders excel not only in multimodal evaluations but also in vision benchmarks such as localization, grounding, and classification. Notably, our AIMV2-3B encoder achieves 89.5% accuracy on ImageNet-1k with a frozen trunk. Furthermore, AIMV2 consistently outperforms state-of-the-art contrastive models (e.g., CLIP, SigLIP) in multimodal image understanding across diverse settings.*
This model was contributed by [Yaswanth Gali](https://huggingface.co/yaswanthgali).
The original code can be found [here](https://github.com/apple/ml-aim).
## Usage Example
Here is an example of Image Feature Extraction using specific checkpoints on resized images and native resolution images:
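A minimal sketch of image feature extraction (the checkpoint name and test image are illustrative; the same code applies to the native-resolution checkpoints):

```py
import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModel

checkpoint = "apple/aimv2-large-patch14-224"  # illustrative checkpoint
processor = AutoImageProcessor.from_pretrained(checkpoint)
model = AutoModel.from_pretrained(checkpoint)

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# patch-level image features
print(outputs.last_hidden_state.shape)
```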
[Arcee](https://www.arcee.ai/blog/deep-dive-afm-4-5b-the-first-arcee-foundational-model) is a decoder-only transformer model based on the Llama architecture with a key modification: it uses ReLU² (ReLU-squared) activation in the MLP blocks instead of SiLU, following recent research showing improved training efficiency with squared activations. This architecture is designed for efficient training and inference while maintaining the proven stability of the Llama design.
The Arcee model is architecturally similar to Llama but uses `x * relu(x)` in MLP layers for improved gradient flow and is optimized for efficiency in both training and inference scenarios.
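As a standalone illustration of the activation (not the model's actual module, just the function it computes):

```py
import torch

def relu_squared(x: torch.Tensor) -> torch.Tensor:
    # ReLU-squared: x * relu(x), which equals relu(x) ** 2
    return x * torch.relu(x)
```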
> [!TIP]
> The Arcee model supports extended context with RoPE scaling and all standard transformers features including Flash Attention 2, SDPA, gradient checkpointing, and quantization support.
The example below demonstrates how to generate text with Arcee using [`Pipeline`] or the [`AutoModel`].
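A minimal sketch with [`Pipeline`] (the repository name below is an assumption; substitute the Arcee checkpoint you want to use):

```py
import torch
from transformers import pipeline

generator = pipeline(
    task="text-generation",
    model="arcee-ai/AFM-4.5B",  # assumed checkpoint name
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
print(generator("The secret to a good day is", max_new_tokens=30)[0]["generated_text"])
```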
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->
*This model was released on 2023-04-09 and added to Hugging Face Transformers on 2023-07-17.*
# Bark
## Overview
[Bark](https://huggingface.co/suno/bark) is a transformer-based text-to-speech model proposed by Suno AI in [suno-ai/bark](https://github.com/suno-ai/bark).
[BARThez](https://huggingface.co/papers/2010.12321) is a [BART](./bart) model designed for French language tasks. Unlike existing French BERT models, BARThez includes a pretrained encoder-decoder, allowing it to generate text as well. This model is also available as a multilingual variant, mBARThez, by continuing pretraining multilingual BART on a French corpus.
You can find all of the original BARThez checkpoints under the [BARThez](https://huggingface.co/collections/dascim/barthez-670920b569a07aa53e3b6887) collection.
> [!TIP]
> This model was contributed by [moussakam](https://huggingface.co/moussakam).
> Refer to the [BART](./bart) docs for more usage examples.

The BARThez model was proposed in [BARThez: a Skilled Pretrained French Sequence-to-Sequence Model](https://huggingface.co/papers/2010.12321) by Moussa Kamal Eddine, Antoine J.-P. Tixier, and Michalis Vazirgiannis on October 23, 2020. The abstract of the paper:
*Inductive transfer learning, enabled by self-supervised learning, have taken the entire Natural Language Processing
(NLP) field by storm, with models such as BERT and BART setting new state of the art on countless natural language
understanding tasks. While there are some notable exceptions, most of the available models and research have been
conducted for the English language. In this work, we introduce BARThez, the first BART model for the French language
(to the best of our knowledge). BARThez was pretrained on a very large monolingual French corpus from past research
that we adapted to suit BART's perturbation schemes. Unlike already existing BERT-based French language models such as
CamemBERT and FlauBERT, BARThez is particularly well-suited for generative tasks, since not only its encoder but also
its decoder is pretrained. In addition to discriminative tasks from the FLUE benchmark, we evaluate BARThez on a novel
summarization dataset, OrangeSum, that we release with this paper. We also continue the pretraining of an already
pretrained multilingual BART on BARThez's corpus, and we show that the resulting model, which we call mBARTHez,
provides a significant boost over vanilla BARThez, and is on par with or outperforms CamemBERT and FlauBERT.*
The original code can be found [here](https://github.com/moussaKam/BARThez).

The example below demonstrates how to predict the `<mask>` token with [`Pipeline`], [`AutoModel`], and from the command line.
<Tip>

The BARThez implementation is the same as BART, except for tokenization. Refer to the [BART documentation](bart) for information on configuration classes and their parameters. BARThez-specific tokenizers are documented below.

</Tip>

<hfoptions id="usage">
<hfoption id="Pipeline">

```py
import torch
from transformers import pipeline

pipeline = pipeline(
    task="fill-mask",
    model="moussaKam/barthez",
    torch_dtype=torch.float16,
    device=0
)
preds = pipeline("Les plantes produisent <mask> grâce à un processus appelé photosynthèse.")
print(f"The predicted token is: {preds[0]['token_str']}")
```

</hfoption>
<hfoption id="transformers CLI">

```bash
echo -e "Les plantes produisent <mask> grâce à un processus appelé photosynthèse." | transformers run --task fill-mask --model moussaKam/barthez --device 0
```

</hfoption>
</hfoptions>
[BARTpho](https://huggingface.co/papers/2109.09701) is a large-scale Vietnamese sequence-to-sequence model. It offers a word-based and syllable-based version. This model is built on the [BART](./bart) large architecture with its denoising pretraining.
You can find all the original checkpoints under the [VinAI](https://huggingface.co/vinai/models?search=bartpho) organization.
> [!TIP]
> This model was contributed by [dqnguyen](https://huggingface.co/dqnguyen).
> Check out the right sidebar for examples of how to apply BARTpho to different language tasks.

The BARTpho model was proposed in [BARTpho: Pre-trained Sequence-to-Sequence Models for Vietnamese](https://huggingface.co/papers/2109.09701) by Nguyen Luong Tran, Duong Minh Le, and Dat Quoc Nguyen. The abstract from the paper is the following:
*We present BARTpho with two versions -- BARTpho_word and BARTpho_syllable -- the first public large-scale monolingual
sequence-to-sequence models pre-trained for Vietnamese. Our BARTpho uses the "large" architecture and pre-training
scheme of the sequence-to-sequence denoising model BART, thus especially suitable for generative NLP tasks. Experiments
on a downstream task of Vietnamese text summarization show that in both automatic and human evaluations, our BARTpho
outperforms the strong baseline mBART and improves the state-of-the-art. We release BARTpho to facilitate future
research and applications of generative Vietnamese NLP tasks.*
The original code can be found [here](https://github.com/VinAIResearch/BARTpho).

The example below demonstrates how to summarize text with [`Pipeline`] or the [`AutoModel`] class.
<hfoptions id="usage">
<hfoption id="Pipeline">
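A minimal sketch with [`Pipeline`], mirroring the CLI example below (the dtype and device settings are illustrative):

```py
import torch
from transformers import pipeline

summarizer = pipeline(
    task="summarization",
    model="vinai/bartpho-word",
    torch_dtype=torch.float16,
    device=0
)
print(summarizer("Quang tổng hợp hay gọi tắt là quang hợp là quá trình thu nhận và chuyển hóa năng lượng ánh sáng Mặt trời của thực vật, tảo và một số vi khuẩn để tạo ra hợp chất hữu cơ phục vụ bản thân cũng như làm nguồn thức ăn cho hầu hết các sinh vật trên Trái Đất."))
```

</hfoption>
<hfoption id="transformers CLI">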
```bash
echo -e "Quang tổng hợp hay gọi tắt là quang hợp là quá trình thu nhận và chuyển hóa năng lượng ánh sáng Mặt trời của thực vật,
tảo và một số vi khuẩn để tạo ra hợp chất hữu cơ phục vụ bản thân cũng như làm nguồn thức ăn cho hầu hết các sinh vật
trên Trái Đất. Quang hợp trong thực vật thường liên quan đến chất tố diệp lục màu xanh lá cây và tạo ra oxy như một sản phẩm phụ"|\
transformers run --task summarization --model vinai/bartpho-word --device 0
```
</hfoption>
</hfoptions>
## Notes
- BARTpho uses the large architecture of BART with an additional layer-normalization layer on top of the encoder and decoder. The BART-specific classes should be replaced with the mBART-specific classes.
- This implementation only handles tokenization through the `monolingual_vocab_file`, a Vietnamese-specialized subset of token types extracted from the pretrained SentencePiece `vocab_file` available from multilingual XLM-RoBERTa. Other languages that use this pretrained multilingual SentencePiece model for subword segmentation can reuse `BartphoTokenizer` with their own language-specialized `monolingual_vocab_file`.
[BigBirdPegasus](https://huggingface.co/papers/2007.14062) is an encoder-decoder (sequence-to-sequence) transformer model for long-input summarization. It extends the [BigBird](./big_bird) architecture with an additional pretraining objective borrowed from [Pegasus](./pegasus) called gap sequence generation (GSG). Whole sentences are masked and the model has to fill in the gaps in the document. BigBirdPegasus's ability to keep track of long contexts makes it effective at summarizing lengthy inputs, surpassing the performance of base Pegasus models.
You can find all the original BigBirdPegasus checkpoints under the [Google](https://huggingface.co/google/models?search=bigbird-pegasus) organization.

> [!TIP]
> This model was contributed by [vasudevgupta](https://huggingface.co/vasudevgupta).
>
> Click on the BigBirdPegasus models in the right sidebar for more examples of how to apply BigBirdPegasus to different language tasks.

The BigBird model was proposed in [Big Bird: Transformers for Longer Sequences](https://huggingface.co/papers/2007.14062) by Manzil Zaheer, Guru Guruganesh, Kumar Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontanon, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, and others. BigBird is a sparse-attention based transformer which extends Transformer-based models, such as BERT, to much longer sequences. In addition to sparse attention, BigBird also applies global attention as well as random attention to the input sequence. Theoretically, it has been shown that applying sparse, global, and random attention approximates full attention, while being computationally much more efficient for longer sequences. As a consequence of the capability to handle longer context, BigBird has shown improved performance on various long document NLP tasks, such as question answering and summarization, compared to BERT or RoBERTa.

The abstract from the paper is the following:
*Transformers-based models, such as BERT, have been one of the most successful deep learning models for NLP.
Unfortunately, one of their core limitations is the quadratic dependency (mainly in terms of memory) on the sequence
length due to their full attention mechanism. To remedy this, we propose, BigBird, a sparse attention mechanism that
reduces this quadratic dependency to linear. We show that BigBird is a universal approximator of sequence functions and
is Turing complete, thereby preserving these properties of the quadratic, full attention model. Along the way, our
theoretical analysis reveals some of the benefits of having O(1) global tokens (such as CLS), that attend to the entire
sequence as part of the sparse attention mechanism. The proposed sparse attention can handle sequences of length up to
8x of what was previously possible using similar hardware. As a consequence of the capability to handle longer context,
BigBird drastically improves performance on various NLP tasks such as question answering and summarization. We also
propose novel applications to genomics data.*
The original code can be found [here](https://github.com/google-research/bigbird).

The example below demonstrates how to summarize text with [`Pipeline`], [`AutoModel`], and from the command line.

<hfoptions id="usage">
<hfoption id="Pipeline">

```py
import torch
from transformers import pipeline

pipeline = pipeline(
    task="summarization",
    model="google/bigbird-pegasus-large-arxiv",
    torch_dtype=torch.float32,
    device=0
)
pipeline("""Plants are among the most remarkable and essential life forms on Earth, possessing a unique ability to produce their own food through a process known as photosynthesis. This complex biochemical process is fundamental not only to plant life but to virtually all life on the planet.
Through photosynthesis, plants capture energy from sunlight using a green pigment called chlorophyll, which is located in specialized cell structures called chloroplasts. In the presence of light, plants absorb carbon dioxide from the atmosphere through small pores in their leaves called stomata, and take in water from the soil through their root systems.
These ingredients are then transformed into glucose, a type of sugar that serves as a source of chemical energy, and oxygen, which is released as a byproduct into the atmosphere. The glucose produced during photosynthesis is not just used immediately; plants also store it as starch or convert it into other organic compounds like cellulose, which is essential for building their cellular structure.
This energy reserve allows them to grow, develop leaves, produce flowers, bear fruit, and carry out various physiological processes throughout their lifecycle.""")
```
</hfoption>
<hfoption id="AutoModel">

```py
# Assumes a tokenizer and model loaded from "google/bigbird-pegasus-large-arxiv",
# e.g. with `AutoTokenizer` and `AutoModelForSeq2SeqLM`.
input_text = """Plants are among the most remarkable and essential life forms on Earth, possessing a unique ability to produce their own food through a process known as photosynthesis. This complex biochemical process is fundamental not only to plant life but to virtually all life on the planet.
Through photosynthesis, plants capture energy from sunlight using a green pigment called chlorophyll, which is located in specialized cell structures called chloroplasts. In the presence of light, plants absorb carbon dioxide from the atmosphere through small pores in their leaves called stomata, and take in water from the soil through their root systems.
These ingredients are then transformed into glucose, a type of sugar that serves as a source of chemical energy, and oxygen, which is released as a byproduct into the atmosphere. The glucose produced during photosynthesis is not just used immediately; plants also store it as starch or convert it into other organic compounds like cellulose, which is essential for building their cellular structure.
This energy reserve allows them to grow, develop leaves, produce flowers, bear fruit, and carry out various physiological processes throughout their lifecycle."""

inputs = tokenizer(input_text, return_tensors="pt")
summary_ids = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```

</hfoption>
<hfoption id="transformers CLI">

```bash
echo -e "Plants are among the most remarkable and essential life forms on Earth, possessing a unique ability to produce their own food through a process known as photosynthesis. This complex biochemical process is fundamental not only to plant life but to virtually all life on the planet. Through photosynthesis, plants capture energy from sunlight using a green pigment called chlorophyll, which is located in specialized cell structures called chloroplasts." | transformers run --task summarization --model google/bigbird-pegasus-large-arxiv --device 0
```

</hfoption>
</hfoptions>
Quantization reduces the memory burden of large models by representing the weights in a lower precision. Refer to the [Quantization](../quantization/overview) overview for more available quantization backends.
The example below uses [bitsandbytes](../quantization/bitsandbytes) to only quantize the weights to int4.
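A minimal sketch of what this could look like (treat the 4-bit settings and the generation length as illustrative defaults):

```py
import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer, BitsAndBytesConfig

quantization_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained("google/bigbird-pegasus-large-arxiv")
model = AutoModelForSeq2SeqLM.from_pretrained(
    "google/bigbird-pegasus-large-arxiv",
    quantization_config=quantization_config,
    device_map="auto",
)

input_text = "Plants are among the most remarkable and essential life forms on Earth, possessing a unique ability to produce their own food through a process known as photosynthesis."
inputs = tokenizer(input_text, return_tensors="pt").to(model.device)
summary_ids = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```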
## Notes

- BigBirdPegasus also uses the [`PegasusTokenizer`].
- Inputs should be padded on the right because BigBird uses absolute position embeddings.
- BigBirdPegasus supports `original_full` and `block_sparse` attention. If the input sequence length is less than 1024, it is recommended to use `original_full` since sparse patterns don't offer much benefit for smaller inputs.
- The current implementation uses window size of 3 blocks and 2 global blocks, only supports the ITC-implementation, and doesn't support `num_random_blocks=0`.
- The sequence length must be divisible by the block size.
Read the [Understanding BigBird's Block Sparse Attention](https://huggingface.co/blog/big-bird) blog post for more details about how BigBird's attention works.
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
*This model was released on 2025-04-16 and added to Hugging Face Transformers on 2025-04-28.*
# BitNet