pytorch/benchmarks/dynamo/huggingface_models_list.txt
Boyuan Feng f76fdcaaf8 [Benchmark] cleanup huggingface models (#164815)
Prune models from the TorchInductor dashboard to reduce CI cost. This PR prunes the Hugging Face models according to the [doc](https://docs.google.com/document/d/1nLPNNAU-_M9Clx9FMrJ1ycdPxe-xRA54olPnsFzdpoU/edit?tab=t.0), reducing the list from 46 to 27 models.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/164815
Approved by: https://github.com/anijain2305, https://github.com/seemethere, https://github.com/huydhn, https://github.com/malfet
2025-10-08 03:21:04 +00:00


AlbertForMaskedLM,8
AllenaiLongformerBase,8
BartForCausalLM,8
BertForMaskedLM,32
BlenderbotForCausalLM,32
BlenderbotForConditionalGeneration,16
DebertaV2ForMaskedLM,8
DistilBertForMaskedLM,256
DistillGPT2,32
ElectraForCausalLM,64
GPT2ForSequenceClassification,8
GPTJForCausalLM,1
GPTJForQuestionAnswering,1
GPTNeoForCausalLM,32
GPTNeoForSequenceClassification,32
GoogleFnet,32
LayoutLMForMaskedLM,32
M2M100ForConditionalGeneration,64
MBartForCausalLM,8
MT5ForConditionalGeneration,32
MegatronBertForCausalLM,16
MobileBertForMaskedLM,256
OPTForCausalLM,4
PLBartForCausalLM,16
PegasusForCausalLM,128
RobertaForCausalLM,32
T5ForConditionalGeneration,8
T5Small,8
TrOCRForCausalLM,64
XGLMForCausalLM,32
XLNetLMHeadModel,16
YituTechConvBert,32
meta-llama/Llama-3.2-1B,8
google/gemma-2-2b,8
google/gemma-3-4b-it,8
openai/whisper-tiny,8
Qwen/Qwen3-0.6B,8
mistralai/Mistral-7B-Instruct-v0.3,8
openai/gpt-oss-20b,8
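
Each line pairs a model identifier with the batch size used for that model on the dashboard. Below is a minimal sketch of how such a `name,batch_size` list could be parsed; `load_model_list` is a hypothetical helper for illustration only, not the benchmark harness's actual loader.

```python
# Minimal sketch: read a "model_name,batch_size" list like the file above.
# load_model_list() is a hypothetical helper, not part of the dynamo benchmarks API.
from pathlib import Path
from typing import Dict


def load_model_list(path: str) -> Dict[str, int]:
    """Return a mapping of model name -> benchmark batch size."""
    models: Dict[str, int] = {}
    for line in Path(path).read_text().splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blank lines and comments
        name, batch_size = (part.strip() for part in line.split(","))
        models[name] = int(batch_size)
    return models


if __name__ == "__main__":
    models = load_model_list("benchmarks/dynamo/huggingface_models_list.txt")
    for name, batch_size in models.items():
        print(f"{name}: batch size {batch_size}")
```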