Remove llava from ci_expected_accuracy as it's flaky (#121322)

https://github.com/pytorch/pytorch/pull/121029 added llava to CI, but the test is flaky on HUD: it alternates between fail_accuracy and fail_to_run.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/121322
Approved by: https://github.com/desertfire
Author: angelayi
Date: 2024-03-06 20:47:01 +00:00
Committed by: PyTorch MergeBot
Commit: 58ac4a2007 (parent 23fb37fa41)
3 changed files with 1 addition and 8 deletions
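
For context on why an alternating status breaks this gate: the expected-accuracy CSVs below pin one status per model, and the job fails whenever the observed status differs from the pinned one, so a model that flips between fail_accuracy and fail_to_run fails CI on roughly every other run. Below is a minimal sketch of that check and of how a skip list sidesteps it; all names (SKIP_CPU, load_expected, classify) are hypothetical and do not match the actual PyTorch benchmark harness.

import csv

# Hypothetical skip set mirroring the YAML change below; skipped models are
# never compared against the expected-accuracy CSV at all.
SKIP_CPU = {"hf_Whisper", "stable_diffusion_text_encoder", "llava"}

def load_expected(path):
    # CSV rows look like "llama,pass,0": model name, pinned status, graph breaks.
    with open(path, newline="") as f:
        return {row["name"]: row["accuracy"] for row in csv.DictReader(f)}

def classify(model, observed_status, expected):
    if model in SKIP_CPU:
        # A skipped model is never evaluated, so a nondeterministic status
        # (fail_accuracy vs. fail_to_run) can no longer flip the job red.
        return "skipped"
    if expected.get(model) == observed_status:
        return "expected"    # matches the pinned CSV entry; CI stays green
    return "unexpected"      # e.g. CSV pins fail_accuracy but the run hit fail_to_run

Pinning llava as fail_accuracy (the pre-PR state) makes every fail_to_run occurrence an "unexpected" result; moving it to the skip list trades that one model's coverage for a deterministic CI signal.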

File 1 of 3: a ci_expected_accuracy CSV (filename not captured)

@@ -178,10 +178,6 @@ llama,pass,0
-llava,fail_accuracy,0
 maml,pass_due_to_skip,0

File 2 of 3: another ci_expected_accuracy CSV (filename not captured)

@@ -138,10 +138,6 @@ llama,pass,0
-llava,fail_accuracy,0
 maml,pass_due_to_skip,0

File 3 of 3: a benchmark skip-config YAML (filename not captured)

@@ -182,6 +182,7 @@ skip:
     # works on cuda, accuracy failure on cpu
     - hf_Whisper
     - stable_diffusion_text_encoder
+    - llava
   cuda: []