Summary: Before this change, calling `model.train()` or `model.eval()` on an exported model triggered an unnecessary recompile even when the model was already in the desired mode. Training frameworks that assume `model.train()` is idempotent and call it before every training step therefore produced many tiny graphs and poor performance. This commit makes these calls no-ops when the model is already in the target train/eval mode.

Test Plan: python test/test_quantization.py -k TestQuantizePT2E.test_allow_exported_model_train_eval_idempotent

Pull Request resolved: https://github.com/pytorch/pytorch/pull/142239
Approved by: https://github.com/jerryzh168
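
For illustration only, a minimal sketch of the idempotency guard the summary describes. The helper names (`make_train_eval_idempotent`, `_switch_mode`, `_current_train_mode`) are hypothetical and stand in for the actual exported-model train/eval handling in PyTorch; the point is simply that the expensive mode switch is skipped when the model is already in the requested mode.

```python
import torch

def _switch_mode(model: torch.nn.Module, train_mode: bool) -> None:
    # Placeholder for the real work: for an exported model this would be the
    # (expensive) step that flips ops like dropout/batch norm between their
    # train and eval variants, which previously ran on every call.
    for m in model.modules():
        m.training = train_mode

def make_train_eval_idempotent(model: torch.nn.Module) -> torch.nn.Module:
    """Patch train()/eval() so repeated calls with the same mode are no-ops.

    Hypothetical helper illustrating the commit's idea, not the actual
    PyTorch implementation.
    """
    model._current_train_mode = None  # unknown until the first explicit call

    def train(mode: bool = True):
        if model._current_train_mode == mode:
            return model            # already in the target mode: do nothing
        _switch_mode(model, mode)   # run the costly switch at most once per mode change
        model._current_train_mode = mode
        return model

    model.train = train
    model.eval = lambda: train(False)
    return model

# Usage: a training loop that calls model.train() before every step no longer
# re-triggers the mode switch (and hence recompilation) each time.
m = make_train_eval_idempotent(
    torch.nn.Sequential(torch.nn.Linear(4, 4), torch.nn.Dropout()))
m.train()   # performs the switch once
m.train()   # no-op: already in train mode
m.eval()    # switches to eval
m.eval()    # no-op
```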