Change BN to eval before QAT Convert phase (#130598)

**Summary**
In the QAT convert phase, we fold BN into conv and then eliminate the BN node via DCE. We should change `torch.ops.aten._native_batch_norm_legit.default` to `torch.ops.aten._native_batch_norm_legit_no_training.default` for a safe DCE, since the training-mode op updates running stats and is therefore not side-effect free.
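For context, folding inference-mode BN into the preceding conv rewrites the conv's weight and bias so the BN node becomes dead and can be eliminated. A minimal numpy sketch of the folding arithmetic (a 1x1 conv modeled as a matmul; all names here are illustrative, not PyTorch internals):

```python
import numpy as np

def fold_bn_into_conv(W, b, gamma, beta, mean, var, eps=1e-5):
    """Fold inference-mode batch norm parameters into the preceding
    conv's weight and bias, per output channel."""
    scale = gamma / np.sqrt(var + eps)   # per-channel BN multiplier
    W_folded = W * scale[:, None]        # scale each output channel's weights
    b_folded = (b - mean) * scale + beta
    return W_folded, b_folded

rng = np.random.default_rng(0)
C_out, C_in = 4, 3
W = rng.standard_normal((C_out, C_in))
b = rng.standard_normal(C_out)
gamma, beta = rng.standard_normal(C_out), rng.standard_normal(C_out)
mean, var = rng.standard_normal(C_out), rng.random(C_out) + 0.1

x = rng.standard_normal(C_in)
# Reference: conv followed by inference-mode BN.
y_ref = gamma * ((W @ x + b) - mean) / np.sqrt(var + 1e-5) + beta
Wf, bf = fold_bn_into_conv(W, b, gamma, beta, mean, var)
assert np.allclose(y_ref, Wf @ x + bf)  # folded conv matches conv + BN
```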

Pull Request resolved: https://github.com/pytorch/pytorch/pull/130598
Approved by: https://github.com/jgong5, https://github.com/yushangdi
Author: leslie-fang-intel
Date: 2024-07-11 18:19:01 -07:00
Committed by: PyTorch MergeBot
Parent: 18418a7dbb
Commit: 2a1f22e57f


```diff
@@ -2958,6 +2958,6 @@ def _generate_qdq_quantized_model(
         else prepare_pt2e(export_model, quantizer)
     )
     prepare_model(*inputs)
+    torch.ao.quantization.move_exported_model_to_eval(prepare_model)
     convert_model = convert_pt2e(prepare_model)
-    torch.ao.quantization.move_exported_model_to_eval(convert_model)
     return convert_model
```