Add minor doc improvements and ban using training IR for the unflattener (#135549)


Differential Revision: [D62412490](https://our.internmc.facebook.com/intern/diff/D62412490/)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/135549
Approved by: https://github.com/yushangdi
Tugsbayasgalan Manlaibaatar
2024-09-09 17:26:20 -07:00
committed by PyTorch MergeBot
parent c0d2f991b1
commit c18052da0e
2 changed files with 7 additions and 1 deletion


@@ -88,7 +88,11 @@ def export_for_training(
 flow and data structures (with certain exceptions), and (3) records the set of
 shape constraints needed to show that this normalization and control-flow elimination
 is sound for future inputs. This API is intended for PT2 quantization training use cases
-and will soon be the default IR of torch.export.export in the near future.
+and will soon be the default IR of torch.export.export in the near future. To read further about
+the motivation behind this change, please refer to
+https://dev-discuss.pytorch.org/t/why-pytorch-does-not-need-a-new-standardized-operator-set/2206
+With this API, and :func:`run_decompositions()`, you should be able to get inference IR with
+your custom decomposition behaviour.
 **Soundness Guarantee**
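
As a hedged illustration of the workflow this docstring describes (the module M, its inputs, and the print call are hypothetical, not part of this PR): capture training IR with export_for_training, then lower it to inference IR via run_decompositions().

    import torch
    from torch.export import export_for_training

    class M(torch.nn.Module):
        def forward(self, x):
            return torch.nn.functional.relu(x + 1)

    # Capture pre-decomposition training IR (slated to become the
    # default IR of torch.export.export).
    ep = export_for_training(M(), (torch.randn(3),))

    # Lower to inference IR; a custom decomposition table can be
    # passed here to control the lowering behaviour.
    inference_ep = ep.run_decompositions()
    print(inference_ep.graph_module.code)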


@@ -548,6 +548,8 @@ def unflatten(
         An instance of :class:`UnflattenedModule`, which has the same module
         hierarchy as the original eager module pre-export.
     """
+    if module.verifier.dialect == "TRAINING":
+        raise RuntimeError("Unflattener doesn't support non-functional training IR yet")
     module = _remove_effect_tokens(module)
     return UnflattenedModule(module, flat_args_adapter)
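
A minimal sketch of the new guard in action (the Wrapper module and input shapes are illustrative assumptions, not from the PR): unflattening a training-IR program now raises, so run_decompositions() must be applied first.

    import torch
    from torch.export import export_for_training, unflatten

    class Wrapper(torch.nn.Module):
        def __init__(self):
            super().__init__()
            # A nested submodule, so the restored hierarchy is visible.
            self.inner = torch.nn.Linear(4, 4)

        def forward(self, x):
            return self.inner(x).relu()

    ep = export_for_training(Wrapper(), (torch.randn(2, 4),))

    try:
        unflatten(ep)  # training IR: ep.verifier.dialect == "TRAINING"
    except RuntimeError as e:
        print(e)  # Unflattener doesn't support non-functional training IR yet

    # Workaround: lower to functional inference IR first, then unflatten.
    unflattened = unflatten(ep.run_decompositions())
    out = unflattened(torch.randn(2, 4))

The guard keys off the verifier dialect because training IR is non-functional (it may still contain mutating ops), which, per the error message, the unflattener does not support yet.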