pytorch/torch/testing/_internal/distributed/fake_pg.py
Edward Yang 09cb34c1dc [RELAND] Always build USE_DISTRIBUTED (#160449) and Make distributed modules importable even when backend not built (#159889) (#162594)
Summary:
Original: D81957844 and D81957923

Also, https://github.com/pytorch/pytorch/pull/162142 is patched in.

#buildall

Test Plan:
sandcastle and oss ci

Rollback Plan:

Reviewed By: H-Huang

Pull Request resolved: https://github.com/pytorch/pytorch/pull/162594
Approved by: https://github.com/H-Huang, https://github.com/dcci
2025-09-22 21:12:18 +00:00

# mypy: allow-untyped-defs
import torch.distributed as dist
from torch.distributed._distributed_c10d import FakeProcessGroup


class FakeStore(dist.Store):
    """
    A fake store is a fake key-value store used simply for initialization
    of the fake process group; one can use either FakeStore or HashStore.
    """


def _create_fake_pg(common_opts, backend_opts):
    """
    A fake process group (not related to FakeTensor) is a process group which
    doesn't actually do any communication; it just hallucinates some
    communication. You can run a single rank with a fake process group
    without needing multiple processes (it simulates per-rank behavior).

    NOTE: This is not a real process group, and it would produce wrong results
    for every collective. It should be used as a convenient tool when playing
    with distributed code without caring about the actual data.
    """
    return FakeProcessGroup(
        common_opts.group_rank, common_opts.group_size, backend_opts
    )


dist.Backend.register_backend(
    "fake", _create_fake_pg, extended_api=True, devices=["cpu", "cuda", "hpu", "xpu"]
)