Fix meta load tensor incompatible issue (#7073)

The partitioned tensor does not need to be moved to the current device when
meta load is used.

Signed-off-by: Lai, Yejing <yejing.lai@intel.com>
Co-authored-by: Olatunji Ruwase <olruwase@microsoft.com>
Commit: 4b7e2c909f (parent e1903f0d0a)
Author: Yejing-Lai
Date: 2025-02-25 00:57:26 +08:00
Committed by: GitHub


@@ -48,7 +48,8 @@ def move(tensor, device):
     # to save host resources when DP > 1。
     if tensor.is_meta:
-        return torch.empty_like(tensor, device=device)
+        # Keep tensor in meta device if tensor is meta.
+        return tensor
     else:
         # Using new tensors help in freeing memory (after split for example) was done before by calling clone().
         # Using copy=True instead of clone() will help in case of cpu --> cpu.
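The patched behavior can be sketched in isolation. The standalone `move` helper below is a hypothetical reconstruction, not the exact DeepSpeed function: it keeps meta tensors untouched (a meta tensor carries only shape/dtype metadata, so materializing it with `torch.empty_like` would allocate real memory the meta-load path never uses), and otherwise copies via `tensor.to(device, copy=True)` as the surrounding diff comments describe.

```python
import torch

def move(tensor, device):
    # Meta tensors have no storage; leave them on the meta device so
    # the meta-load path does not allocate memory on the target device.
    if tensor.is_meta:
        return tensor
    # copy=True returns a fresh tensor even for cpu -> cpu, which
    # releases views (e.g. after split) the way clone() used to.
    return tensor.to(device, copy=True)

meta_t = torch.empty(4, 4, device="meta")
assert move(meta_t, "cpu") is meta_t      # meta tensor passes through
assert move(meta_t, "cpu").is_meta

cpu_t = torch.zeros(2, 2)
moved = move(cpu_t, "cpu")
assert moved.data_ptr() != cpu_t.data_ptr()  # fresh storage, values equal
assert torch.equal(moved, cpu_t)
```

The key design point of the fix is the early return: before the patch, a meta tensor was replaced with an uninitialized real tensor on `device`, which both wasted memory and broke callers expecting the tensor to remain on the meta device.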