Revert "Fix test failure in TestCudaMultiGPU.test_cuda_device_memory_allocated (#105501)"

This reverts commit e6fd8ca3eef2b85b821936829e86beb7d832575c.

Reverted https://github.com/pytorch/pytorch/pull/105501 on behalf of https://github.com/zou3519 due to We've agreed that the PR is wrong. It didn't actually break anything. ([comment](https://github.com/pytorch/pytorch/pull/105501#issuecomment-1648005842))
This commit is contained in:
Author: PyTorch MergeBot
Date:   2023-07-24 14:18:44 +00:00
Parent: 33b855e906
Commit: 1600585219


@@ -1285,7 +1285,7 @@ t2.start()
         device_count = torch.cuda.device_count()
         current_alloc = [memory_allocated(idx) for idx in range(device_count)]
         x = torch.ones(10, device="cuda:0")
-        self.assertGreaterEqual(memory_allocated(0), current_alloc[0])
+        self.assertGreater(memory_allocated(0), current_alloc[0])
         self.assertTrue(all(memory_allocated(torch.cuda.device(idx)) == current_alloc[idx] for idx in range(1, device_count)))
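The one-character change above toggles the assertion from non-strict back to strict: `assertGreater` requires that allocating `x` on `cuda:0` actually grew the per-device counter, while `assertGreaterEqual` (introduced by the reverted PR) would also pass if the counter stayed flat. A minimal sketch of that difference, using plain numbers so no GPU is needed (the byte counts are made up for illustration):

```python
import unittest

tc = unittest.TestCase()

# Hypothetical memory_allocated(0) readings before and after creating a tensor.
before, after = 1024, 1064

# Strict check (what the revert restores): passes only if the counter grew.
tc.assertGreater(after, before)

# Non-strict check (what the reverted PR had used): also passes when the
# counter is unchanged, so it cannot detect a missing allocation.
tc.assertGreaterEqual(before, before)

# The strict check fails on an unchanged counter.
try:
    tc.assertGreater(before, before)
    strict_caught_no_growth = False
except AssertionError:
    strict_caught_no_growth = True
print(strict_caught_no_growth)
```

In other words, the strict form is the stronger regression guard here: it asserts that the `torch.ones(10, device="cuda:0")` allocation is visible in `memory_allocated(0)`, not merely that memory did not shrink.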