Mirror of https://github.com/pytorch/pytorch.git, synced 2025-10-20 12:54:11 +08:00
Revert "Fix test failure in TestCudaMultiGPU.test_cuda_device_memory_allocated (#105501)"
This reverts commit e6fd8ca3eef2b85b821936829e86beb7d832575c. Reverted https://github.com/pytorch/pytorch/pull/105501 on behalf of https://github.com/zou3519 due to We've agreed that the PR is wrong. It didn't actually break anything. ([comment](https://github.com/pytorch/pytorch/pull/105501#issuecomment-1648005842))
@@ -1285,7 +1285,7 @@ t2.start()
         device_count = torch.cuda.device_count()
         current_alloc = [memory_allocated(idx) for idx in range(device_count)]
         x = torch.ones(10, device="cuda:0")
-        self.assertGreaterEqual(memory_allocated(0), current_alloc[0])
+        self.assertGreater(memory_allocated(0), current_alloc[0])
         self.assertTrue(all(memory_allocated(torch.cuda.device(idx)) == current_alloc[idx] for idx in range(1, device_count)))
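The restored assertion is strict: creating a tensor on `cuda:0` must strictly increase that device's allocated-byte counter, while every other device's counter stays unchanged. The test's logic can be sketched device-agnostically with a toy per-device allocator — a minimal illustration only, not the PyTorch CUDA caching allocator; `memory_allocated` and `alloc` here are stand-ins, not `torch.cuda` APIs:

```python
# Toy model of the per-device allocated-bytes counters the test inspects.
device_count = 4
allocated = [0] * device_count  # bytes currently allocated per device

def memory_allocated(idx):
    """Stand-in for torch.cuda.memory_allocated(idx)."""
    return allocated[idx]

def alloc(idx, nbytes):
    """Stand-in for creating a tensor on device idx."""
    allocated[idx] += nbytes

# Snapshot counters, then allocate on device 0 only.
current_alloc = [memory_allocated(idx) for idx in range(device_count)]
alloc(0, 10 * 4)  # e.g. torch.ones(10) -> 10 float32 elements, 40 bytes

# The revert restores the strict ">" check on the allocating device...
assert memory_allocated(0) > current_alloc[0]
# ...while the other devices' counters are untouched.
assert all(memory_allocated(idx) == current_alloc[idx]
           for idx in range(1, device_count))
```

With `assertGreaterEqual`, the first check would also pass when the allocation changed nothing, which is exactly the weakening this commit reverts.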