Generalize torch._C._set_allocator_settings to be generic (#156175)

# Motivation
This PR moves the implementation of `torch.cuda.memory._set_allocator_settings` to `torch._C._accelerator_setAllocatorSettings`.
Since the original API was intended as a temporary/internal utility, I am not exposing the new function as a public API.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/156175
Approved by: https://github.com/albanD
ghstack dependencies: #149601, #157908, #150312, #156165
Author: Yu, Guangye
Date: 2025-07-30 09:14:37 +00:00
Committed by: PyTorch MergeBot
Parent: 1fc010a9d8
Commit: d3ce45012e
9 changed files with 26 additions and 34 deletions

```diff
@@ -4471,28 +4471,28 @@ class TestCudaMallocAsync(TestCase):
         with self.assertRaises(RuntimeError):
             torch.cuda.memory._set_allocator_settings("foo:1,bar:2")
-        with self.assertRaises(RuntimeError):
+        with self.assertRaises(ValueError):
             torch.cuda.memory._set_allocator_settings(
                 "garbage_collection_threshold:1.2"
             )
-        with self.assertRaises(RuntimeError):
+        with self.assertRaises(ValueError):
             torch.cuda.memory._set_allocator_settings("max_split_size_mb:2")
-        with self.assertRaises(RuntimeError):
+        with self.assertRaises(ValueError):
             torch.cuda.memory._set_allocator_settings("release_lock_on_cudamalloc:none")
-        with self.assertRaises(RuntimeError):
+        with self.assertRaises(ValueError):
             torch.cuda.memory._set_allocator_settings(
                 "pinned_use_cuda_host_register:none"
             )
-        with self.assertRaises(RuntimeError):
+        with self.assertRaises(ValueError):
             torch.cuda.memory._set_allocator_settings(
                 "pinned_num_register_threads:none"
             )
-        with self.assertRaises(RuntimeError):
+        with self.assertRaises(ValueError):
             torch.cuda.memory._set_allocator_settings(
                 "pinned_num_register_threads:1024"
             )
```
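The updated test expectations encode a distinction worth noting: an unrecognized setting key still raises `RuntimeError` (the `"foo:1,bar:2"` case is unchanged), while a recognized key with an out-of-range or malformed value now raises `ValueError`. The sketch below illustrates that validation split in plain Python; the function name, accepted keys, and bounds are illustrative stand-ins, not PyTorch's actual C++ implementation.

```python
# Hypothetical sketch of the error-type split the updated tests expect:
# unknown keys -> RuntimeError, recognized keys with bad values -> ValueError.
def set_allocator_settings(settings: str) -> dict:
    parsed = {}
    for item in settings.split(","):
        key, _, value = item.partition(":")
        if key == "garbage_collection_threshold":
            threshold = float(value)
            if not 0.0 < threshold < 1.0:
                # e.g. "garbage_collection_threshold:1.2" is out of range
                raise ValueError(f"threshold must be in (0, 1), got {threshold}")
            parsed[key] = threshold
        elif key == "max_split_size_mb":
            size = int(value)
            if size < 20:  # illustrative lower bound
                raise ValueError(f"max_split_size_mb too small: {size}")
            parsed[key] = size
        elif key in ("release_lock_on_cudamalloc", "pinned_use_cuda_host_register"):
            if value not in ("True", "False"):
                # e.g. ":none" is not a valid boolean
                raise ValueError(f"{key} expects True or False, got {value!r}")
            parsed[key] = value == "True"
        elif key == "pinned_num_register_threads":
            threads = int(value)  # int("none") itself raises ValueError
            if not (1 <= threads <= 128 and threads & (threads - 1) == 0):
                # e.g. 1024 is a power of two but exceeds the cap
                raise ValueError(f"expected a power of 2 <= 128, got {threads}")
            parsed[key] = threads
        else:
            # unknown key: structural error, not a value error
            raise RuntimeError(f"Unrecognized allocator setting: {key!r}")
    return parsed
```

Keeping `RuntimeError` for unknown keys while switching invalid values to `ValueError` matches Python's own convention, where `ValueError` means "right type, inappropriate value".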