Mirror of https://github.com/pytorch/pytorch.git, synced 2025-11-11 22:34:53 +08:00
[BE] @serialTest decorator must be called (#157388)
Otherwise it turns the test into a trivial one (one that always succeeds), as the following example demonstrates:
```python
import torch
from torch.testing._internal.common_utils import serialTest, run_tests, TestCase


class MegaTest(TestCase):
    @serialTest
    def test_foo(self):
        if hasattr(self.test_foo, "pytestmark"):
            print("foo has attr and it is", self.test_foo.pytestmark)
        print("foo")

    @serialTest()
    def test_bar(self):
        if hasattr(self.test_bar, "pytestmark"):
            print("bar has attr and it is", self.test_bar.pytestmark)
        print("bar")


if __name__ == "__main__":
    run_tests()
```
That will print
```
test_bar (__main__.MegaTest.test_bar) ... bar has attr and it is [Mark(name='serial', args=(), kwargs={})]
bar
ok
test_foo (__main__.MegaTest.test_foo) ... ok
----------------------------------------------------------------------
Ran 2 tests in 0.013s
```
Added an assert in the decorator that the argument is a boolean, to prevent such silent skips in the future.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/157388
Approved by: https://github.com/clee2000
Commit 5e636d664a (parent eaf32fffb7), committed by PyTorch MergeBot.
```diff
@@ -9336,7 +9336,7 @@ class TestSDPA(TestCaseMPS):
         )
         self._compare_tensors(y.cpu(), y_ref)

-    @serialTest
+    @serialTest()
     def test_sdpa_fp32_no_memory_leak(self):
         def get_mps_memory_usage():
             return (torch.mps.current_allocated_memory() / (1024 * 1024),
```