Commit Graph

4 Commits

Author SHA1 Message Date
1022443168 Revert D30279364: [codemod][lint][fbcode/c*] Enable BLACK by default
Test Plan: revert-hammer

Differential Revision: D30279364 (b004307252)

Original commit changeset: c1ed77dfe43a

fbshipit-source-id: eab50857675c51e0088391af06ec0ecb14e2347e
2021-08-12 11:45:01 -07:00
b004307252 [codemod][lint][fbcode/c*] Enable BLACK by default
Test Plan: manual inspection & sandcastle

Reviewed By: zertosh

Differential Revision: D30279364

fbshipit-source-id: c1ed77dfe43a3bde358f92737cd5535ae5d13c9a
2021-08-12 10:58:35 -07:00
ffe0c1ae4d Make test_torch.py pass cuda-memcheck (#29243)
Summary:
Make the following changes:
- When there are more than 10k errors, cuda-memcheck shows only the first 10k; in that case the script should not raise an Exception
- Add an `UNDER_CUDA_MEMCHECK` environment variable to allow disabling `pin_memory` tests when running cuda-memcheck.
- Add a `--ci` command option; when turned on, the script writes its output to stdout instead of to a file, and exits with an error if cuda-memcheck fails
- Add a `--nohang` command option; when turned on, a hang is treated as a pass instead of an error
- Do simple filtering on the tests to run: skip tests whose name contains `'cpu'` but not `'cuda'`
- Add `--split` and `--rank` to allow splitting the work (NVIDIA CI has a 3-hour limit, so the work has to be split to satisfy it)
- The error summary line ends with either `error` or `errors` (e.g. `ERROR SUMMARY: 1 error` vs. `ERROR SUMMARY: 2 errors`), so its length varies; the script now handles both forms (a sketch follows this list)
- Ignore errors from `cufft`
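
For illustration, here is a minimal Python sketch of the summary parsing and test filtering described above (the helper names `num_errors` and `should_run` are hypothetical, not taken from the actual script):

```
import re

# Matches both "ERROR SUMMARY: 1 error" and "ERROR SUMMARY: 2 errors"
SUMMARY_RE = re.compile(r'ERROR SUMMARY: (\d+) errors?')

def num_errors(memcheck_output: str) -> int:
    """Extract the error count from cuda-memcheck's ERROR SUMMARY line."""
    match = SUMMARY_RE.search(memcheck_output)
    return int(match.group(1)) if match else 0

def should_run(test_name: str) -> bool:
    """Skip CPU-only tests: names containing 'cpu' but not 'cuda'."""
    return not ('cpu' in test_name and 'cuda' not in test_name)
```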
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29243

Differential Revision: D18941701

Pulled By: mruberry

fbshipit-source-id: 2048428f32b66ef50c67444c03ce4dd9491179d2
2019-12-14 20:29:58 -08:00
f42768f8c0 Add scripts to run cuda-memcheck (#28127)
Summary:
This PR adds scripts that could be used for https://github.com/pytorch/pytorch/issues/26052

Example output:

```
Success: TestTorchDeviceTypeCPU.test_advancedindex_big_cpu
Success: TestTorchDeviceTypeCPU.test_addcmul_cpu
Success: TestTorchDeviceTypeCPU.test_addbmm_cpu_float32
Success: TestTorchDeviceTypeCPU.test_advancedindex_cpu_float16
Success: TestTorchDeviceTypeCPU.test_addmv_cpu
Success: TestTorchDeviceTypeCPU.test_addcdiv_cpu
Success: TestTorchDeviceTypeCPU.test_all_any_empty_cpu
Success: TestTorchDeviceTypeCPU.test_atan2_cpu
Success: TestTorchDeviceTypeCPU.test_advancedindex_cpu_float64
Success: TestTorchDeviceTypeCPU.test_baddbmm_cpu_float32
Success: TestTorchDeviceTypeCPU.test_atan2_edgecases_cpu
Success: TestTorchDeviceTypeCPU.test_add_cpu
Success: TestTorchDeviceTypeCPU.test_addr_cpu_bfloat16
Success: TestTorchDeviceTypeCPU.test_addr_cpu_float32
```
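
For illustration, a minimal sketch of how a wrapper like this might run a single test under cuda-memcheck and emit a result line in the format above (the function name and exact command line are assumptions, not the actual script added by this PR):

```
import subprocess

def run_one(test_name: str, timeout: int = 600) -> None:
    # Run one test under cuda-memcheck and report Success/Fail/Timeout.
    cmd = ['cuda-memcheck', 'python', 'test_torch.py', test_name]
    try:
        proc = subprocess.run(cmd, capture_output=True, text=True, timeout=timeout)
    except subprocess.TimeoutExpired:
        print(f'Timeout: {test_name}')
        return
    # cuda-memcheck ends its report with an "ERROR SUMMARY: N error(s)" line
    ok = 'ERROR SUMMARY: 0 errors' in proc.stdout
    print(('Success' if ok else 'Fail') + f': {test_name}')

run_one('TestTorchDeviceTypeCPU.test_addcmul_cpu')
```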
Pull Request resolved: https://github.com/pytorch/pytorch/pull/28127

Differential Revision: D18184255

Pulled By: mruberry

fbshipit-source-id: 7fd4bd9faf9f8b37b369f631c63f26eb965b16e7
2019-10-29 12:05:29 -07:00