43 Commits

cyy
e9e1aacef8 Enable -Wunused on torch targets (#150077)
For GCC, ``-Wunused`` contains:
```
-Wunused-function
Warn whenever a static function is declared but not defined or a non-inline static function is unused.

-Wunused-label
Warn whenever a label is declared but not used.
To suppress this warning use the unused attribute.

-Wunused-parameter
Warn whenever a function parameter is unused aside from its declaration.
To suppress this warning use the unused attribute.

-Wunused-variable
Warn whenever a local variable or non-constant static variable is unused aside from its declaration.
To suppress this warning use the unused attribute.
```
For Clang, some of the diagnostics controlled by ``-Wunused`` are enabled by default:
```
Controls [-Wunused-argument](https://clang.llvm.org/docs/DiagnosticsReference.html#wunused-argument),
[-Wunused-but-set-variable](https://clang.llvm.org/docs/DiagnosticsReference.html#wunused-but-set-variable),
[-Wunused-function](https://clang.llvm.org/docs/DiagnosticsReference.html#wunused-function),
[-Wunused-label](https://clang.llvm.org/docs/DiagnosticsReference.html#wunused-label), [-Wunused-lambda-capture](https://clang.llvm.org/docs/DiagnosticsReference.html#wunused-lambda-capture),
[-Wunused-local-typedef](https://clang.llvm.org/docs/DiagnosticsReference.html#wunused-local-typedef),
[-Wunused-private-field](https://clang.llvm.org/docs/DiagnosticsReference.html#wunused-private-field),
[-Wunused-property-ivar](https://clang.llvm.org/docs/DiagnosticsReference.html#wunused-property-ivar),
[-Wunused-value](https://clang.llvm.org/docs/DiagnosticsReference.html#wunused-value), [-Wunused-variable](https://clang.llvm.org/docs/DiagnosticsReference.html#wunused-variable).
```
These checks are all useful. This PR aims to enable ``-Wunused`` without breaking code.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/150077
Approved by: https://github.com/zou3519, https://github.com/wdvr
2025-05-02 07:14:19 +00:00
6dadfc4457 Revert "Enable -Wunused on torch targets (#150077)"
This reverts commit 688adc9941f855e78dd4d595682eea16317b7f54.

Reverted https://github.com/pytorch/pytorch/pull/150077 on behalf of https://github.com/wdvr due to failing internally with use of undeclared identifier ([comment](https://github.com/pytorch/pytorch/pull/150077#issuecomment-2846499828))
2025-05-02 06:53:20 +00:00
cyy
688adc9941 Enable -Wunused on torch targets (#150077)
For GCC, ``-Wunused`` contains:
```
-Wunused-function
Warn whenever a static function is declared but not defined or a non-inline static function is unused.

-Wunused-label
Warn whenever a label is declared but not used.
To suppress this warning use the unused attribute.

-Wunused-parameter
Warn whenever a function parameter is unused aside from its declaration.
To suppress this warning use the unused attribute.

-Wunused-variable
Warn whenever a local variable or non-constant static variable is unused aside from its declaration.
To suppress this warning use the unused attribute.
```
For Clang, some of the diagnostics controlled by ``-Wunused`` are enabled by default:
```
Controls [-Wunused-argument](https://clang.llvm.org/docs/DiagnosticsReference.html#wunused-argument),
[-Wunused-but-set-variable](https://clang.llvm.org/docs/DiagnosticsReference.html#wunused-but-set-variable),
[-Wunused-function](https://clang.llvm.org/docs/DiagnosticsReference.html#wunused-function),
[-Wunused-label](https://clang.llvm.org/docs/DiagnosticsReference.html#wunused-label), [-Wunused-lambda-capture](https://clang.llvm.org/docs/DiagnosticsReference.html#wunused-lambda-capture),
[-Wunused-local-typedef](https://clang.llvm.org/docs/DiagnosticsReference.html#wunused-local-typedef),
[-Wunused-private-field](https://clang.llvm.org/docs/DiagnosticsReference.html#wunused-private-field),
[-Wunused-property-ivar](https://clang.llvm.org/docs/DiagnosticsReference.html#wunused-property-ivar),
[-Wunused-value](https://clang.llvm.org/docs/DiagnosticsReference.html#wunused-value), [-Wunused-variable](https://clang.llvm.org/docs/DiagnosticsReference.html#wunused-variable).
```
These checks are all useful. This PR aims to enable ``-Wunused`` without breaking code.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/150077
Approved by: https://github.com/zou3519
2025-05-01 04:09:06 +00:00
cyy
3907f36808 Turn some variables and functions into static (#136847)
Re-check some files, mark variables and functions as static, and fix other warnings.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/136847
Approved by: https://github.com/ezyang
2024-10-29 17:01:56 +00:00
b55f57b7af [codemod][lowrisk] Remove extra semicolon from caffe2/c10/core/SymNodeImpl.h (#123055)
Summary:
`-Wextra-semi` or `-Wextra-semi-stmt`

If the code compiles, this is safe to land.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/123055
Approved by: https://github.com/Skylion007
2024-05-14 19:35:29 +00:00
36e6f3b339 [caffe2] Make all get_backtrace() implementations lazy (#125750) (#126064)
Summary:

#125682 (D56586844) added support for lazy symbolization to `Error` and adopted it for internal use cases; this commit adopts it for `get_backtrace()` as well.

Test Plan:
Sandcastle and GH CI.

NOTE: This is a resubmit of D56881683; a spurious copy-pasted line in the Android implementation broke the build, but this was not surfaced by diff tests.

Reproed the breakage with
```
$ fbpython scripts/build_android_app/build_android_app.py --buck-config-files='@//fbandroid/mode/have_libgflags @//fbandroid/mode/static_linking @//xplat/langtech/mobile/android_opt_buck_config_with_et_boltnn' --build-target='fbsource//xplat/langtech/mobile:transcribe_binAndroid-android-arm64'
```
Verified that the fixed diff builds successfully.

Differential Revision: D57275456

Pull Request resolved: https://github.com/pytorch/pytorch/pull/126064
Approved by: https://github.com/ezyang
2024-05-13 20:17:41 +00:00
ee804d256b Revert "[caffe2] Make all get_backtrace() implementations lazy (#125750)"
This reverts commit cc4da72b47ef63b7c448f0de4cdbdd792e9195ea.

Reverted https://github.com/pytorch/pytorch/pull/125750 on behalf of https://github.com/facebook-github-bot due to Diff reverted internally ([comment](https://github.com/pytorch/pytorch/pull/125750#issuecomment-2105285301))
2024-05-10 21:23:10 +00:00
cc4da72b47 [caffe2] Make all get_backtrace() implementations lazy (#125750)
Summary: #125682 (D56586844) added support for lazy symbolization to `Error` and adopted it for internal use cases; this commit adopts it for `get_backtrace()` as well.

Test Plan: Sandcastle and GH CI.

Differential Revision: D56881683

Pull Request resolved: https://github.com/pytorch/pytorch/pull/125750
Approved by: https://github.com/ezyang
2024-05-10 16:02:40 +00:00
902a74c1d6 [caffe2] Lazily symbolize backtrace in c10::Error (#125787)
Summary:
The macros that build `c10::Error` compute the stack trace at the point of throwing, which is then returned as part of the `what()`. If `what()` is never called, which is the case for most exceptions (since logging is throttled), the cost of computing the stack trace was wasted.

By far, the most expensive part of computing the stack trace is its symbolization; just unwinding the stack and collecting the instruction addresses is comparatively cheap. We can thus defer the symbolization to first invocation of `what()`.

Test Plan:
Added unit tests exercising the lazy nature of `what()`.

Ran an adfinder canary: https://www.internalfb.com/intern/ads/canary/460118801509424346

We can see that the cost of symbolization is obliterated (meaning that `what()` is virtually never called, as expected):
 {F1496627896}

Differential Revision: D57128632

Pull Request resolved: https://github.com/pytorch/pytorch/pull/125787
Approved by: https://github.com/huydhn
2024-05-09 01:46:57 +00:00
e457fdcd81 Revert "[caffe2] Lazily symbolize backtrace in c10::Error (#125682)"
This reverts commit 08f6ef0e1ccadf4626c0d7ecb15db96c01b8f418.

Reverted https://github.com/pytorch/pytorch/pull/125682 on behalf of https://github.com/facebook-github-bot due to Diff reverted internally ([comment](https://github.com/pytorch/pytorch/pull/125682#issuecomment-2101477132))
2024-05-08 21:11:27 +00:00
08f6ef0e1c [caffe2] Lazily symbolize backtrace in c10::Error (#125682)
Summary:
The macros that build `c10::Error` compute the stack trace at the point of throwing, which is then returned as part of the `what()`. If `what()` is never called, which is the case for most exceptions (since logging is throttled), the cost of computing the stack trace was wasted.

By far, the most expensive part of computing the stack trace is its symbolization; just unwinding the stack and collecting the instruction addresses is comparatively cheap. We can thus defer the symbolization to first invocation of `what()`.

Test Plan:
Added unit tests exercising the lazy nature of `what()`.

Ran an adfinder canary: https://www.internalfb.com/intern/ads/canary/460118801509424346

We can see that the cost of symbolization is obliterated (meaning that `what()` is virtually never called, as expected):
 {F1496627896}

Reviewed By: ezyang

Differential Revision: D56586844

Pull Request resolved: https://github.com/pytorch/pytorch/pull/125682
Approved by: https://github.com/ezyang
2024-05-08 04:57:59 +00:00
cyy
bae61ecb96 [Reland 1] Cleanup header inclusions in torch_cpu by iwyu (#112311)
Reland https://github.com/pytorch/pytorch/pull/101178 to use IWYU on torch_cpu. The header file changes are excluded to avoid breaking internal jobs.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/112311
Approved by: https://github.com/ezyang
2023-11-19 04:06:36 +00:00
83deaa16ed Revert "[1/N] Cleanup header inclusions in torch_cpu by iwyu (#101178)"
This reverts commit b7a95f4fdb8a79dc459cc757dafcdbd0953b1a62.

Reverted https://github.com/pytorch/pytorch/pull/101178 on behalf of https://github.com/atalman due to Break internal CI ([comment](https://github.com/pytorch/pytorch/pull/101178#issuecomment-1734384645))
2023-09-25 20:05:25 +00:00
cyy
b7a95f4fdb [1/N] Cleanup header inclusions in torch_cpu by iwyu (#101178)
Following our previous IWYU work in #100304 on c10, it makes sense to try IWYU on torch_cpu. This PR does exactly that. It also fixes issue #48684.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/101178
Approved by: https://github.com/ezyang
2023-09-24 05:01:20 +00:00
66a2600b6a [T153220354] Fix header inclusions in c10 (#1541) (#101846)
Summary:
This is a re-attempt to land the IWYU header changes: it takes the diff from [PR 100304](https://github.com/pytorch/pytorch/pull/100304) and adds the minimal changes needed to make the diff build correctly in the internal builds.

X-link: https://github.com/facebookresearch/pytorch3d/pull/1541

X-link: https://github.com/fairinternal/pytorch3d/pull/44

- Re-work D45769819 to fix header inclusions in c10

Test Plan:
```
buck2 build --no-remote-cache mode/dev-nosan //caffe2/c10/...

buck2 build --no-remote-cache mode/dev-nosan //deeplearning/fbgemm/fbgemm_gpu/...

buck2 build mode/dev-nosan //vision/fair/pytorch3d/pytorch3d:_C
```

Reviewed By: malfet

Differential Revision: D45920611

Pull Request resolved: https://github.com/pytorch/pytorch/pull/101846
Approved by: https://github.com/malfet, https://github.com/Skylion007
2023-05-20 19:35:14 +00:00
4eaaa08623 Revert "Fix header inclusions in c10 by iwyu (#100304)"
This reverts commit 6037ee8cc914d64a27965a35b20472044416a2a5.

Reverted https://github.com/pytorch/pytorch/pull/100304 on behalf of https://github.com/jeanschmidt due to Breaking meta internal builds and fbgemm builds ([comment](https://github.com/pytorch/pytorch/pull/100304#issuecomment-1543919257))
2023-05-11 12:37:35 +00:00
cyy
6037ee8cc9 Fix header inclusions in c10 by iwyu (#100304)
This work introduces include-what-you-use support for c10 via a CMake option that defaults to off. We also remove some unused header inclusions and fix a trivial inclusion error.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/100304
Approved by: https://github.com/ezyang
2023-05-11 05:19:42 +00:00
3271413e74 Revert "Fix header inclusions in c10 by iwyu (#100304)"
This reverts commit 39ec5fa722730f6c25490c2c33933b014767f297.

Reverted https://github.com/pytorch/pytorch/pull/100304 on behalf of https://github.com/huydhn due to Sorry for reverting your PR, it is almost there but fails on Windows 39ec5fa722, which is in unstable mode after https://github.com/pytorch/pytorch/pull/100548 ([comment](https://github.com/pytorch/pytorch/pull/100304#issuecomment-1542975714))
2023-05-11 00:37:32 +00:00
cyy
39ec5fa722 Fix header inclusions in c10 by iwyu (#100304)
This work introduces include-what-you-use support for c10 via a CMake option that defaults to off. We also remove some unused header inclusions and fix a trivial inclusion error.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/100304
Approved by: https://github.com/ezyang
2023-05-10 15:42:43 +00:00
97db9fde69 Fix header-filter for clang-tidy c10 and apply some fixes to c10 and c10d (#91178)

Fixes the broken header filters from #90699 and applies a few more relevant clang-tidy fixes to c10 and c10d. The header filter pattern was actually broken and the clang-tidy include pattern was redundant. Also fixes a few bugs in torch/distributed/c10d.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/91178
Approved by: https://github.com/ezyang
2022-12-27 07:34:12 +00:00
1dbc8ad3b7 Add Warning class and refactor C++ warnings to use it (#84101)
Also adds `TORCH_WARN_WITH` and `TORCH_WARN_DEPRECATION` macros

Part of #72948

Pull Request resolved: https://github.com/pytorch/pytorch/pull/84101
Approved by: https://github.com/albanD
2022-10-18 20:02:42 +00:00
a9b0a921d5 Disable avoid-non-const-global-variables lint check (#62008)
Summary:
Because the GoogleTest `TEST` macro is non-compliant with it, as is `DEFINE_DISPATCH`.

All changes but the ones to `.clang-tidy` are generated using following script:
```
for i in `find . -type f -iname "*.c*" -or -iname "*.h"|xargs grep cppcoreguidelines-avoid-non-const-global-variables|cut -f1 -d:|sort|uniq`;  do sed -i "/\/\/ NOLINTNEXTLINE(cppcoreguidelines-avoid-non-const-global-variables)/d" $i; done
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/62008

Reviewed By: driazati, r-barnes

Differential Revision: D29838584

Pulled By: malfet

fbshipit-source-id: 1b2f8602c945bd4ce50a9bfdd204755556e31d13
2021-07-22 18:04:40 -07:00
1733d10399 Warn when backward() is called with create_graph=True (#59412)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/4661
- Add warnings in engine's `execute` function so it can be triggered through both cpp and python codepaths
- Adds an RAII guard version of `c10::Warning::set_warnAlways` and replaces all prior usages of the set_warnAlways with the new one

Pull Request resolved: https://github.com/pytorch/pytorch/pull/59412

Reviewed By: jbschlosser

Differential Revision: D28969294

Pulled By: soulitzer

fbshipit-source-id: b03369c926a3be18ce1cf363b39edd82a14245f0
2021-06-08 17:19:04 -07:00
cd22bdf236 [PyTorch] Autoformat c10, round 2 (#57645)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/57645

Second round of autoformatting changes since the first pass became too large.
ghstack-source-id: 128199695

Test Plan: CI

Reviewed By: zertosh

Differential Revision: D28131430

fbshipit-source-id: 24b03e38b087f31e8cac2404bebcd401c55b6cab
2021-05-05 15:45:53 -07:00
b11a24209f [PyTorch] Take advantage of string literals in TORCH_WARN (#54032)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/54032

Add a `const char*` override to c10::Warning::warn so that we can avoid wrapping plain C string literals in std::string.
ghstack-source-id: 125544720

Test Plan: Buildsizebot some iOS apps?

Reviewed By: ezyang

Differential Revision: D27061983

fbshipit-source-id: dc11150c911a4317a8edac75e50c5ba43511ff24
2021-04-30 19:02:42 -07:00
cyy
83c23703b7 Some simple optimizations (#51831)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/51831

Reviewed By: albanD

Differential Revision: D26379122

Pulled By: VitalyFedyunin

fbshipit-source-id: d3562232f8501f2ad0b291586bf7f828e9b47010
2021-04-23 07:55:15 -07:00
087049000b Make c10 clang-tidy clean (#55870)
Summary:
This change was autogenerated by running:
```
% find c10 -iname "*.cpp" -exec python3 tools/clang_tidy.py -c build -x {} -s \;
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/55870

Reviewed By: janeyx99

Differential Revision: D27728617

Pulled By: malfet

fbshipit-source-id: bede4d7f0c106d51394d1e9efddf01bf894421c5
2021-04-14 11:23:28 -07:00
7cd9892f83 [PyTorch] Sync TORCH_INTERNAL_ASSERT optis with TORCH_CHECK (#52226)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/52226

This gets TORCH_INTERNAL_ASSERT to parity with TORCH_CHECK in terms of optimization for 0 or 1 argument.
ghstack-source-id: 121877054

(Note: this ignores all push blocking failures!)

Test Plan:
Compare generated assembly for
```
#include <c10/util/Exception.h>

void f(bool b) {
  TORCH_INTERNAL_ASSERT(b, "message");
}

void g(bool b) {
  TORCH_INTERNAL_ASSERT(b);
}

void h(bool b) {
  TORCH_INTERNAL_ASSERT(b, "message", random());
}
```

before/after this diff.
Before: P174916324
After: P174916411

Before, f and g called out to outlined lambdas to build
std::strings. After, they load string constants and call
torchInternalAssertFail. Similarly, h calls random() and c10::detail::_str_wrapper() inline and then calls out to torchInternalAssertFail. As with D26380783 (efbb854ed8), I hope to solve the problem of outlining the random & _str_wrapper calls separately.

Profile AdIndexer benchmark & verify that toTensor() is still inlined (it calls TORCH_INTERNAL_ASSERT with an integer argument, like `h` above).

Reviewed By: bhosmer

Differential Revision: D26410575

fbshipit-source-id: f82ffec8d302c9a51f7a82c65bc698fab01e1765
2021-02-19 12:45:40 -08:00
b97a040f71 ENH: toggle TORCH_WARN_ONCE to TORCH_WARN for tests (#48560)
Summary:
Toward fixing https://github.com/pytorch/pytorch/issues/47624

~Step 1: add `TORCH_WARN_MAYBE` which can either warn once or every time in c++, and add a c++ function to toggle the value.
Step 2 will be to expose this to python for tests. Should I continue in this PR or should we take a different approach: add the python level exposure without changing any c++ code and then over a series of PRs change each call site to use the new macro and change the tests to make sure it is being checked?~

Step 1: add a python and c++ toggle to convert TORCH_WARN_ONCE into TORCH_WARN so the warnings can be caught in tests
Step 2: add a python-level decorator to use this toggle in tests
Step 3: (in future PRs): use the decorator to catch the warnings instead of `maybeWarnsRegex`

Pull Request resolved: https://github.com/pytorch/pytorch/pull/48560

Reviewed By: ngimel

Differential Revision: D26171175

Pulled By: mruberry

fbshipit-source-id: d83c18f131d282474a24c50f70a6eee82687158f
2021-02-08 08:21:19 -08:00
630ee57bc2 [PyTorch] Provide overload of torchCheckFail taking const char* (#51389)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/51389

This should reduce code size when STRIP_ERROR_MESSAGES is defined by allowing callers of TORCH_CHECK to avoid creating `std::string`s.
ghstack-source-id: 120692772

Test Plan: Measure code size of STRIP_ERROR_MESSAGES builds

Reviewed By: ezyang

Differential Revision: D25891476

fbshipit-source-id: 34eef5af7464da6534989443859e2765887c243c
2021-02-01 16:48:46 -08:00
7d406b4a07 [PyTorch] Make TORCH_CHECK less likely to interfere with inlining (#49263)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/49263

Now it is smaller and calls an out-of-line function in case of failure.
ghstack-source-id: 118480531

Test Plan:
1) Inspect perf profile of internal benchmark, much less
time spent in (for example) `c10::impl::getDeviceImpl`, which calls
TORCH_CHECK and should be inlined
2) Internal benchmarks

Reviewed By: smessmer

Differential Revision: D25481308

fbshipit-source-id: 0121ada779ca2518ca717f75920420957b3bb1aa
2020-12-14 08:11:23 -08:00
a058e938f9 Refactor error msg stack handling, add TORCH_RETHROW (#37101)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37101

Fixes #36954.

The basic concept is to streamline the process of rethrowing
c10::Error with extra error information.  This is in a few
steps:

- I completely remodeled the Error data type and the internal
  invariants.  Instead of manually adding in newlines, the
  message stack formatting process is responsible for inserting
  newlines and spacing as necessary.  Call sites are then
  modified to respect the new API model.
- TORCH_RETHROW macro is added, which adds context to an error
  message and then rethrows it.

New internal assert failure looks like:

```
0 INTERNAL ASSERT FAILED at ../c10/test/util/exception_test.cpp:64, please report a bug to PyTorch.
Exception raised from TestBody at ../c10/test/util/exception_test.cpp:64 (most recent call first):
frame #0: <unknown function> + 0x6aab9 (0x7ff611d3aab9 in /data/users/ezyang/pytorch-tmp/build/lib/libc10.so)
frame #1: ...
```

Error message with context looks like:

```
This is an error
  This is context 1
  This is context 2
```

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Test Plan: Imported from OSS

Differential Revision: D21202891

Pulled By: ezyang

fbshipit-source-id: 361cadd16bc52e5886dba08e79277771ada76169
2020-05-04 11:56:45 -07:00
b64fc3c4b5 Changes warnings generated in cpp to show point of Python origination (#36052)
Summary:
Today in PyTorch, warnings triggered in C++ are printed to Python users like this:

`../aten/src/ATen/native/BinaryOps.cpp:81: UserWarning: Integer division of tensors using div or / is deprecated, and in a future release div will perform true division as in Python 3. Use true_divide or floor_divide (// in Python) instead.`

This may be unhelpful to Python users, who have complained it's difficult to relate these messages back to their programs. After this PR, warnings that go through the PyWarningHandler and allow it to add context print like this:

```
test/test_torch.py:16463: UserWarning: Integer division of tensors using div or / is deprecated, and in a future release div will perform true division as in Python 3. Use true_divide or floor_divide (// in Python) instead. (Triggered internally at  ../aten/src/ATen/native/BinaryOps.cpp:81.)
  cpu_result = getattr(cpu_tensor, op_str)(*cpu_args)
```

This relates the warning back to the user's program. The information about the cpp file and line number is preserved in the body of the warning message.

Some warnings, like those generated in the JIT, already account for a user's Python context, and so they specify that they should be printed verbatim and are unaffected by this change. Warnings originating in Python and warnings that go through c10's warning handler, which prints to cerr, are also unaffected.

A test is added to test_torch.py for this behavior. The test relies on uint8 indexing being deprecated and its warning originating from its current header file, which is an unfortunate dependency. We could implement a `torch.warn` function, instead.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/36052

Differential Revision: D20887740

Pulled By: mruberry

fbshipit-source-id: d3515c6658a387acb7fccaf83f23dbb452f02847
2020-04-25 21:18:58 -07:00
50a1850d8d [pytorch] Route default warning sync to LOG(WARNING) - second try (#36984)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/36984

Follow the LOG(WARNING) format for C++-side warnings in order to play well with larger services, especially when using glog. I need to hook into glog internals a bit in order to override FILE/LINE without having to change the whole thing to be macros, but it seems to be stable between glog versions.

Note, this also changes caffe2_log_level to warning by default - I think it's a much better default when compiling without glog (or maybe even info).

With glog output, stderr capture doesn't work any more in tests. That's why we instead use c10-level warnings capture.

Test Plan:
Run unittest in both glog and non-glog build mode:

glog:
```
W0416 12:06:49.778215 3311666 exception_test.cpp:23] Warning: I'm a warning (function TestBody)
```

no-glog:
```
[W exception_test.cpp:23] Warning: I'm a warning (function TestBody)
```

Reviewed By: ilia-cher

Differential Revision: D21151351

fbshipit-source-id: fa926d9e480db5ff696990dad3d80f79ef79f24a
2020-04-23 01:08:00 -07:00
30e7055ed7 Revert D21078446: [pytorch] Route default warning sync to LOG(WARNING)
Test Plan: revert-hammer

Differential Revision:
D21078446

Original commit changeset: b5d36aac54d6

fbshipit-source-id: adff2d7e396b2efdd29eeabfe393fbc55edbe635
2020-04-20 00:26:56 -07:00
9d5dda7c2f [pytorch] Route default warning sync to LOG(WARNING) (#36768)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/36768

Follow the LOG(WARNING) format for C++-side warnings in order to play well with larger services, especially when using glog. I need to hook into glog internals a bit in order to override FILE/LINE without having to change the whole thing to be macros, but it seems to be stable between glog versions.

Note, this also changes caffe2_log_level to warning by default - I think it's a much better default when compiling without glog (or maybe even info).

Test Plan:
Run unittest in both glog and non-glog build mode:

glog:
```
W0416 12:06:49.778215 3311666 exception_test.cpp:23] Warning: I'm a warning (function TestBody)
```

no-glog:
```
[W exception_test.cpp:23] Warning: I'm a warning (function TestBody)
```

Reviewed By: ilia-cher

Differential Revision: D21078446

fbshipit-source-id: b5d36aac54d6b6295a72de6754696ccafbcb84ca
2020-04-19 23:02:55 -07:00
15c7486416 Canonicalize includes in c10, and add tests for it (#36299)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/36299

Test Plan: Imported from OSS

Differential Revision: D20943005

Pulled By: ezyang

fbshipit-source-id: 9dd0a58824bd0f1b5ad259942f92954ba1f63eae
2020-04-10 12:07:52 -07:00
d6377b7cef Fix thread_local initialization in C10 WarningHandler. (#34822)
Summary:
The Windows + MSVC-specific bug discussed here: https://github.com/pytorch/pytorch/issues/19394 and fixed here: https://github.com/pytorch/pytorch/issues/22405 still appears in C10's warning handler class. This results in a crash if a user attempts to run code which would print a warning when that code is running inside a thread created by a DLL. This PR applies a similar fix to that of https://github.com/pytorch/pytorch/issues/22405.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34822

Test Plan:
* Tested locally by running CodecverseWorkbench Unity app with patched build.
* CI

Differential Revision: D20627971

Pulled By: HapeMask

fbshipit-source-id: 64dfca531ed7eebbe9e0ecac3d3d4d025c683883
2020-03-24 23:52:23 -07:00
9617d07bd5 Wrap warning handler in a function to avoid siof (#30800)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30800

The SparseNN benchmark crashed due to this.
Wrap the warning handler in a function to avoid SIOF (the static initialization order fiasco).

Test Plan: Tested locally, SparseNN benchmark no longer crashes.

Reviewed By: yinghai

Differential Revision: D18826731

fbshipit-source-id: 8fcab8a3f38cc20f775409c0686363af3c27d0a6
2019-12-05 11:22:15 -08:00
9b875e1256 Buffer python warning to avoid deadlocks
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/26613

Test Plan: Imported from OSS

Differential Revision: D18249633

Pulled By: albanD

fbshipit-source-id: 863f52400e1b97943a67a9e1abb09ae8d045e7f0
2019-11-07 08:35:06 -08:00
6e4be0af2e Fix failed type cast in Windows Debug Build (#15333)
Summary:
Fixes #15330
Pull Request resolved: https://github.com/pytorch/pytorch/pull/15333

Differential Revision: D13531317

Pulled By: soumith

fbshipit-source-id: b956f27bd7fa33cbdf405338fcbcbc7df2fd629f
2018-12-26 00:48:58 -08:00
a4475d529d Use GetFetchStackTrace for the AT_* error macros too. (#13007)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/13007

No reason not to use the hook if it's set; this helps fbcode traces.

This slightly pessimizes the stack trace for ATen functions,
because we are no longer skipping all of the frames we should.
This is probably OK.

Reviewed By: Yangqing

Differential Revision: D10518499

fbshipit-source-id: be54e490df3c3fde7ff894b5b1473442ffc7ded3
2018-10-24 16:18:25 -07:00
713e706618 Move exception to C10 (#12354)
Summary:
There are still a few work to be done:

- Move logging and unify AT_WARN with LOG(ERROR).
- A few header files are still being plumbed through, need cleaning.
- caffe2::EnforceNotMet aliasing is not done yet.
- need to unify the macros. See c10/util/Exception.h

This is mainly a codemod and does not cause functional changes. If you find your job failing and trace it back to this diff, it can usually be fixed by one of the following approaches:

(1) add //caffe2/c10:c10 to your dependency (or transitive dependency).
(2) change objects such as at::Error, at::Optional to the c10 namespace.
(3) change functions to the c10 namespace. Especially, caffe2::MakeString is not overridden by the unified c10::str function. Nothing else changes.

Please kindly consider not reverting this diff - it involves multiple rounds of rebasing and the fix is usually simple. Contact jiayq@ or AI Platform Dev for details.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/12354

Reviewed By: orionr

Differential Revision: D10238910

Pulled By: Yangqing

fbshipit-source-id: 7794d5bf2797ab0ca6ebaccaa2f7ebbd50ff8f32
2018-10-15 13:33:18 -07:00