Commit Graph

54 Commits

cyy
a05b64a38f [5/N] Fix extra warnings brought by clang-tidy-17 (#138403)
Follows #137983
Pull Request resolved: https://github.com/pytorch/pytorch/pull/138403
Approved by: https://github.com/ezyang
2024-10-21 02:59:54 +00:00
cyy
0c0d8c8ff0 [1/N] Fix extra warnings brought by clang-tidy-17 (#137407)
Before we can use clang-tidy-17
Pull Request resolved: https://github.com/pytorch/pytorch/pull/137407
Approved by: https://github.com/Skylion007, https://github.com/aaronenyeshi
2024-10-07 17:53:59 +00:00
cyy
47a78daf91 [Environment Variable][1/N] Use thread-safe env variable API in c10 (#119449)
This PR is the beginning of attempts to wrap thread-unsafe getenv and set_env functions inside a RW mutex.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/119449
Approved by: https://github.com/malfet, https://github.com/albanD, https://github.com/eqy
2024-10-01 06:24:30 +00:00
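
The commit above only says it wraps the thread-unsafe getenv/setenv calls in a reader-writer mutex. As a rough illustration of that idea (not the actual c10 code; all names below are invented), the pattern looks roughly like this:

```
// Hedged sketch: guard getenv/setenv with a reader-writer lock so concurrent
// readers do not race with writers. POSIX-only because of ::setenv.
#include <cstdlib>
#include <mutex>
#include <optional>
#include <shared_mutex>
#include <string>

namespace env_sketch {

inline std::shared_mutex& env_mutex() {
  static std::shared_mutex m;
  return m;
}

// Readers take a shared lock; many threads may read concurrently.
inline std::optional<std::string> get_env(const char* name) {
  std::shared_lock<std::shared_mutex> lock(env_mutex());
  const char* value = std::getenv(name);
  if (value == nullptr) {
    return std::nullopt;
  }
  return std::string(value);
}

// Writers take an exclusive lock so no reader observes a half-updated
// environment.
inline bool set_env(const char* name, const char* value) {
  std::unique_lock<std::shared_mutex> lock(env_mutex());
  return ::setenv(name, value, /*overwrite=*/1) == 0;
}

} // namespace env_sketch
```

Readers share the lock, so the common lookup path stays cheap; only the (rare) setter takes the exclusive lock.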
517aee5369 [torchscript] Add a sampled logging integration point. (#133484)
Test Plan:
test script:
```
    def test_zhxchen17(self):
        from libfb.py.pyinit import initFacebook

        initFacebook()

        class M(torch.nn.Module):
            def forward(self, x):
                return torch.add(x, x)

        def tmptmp(x, y):
            return torch.mul(x, y)

        m = M()
        n = torch.jit.script(m)
        print(n(torch.tensor(1)))
        print(torch.jit.script(tmptmp)(torch.tensor(1), torch.tensor(2)))
```

```
I0802 12:01:23.932929 4079081 init.cc:407] Logging to scuba: run __torch__.caffe2.test.export.test_export.M.forward sample rate: 1000000
```

Differential Revision: D60920867

Pull Request resolved: https://github.com/pytorch/pytorch/pull/133484
Approved by: https://github.com/davidberard98
2024-08-19 18:04:45 +00:00
b55f57b7af [codemod][lowrisk] Remove extra semi colon from caffe2/c10/core/SymNodeImpl.h (#123055)
Summary:
`-Wextra-semi` or `-Wextra-semi-stmt`

If the code compiles, this is safe to land.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/123055
Approved by: https://github.com/Skylion007
2024-05-14 19:35:29 +00:00
36e6f3b339 [caffe2] Make all get_backtrace() implementations lazy (#125750) (#126064)
Summary:

#125682 (D56586844) added support for lazy symbolization to `Error` and adopted it for internal use cases; this commit adopts it for `get_backtrace()` as well.

Test Plan:
Sandcastle and GH CI.

NOTE: This is a resubmit of D56881683; a spurious copy-pasted line in the Android implementation broke the build, but this was not surfaced by diff tests.

Reproed the breakage with
```
$ fbpython scripts/build_android_app/build_android_app.py --buck-config-files='@//fbandroid/mode/have_libgflags @//fbandroid/mode/static_linking @//xplat/langtech/mobile/android_opt_buck_config_with_et_boltnn' --build-target='fbsource//xplat/langtech/mobile:transcribe_binAndroid-android-arm64'
```
Verified that the fixed diff builds successfully.

Differential Revision: D57275456

Pull Request resolved: https://github.com/pytorch/pytorch/pull/126064
Approved by: https://github.com/ezyang
2024-05-13 20:17:41 +00:00
ee804d256b Revert "[caffe2] Make all get_backtrace() implementations lazy (#125750)"
This reverts commit cc4da72b47ef63b7c448f0de4cdbdd792e9195ea.

Reverted https://github.com/pytorch/pytorch/pull/125750 on behalf of https://github.com/facebook-github-bot due to Diff reverted internally ([comment](https://github.com/pytorch/pytorch/pull/125750#issuecomment-2105285301))
2024-05-10 21:23:10 +00:00
cc4da72b47 [caffe2] Make all get_backtrace() implementations lazy (#125750)
Summary: #125682 (D56586844) added support for lazy symbolization to `Error` and adopted it for internal use cases; this commit adopts it for `get_backtrace()` as well.

Test Plan: Sandcastle and GH CI.

Differential Revision: D56881683

Pull Request resolved: https://github.com/pytorch/pytorch/pull/125750
Approved by: https://github.com/ezyang
2024-05-10 16:02:40 +00:00
902a74c1d6 [caffe2] Lazily symbolize backtrace in c10::Error (#125787)
Summary:
The macros that build `c10::Error` compute the stack trace at the point of throwing, which is then returned as part of the `what()`. If `what()` is never called, which is the case for most exceptions (since logging is throttled), the cost of computing the stack trace was wasted.

By far, the most expensive part of computing the stack trace is its symbolization; just unwinding the stack and collecting the instruction addresses is comparatively cheap. We can thus defer the symbolization to the first invocation of `what()`.

Test Plan:
Added unit tests exercising the lazy nature of `what()`.

Ran an adfinder canary: https://www.internalfb.com/intern/ads/canary/460118801509424346

We can see that the cost of symbolization is obliterated (meaning that `what()` is virtually never called, as expected):
 {F1496627896}

Differential Revision: D57128632

Pull Request resolved: https://github.com/pytorch/pytorch/pull/125787
Approved by: https://github.com/huydhn
2024-05-09 01:46:57 +00:00
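
To make the lazy-symbolization idea above concrete, here is a hedged sketch (not the c10::Error implementation; the class and member names are invented): raw frame addresses are captured eagerly at construction, and the expensive symbolization happens only on the first `what()` call.

```
// Hedged sketch of lazy backtrace symbolization (glibc/macOS only, via
// <execinfo.h>). Not thread-safe as written; the real code needs more care.
#include <execinfo.h>
#include <cstdlib>
#include <sstream>
#include <stdexcept>
#include <string>
#include <vector>

class LazyBacktraceError : public std::exception {
 public:
  explicit LazyBacktraceError(std::string msg) : msg_(std::move(msg)) {
    // Unwinding and storing raw addresses is comparatively cheap.
    void* frames[64];
    int n = ::backtrace(frames, 64);
    frames_.assign(frames, frames + n);
  }

  const char* what() const noexcept override {
    try {
      if (formatted_.empty()) {
        // Symbolization is the expensive part; defer it until someone
        // actually asks for the message.
        std::ostringstream oss;
        oss << msg_ << "\n";
        char** symbols =
            ::backtrace_symbols(frames_.data(), static_cast<int>(frames_.size()));
        if (symbols != nullptr) {
          for (size_t i = 0; i < frames_.size(); ++i) {
            oss << "frame #" << i << ": " << symbols[i] << "\n";
          }
          std::free(symbols);
        }
        formatted_ = oss.str();
      }
      return formatted_.c_str();
    } catch (...) {
      return msg_.c_str();  // fall back to the unsymbolized message
    }
  }

 private:
  std::string msg_;
  std::vector<void*> frames_;
  mutable std::string formatted_;  // filled on first what() call
};
```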
e457fdcd81 Revert "[caffe2] Lazily symbolize backtrace in c10::Error (#125682)"
This reverts commit 08f6ef0e1ccadf4626c0d7ecb15db96c01b8f418.

Reverted https://github.com/pytorch/pytorch/pull/125682 on behalf of https://github.com/facebook-github-bot due to Diff reverted internally ([comment](https://github.com/pytorch/pytorch/pull/125682#issuecomment-2101477132))
2024-05-08 21:11:27 +00:00
08f6ef0e1c [caffe2] Lazily symbolize backtrace in c10::Error (#125682)
Summary:
The macros that build `c10::Error` compute the stack trace at the point of throwing, which is then returned as part of the `what()`. If `what()` is never called, which is the case for most exceptions (since logging is throttled), the cost of computing the stack trace was wasted.

By far, the most expensive part of computing the stack trace is its symbolization; just unwinding the stack and collecting the instruction addresses is comparatively cheap. We can thus defer the symbolization to the first invocation of `what()`.

Test Plan:
Added unit tests exercising the lazy nature of `what()`.

Ran an adfinder canary: https://www.internalfb.com/intern/ads/canary/460118801509424346

We can see that the cost of symbolization is obliterated (meaning that `what()` is virtually never called, as expected):
 {F1496627896}

Reviewed By: ezyang

Differential Revision: D56586844

Pull Request resolved: https://github.com/pytorch/pytorch/pull/125682
Approved by: https://github.com/ezyang
2024-05-08 04:57:59 +00:00
277ab8a4c0 Revert "[Environment Variable][1/N] Use thread-safe env variable API in c10 (#119449)"
This reverts commit a56e057814565b2ae33b2106b4d0136179aa18f8.

Reverted https://github.com/pytorch/pytorch/pull/119449 on behalf of https://github.com/jeanschmidt due to Broken internal signals, @albanD please help get this sorted :) ([comment](https://github.com/pytorch/pytorch/pull/119449#issuecomment-2069716129))
2024-04-22 14:44:44 +00:00
cyy
a56e057814 [Environment Variable][1/N] Use thread-safe env variable API in c10 (#119449)
This PR is the beginning of attempts to wrap thread-unsafe getenv and set_env functions inside a RW mutex.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/119449
Approved by: https://github.com/malfet, https://github.com/albanD
2024-04-19 13:39:41 +00:00
61bc188f42 Revert "[Environment Variable][1/N] Use thread-safe env variable API in c10 (#119449)"
This reverts commit b51f66c1950a582dd18d1b2ee67df840a8c4dbbe.

Reverted https://github.com/pytorch/pytorch/pull/119449 on behalf of https://github.com/malfet due to Broke gcc9 builds ([comment](https://github.com/pytorch/pytorch/pull/119449#issuecomment-2064936414))
2024-04-18 18:53:59 +00:00
cyy
b51f66c195 [Environment Variable][1/N] Use thread-safe env variable API in c10 (#119449)
This PR is the beginning of attempts to wrap thread-unsafe getenv and set_env functions inside a RW mutex.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/119449
Approved by: https://github.com/albanD
2024-04-18 13:35:48 +00:00
f5049de242 Revert "[Environment Variable][1/N] Use thread-safe env variable API in c10 (#119449)"
This reverts commit 5bef127c2ea49280e7fda4f9fa7cad6fa4078e7d.

Reverted https://github.com/pytorch/pytorch/pull/119449 on behalf of https://github.com/PaliC due to you're using TORCH_INTERNAL_ASSERT incorrectly ([comment](https://github.com/pytorch/pytorch/pull/119449#issuecomment-2062696010))
2024-04-17 23:44:00 +00:00
cyy
5bef127c2e [Environment Variable][1/N] Use thread-safe env variable API in c10 (#119449)
This PR is the beginning of attempts to wrap thread-unsafe getenv and set_env functions inside a RW mutex.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/119449
Approved by: https://github.com/albanD
2024-04-16 04:39:20 +00:00
b84f94f6a3 Restore timestamps on C++ logs without glog (#121384)
It looks like it was commented out because the original implementation was not sufficiently portable. I had to do some rewrites to the innards to make it portable. No Windows nanoseconds support because I'm lazy.

I tested by running `build/bin/TCPStoreTest` and observing the log messages there.  I am actually not sure how to look at the log messages from Python though.

Signed-off-by: Edward Z. Yang <ezyang@meta.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/121384
Approved by: https://github.com/Skylion007, https://github.com/malfet
2024-03-12 17:01:32 +00:00
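
For illustration only, a portable timestamp prefix of the kind described above can be built from `std::chrono`; this sketch is an assumption about the approach, not the PyTorch code.

```
// Hedged sketch: prepend a wall-clock timestamp to a log line without glog.
#include <chrono>
#include <ctime>
#include <iomanip>
#include <iostream>
#include <sstream>
#include <string>

std::string timestamp_prefix() {
  using namespace std::chrono;
  const auto now = system_clock::now();
  const std::time_t t = system_clock::to_time_t(now);
  // Fractional seconds; actual precision depends on the platform clock.
  const auto ns = duration_cast<nanoseconds>(now.time_since_epoch()) % seconds(1);

  std::tm tm_buf{};
#if defined(_WIN32)
  localtime_s(&tm_buf, &t);
#else
  localtime_r(&t, &tm_buf);
#endif

  std::ostringstream oss;
  oss << std::put_time(&tm_buf, "%Y-%m-%d %H:%M:%S") << '.'
      << std::setfill('0') << std::setw(9) << ns.count();
  return oss.str();
}

int main() {
  std::cerr << "[" << timestamp_prefix() << "] example log message\n";
}
```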
cyy
4a019047ad Enable nested namespace check in clang-tidy (#118506)
It is time to enable nested namespaces in the code.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/118506
Approved by: https://github.com/albanD
2024-01-31 00:32:35 +00:00
cyy
bae61ecb96 [Reland 1] Cleanup header inclusions in torch_cpu by iwyu (#112311)
Reland https://github.com/pytorch/pytorch/pull/101178 to use IWYU on torch_cpu. The header file changes are excluded to avoid breaking internal jobs.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/112311
Approved by: https://github.com/ezyang
2023-11-19 04:06:36 +00:00
de3ae93e9b Include rank of default PG in C++ log messages (#110623)
I tested by adding some warning logs in C++, running a distributed program, and showing that they now had `[rank0]:` in the messages. There is no existing test infra for C++ logging, so I couldn't easily add a unit test.

The implementation strategy is to setup a global variable in C++, and then poke it when we initialize a process group. This was the simplest thing I could think of that would work.

This PR only works for non-glog logging. Probably need to come up with some other strategy for glog, e.g., a custom prefix, but need to make sure this doesn't conflict with fbcode. I can't easily test this from OSS, will leave as follow up work.

Signed-off-by: Edward Z. Yang <ezyang@meta.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/110623
Approved by: https://github.com/voznesenskym, https://github.com/wanchaol, https://github.com/fduwjj
2023-10-10 00:26:52 +00:00
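
A minimal sketch of the "global variable poked at process-group init" strategy described above; the names are hypothetical, not the actual c10/c10d symbols.

```
// Hedged sketch: a process-wide rank value set once when the default process
// group is initialized, then consulted when formatting each log prefix.
#include <atomic>
#include <sstream>
#include <string>

namespace log_rank_sketch {

// -1 means "rank not known yet" (e.g. single-process runs).
std::atomic<int> g_default_pg_rank{-1};

// Hypothetical hook called from process-group initialization.
void set_default_pg_rank(int rank) {
  g_default_pg_rank.store(rank, std::memory_order_relaxed);
}

// Used by the logging code when building a message prefix.
std::string rank_prefix() {
  const int rank = g_default_pg_rank.load(std::memory_order_relaxed);
  if (rank < 0) {
    return "";
  }
  std::ostringstream oss;
  oss << "[rank" << rank << "]: ";
  return oss.str();
}

} // namespace log_rank_sketch
```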
cyy
d0ad848aa5 Enable misc clang-tidy checks (#110283)
This PR enables the misc-XX checks in clang-tidy. Meanwhile, I excluded some of them that require a lot of code changes and have no immediate benefit. Some additional fixes and suppressions were also applied.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/110283
Approved by: https://github.com/albanD
2023-09-30 10:39:52 +00:00
83deaa16ed Revert "[1/N] Cleanup header inclusions in torch_cpu by iwyu (#101178)"
This reverts commit b7a95f4fdb8a79dc459cc757dafcdbd0953b1a62.

Reverted https://github.com/pytorch/pytorch/pull/101178 on behalf of https://github.com/atalman due to Break internal CI ([comment](https://github.com/pytorch/pytorch/pull/101178#issuecomment-1734384645))
2023-09-25 20:05:25 +00:00
cyy
b7a95f4fdb [1/N] Cleanup header inclusions in torch_cpu by iwyu (#101178)
Following our previous IWYU work  #100304 on C10, it makes more sense to try IWYU on torch_cpu. This PR does exactly that. Meanwhile, it fixes issue #48684.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/101178
Approved by: https://github.com/ezyang
2023-09-24 05:01:20 +00:00
cyy
87cbfe957a increase clang-tidy coverage to more c10 source files (#102902)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/102902
Approved by: https://github.com/Skylion007
2023-06-04 06:33:01 +00:00
b469ed72d0 Integrating new API usage metadata logger (#101762)
Summary: The new logger allows passing metadata into the API usage logger. The immediate use case is to pass the serialization_id to the save and load events to enable tracking serialized models in API events. It could be extended to add more metadata in the future.

Test Plan:
```
buck2 test @//mode/dev //caffe2/caffe2/serialize:inline_container_test
```

Reviewed By: davidberard98

Differential Revision: D45683697

Pull Request resolved: https://github.com/pytorch/pytorch/pull/101762
Approved by: https://github.com/davidberard98
2023-05-26 00:24:26 +00:00
cyy
045d1de02d Fix some code issues (#92760)
Fixes #ISSUE_NUMBER

Pull Request resolved: https://github.com/pytorch/pytorch/pull/92760
Approved by: https://github.com/Skylion007, https://github.com/albanD
2023-01-24 08:19:03 +00:00
5932c37198 [caffe2] drop XROS ports (#76366)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/76366

caffe2 is not currently being built for XROS.

Test Plan: CI

Reviewed By: kimishpatel

Differential Revision: D35923922

fbshipit-source-id: 260dacadf0bd5b6bab7833a4ce81e896d280b053
(cherry picked from commit 8370b8dd2519d55a79fa8d45e7951ca8dc0b21a8)
2022-04-26 23:54:22 +00:00
9ac27e4e3c [AutoAccept][Codemod][FBSourceClangFormatLinter] Daily arc lint --take CLANGFORMAT
Reviewed By: zertosh

Differential Revision: D34472222

fbshipit-source-id: 52bd1c726f405dfc5aea6656eb0429d93b1854bb
(cherry picked from commit ba15b5c9bd1b6d550eeec3580451fb10b9e208f9)
2022-02-25 15:37:29 +00:00
331df00785 Do not set the logtostderr GLOG flag just to be on the safe side (#73360)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/73360

Some (internal) PyTorch users use GLOG for their own logging purposes as well. Since the default setting is already to write to `stderr` anyway, let's be good citizens and not interfere with the logging settings of the main application.
ghstack-source-id: 149874118

Test Plan: Run existing unit and integration tests.

Reviewed By: zhaojuanmao

Differential Revision: D34452040

fbshipit-source-id: 1a6e1e94e25c3c50c82a2696548f7f08c0a9ee67
(cherry picked from commit 23741aa6f83f1dbbc627cc1428ae88d48f388688)
2022-02-24 21:35:36 +00:00
7366724e07 Introduce an environment variable to change c10 log level (#71746)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/71746

This PR contains the following improvements:

- It exposes a new environment variable `TORCH_CPP_LOG_LEVEL` that enables users to set the log level of c10 logging facility (supports both GLOG and c10 loggers). Valid values are `INFO`, `WARNING`, `ERROR`, and `FATAL` or their numerical equivalents `0`, `1`, `2`, and `3`.
- It implements an `initLogging()` function and calls it as part of `torch._C` module import to ensure that the underlying logging facility is correctly initialized in Python.

With these changes a user can dynamically set the log level of c10 as in the following example:

```
$ TORCH_CPP_LOG_LEVEL=INFO python my_torch_script.py
```
ghstack-source-id: 149822703

Test Plan: Run existing tests.

Reviewed By: malfet

Differential Revision: D33756252

fbshipit-source-id: 7fd078c03a598595d992de0b474a23cec91838af
(cherry picked from commit 01d6ec6207faedf259ed1368730e9e197cb3e1c6)
2022-02-24 14:34:01 +00:00
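
A hedged sketch of how such an environment variable can be parsed, accepting either the level names or their numeric equivalents; this illustrates the described behavior and is not the actual `initLogging()` code.

```
// Hedged sketch: map TORCH_CPP_LOG_LEVEL-style values to a numeric level,
// falling back to a default for unset or unrecognized values.
#include <cstdlib>
#include <string>

int parse_log_level_from_env(int default_level /* e.g. 1 == WARNING */) {
  const char* raw = std::getenv("TORCH_CPP_LOG_LEVEL");
  if (raw == nullptr || *raw == '\0') {
    return default_level;
  }
  const std::string value(raw);
  if (value == "INFO" || value == "0") return 0;
  if (value == "WARNING" || value == "1") return 1;
  if (value == "ERROR" || value == "2") return 2;
  if (value == "FATAL" || value == "3") return 3;
  return default_level;  // unrecognized values keep the default
}
```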
eb3b9fe719 [XROS][ML] System specific adjustments for UTs to work. (#65245)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/65245

Building and running c10 and qnnpack tests on XROS.

Notable changes:
- Adding #if defined(_XROS_) in a few places not supported by XROS
- Changing Threadpool to abstract class
ghstack-source-id: 139513579

Test Plan: Run c10 and qnnpack tests on XROS.

Reviewed By: veselinp, iseeyuan

Differential Revision: D30137333

fbshipit-source-id: bb6239b935187fac712834341fe5a8d3377762b1
2021-10-01 18:15:14 -07:00
a9b0a921d5 Disable avoid-non-const-global-variables lint check (#62008)
Summary:
The GoogleTest `TEST` macro is non-compliant with it, as is `DEFINE_DISPATCH`.

All changes but the ones to `.clang-tidy` are generated using following script:
```
for i in `find . -type f -iname "*.c*" -or -iname "*.h"|xargs grep cppcoreguidelines-avoid-non-const-global-variables|cut -f1 -d:|sort|uniq`;  do sed -i "/\/\/ NOLINTNEXTLINE(cppcoreguidelines-avoid-non-const-global-variables)/d" $i; done
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/62008

Reviewed By: driazati, r-barnes

Differential Revision: D29838584

Pulled By: malfet

fbshipit-source-id: 1b2f8602c945bd4ce50a9bfdd204755556e31d13
2021-07-22 18:04:40 -07:00
1e77ba36db change ddpLoggingData struct to map or dict (#56641)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/56641

Currently ddpLoggingData is a flat struct, which requires internal DDP developers and external users to know the struct field names. That makes it inflexible to add or delete fields in the future, and it is also hard to access ddpLoggingData.

With maps/dicts, developers and users can easily access fields without knowing the field names, and it is easier to add or remove fields.

Since C++ maps cannot hold values of different types, ddpLoggingData currently contains two types of maps.
ghstack-source-id: 127482694

Test Plan: unit tests

Reviewed By: SciPioneer

Differential Revision: D27923723

fbshipit-source-id: c90199c14925fc50ef219000e2f809dc7601cce1
2021-04-28 06:43:25 -07:00
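
The "two types of maps" shape described above can be pictured as follows; the field and type names are illustrative rather than the real DDPLoggingData definition.

```
// Hedged sketch: one map for string-valued fields, one for integer-valued
// fields, looked up by key instead of by struct member.
#include <cstdint>
#include <map>
#include <string>

struct DDPLoggingDataSketch {
  // String-valued fields, e.g. backend name, device type.
  std::map<std::string, std::string> strs_map;
  // Integer-valued fields, e.g. bucket sizes, iteration counts.
  std::map<std::string, int64_t> ints_map;
};

// Callers access fields by key, so adding or removing a field does not
// change the struct layout:
//   data.strs_map["backend_name"] = "nccl";
//   data.ints_map["bucket_cap_bytes"] = 26214400;
```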
087049000b Make c10 clang-tidy clean (#55870)
Summary:
This change was autogenerated by running:
```
% find c10 -iname "*.cpp" -exec python3 tools/clang_tidy.py -c build -x {} -s \;
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/55870

Reviewed By: janeyx99

Differential Revision: D27728617

Pulled By: malfet

fbshipit-source-id: bede4d7f0c106d51394d1e9efddf01bf894421c5
2021-04-14 11:23:28 -07:00
20d8fe83cd [TSAN] Suppress data races in caffe2/c10/util/Logging.cpp
Summary:
This suppresses some data races reported by TSAN. See the associated
task(s) below for context, including sample stack traces caused by these races
and reproduction instructions.

This diff is automatically generated. Therefore, the way it makes suppressions
may not be as beautiful as if written by hand. *However, we don't have the
resources to manually adjust these diffs, nor do we have the capacity to
actually fix the bugs*; we just want to get the existing bugs
out of the way so we can enable TSAN across the fleet. If you are a reviewer
please do one of the following:

1. Accept the diff as is, and you may follow up with more changes (or fix the
   bugs) later.
2. Fix the data races in a different diff and land it within a reasonable amount
   of time (e.g. a week), and comment about it here.
3. Comment to suggest us a different code location(s) to suppress these data
   races.

Test Plan: Unit tests were automatically run as part of https://www.internalfb.com/intern/sandcastle/job/22517998509525934/

Reviewed By: ezyang

Differential Revision: D26094360

fbshipit-source-id: 06c285570bcf7a1491d8f17d1885d065ef0bc537
2021-03-26 10:11:23 -07:00
0c8f16622b [Caffe2] Rework CAFFE_ENFORCE_THAT (#53303)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/53303

The old code did a heap allocation unnecessarily and was a
little convoluted. I think that it was structured that way to avoid
double-evaluating arguments; I just forced them to be evaluated once
as though they were passed to a function by binding const references
to them.
ghstack-source-id: 123918262

Test Plan:
1) `buck run mode/opt-clang //caffe2/caffe2/fb/tests:logging_bench`

Before:
```
============================================================================
caffe2/caffe2/fb/tests/logging_bench.cpp        relative  time/iter  iters/s
============================================================================
glog_CHECK                                                   2.01ns  498.63M
caffe2_ENFORCE_GE                                 50.00%     4.01ns  249.31M
glog_CHECK_GE                                     17.39%    11.53ns   86.73M
fbcode_ENFORCE                                   100.00%     2.01ns  498.65M
caffe2_ENFORCE                                   100.00%     2.01ns  498.63M
caffe2_ENFORCE_THAT                               50.00%     4.01ns  249.33M
============================================================================
```

After:
```
============================================================================
caffe2/caffe2/fb/tests/logging_bench.cpp        relative  time/iter  iters/s
============================================================================
glog_CHECK                                                   2.01ns  498.63M
caffe2_ENFORCE_GE                                 97.44%     2.06ns  485.88M
glog_CHECK_GE                                     17.39%    11.53ns   86.73M
fbcode_ENFORCE                                   100.00%     2.01ns  498.65M
caffe2_ENFORCE                                   100.00%     2.01ns  498.65M
caffe2_ENFORCE_THAT                               97.28%     2.06ns  485.06M
============================================================================
```

Looks like about a 1.94x speedup!

2) Inspect generated assembly for logging_bench.cpp before & after by:
```
$ compile-commands caffe2/caffe2/fb/tests/logging_bench.cpp -f "mode/opt-clang"
$ jq -r '.[0].arguments | sh' < compile_commands.json | sed -e "s/'-c'/'-S'/g" | sed -E -e "s/'-g[12]'/'-g0'/g" > out.sh
$ sh out.sh
```

Then diff logging_bench.s as you like.

Before: P255408666
After: P277883307

Net about 1500 lines deleted from the assembly. We can see that the
happy path (which the benchmark tests) no longer contains string
creation.

Reviewed By: dzhulgakov

Differential Revision: D26829714

fbshipit-source-id: 6e11f8ea29292ae3d9f2cc89d08afcb06f7d39c9
2021-03-16 23:01:00 -07:00
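
As a rough, hypothetical illustration of the technique described above - binding arguments to const references so each is evaluated exactly once, and keeping string construction off the happy path - a sketch might look like this (not the actual CAFFE_ENFORCE_THAT implementation):

```
// Hedged sketch: evaluate each argument once via const-reference parameters,
// and only pay for formatting when the check fails.
#include <sstream>
#include <stdexcept>

template <typename Pred, typename A, typename B>
void enforce_that_impl(
    Pred pred,
    const A& lhs,  // bound as const refs: each argument expression is
    const B& rhs,  // evaluated exactly once by the caller
    const char* expr,
    const char* file,
    int line) {
  if (pred(lhs, rhs)) {
    return;  // happy path: no allocation, no string formatting
  }
  std::ostringstream oss;  // only constructed on failure
  oss << "Enforce failed: " << expr << " (" << lhs << " vs " << rhs << ") at "
      << file << ":" << line;
  throw std::runtime_error(oss.str());
}

#define ENFORCE_GE_SKETCH(a, b)                             \
  enforce_that_impl(                                        \
      [](const auto& x, const auto& y) { return x >= y; },  \
      (a), (b), #a " >= " #b, __FILE__, __LINE__)

// Usage: ENFORCE_GE_SKETCH(batch_size, 1);
```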
566f7c79d3 [c10] Take advantage of c10::str optis for simple CAFFE_ENFORCE (#52223)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/52223

After the previous diffs, `c10::str()` will return a
`CompileTimeEmptyString` when passed 0 arguments and a `const char*` when
passed 1 `const char *` argument. We can take advantage of this to
outline further std::string creation from CAFFE_ENFORCE.
ghstack-source-id: 121877053

(Note: this ignores all push blocking failures!)

Test Plan:
Compare assembly for
```
#include <c10/util/Logging.h>

void f(bool b) {
  CAFFE_ENFORCE(b);
}

void g(bool b) {
  CAFFE_ENFORCE(b, "message");
}

void h(bool b) {
  CAFFE_ENFORCE(b, "message", random());
}
```

before & after this diff.

before: P174902847
after: P174902912

f & g are clearly much improved, and h is about the same.

(I tried measuring caffe2 perf on the AdIndexer MergeNet benchmark, but didn't see a win, which makes sense because the change is small.)

Reviewed By: bhosmer

Differential Revision: D26405181

fbshipit-source-id: c51a9e459ae7d9876494a83ade6f6fe725619512
2021-02-19 12:45:35 -08:00
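
The overload trick the commit relies on can be sketched as follows; the names are placeholders and the real c10::str has more machinery, so treat this purely as an illustration of returning a no-allocation type for the trivial cases.

```
// Hedged sketch: a str()-like helper that avoids constructing std::string in
// the common cases, so an enforce macro's happy path stays cheap.
#include <sstream>
#include <string>

struct CompileTimeEmptyStringSketch {
  operator const char*() const { return ""; }
  operator std::string() const { return std::string(); }
};

// Zero arguments: no allocation at all.
inline CompileTimeEmptyStringSketch str_sketch() {
  return {};
}

// A single C-string argument: pass the pointer through unchanged.
inline const char* str_sketch(const char* s) {
  return s;
}

// General case: fall back to stream-based formatting.
template <typename... Args>
std::string str_sketch(const Args&... args) {
  std::ostringstream oss;
  (oss << ... << args);  // C++17 fold expression
  return oss.str();
}
```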
e54cbb8250 Create PyTorch DDP logging APIs for applications to use (#50637)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/50637

add APIs for logging pytorch ddp logging data in applications.

Test Plan: unit tests

Reviewed By: rohan-varma

Differential Revision: D25933411

fbshipit-source-id: 57c248a2f002da06a386fc7406d3e5533ebb9124
2021-02-02 18:24:21 -08:00
702140758f Move GLOG_ constants into c10 namespace (#41504)
Summary:
Declaring GLOG_ constants in the google namespace causes a conflict in a C++ project that uses GLOG and links with LibPyTorch compiled without GLOG.
For example, see https://github.com/facebookresearch/ReAgent/issues/288

Pull Request resolved: https://github.com/pytorch/pytorch/pull/41504

Reviewed By: kaiwenw

Differential Revision: D22564308

Pulled By: malfet

fbshipit-source-id: 2167bd2c6124bd14a67cc0a1360521d3c375e3c2
2020-07-15 21:56:00 -07:00
fe18dcd692 Use GLOG logging prefixes (#40491)
Summary:
PyTorch should stop polluting the global namespace with symbols such as `ERROR`, `WARNING`, and `INFO`.
Since `logging_is_not_google_glog.h` is a C++ header, define the severity levels in a namespace and add a `GLOG_` prefix to match the unshortened glog severity levels.
Change the `LOG` and `LOG_IF` macros to use the prefixed, namespaced severity levels.

Closes https://github.com/pytorch/pytorch/issues/40083
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40491

Test Plan: CI

Reviewed By: ezyang

Differential Revision: D22210925

Pulled By: malfet

fbshipit-source-id: 0ec1181a53baa8bca2f526f245e398582304aeab
2020-06-24 14:07:00 -07:00
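
A tiny sketch of the renaming described above, with the severity constants namespaced and `GLOG_`-prefixed so `LOG(WARNING)` no longer depends on a bare global `WARNING` symbol; names are illustrative, not the actual header.

```
// Hedged sketch: namespaced, prefixed severity levels plus a LOG-style macro
// that pastes the prefix on, so user code defining ERROR/WARNING/INFO
// (e.g. via windows.h) no longer collides with the logging macros.
#include <iostream>

namespace c10_sketch {
const int GLOG_INFO = 0;
const int GLOG_WARNING = 1;
const int GLOG_ERROR = 2;
const int GLOG_FATAL = 3;
const char* const severity_names[] = {"INFO", "WARNING", "ERROR", "FATAL"};
}  // namespace c10_sketch

#define LOG_SKETCH(severity)                                                   \
  std::cerr << "["                                                             \
            << ::c10_sketch::severity_names[::c10_sketch::GLOG_##severity]     \
            << "] "

// Usage: LOG_SKETCH(WARNING) << "something looks off" << std::endl;
```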
a058e938f9 Refactor error msg stack handling, add TORCH_RETHROW (#37101)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/37101

Fixes #36954.

The basic concept is to streamline the process of rethrowing
c10::Error with extra error information.  This is in a few
steps:

- I completely remodeled the Error data type and the internal
  invariants.  Instead of manually adding in newlines, the
  message stack formatting process is responsible for inserting
  newlines and spacing as necessary.  Call sites are then
  modified to respect the new API model.
- TORCH_RETHROW macro is added, which adds context to an error
  message and then rethrows it.

New internal assert failure looks like:

```
0 INTERNAL ASSERT FAILED at ../c10/test/util/exception_test.cpp:64, please report a bug to PyTorch.
Exception raised from TestBody at ../c10/test/util/exception_test.cpp:64 (most recent call first):
frame #0: <unknown function> + 0x6aab9 (0x7ff611d3aab9 in /data/users/ezyang/pytorch-tmp/build/lib/libc10.so)
frame #1: ...
```

Error message with context looks like:

```
This is an error
  This is context 1
  This is context 2
```

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Test Plan: Imported from OSS

Differential Revision: D21202891

Pulled By: ezyang

fbshipit-source-id: 361cadd16bc52e5886dba08e79277771ada76169
2020-05-04 11:56:45 -07:00
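
The rethrow-with-context pattern can be sketched as below; this is a simplified stand-in using `std::runtime_error`, not the actual `c10::Error`/`TORCH_RETHROW` code.

```
// Hedged sketch: append a context line to the message and rethrow, so nested
// call sites build up a readable message stack like the one shown above.
#include <stdexcept>
#include <string>

struct ErrorSketch : std::runtime_error {
  using std::runtime_error::runtime_error;
};

#define RETHROW_WITH_CONTEXT_SKETCH(e, context) \
  throw ErrorSketch(std::string((e).what()) + "\n  " + (context))

void load_weights() {
  throw ErrorSketch("This is an error");
}

void load_checkpoint(const std::string& path) {
  try {
    load_weights();
  } catch (const ErrorSketch& e) {
    // Resulting message:
    //   This is an error
    //     while loading checkpoint <path>
    RETHROW_WITH_CONTEXT_SKETCH(e, "while loading checkpoint " + path);
  }
}
```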
50a1850d8d [pytorch] Route default warning sync to LOG(WARNING) - second try (#36984)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/36984

Follow the LOG(WARNING) format for C++-side warnings in order to play well with larger services, especially when using glog. I need to hook into GLOG internals a bit in order to override FILE/LINE without having to change the whole thing to macros, but it seems to be stable across glog versions.

Note, this also changes caffe2_log_level to warning by default - I think it's a much better default when compiling without glog (or maybe even info).

With glog output, stderr capture doesn't work any more in tests. That's why we instead use c10-level warnings capture.

Test Plan:
Run unittest in both glog and non-glog build mode:

glog:
```
W0416 12:06:49.778215 3311666 exception_test.cpp:23] Warning: I'm a warning (function TestBody)
```

no-glog:
```
[W exception_test.cpp:23] Warning: I'm a warning (function TestBody)
```

Reviewed By: ilia-cher

Differential Revision: D21151351

fbshipit-source-id: fa926d9e480db5ff696990dad3d80f79ef79f24a
2020-04-23 01:08:00 -07:00
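
Assuming a glog build, the FILE/LINE override mentioned above can be done by constructing a `google::LogMessage` directly instead of going through the `LOG(WARNING)` macro at the call site; the helper below is a hypothetical illustration, not the PyTorch hook.

```
// Hedged sketch: forward a captured warning to glog while reporting the
// original source location of the warning, not this helper.
#include <glog/logging.h>
#include <string>

void emit_warning(const char* file, int line, const std::string& msg) {
  // Constructing LogMessage with an explicit file/line overrides the location
  // that would otherwise come from __FILE__/__LINE__ in the LOG macro.
  google::LogMessage(file, line, google::GLOG_WARNING).stream()
      << "Warning: " << msg;
}
```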
30e7055ed7 Revert D21078446: [pytorch] Route default warning sync to LOG(WARNING)
Test Plan: revert-hammer

Differential Revision:
D21078446

Original commit changeset: b5d36aac54d6

fbshipit-source-id: adff2d7e396b2efdd29eeabfe393fbc55edbe635
2020-04-20 00:26:56 -07:00
9d5dda7c2f [pytorch] Route default warning sync to LOG(WARNING) (#36768)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/36768

Follow the LOG(WARNING) format for C++-side warnings in order to play well with larger services, especially when using glog. I need to hook into GLOG internals a bit in order to override FILE/LINE without having to change the whole thing to macros, but it seems to be stable across glog versions.

Note, this also changes caffe2_log_level to warning by default - I think it's a much better default when compiling without glog (or maybe even info).

Test Plan:
Run unittest in both glog and non-glog build mode:

glog:
```
W0416 12:06:49.778215 3311666 exception_test.cpp:23] Warning: I'm a warning (function TestBody)
```

no-glog:
```
[W exception_test.cpp:23] Warning: I'm a warning (function TestBody)
```

Reviewed By: ilia-cher

Differential Revision: D21078446

fbshipit-source-id: b5d36aac54d6b6295a72de6754696ccafbcb84ca
2020-04-19 23:02:55 -07:00
15c7486416 Canonicalize includes in c10, and add tests for it (#36299)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/36299

Test Plan: Imported from OSS

Differential Revision: D20943005

Pulled By: ezyang

fbshipit-source-id: 9dd0a58824bd0f1b5ad259942f92954ba1f63eae
2020-04-10 12:07:52 -07:00
15bf4892f2 prevent crash on exit from static destructor race (#33955)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/33955

Unit tests on Windows (clang and cl) were crashing on exit due to a race with static variable destruction.

Test Plan: CI green

Differential Revision: D20153587

fbshipit-source-id: 22e35e591660d49f3a755f93d0c14d7a023ebb2a
2020-03-02 14:28:13 -08:00
9407137102 Update the descriptive error message for enforce fail (#31575)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/31575

We need a new exception class specifically for the enforce_finite operator, because we need to map it to a specific Python exception, ExitException, rather than the RuntimeError type that all c10::Errors get mapped to by default. This diff includes:
- Define c10::EnforceFiniteNotMet
- API CAFFE_ENFORCE_FINITE to throw c10::EnforceFiniteNotMet
- Map from c10::EnforceFiniteNotMet to python ExitException
- Apply CAFFE_ENFORCE_FINITE in caffe2 op

Test Plan:
- integration test pass: https://fburl.com/fblearner/xwkzbqyo
- integration test with D19213617: https://fburl.com/fblearner/479y4jrj Generate error message as desired

- Example:
  - Original error message  f157597803
{F225477055}

  - Updated error message  (with D19213617 to generate the error): f158571327
{F225477071}

Reviewed By: zheng-xq

Differential Revision: D19206240

fbshipit-source-id: bd256862801d5957a26b76d738edf4e531f03827
2020-01-03 13:53:20 -08:00
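
A self-contained sketch of the three C++/binding pieces listed above - a dedicated exception type, an enforce macro, and a mapping to a distinct Python exception - using plain `std::runtime_error` and pybind11 as stand-ins; the names and bindings are illustrative, not the caffe2 code.

```
// Hedged sketch: dedicated exception type + macro + Python exception mapping.
#include <cmath>
#include <stdexcept>
#include <string>

#include <pybind11/pybind11.h>

struct EnforceFiniteNotMetSketch : std::runtime_error {
  using std::runtime_error::runtime_error;
};

// Throw the dedicated exception type when a value is not finite.
#define ENFORCE_FINITE_SKETCH(value)                          \
  do {                                                        \
    if (!std::isfinite(value)) {                              \
      throw EnforceFiniteNotMetSketch(                        \
          std::string("Non-finite value in ") + #value);      \
    }                                                         \
  } while (0)

PYBIND11_MODULE(enforce_finite_sketch, m) {
  // Map the C++ exception to a distinct Python exception class so callers
  // can catch it separately from RuntimeError.
  pybind11::register_exception<EnforceFiniteNotMetSketch>(m, "ExitException");
  m.def("check", [](double v) { ENFORCE_FINITE_SKETCH(v); });
}
```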
c25e33789e Lightweight at-most-once logging for API usage (#20745)
Summary:
Resubmit #20698 which got messed up.

The idea is that when PyTorch is used in a custom build environment (e.g. Facebook), it's useful to track usage of various APIs centrally. This PR introduces a simple, very lightweight mechanism to do so - only the first invocation of a trigger point is logged. This is significantly more lightweight than #18235, so we can afford to put logging in e.g. TensorImpl.

Also adds an initial list of trigger points. Trigger points are added in such a way that no static initialization triggers them, i.e. just linking with libtorch.so will not cause any logging. Further suggestions of what to log are welcome.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/20745

Differential Revision: D15429196

Pulled By: dzhulgakov

fbshipit-source-id: a5e41a709a65b7ebccc6b95f93854e583cf20aca
2019-05-23 23:17:59 -07:00
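
A minimal sketch of the at-most-once trigger mechanism described above, assuming a per-call-site static flag and an application-installed callback; the macro and function names are invented, not the c10 API.

```
// Hedged sketch: only the first hit of each trigger point invokes the
// logging callback, and the default callback does nothing, so merely
// linking produces no output.
#include <atomic>
#include <functional>
#include <string>

inline std::function<void(const std::string&)>& api_usage_handler() {
  static std::function<void(const std::string&)> handler =
      [](const std::string&) {};  // no-op by default
  return handler;
}

#define LOG_API_USAGE_ONCE_SKETCH(event)              \
  do {                                                \
    static std::atomic<bool> logged_once{false};      \
    if (!logged_once.exchange(true)) {                \
      api_usage_handler()(event);                     \
    }                                                 \
  } while (0)

// Usage: LOG_API_USAGE_ONCE_SKETCH("tensor.create");
```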
9b1dbffba5 Re-sync with internal repository (#20702) 2019-05-20 09:22:57 -04:00