23 Commits

5e6e52e7c9 [JIT] add GRAPH_DEBUG for setGraphExecutorOptimize (#153549)
Summary: Optionally log when setGraphExecutorOptimize is called, so we can get insight into the GraphExecutor behavior.
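For reference, a minimal sketch of the Python call that would now be logged, assuming GRAPH_DEBUG is enabled for the relevant source file via `PYTORCH_JIT_LOG_LEVEL`:

```python
import torch

# With GRAPH_DEBUG enabled for the file defining setGraphExecutorOptimize
# (assumed here to be selectable via PYTORCH_JIT_LOG_LEVEL), toggling the
# flag should now emit a debug log line.
torch._C._set_graph_executor_optimize(False)
print(torch._C._get_graph_executor_optimize())  # False
```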

Differential Revision: D74692508

Pull Request resolved: https://github.com/pytorch/pytorch/pull/153549
Approved by: https://github.com/PaulZhang12, https://github.com/SamGinzburg
2025-05-14 20:07:25 +00:00
cyy
efca51e171 [8/N] Fix clang-tidy warnings in jit (#131997)
Follows #131996
Pull Request resolved: https://github.com/pytorch/pytorch/pull/131997
Approved by: https://github.com/Skylion007
2024-07-29 12:40:42 +00:00
b27ec57331 [JIT] script & logging for extracting IR from logs (#72889)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/72889

The script, along with the GRAPH_EXPORT macro, provides an easy way to extract IR from logs. One use case in this diff is extracting the fusion groups from nvfuser so that the fusions can be tested individually.

Usage (e.g. for an nvfuser test):

1. Write some test.py file that uses nvfuser
2. `PYTORCH_JIT_LOG_LEVEL=">>graph_fuser" python3 test.py 2>&1 | tee output.txt`
3. `python3 pytorch/scripts/jit/log_extract.py output.txt --nvfuser`

This will run the extracted IR with and without nvfuser to compare the outputs.

Alternatively, use `--output` to dump the IR so that it can be used in other applications.

Currently, only `--output` works, since generating input tensors is not yet supported.
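For illustration, the test.py in step 1 could be as small as the following hypothetical snippet (nvfuser needs a CUDA device, and a few warm-up runs so the profiling executor creates fusion groups):

```python
import torch

@torch.jit.script
def f(a, b):
    return (a * b).relu()

# Warm up so the profiling executor profiles, optimizes, and fuses.
a = torch.randn(128, 128, device="cuda")
b = torch.randn(128, 128, device="cuda")
for _ in range(5):
    f(a, b)
```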

Test Plan: Imported from OSS

Reviewed By: ngimel

Differential Revision: D34440189

Pulled By: davidberard98

fbshipit-source-id: fca0f619200ee37aba34bb39b69e6c640c263e26
(cherry picked from commit eb319166075db160f1628f0de545641fbecde8be)
2022-03-02 18:34:35 +00:00
b2e79ed5ec Remove WindowsTorchApiMacro.h in favor of Export.h (#69585)
Summary:
Follow up to https://github.com/pytorch/pytorch/issues/68095

This also changes the files in the ATen folder to include c10's `Export.h` instead, since they can't ever export `TORCH_PYTHON_API`.

cc pietern mrshenli pritamdamania87 zhaojuanmao satgera rohan-varma gqchen aazzolini osalpekar jiayisuse SciPioneer H-Huang

Pull Request resolved: https://github.com/pytorch/pytorch/pull/69585

Reviewed By: mrshenli

Differential Revision: D32958594

Pulled By: albanD

fbshipit-source-id: 1ec7ef63764573fa2b486928955e3a1172150061
2021-12-09 17:30:09 -08:00
2828ce53fd Added jit log stream changing function and some refactor (#65768)
Summary:
Description:
- Only `stdout` and `stderr` have been added as options from the Python
  API for now. File-path passing can be added later.
- Put the class `JitLoggingConfig` in the cpp file, since none of its methods were used outside of that file.

Python API:
`torch._C._jit_set_logging_stream('stdout|stderr')`
C++ API:
`::torch::jit::set_jit_logging_output_stream(ostream);`
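
A minimal usage sketch of the Python side:

```python
import torch

# Route JIT logging output to stdout instead of the default stream.
torch._C._jit_set_logging_stream("stdout")
```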

Testing:
- Tested the Python API locally.
- A unit test was written for the C++ API.

Fixes https://github.com/pytorch/pytorch/issues/54182

Pull Request resolved: https://github.com/pytorch/pytorch/pull/65768

Reviewed By: mrshenli

Differential Revision: D31291739

Pulled By: ZolotukhinM

fbshipit-source-id: eee72edc20488efad78a01c5b0ed8a132886a08d
2021-09-30 23:25:11 -07:00
dec5aa2260 [JIT] clean up (#60390)
Summary:
* Minor: spelling, grammar.
* Add calls to `GRAPH_DUMP()` where they were missing.
* Add or expand a few comments.
* Move a few comments to seemingly more appropriate spots.
* In canonicalize_graph_fuser_ops.cpp inline `runnableInputs()` since it
  was only called in one place and had a misleading comment and
  confusing name.
* In `PeepholeOptimizeImpl::optimizeBlock()`, set `changed = true;` when
  removing `aten::is_complex`. Pretty sure its absence was a bug.
* Delete unused `_jit_pass_remove_inplace_ops` and its
  implementation `RemoveInplaceOps()`.
* In `preprocessCaffe2Ops()`, remove redundant check for nested optional
  types. It was already checked in `checkONNXCompatibility()`.
* In `EncoderBase::AddAttribute`, log the unexpected attribute kind.
  I don't remember the repro case now but I did hit this error at some
  point and this additional logging made it easier to understand.
* In `fuseConvBatchNorm()` in eval_peephole.cpp, consistently use
  camelCase instead of snake_case for local variables.
* Add curly braces around the bodies of if statements and loops.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/60390

Reviewed By: Krovatkin

Differential Revision: D29523283

Pulled By: SplitInfinity

fbshipit-source-id: 4e16c5648616f53da07d68dab7fdf252e06a0752
2021-07-09 16:28:27 -07:00
6ecc1a4c4f Make pytorch clang-tidy clean (#60649)
Summary:
This PR suppresses clang-tidy warnings in the codebase (for now) so that we can re-enable clang-tidy checks on master.

I ran this script to add the `NOLINTNEXTLINE` comments (on a devserver):
```bash
python3 setup.py develop

# Uses same script that's run on CI and adds the -j (parallel), -s (add comments), -k (continue if diagnostic errors are found) options
python3 tools/clang_tidy.py \
  -j \
  -s \
  -k \
  -v \
  --paths torch/csrc/ \
  -g"-torch/csrc/jit/passes/onnx/helper.cpp" \
  -g"-torch/csrc/jit/passes/onnx/shape_type_inference.cpp" \
  -g"-torch/csrc/jit/serialization/onnx.cpp" \
  -g"-torch/csrc/jit/serialization/export.cpp" \
  -g"-torch/csrc/jit/serialization/import.cpp" \
  -g"-torch/csrc/jit/serialization/import_legacy.cpp" \
  -g"-torch/csrc/onnx/init.cpp" \
  -g"-torch/csrc/cuda/nccl.*" \
  -g"-torch/csrc/cuda/python_nccl.cpp" \
  -g"-torch/csrc/autograd/FunctionsManual.cpp" \
  -g"-torch/csrc/generic/*.cpp" \
  -g"-torch/csrc/jit/codegen/cuda/runtime/*" \
  -g"-torch/csrc/deploy/interpreter/interpreter.cpp" \
  -g"-torch/csrc/deploy/interpreter/interpreter.h" \
  -g"-torch/csrc/deploy/interpreter/interpreter_impl.h" \
  -g"-torch/csrc/deploy/interpreter/test_main.cpp"
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/60649

Test Plan: Verified changes by re-running the script (without the `-s` option) and seeing no warnings/errors.

Reviewed By: walterddr, janeyx99

Differential Revision: D29504258

Pulled By: 1ntEgr8

fbshipit-source-id: 78310b30ee8213b73ddb4771ad874665323e7a4e
2021-07-01 12:21:07 -07:00
9d1d799034 Added API to change logging levels for JIT (#58821)
Summary:
Description:
- Before this, the logging level could only be changed via the env
variable "PYTORCH_JIT_LOG_LEVEL"
    - The level can now be changed from Python
- Stream configuration has not been added for now
- Configuration is stored in a singleton class that manages the options

Issue Link: https://github.com/pytorch/pytorch/issues/54188

Gotchas:
- Created separate functions
`::torch::jit::get_jit_logging_levels/set_jit_logging_levels` instead of
using the singleton class's methods directly
    - This is because when running test cases, two different instances
    of the singleton are created: one for the test suite and one for the
    actual code (`jit_log.cpp`)
    - If those methods were used directly, `is_enabled` would query the
    singleton in `jit_log.cpp` while the config was being set on the
    other singleton
    - See: https://stackoverflow.com/questions/55467246/my-singleton-can-be-called-multiple-times

API:
- To set the level: `torch._C._jit_set_logging_option("level")`
- To get the level: `torch._C._jit_get_logging_option()`
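
For example (mirroring the `PYTORCH_JIT_LOG_LEVEL` syntax):

```python
import torch

# Same effect as PYTORCH_JIT_LOG_LEVEL=">dead_code_elimination":
# enables GRAPH_DUMP, GRAPH_UPDATE, and GRAPH_DEBUG output for
# dead_code_elimination.cpp.
torch._C._jit_set_logging_option(">dead_code_elimination")
print(torch._C._jit_get_logging_option())  # ">dead_code_elimination"
```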

Testing:
- Unit tests were added for C++
- A very simple UT was added for Python to check that the API is
called correctly
- The API was checked by running a trace in a sample Python file
    - Set the env variable to "" and used `_jit_set_logging_option` in Python to set the level to `>dead_code_elimination`
    - The error output contained logs of the form [DUMP ...], [UPDATE ...], etc.

Fixes https://github.com/pytorch/pytorch/issues/54188

Pull Request resolved: https://github.com/pytorch/pytorch/pull/58821

Reviewed By: soulitzer

Differential Revision: D29116712

Pulled By: ZolotukhinM

fbshipit-source-id: 8f2861ee2bd567fb63b405953d035ca657a3200f
2021-06-21 16:10:49 -07:00
476cabdfff added macros in jit logging to check whether loggings are enabled; replaced similar checks in LLVM codegen with such macros (#49121)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/49121

Test Plan: Imported from OSS

Reviewed By: ZolotukhinM

Differential Revision: D25445971

Pulled By: huiguoo

fbshipit-source-id: 980775a94159aa0b3b66fae938962761b38703d5
2020-12-21 13:01:22 -08:00
d790ec6de0 [JIT] Update comment in jit_log.h. (#46301)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/46301

Test Plan: Imported from OSS

Reviewed By: bertmaher

Differential Revision: D24295281

Pulled By: ZolotukhinM

fbshipit-source-id: a4f84c773029845065895a81f9d753a9c82a99e0
2020-10-13 23:42:28 -07:00
7b03ce7bb3 make sure logs work inside aten/c10 namespaces as well (#37018)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/37018

Differential Revision: D21167933

Pulled By: Krovatkin

fbshipit-source-id: b63f890c19b9887d5709b308ef691f5061cd27b8
2020-04-21 20:01:13 -07:00
6384c2d81b [JIT] clang-format JIT code (#35115)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/35115

This commit runs the newly added tools/clang_format.py on the JIT
codebase and includes all of the formatting changes thus produced.

Testing:
Ran the script, CI.

Test Plan: Imported from OSS

Reviewed By: eellison

Differential Revision: D20568523

Pulled By: SplitInfinity

fbshipit-source-id: e09bdb982ccf090eecfb7c7b461b8d0681eef82b
2020-03-26 11:24:51 -07:00
8b12602264 Add traces to specialize_autograd and lower_grad_of (2nd try)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/22752

Differential Revision: D17543836

Pulled By: Krovatkin

fbshipit-source-id: 5cbca220943a580169bf60ac09780b6e67075d2b
2019-09-24 09:58:43 -07:00
27b5a6c577 Add documentation to logging
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/26175

Differential Revision: D17371085

Pulled By: Krovatkin

fbshipit-source-id: ea06f4e16fc320940a299e8e1d4f4d7c76f5950a
2019-09-13 12:13:16 -07:00
1eae6355d8 tracing with an opt-in by file name (#25895)
Summary:
This is basically a simple filter, as you suggested, ZolotukhinM:

`export PYTORCH_JIT_LOG_LEVEL=guard_elimination` will print all `GRAPH_DUMP` and `GRAPH_UPDATE` statements in `guard_elimination.cpp`.
`export PYTORCH_JIT_LOG_LEVEL=>guard_elimination:>alias_analysis` will print all `GRAPH_DUMP`, `GRAPH_UPDATE` **and** `GRAPH_DEBUG` statements in `guard_elimination.cpp` **and** in `alias_analysis.cpp`
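
A sketch of applying the same filter from Python, assuming the variable is set before the JIT logger first reads it:

```python
import os

# Must be set before the JIT consults the filter for the first time.
os.environ["PYTORCH_JIT_LOG_LEVEL"] = ">guard_elimination:>alias_analysis"

import torch  # noqa: E402

# ... run a scripted model; GRAPH_DEBUG output now appears for both passes.
```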
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25895

Differential Revision: D17309090

Pulled By: Krovatkin

fbshipit-source-id: 8fa9e67cc9af566b084d66cc15223633fda08444
2019-09-12 14:16:53 -07:00
f3fdbba666 print source code when a function is executed (#25868)
Summary:
While this isn't ideal, since it might print out the same source every time a function is run, it's still easier to go and tweak Python code to reduce loop counts than to insert `std::cout` and recompile C++ code.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25868

Differential Revision: D17318386

Pulled By: Krovatkin

fbshipit-source-id: 928ba6543204042924ab41a724635594709630de
2019-09-12 10:03:59 -07:00
9b73c77390 jit_log: Extract a function that prefixes all lines of a string with another string. (#24355)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/24355

Test Plan: Imported from OSS

Differential Revision: D16864134

Pulled By: ZolotukhinM

fbshipit-source-id: 8b456858d8ee07fd4ca3fb1759237756df897cd9
2019-08-16 15:12:58 -07:00
db1e9b1d6c Fix a few clang warnings.
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/23524

Differential Revision: D16549562

fbshipit-source-id: 58351fc2858d495b135023626116f6f565c8e9b1
2019-07-29 22:08:50 -07:00
05f088ec22 make jit logging visible, so it can be used in a TVM compiler
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/23041

Differential Revision: D16402934

Pulled By: Krovatkin

fbshipit-source-id: 715f9821809527e94bd7f01f1680db046c888e6c
2019-07-20 14:37:49 -07:00
0196e0bafb add line numbers to jit_log.h
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/22630

Differential Revision: D16172090

Pulled By: Krovatkin

fbshipit-source-id: 26cdb0077a0bfbf9981e39359472f3251546db53
2019-07-10 15:28:29 -07:00
cbb0b8166d Revert D16161144: [pytorch][PR] Add traces to LowerGradOf and SpecializeAutoGrad
Differential Revision: D16161144

Original commit changeset: 9e206fcfb179

fbshipit-source-id: 8f9eecb5cd6ca715bd0c647c32cf77cd9d88e6ac
2019-07-10 06:55:01 -07:00
50901be9fb Add traces to LowerGradOf and SpecializeAutoGrad
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/22599

Differential Revision: D16161144

Pulled By: Krovatkin

fbshipit-source-id: 9e206fcfb1796e9448e80f178b75d0c277bd348f
2019-07-09 16:41:39 -07:00
91706d1044 Primitive Jit Logging
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/22278

Differential Revision: D16134598

Pulled By: Krovatkin

fbshipit-source-id: e64b14d0d68801189fc78c059a4e8b322acce3fa
2019-07-05 15:27:38 -07:00