Commit Graph

32 Commits

Author SHA1 Message Date
cyy
efca51e171 [8/N] Fix clang-tidy warnings in jit (#131997)
Follows #131996
Pull Request resolved: https://github.com/pytorch/pytorch/pull/131997
Approved by: https://github.com/Skylion007
2024-07-29 12:40:42 +00:00
7ce69d5dbe [RELAND] Remove some unnecessary <iostream> includes from headers (#108150)
In almost all cases this is only included for writing the output formatter, which
only uses `std::ostream` so including `<ostream>` is sufficient.

The istream header is ~1000 lines so the difference is non-trivial.
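As a hedged illustration of the pattern described above (the `Point` type here is made up): a header that only needs to format values to a stream can include `<ostream>` rather than the much larger `<iostream>`.

```
#pragma once
#include <ostream>  // enough for std::ostream and operator<<

struct Point {
  int x;
  int y;
};

inline std::ostream& operator<<(std::ostream& os, const Point& p) {
  return os << "Point(" << p.x << ", " << p.y << ")";
}
```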

Pull Request resolved: https://github.com/pytorch/pytorch/pull/108150
Approved by: https://github.com/albanD, https://github.com/malfet
ghstack dependencies: #108149
2023-09-20 21:55:15 +00:00
378ffde8c1 Revert "Remove some unnecessary <iostream> includes from headers (#106914)"
This reverts commit a6c29b722772816804d54eed070fbb38450d3e6f.

Reverted https://github.com/pytorch/pytorch/pull/106914 on behalf of https://github.com/izaitsevfb due to Causing metal breakage internally, see D48709279 ([comment](https://github.com/pytorch/pytorch/pull/106914#issuecomment-1696670027))
2023-08-29 02:22:33 +00:00
a6c29b7227 Remove some unnecessary <iostream> includes from headers (#106914)
In almost all cases this is only included for writing the output formatter, which
only uses `std::ostream` so including `<ostream>` is sufficient.

The istream header is ~1000 lines so the difference is non-trivial.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/106914
Approved by: https://github.com/lezcano
2023-08-25 18:24:05 +00:00
28dc1a093f Revert "Remove some unnecessary <iostream> includes from headers (#106914)"
This reverts commit 60936e4c296e79f56cac2431a560970bb4529d03.

Reverted https://github.com/pytorch/pytorch/pull/106914 on behalf of https://github.com/ZainRizvi due to Sorry, but this is breaking internal builds. Seems like a lot of internal code depends on some of the removed imports ([comment](https://github.com/pytorch/pytorch/pull/106914#issuecomment-1688605975))
2023-08-22 17:16:48 +00:00
60936e4c29 Remove some unnecessary <iostream> includes from headers (#106914)
In almost all cases this is only included for writing the output formatter, which
only uses `std::ostream` so including `<ostream>` is sufficient.

The istream header is ~1000 lines so the difference is non-trivial.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/106914
Approved by: https://github.com/lezcano
2023-08-19 20:21:58 +00:00
0247ed27cc Apply Clang-Tidy readability-container-size-empty (#93236)
Not only is this change usually shorter and more readable, it can also yield better performance: size() is not always a constant-time operation (for example on some linked lists), but empty() always is.
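A hedged before/after illustration of the check this clang-tidy rule enforces (the function and container names are made up):

```
#include <string>
#include <vector>

bool hasWarnings(const std::vector<std::string>& warnings) {
  // Before the fix-it: return warnings.size() != 0;
  // After: shorter, and empty() is guaranteed to be O(1) on every container.
  return !warnings.empty();
}
```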

Pull Request resolved: https://github.com/pytorch/pytorch/pull/93236
Approved by: https://github.com/malfet
2023-01-29 23:28:19 +00:00
f3e81f3eed Remove copies in jit_log.cpp (#67841)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/67841

Reviewed By: anjali411

Differential Revision: D33768433

Pulled By: ZolotukhinM

fbshipit-source-id: 9c081895f7b98eb1ed55fc65250d5ab1f33463b7
(cherry picked from commit a32445da4dc6b69c8ad79282031128b0e637be82)
2022-01-25 20:32:12 +00:00
62441157e3 Have getFilesToLevels return a reference (#71047)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/71047

The copy induced by getFilesToLevels is currently consuming 3,457,470,000 cycles per day. A reference might fix that.

Reference:
```
["Inline torch::jit::JitLoggingConfig::getFilesToLevels[abi:cxx11] @ caffe2/torch/csrc/jit/jit_log.cpp:54"]
```
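A hedged sketch of the change implied above: return a const reference to the stored map instead of a copy. The real `JitLoggingConfig` lives in jit_log.cpp; its exact member layout here is an assumption.

```
#include <cstddef>
#include <string>
#include <unordered_map>

class JitLoggingConfig {
 public:
  // Before: returning by value copied the whole map on every call.
  //   std::unordered_map<std::string, size_t> getFilesToLevels() const;
  // After: return a const reference so callers reuse the stored map.
  const std::unordered_map<std::string, size_t>& getFilesToLevels() const {
    return files_to_levels_;
  }

 private:
  std::unordered_map<std::string, size_t> files_to_levels_;
};
```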

Test Plan: Sandcastle

Reviewed By: ngimel

Differential Revision: D33479180

fbshipit-source-id: 05d306ad9ea23e2f30348a08d547ebe274eb0c10
2022-01-10 11:32:32 -08:00
2828ce53fd Added jit log stream changing function and some refactor (#65768)
Summary:
Description:
- Only added `stdout` and `stderr` as possible options from the Python
  API for now. We can add file path passing later.
- Put the class `JitLoggingConfig` in the cpp file since none of its methods were being used outside of this file.

Python API:
`torch._C._jit_set_logging_stream('stdout|stderr')`
C++ API:
`::torch::jit::set_jit_logging_output_stream(ostream);`
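A hedged C++ usage sketch of the API named above; the header path and exact signature are assumptions based on the lines quoted in this commit.

```
#include <iostream>

#include <torch/csrc/jit/jit_log.h>

int main() {
  // Redirect all JIT logging output to stderr.
  torch::jit::set_jit_logging_output_stream(std::cerr);
  return 0;
}
```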

Testing:
- Tested the Python API locally.
- A unit test was written for the C++ API.

Fixes https://github.com/pytorch/pytorch/issues/54182

Pull Request resolved: https://github.com/pytorch/pytorch/pull/65768

Reviewed By: mrshenli

Differential Revision: D31291739

Pulled By: ZolotukhinM

fbshipit-source-id: eee72edc20488efad78a01c5b0ed8a132886a08d
2021-09-30 23:25:11 -07:00
dec5aa2260 [JIT] clean up (#60390)
Summary:
* Minor: spelling, grammar.
* Add calls to `GRAPH_DUMP()` where they were missing.
* Add or expand a few comments.
* Move a few comments to seemingly more appropriate spots.
* In canonicalize_graph_fuser_ops.cpp inline `runnableInputs()` since it
  was only called in one place and had a misleading comment and
  confusing name.
* In `PeepholeOptimizeImpl::optimizeBlock()`, set `changed = true;` when
  removing `aten::is_complex`. Pretty sure its absence was a bug.
* Delete unused `_jit_pass_remove_inplace_ops` and its
  implementation `RemoveInplaceOps()`.
* In `preprocessCaffe2Ops()`, remove redundant check for nested optional
  types. It was already checked in `checkONNXCompatibility()`.
* In `EncoderBase::AddAttribute`, log the unexpected attribute kind.
  I don't remember the repro case now but I did hit this error at some
  point and this additional logging made it easier to understand.
* In `fuseConvBatchNorm()` in eval_peephole.cpp, consistently use
  camelCase instead of snake_case for local variables.
* Add curly braces around the bodies of if statements and loops.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/60390

Reviewed By: Krovatkin

Differential Revision: D29523283

Pulled By: SplitInfinity

fbshipit-source-id: 4e16c5648616f53da07d68dab7fdf252e06a0752
2021-07-09 16:28:27 -07:00
9d1d799034 Added API to change logging levels for JIT (#58821)
Summary:
Description:
- Before this, the logging level could only be changed via the env
variable "PYTORCH_JIT_LOG_LEVEL"
    - The level can now be changed from Python
- Stream configuration has not been added yet
- The configuration is stored in a singleton class managing the options

Issue Link: https://github.com/pytorch/pytorch/issues/54188

Gotchas:
- Created separate functions
`::torch::jit::get_jit_logging_levels/set_jit_logging_levels` instead of
using the singleton class's method directly
    - This is because when running test cases, two different instances
    of the singleton are created for the test suite and the actual code
    (`jit_log.cpp`)
    - If the singleton's methods are used directly, `is_enabled` calls the singleton
    in `jit_log.cpp` while the config is being set on another
    singleton
    - See: https://stackoverflow.com/questions/55467246/my-singleton-can-be-called-multiple-times

API:
- To set the level: `torch._C._jit_set_logging_option("level")`
- To get the level: `torch._C._jit_get_logging_option()`
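A hedged C++ counterpart using the free functions mentioned in the Gotchas above; the header path and exact signatures are assumptions.

```
#include <string>

#include <torch/csrc/jit/jit_log.h>

void configureJitLogging() {
  // C++ counterpart of torch._C._jit_set_logging_option(">dead_code_elimination").
  torch::jit::set_jit_logging_levels(">dead_code_elimination");
  // Read back the currently configured levels.
  std::string levels = torch::jit::get_jit_logging_levels();
  (void)levels;
}
```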

Testing:
- Unit tests were added for the C++ API
- A very simple unit test was added for Python that just checks the API is
being called correctly
- The API was also checked by running a trace on a sample Python file
    - Set the env variable to "" and used `_jit_set_logging_option` in Python to set the level to `>dead_code_elimination`
    - The stderr output contained logs of the form [DUMP ...], [UPDATE ...], etc.

Fixes https://github.com/pytorch/pytorch/issues/54188

Pull Request resolved: https://github.com/pytorch/pytorch/pull/58821

Reviewed By: soulitzer

Differential Revision: D29116712

Pulled By: ZolotukhinM

fbshipit-source-id: 8f2861ee2bd567fb63b405953d035ca657a3200f
2021-06-21 16:10:49 -07:00
2f4c31ce3a [jit] Speed up saving in case of many classes (#44589)
Summary:
There's an annoying O(N^2) in the module export logic that makes saving some models (if they have many classes) take an eternity.

I'm not familiar enough with this code to properly untangle the dependencies and make it a pure hash lookup, so I just added a side lookup table for raw pointers. It's still quadratic, but it's O(num_classes^2) instead of O(num_classes * num_references), which already gives huge savings.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/44589

Test Plan:
Tested with one of the offending models - just loading and saving a TorchScript file:

```
Before:
load 1.9239683151245117
save 165.74712467193604

After:
load 1.9409027099609375
save 1.4711427688598633
```

Reviewed By: suo

Differential Revision: D23675278

Pulled By: dzhulgakov

fbshipit-source-id: 8f3fa7730941085ea20d9255b49a149ac1bf64fe
2020-09-15 15:10:45 -07:00
82da6b3702 [JIT] Fix jit-log verbosity selection logic. (#44587)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/44587

Currently it's skewed by one.

The following test demonstrates it:
```
$ cat test.py

import torch
def foo(a,b):
    return a*a*b
torch._C._jit_set_profiling_executor(True)
torch._C._jit_set_profiling_mode(True)
torch._C._jit_override_can_fuse_on_cpu(True)
torch._C._jit_set_texpr_fuser_enabled(True)
f = torch.jit.script(foo)
for _ in range(10):
    f(torch.rand(10), torch.rand(10))

$ cat test_logging_levels.sh

PYTORCH_JIT_LOG_LEVEL="tensorexpr_fuser"    python test.py 2>&1 | grep DUMP   >& /dev/null && echo OK || echo FAIL
PYTORCH_JIT_LOG_LEVEL="tensorexpr_fuser"    python test.py 2>&1 | grep UPDATE >& /dev/null && echo FAIL || echo OK
PYTORCH_JIT_LOG_LEVEL="tensorexpr_fuser"    python test.py 2>&1 | grep DEBUG  >& /dev/null && echo FAIL || echo OK

PYTORCH_JIT_LOG_LEVEL=">tensorexpr_fuser"   python test.py 2>&1 | grep DUMP   >& /dev/null && echo OK || echo FAIL
PYTORCH_JIT_LOG_LEVEL=">tensorexpr_fuser"   python test.py 2>&1 | grep UPDATE >& /dev/null && echo OK || echo FAIL
PYTORCH_JIT_LOG_LEVEL=">tensorexpr_fuser"   python test.py 2>&1 | grep DEBUG  >& /dev/null && echo FAIL || echo OK

PYTORCH_JIT_LOG_LEVEL=">>tensorexpr_fuser"  python test.py 2>&1 | grep DUMP   >& /dev/null && echo OK || echo FAIL
PYTORCH_JIT_LOG_LEVEL=">>tensorexpr_fuser"  python test.py 2>&1 | grep UPDATE >& /dev/null && echo OK || echo FAIL
PYTORCH_JIT_LOG_LEVEL=">>tensorexpr_fuser"  python test.py 2>&1 | grep DEBUG  >& /dev/null && echo OK || echo FAIL
```

Before this change:
```
OK
FAIL
OK
OK
OK
FAIL
OK
OK
OK
```

With this change everything passes.

Differential Revision: D23666813

Test Plan: Imported from OSS

Reviewed By: bertmaher

Pulled By: ZolotukhinM

fbshipit-source-id: 4adaa5a3d06deadf54eae014a0d76588cdc5e20a
2020-09-13 11:29:25 -07:00
690946c49d Generalize constant_table from tensor only to ivalue (#40718)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40718

Currently, all constants except tensors must be inlined during serialization.
Tensors are stored in the constant table. This patch generalizes this capability
to any IValue. This is particularly useful for non-ASCII string literals that
cannot be inlined.

Test Plan: Imported from OSS

Differential Revision: D22298169

Pulled By: bzinodev

fbshipit-source-id: 88cc59af9cc45e426ca8002175593b9e431f4bac
2020-07-09 09:09:40 -07:00
866d9d4e6a [jit] Fix name collision on load (#35720)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/35720

When modules are saved, all relevant types are serialized according to
their qualified name within a compilation unit. Since qualified names are
guaranteed to be unique within a compilation unit, this normally works
fine.

On load, all types are registered in a compilation unit owned by the
script::Module. Type names are not unique across compilation units, so
if you load two modules with colliding type names, make them submodules
of yet another module, and save that module, there is the potential for a
name collision. See the added tests for examples if that description is
confusing.

The solution is to unique type names when serializing code by mangling
them if we detect a name collision.

Test Plan: Imported from OSS

Differential Revision: D20749423

Pulled By: suo

fbshipit-source-id: a8827ff1d4a89f3e7964dbbb49b4381863da3e6a
2020-04-01 00:02:38 -07:00
6384c2d81b [JIT] clang-format JIT code (#35115)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/35115

This commit runs the newly added tools/clang_format.py on the JIT
codebase and includes all of the formatting changes thus produced.

Testing:
Ran the script, CI.

Test Plan: Imported from OSS

Reviewed By: eellison

Differential Revision: D20568523

Pulled By: SplitInfinity

fbshipit-source-id: e09bdb982ccf090eecfb7c7b461b8d0681eef82b
2020-03-26 11:24:51 -07:00
60e8615a6d [JIT] Virtualize Function (#33921)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/33921

**NOTE FOR REVIEWERS**: This PR has internal Facebook specific changes or comments, please review them on [Phabricator](https://our.intern.facebook.com/intern/diff/D20153092/)!

Test Plan: Imported from OSS

Differential Revision: D20177227

Pulled By: jamesr66a

fbshipit-source-id: 87f3e484c4f873d60f76f50f6789c1b4a73bdfde
2020-03-07 10:03:50 -08:00
dbe850af5b [jit] do the code reorg (#33851)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/33851

Rationale and context described in #33828.

Script to reproduce the move:
https://gist.github.com/suo/16cbefaaeb67ca5a7c6caffd49b7f6e9
ghstack-source-id: 99079645

Test Plan: Make sure CI passes

Reviewed By: jamesr66a

Differential Revision: D20133869

fbshipit-source-id: 390e9241a9c85366d9005c492ac31f10aa96488e
2020-02-27 13:02:51 -08:00
58ed8ca9e1 clean up exported source format (#28129)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/28129

The previous PR in the stack removed the need to order classes/functions
or have correct import statements. This resolved circular dependency issues
that can arise when class constructors like ModuleList put new instances
of themselves in a common namespace.

This PR changes our export format to no longer produce this information.
By doing so we can make the logic significantly simpler, since we just
keep track of an individual PythonPrint object per file.

Notes:
* PythonPrint was changed to manage its own stream/list of ranges. It
was doing this anyway internally; this just makes the API clearer.
* Since we are changing the serialization format, I also removed op_version_set.
It is now replaced with the VERSION number that is written in the zip archive.
This further simplifies the code emission process.
* A test of op_version_set was removed since there is no longer any behavior
to test.

Test Plan: Imported from OSS

Differential Revision: D17961610

Pulled By: zdevito

fbshipit-source-id: ada362c4ca34d05393a1a7e799c94785ab9d9825
2019-10-16 22:47:24 -07:00
3de34744b3 Make PythonPrint a class (#26787)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26787

A follow-up PR will remove the need to issue import statements
or write classes in order, since they are no longer needed.
This change allows the same PythonPrint class
to be used for an entire file, which will be needed in that patch.

Test Plan: Imported from OSS

Differential Revision: D17566440

Pulled By: zdevito

fbshipit-source-id: 1ee896da0cdfe6a003298e1d4b0238403b9ed6dd
2019-10-15 16:00:34 -07:00
0ae0c9788e Fix misuses of TORCH_CHECK/TORCH_INTERNAL_ASSERT with string (#26897)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26897

TORCH_INTERNAL_ASSERT("foo") doesn't do what you think it does :)

I'll try to do a fix to catch it in the compiler, but for now - let's fix usages
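A hedged illustration of the misuse being fixed; the condition and message here are made up:

```
#include <ATen/core/ivalue.h>
#include <c10/util/Exception.h>

void checkInput(const c10::IValue& value) {
  // Misuse: the string literal becomes the condition, which is always truthy,
  // so the assert can never fire and there is no message to print.
  //   TORCH_INTERNAL_ASSERT("expected a tensor");

  // Intended form: condition first, message second.
  TORCH_INTERNAL_ASSERT(value.isTensor(), "expected a tensor");
}
```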

Found them using regex:
```
ag --cpp "TORCH_(CHECK|INTERNAL_ASSERT)\([ \n]*\"" --multiline
```

Test Plan: Imported from OSS

Differential Revision: D17624299

Pulled By: dzhulgakov

fbshipit-source-id: 74f05737ef598fd92b5e61541ee36de2405df23d
2019-09-27 13:45:19 -07:00
8b12602264 Add traces to specialize_autograd and lower_grad_of (2nd try)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/22752

Differential Revision: D17543836

Pulled By: Krovatkin

fbshipit-source-id: 5cbca220943a580169bf60ac09780b6e67075d2b
2019-09-24 09:58:43 -07:00
1eae6355d8 tracing with an opt-in by file name (#25895)
Summary:
This basically works as a simple filter, as you suggested, ZolotukhinM

`export PYTORCH_JIT_LOG_LEVEL=guard_elimination` will print all `GRAPH_DUMP` and `GRAPH_UPDATE` statements.
`export PYTORCH_JIT_LOG_LEVEL=>guard_elimination:>alias_analysis` will print all `GRAPH_DUMP`, `GRAPH_UPDATE` **and** `GRAPH_DEBUG` statements in `guard_elimination.cpp` **and** in `alias_analysis.cpp`
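A hedged sketch of how a pass emits messages at the three verbosity levels named above. The macro names come from the commit messages in this log; the include paths and exact signatures are assumptions, and the pass itself is hypothetical.

```
#include <memory>

#include <torch/csrc/jit/ir/ir.h>
#include <torch/csrc/jit/jit_log.h>

void myToyPass(std::shared_ptr<torch::jit::Graph>& graph) {
  GRAPH_DUMP("Graph before myToyPass: ", graph);   // base level: graph dumps
  // ... mutate the graph here ...
  GRAPH_UPDATE("Rewrote a node in myToyPass");     // ">" level: graph updates
  GRAPH_DEBUG("Low-level detail from myToyPass");  // ">>" level: debug output
}
```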
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25895

Differential Revision: D17309090

Pulled By: Krovatkin

fbshipit-source-id: 8fa9e67cc9af566b084d66cc15223633fda08444
2019-09-12 14:16:53 -07:00
f928994968 make sure all out stringstreams start out empty in jit_log.hpp
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/25863

Differential Revision: D17347386

Pulled By: Krovatkin

fbshipit-source-id: a42cf56680a27bc3e50fd945ab372a409225b875
2019-09-12 12:39:10 -07:00
f3fdbba666 print source code when a function is executed (#25868)
Summary:
While this isn't ideal, as it might print out the same source every time a function is run, it's still easier to go and tweak Python code to reduce loop counts than to insert `std::cout` and recompile C++ code.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25868

Differential Revision: D17318386

Pulled By: Krovatkin

fbshipit-source-id: 928ba6543204042924ab41a724635594709630de
2019-09-12 10:03:59 -07:00
5c78e0c470 Fix a bug in creating a prefix string in jit log. (#25051)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25051

In #24355 I factored out a function for creating a prefix in jit_log,
but I made a copypasta error there: the prefix stringstream was
initialized from the input string instead of an empty string.
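A hedged sketch of such a prefixing helper, showing where the copy/paste bug sat; the name and signature are assumed rather than taken from the actual source.

```
#include <sstream>
#include <string>

std::string prefixLines(const std::string& str, const std::string& prefix) {
  std::stringstream in(str);
  std::stringstream out;  // the bug: this used to be initialized with `str`
  std::string line;
  while (std::getline(in, line)) {
    out << prefix << line << "\n";
  }
  return out.str();
}
```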

Test Plan: Imported from OSS

Differential Revision: D16974156

Pulled By: ZolotukhinM

fbshipit-source-id: 014fe0e3366e85e984a6936ec9bb17f571107f6e
2019-08-22 17:44:42 -07:00
9b73c77390 jit_log: Extract a function that prefixes all lines of a string with another string. (#24355)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/24355

Test Plan: Imported from OSS

Differential Revision: D16864134

Pulled By: ZolotukhinM

fbshipit-source-id: 8b456858d8ee07fd4ca3fb1759237756df897cd9
2019-08-16 15:12:58 -07:00
0196e0bafb add line numbers to jit_log.h
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/22630

Differential Revision: D16172090

Pulled By: Krovatkin

fbshipit-source-id: 26cdb0077a0bfbf9981e39359472f3251546db53
2019-07-10 15:28:29 -07:00
cbb0b8166d Revert D16161144: [pytorch][PR] Add traces to LowerGradOf and SpecializeAutoGrad
Differential Revision:
D16161144

Original commit changeset: 9e206fcfb179

fbshipit-source-id: 8f9eecb5cd6ca715bd0c647c32cf77cd9d88e6ac
2019-07-10 06:55:01 -07:00
50901be9fb Add traces to LowerGradOf and SpecializeAutoGrad
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/22599

Differential Revision: D16161144

Pulled By: Krovatkin

fbshipit-source-id: 9e206fcfb1796e9448e80f178b75d0c277bd348f
2019-07-09 16:41:39 -07:00
91706d1044 Primitive Jit Logging
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/22278

Differential Revision: D16134598

Pulled By: Krovatkin

fbshipit-source-id: e64b14d0d68801189fc78c059a4e8b322acce3fa
2019-07-05 15:27:38 -07:00