Commit Graph

12 Commits

bcc9084bc4 [JIT] Support scripting torch.is_autocast_enabled() (#81305)
This adds an `aten::is_autocast_enabled` op into the jit runtime so that
autocasting ops can be scripted and called from within jit.
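A minimal sketch of what this enables (illustrative, not the PR's own test): a scripted function can now branch on the runtime autocast state.

```
import torch

# Illustrative sketch: before this change, scripting a function that
# calls torch.is_autocast_enabled() would fail to compile.
@torch.jit.script
def pick_dtype(x: torch.Tensor) -> torch.Tensor:
    if torch.is_autocast_enabled():
        return x.half()
    return x

# Outside any autocast context, the input dtype is preserved.
y = pick_dtype(torch.ones(2))
```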

Differential Revision: [D37901585](https://our.internmc.facebook.com/intern/diff/D37901585)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/81305
Approved by: https://github.com/qihqi, https://github.com/eellison
2022-07-27 22:32:08 +00:00
1a41cd8f97 Conv BN folding data type issue when conv has no bias (#78241)
PR https://github.com/pytorch/pytorch/pull/77042 fixed the folded conv-bn data type issue, but missed the case where the original conv has no bias input.
In this PR:

- Fix the folded conv-bn's bias data type issue: when the conv has no bias but its weight is a lower-precision data type, the newly generated bias should have the same data type as the conv's weight.
- Move the Autocast JIT Trace UT from `test_jit.py` to `test_jit_autocast.py`.
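The dtype rule being fixed can be sketched as follows (hypothetical helper, not the actual folding pass): the bias synthesized from BN statistics is cast to the conv weight's dtype.

```
import torch

# Hypothetical helper sketching the fix: when conv has no bias, the
# folded bias comes from BN stats (beta - mean * gamma / sqrt(var + eps))
# and must be cast to the conv weight's (possibly lower-precision) dtype.
def fold_bn_bias(conv_weight, mean, var, gamma, beta, eps=1e-5):
    scale = gamma / torch.sqrt(var + eps)
    bias = beta - mean * scale
    return bias.to(conv_weight.dtype)  # the dtype fix described above

w = torch.randn(8, 3, 3, 3, dtype=torch.half)  # half-precision conv weight
b = fold_bn_bias(w, torch.zeros(8), torch.ones(8), torch.ones(8), torch.zeros(8))
```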

Pull Request resolved: https://github.com/pytorch/pytorch/pull/78241
Approved by: https://github.com/davidberard98
2022-05-26 18:42:17 +00:00
05ce0f9be6 Add option to disable autocast pass
Pull Request resolved: https://github.com/pytorch/pytorch/pull/77566

Approved by: https://github.com/anijain2305, https://github.com/davidberard98
2022-05-18 14:57:25 +00:00
91f5056ffc [JIT][Autocast] Don't cast softmax on CPU
In eager autocasting, softmax is only upcast on GPU (not on CPU).
This fixes the JIT implementation to match.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/76661

Approved by: https://github.com/jjsjann123, https://github.com/eellison
2022-05-02 22:47:52 +00:00
981baadf47 [JIT] Add autocasting to freezing pass & enable autocast pass by default (#74178)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/74178

Autocasting + freezing should reduce model size in some scenarios, since half-precision constants should be smaller than full-precision constants. This also enables the jit autocast pass by default, so `torch._C._jit_set_autocast_mode(True)` doesn't need to be set in order to enable autocasting.
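A sketch of the workflow this enables (module and shapes are illustrative; executing with casts applied requires a CUDA device, but scripting and freezing work anywhere):

```
import torch
from torch.cuda.amp import autocast

# Illustrative module: with the autocast pass on by default, freezing a
# scripted module lets constant folding keep constants at reduced precision.
class MatMul(torch.nn.Module):
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        with autocast():
            return torch.mm(x, x)

frozen = torch.jit.freeze(torch.jit.script(MatMul().eval()))
```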

Test Plan: Imported from OSS

Reviewed By: zou3519, eellison

Differential Revision: D34914245

Pulled By: davidberard98

fbshipit-source-id: 301f3669431feabbd695ebbdfc9c17bd1be3b565
(cherry picked from commit 0530cd365ae1f148910100a5c2981e80d04e4883)
2022-03-23 23:10:48 +00:00
2e523ed229 [JIT] additional support for CallMethod with autocasting (#67925)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/67925

Previously, the following would always fail, because autocasting would not be enabled in the called method:

```
from torch.cuda.amp import autocast

@torch.jit.script
def fn(x, y):
    with autocast():
        ...  # CallMethod() to some method

fn(x, y)
```

This allows the above, if autocasting is globally enabled, e.g.

```
from torch.cuda.amp import autocast

@torch.jit.script
def fn(x, y):
    with autocast():
        ...  # CallMethod() to some method

with autocast():
    fn(x, y)  # now autocasting is enabled in the called method
```
ghstack-source-id: 142667351

Test Plan: added test in test_jit_autocast.py

Reviewed By: navahgar

Differential Revision: D32214439

fbshipit-source-id: bb7db054e25e18f5e3d2fdb449c35b5942ab303e
2021-11-08 14:37:09 -08:00
ee7412dd29 autodiff fix for autocast_to_xxx (#67648)
Summary:
Fixes an autocast + autodiff issue that raised `RuntimeError: grad_inputs.size() == node->inputs().size() INTERNAL ASSERT FAILED at "../torch/csrc/jit/runtime/autodiff.cpp":426, please report a bug to PyTorch.`

Pull Request resolved: https://github.com/pytorch/pytorch/pull/67648

Reviewed By: cpuhrsch

Differential Revision: D32083227

Pulled By: davidberard98

fbshipit-source-id: edf526cff4ec21874ae35ec730d13c250073e10c
2021-11-05 10:48:39 -07:00
b8d365ca3a ci fix (#67826)
Summary:
Fixes #{issue number}

Pull Request resolved: https://github.com/pytorch/pytorch/pull/67826

Reviewed By: Chillee

Differential Revision: D32164770

Pulled By: mruberry

fbshipit-source-id: c1de7e6db6d0cb1761388f1ea0178dbff3fe6dc8
2021-11-04 00:16:47 -07:00
99c7a9f09d fix bfloat16 autocast skip (#67822)
Summary:
Per title

Pull Request resolved: https://github.com/pytorch/pytorch/pull/67822

Reviewed By: mruberry

Differential Revision: D32162605

Pulled By: ngimel

fbshipit-source-id: eb5ccf6c441231e572ec93ac8c2638d028abecad
2021-11-03 21:02:37 -07:00
88d86de7d8 Add lint to ensure all test files have headers with ownership info (#66826)
Summary:
UPDATE: CI should be green now with the added files.

This should fail for now, but will pass when all action for https://github.com/pytorch/pytorch/issues/66232 is done.

Example failure run: https://github.com/pytorch/pytorch/runs/4052881947?check_suite_focus=true

Pull Request resolved: https://github.com/pytorch/pytorch/pull/66826

Reviewed By: seemethere

Differential Revision: D32087209

Pulled By: janeyx99

fbshipit-source-id: ad4b51e46de54f23aebacd592ee67577869f8bb6
2021-11-03 18:21:49 -07:00
fddfb81dd0 Add BF16 type to _autocast_to_full_precision (#67707)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/67707

PR https://github.com/pytorch/pytorch/pull/63939 added FP16 support to TorchScript.

This adds the BF16 data type when converting to full precision.
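An eager-mode analogue of the behavior (the `_autocast_to_full_precision` op itself is internal to TorchScript): bf16, like fp16, upcasts to float32.

```
import torch

# Eager analogue: bf16 tensors upcast to float32, mirroring what the
# internal _autocast_to_full_precision op now supports for BF16 in JIT.
x = torch.randn(4, dtype=torch.bfloat16)
y = x.float()
```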

Test Plan: Unit test. Also tested BF16 locally on A100 using MLP model.

Reviewed By: idning

Differential Revision: D32027152

fbshipit-source-id: b2a5ff2b22ea1e02306b0399f2b39b8493be4f45
2021-11-03 14:06:50 -07:00
1ec732bc46 Add fp16/fp32 autocasting to JIT/TorchScript (#63939)
Summary:
Adds mixed precision autocasting support between fp32/fp16 to TorchScript/JIT. A more in-depth description can be found at [torch/csrc/jit/JIT-AUTOCAST.md](https://github.com/pytorch/pytorch/pull/63939/files#diff-1f1772aaa508841c5bb58b74ab98f49a1e577612cd9ea5c386c8714a75db830b)

This PR implements an autocast optimization pass that inserts casting ops per the AMP rules (torch/csrc/jit/passes/autocast.cpp), mimicking the behavior of eager autocast. The pass also takes the `torch.cuda.amp.autocast` context into consideration and only inserts casting ops within the enabled context manager, giving feature parity with eager AMP autocast.

We currently provide JIT AMP autocast as a prototype feature, so it is off by default and can be turned on via `torch._C._jit_set_autocast_mode(True)`.

The JIT support for autocast is subject to different constraints than the eager mode implementation (mostly related to the fact that TorchScript is statically typed); the restrictions on user-facing Python code are described in torch/csrc/jit/JIT-AUTOCAST.md.

This is a prototype; there are also implementation limitations that are necessary to keep this PR small and get something functioning quickly upstream, so we can iterate on designs.

A few limitations/challenges that are not properly resolved in this PR:
1. Autocast inserts cast operations, which affect the scalar type of output tensors feeding downstream operations. We are not currently propagating the updated scalar types, which could produce wrong results for operations governed by type promotion rules.

2. The backward pass for autodiff in JIT misses casting dgrad to the input's scalar type, as autograd does in eager mode. This forces us to explicitly mark the casting operation for certain operations (e.g. binary ops); otherwise, we might feed dgrad with a mismatched scalar type to the input. This could break gradient functions consuming dgrad (e.g. gemm backward, which assumes grad_output has the same scalar type as the input).

3. The `torch.autocast` API has an optional `dtype` argument that is not currently supported in JIT autocast; we require a static value.
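The prototype usage described above can be sketched as follows (illustrative function; executing with the inserted casts requires a CUDA device, but scripting works anywhere):

```
import torch
from torch.cuda.amp import autocast

# The pass is off by default at this point, so enable it explicitly.
torch._C._jit_set_autocast_mode(True)

@torch.jit.script
def scaled_mm(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    with autocast():
        return torch.mm(a, b)
```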

Credit goes mostly to:
tlemo
kevinstephano

Pull Request resolved: https://github.com/pytorch/pytorch/pull/63939

Reviewed By: navahgar

Differential Revision: D31093381

Pulled By: eellison

fbshipit-source-id: da6e26c668c38b01e296f304507048d6c1794314
2021-10-27 12:11:36 -07:00