Commit Graph

1565 Commits

Author SHA1 Message Date
68a6113248 Add nvFuser support for torch.native_batch_norm (#85562)
This PR adds nvFuser's implementation for batch_norm as there's no reference yet (https://github.com/pytorch/pytorch/pull/81191) and no in-place copy support (https://github.com/pytorch/pytorch/pull/84545).

Pull Request resolved: https://github.com/pytorch/pytorch/pull/85562
Approved by: https://github.com/kevinstephano, https://github.com/ngimel
2022-10-03 15:03:08 +00:00
07ce0b435b Remove backward for im2col and col2im (#85542)
`im2col` is a linear map, and `col2im` is its adjoint. As such, the
adjoint to `col2im` is `im2col` (the adjoint of the adjoint is the
original function).
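
A quick numerical check of this adjoint relationship, sketched with the public `unfold`/`fold` wrappers (sizes are arbitrary; this is illustrative, not code from the PR): for a linear map A, <Ax, y> = <x, A^T y>.
```python
import torch
import torch.nn.functional as F

x = torch.randn(2, 3, 8, 8, dtype=torch.float64)
cols = F.unfold(x, kernel_size=3)                   # im2col
y = torch.randn_like(cols)
img = F.fold(y, output_size=(8, 8), kernel_size=3)  # col2im with matching params

# <im2col(x), y> should equal <x, col2im(y)>
torch.testing.assert_close((cols * y).sum(), (x * img).sum())
```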

There's no point having explicit derivatives in ATen for these
functions, so this PR deletes them.

Furthermore, along the way, we fix an error for the derivative of im2col
for non-batched inputs.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/85542
Approved by: https://github.com/soulitzer, https://github.com/ngimel
2022-10-03 00:16:42 +00:00
99ca25e6eb Misspelling Correction PR common_methods_invocations.py (#86081)
Noticed a misspelling while looking at Issue #85712. This just fixes the misspelling on line #3107.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/86081
Approved by: https://github.com/ngimel
2022-10-02 22:55:34 +00:00
007e12a3e9 OpInfo: Extend natural syntax to allow adding metadata (#85890)
Splitting into a separate PR in case of bike shedding. We can't use
the normal fluent syntax `SampleInput(x).name("foo")` because `.name`
is already how the metadata is accessed. So instead, this adds a
single function where you pass keyword arguments to fill in the
metadata, e.g.
```
SampleInput(x).with_metadata(
    name="foo", output_process_fn_grad=out_fn)
```

An alternative closer to the normal fluent style would be to add a
prefix to the property's name, e.g.
```
(SampleInput(x)
    .with_name("foo")
    .with_output_process_fn_grad(out_fn))
```

However, I have a slight preference for the `with_metadata` style
because you don't need to add extra parentheses to break lines.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/85890
Approved by: https://github.com/mruberry
2022-10-02 19:56:40 +00:00
ed5f95048e OpInfo: Add natural syntax for SampleInput creation (#85723)
Most SampleInput objects currently have no additional metadata,
meaning they have a 1:1 mapping with a normal function call. This adds
var arg forms of the `SampleInput` constructor such that you can just
call the `SampleInput` constructor as you would call the operator.

So, for example
```python
SampleInput(make_arg(shape), args=(2, 3), kwargs=dict(alpha=4))
```
becomes
```python
SampleInput(make_arg(shape), 2, 3, alpha=4)
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/85723
Approved by: https://github.com/mruberry
2022-10-02 19:56:40 +00:00
6db3539e70 Revert "Improve make_tensor performance for float and complex types (#85473)"
This reverts commit a76995e584b880910f0724be98eb21773e8ed6e9.

Reverted https://github.com/pytorch/pytorch/pull/85473 on behalf of https://github.com/huydhn due to Sorry for reverting your PR, but it seems to cause a bunch of flaky tests in pull and periodic
2022-09-29 20:06:52 +00:00
8fb470e81a [fix] max_pool1d: shape check (#85594)
Fixes #76587

Before PR:

```python
import torch
max_pool = torch.nn.MaxPool1d(3)
t = torch.rand([17, 0, 50], dtype=torch.float32)  # note requires_grad is False
max_pool(t) # Worked and returned tensor of shape [17, 0, 48].
```

After PR
```python
import torch
max_pool = torch.nn.MaxPool1d(3)
t = torch.rand([17, 0, 50], dtype=torch.float32)  # note requires_grad is False
max_pool(t) # Errors with `max_pool1d: Expected 2D or 3D (batch mode) tensor with optional 0 dim batch size for input, but got: [17, 0, 48]`
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/85594
Approved by: https://github.com/mruberry
2022-09-29 15:40:09 +00:00
a76995e584 Improve make_tensor performance for float and complex types (#85473)
For floating types, `make_tensor` calls `rand` and then does a linear
interpolation from `low` to `high`. This instead calls `uniform_(low,
high)` to cut out the interpolation step.

For complex types, `make_tensor` does the `rand` + interpolation step
twice and calls `torch.complex(real, imag)` at the end. This instead
uses `view_as_real` and `uniform_(low, high)` to fuse it all into one
operation.
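
A rough sketch of the two sampling strategies described above (shape, dtype, and bounds are made up; this is not the actual `make_tensor` code):
```python
import torch

low, high = -3.0, 5.0

# Old approach for floats: rand + linear interpolation (two steps).
t = torch.rand(4096, dtype=torch.float32)
t = low + t * (high - low)

# New approach for floats: sample directly in [low, high).
t = torch.empty(4096, dtype=torch.float32).uniform_(low, high)

# New approach for complex: fill real and imaginary parts in one pass
# through a real view of the complex tensor.
c = torch.empty(4096, dtype=torch.complex64)
torch.view_as_real(c).uniform_(low, high)
```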

My benchmarks show significant speedups in all cases for float32 and
complex64.

| Device | dtype     | Size  | Master (us) | This PR (us) | Speedup |
|--------|-----------|-------|-------------|--------------|---------|
| CPU    | float32   | 8     | 19.4        | 6.34         | 3.1     |
|        |           | 4096  | 36.8        | 21.3         | 1.7     |
|        |           | 2**24 | 167,000     | 80,500       | 2.1     |
|        | complex32 | 8     | 37.0        | 7.57         | 4.9     |
|        |           | 4096  | 73.1        | 37.6         | 1.9     |
|        |           | 2**24 | 409,000     | 161,000      | 2.5     |
| CUDA   | float32   | 8     | 40.4        | 11.7         | 3.5     |
|        |           | 4096  | 38.7        | 11.7         | 3.3     |
|        |           | 2**24 | 2,300       | 238          | 9.7     |
|        | complex32 | 8     | 78.7        | 14           | 5.6     |
|        |           | 4096  | 82.7        | 13.8         | 6.0     |
|        |           | 2**24 | 5,520       | 489          | 11.3    |
Pull Request resolved: https://github.com/pytorch/pytorch/pull/85473
Approved by: https://github.com/mruberry
2022-09-29 11:46:09 +00:00
cca909645f Add bfloat16 support for lerp on CPU (#84327)
### Description
Add bfloat16 support for lerp on CPU
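
A minimal usage check of the newly supported dtype (illustrative only, not a benchmark case):
```python
import torch

start = torch.zeros(4, dtype=torch.bfloat16)
end = torch.ones(4, dtype=torch.bfloat16)
torch.lerp(start, end, 0.5)                        # scalar weight
torch.lerp(start, end, torch.full_like(end, 0.5))  # tensor weight
```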

### Testing
single core:
| op | shape | fp32 forward / s | bf16 forward / s | fp32 backward / s | bf16 backward / s |
| -- | -- | -- | -- | -- | -- |
| lerp (tensor) | [10, 128, 10, 124] | 0.005489 | 0.000613 | 0.006658 | 0.003385 |
| | [10, 128, 20, 124] | 0.011057 | 0.001204 | 0.016032 | 0.007869 |
| | [10, 128, 30, 124] | 0.016691 | 0.001954 | 0.025549 | 0.012823 |
| lerp (scalar) | [10, 128, 10, 124] | 0.001096 | 0.000507 | 0.002024 | 0.001479 |
| | [10, 128, 20, 124] | 0.00247 | 0.000997 | 0.005468 | 0.002907 |
| | [10, 128, 30, 124] | 0.004178 | 0.001513 | 0.009775 | 0.004859 |

single socket (28cores):
| op | shape | fp32 forward / s | bf16 forward / s | fp32 backward / s | bf16 backward / s |
| -- | -- | -- | -- | -- | -- |
| lerp (tensor) | [10, 128, 10, 124] | 0.000236 | 3.93E-05 | 0.000494 | 0.000235 |
| | [10, 128, 20, 124] | 0.000525 | 7.39E-05 | 0.002485 | 0.000638 |
| | [10, 128, 30, 124] | 0.000801 | 0.000121 | 0.004235 | 0.001529 |
| lerp (scalar) | [10, 128, 10, 124] | 5.90E-05 | 3.32E-05 | 0.000129 | 0.000116 |
| | [10, 128, 20, 124] | 0.000155 | 5.87E-05 | 0.000368 | 0.000206 |
| | [10, 128, 30, 124] | 0.000324 | 9.04E-05 | 0.001322 | 0.000313 |
Pull Request resolved: https://github.com/pytorch/pytorch/pull/84327
Approved by: https://github.com/frank-wei
2022-09-29 01:16:16 +00:00
8dd45424ea [primTorch] Add ref for huber_loss and error inputs (#85041)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/85041
Approved by: https://github.com/lezcano, https://github.com/mruberry
2022-09-28 19:56:17 +00:00
a0b1693996 Revert "Update amax/amin/norm/count_nonzero signatures with int[*]? dim (#83300)"
This reverts commit 1c0f0b33a0e013d6ec162cf488ff7643c4ffa33e.

Reverted https://github.com/pytorch/pytorch/pull/83300 on behalf of https://github.com/jeffdaily due to The commit breaks nvfuser tests
2022-09-28 17:04:53 +00:00
0b251d985d skip test TestCompositeComplianceCUDA::test_forward_ad_nn_functional_max_unpool2d_cuda_float32 (#85767)
This test was marked as expected failure, but it is flaky on ROCm only because ROCm sometimes gets an unexpected success. The test was marked expected failure solely due to non-determinism that was already well known. See the nearby comments.

a4c94f0739/torch/testing/_internal/common_methods_invocations.py (L11410-L11421)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/85767
Approved by: https://github.com/clee2000
2022-09-28 14:05:02 +00:00
795028a3ce Make Python reference for permute accept varargs (#85460)
Fixes #85452

Pull Request resolved: https://github.com/pytorch/pytorch/pull/85460
Approved by: https://github.com/jjsjann123, https://github.com/mruberry, https://github.com/ngimel
2022-09-28 03:50:42 +00:00
1c0f0b33a0 Update amax/amin/norm/count_nonzero signatures with int[*]? dim (#83300)
Changes the `dim` arg to use the `int[*]?` type for the following functions in `native_functions.yaml`:
* `amax`
* `amin`
* `norm`
* `frobenius_norm`
* `native_norm`
* `count_nonzero`

Part of #29137

Pull Request resolved: https://github.com/pytorch/pytorch/pull/83300
Approved by: https://github.com/ngimel, https://github.com/albanD, https://github.com/kulinseth
2022-09-28 01:56:37 +00:00
572dd862c4 Revert "Update amax/amin/norm/count_nonzero signatures with int[*]? dim (#83300)"
This reverts commit 8c7c7ed3221aeeefb63ef2b7a221a5d8b274cda5.

Reverted https://github.com/pytorch/pytorch/pull/83300 on behalf of https://github.com/huydhn due to The commit pin breaks XLA test somehow
2022-09-28 01:36:43 +00:00
8c7c7ed322 Update amax/amin/norm/count_nonzero signatures with int[*]? dim (#83300)
Changes the `dim` arg to use the `int[*]?` type for the following functions in `native_functions.yaml`:
* `amax`
* `amin`
* `norm`
* `frobenius_norm`
* `native_norm`
* `count_nonzero`

Part of #29137

Pull Request resolved: https://github.com/pytorch/pytorch/pull/83300
Approved by: https://github.com/ngimel, https://github.com/albanD, https://github.com/kulinseth
2022-09-27 23:50:04 +00:00
b656ba0b11 Use hexfloat for threshold OpInfo tests (#85676)
0.123 isn't exactly representable as a floating point value, and so
the threshold will move marginally depending on the data type where
the computation is performed. This leads to a rare flake in tests
comparing against a reference implementation.

Instead, this chooses a threshold which is exactly representable as a
bfloat16 value and thus has the same value for all data types.
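
A small illustration of the idea (the hexfloat below is hypothetical, not necessarily the threshold the PR picks):
```python
import torch

# 0.123 rounds to a different value in each dtype it is stored in ...
print(torch.tensor(0.123, dtype=torch.float32).item())   # roughly 0.1230000034
print(torch.tensor(0.123, dtype=torch.bfloat16).item())  # 0.123046875

# ... whereas a value exactly representable in bfloat16 (and hence in all wider
# float types) is identical everywhere, e.g. 0x1.f8p-4 == 0.123046875.
thresh = float.fromhex("0x1.f8p-4")
assert torch.tensor(thresh, dtype=torch.bfloat16).item() == thresh
assert torch.tensor(thresh, dtype=torch.float32).item() == thresh
```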
Pull Request resolved: https://github.com/pytorch/pytorch/pull/85676
Approved by: https://github.com/ngimel
2022-09-27 16:44:46 +00:00
15c52ffc4f Disallow auto_element_wise for in-place and fix some in-place gradients (#85634)
Fixes https://github.com/pytorch/pytorch/issues/85535

Also fixes the backward and forward gradients of `nn.functional.threshold`. The issue was that in-place gradients weren't tested because the in-place variants were not properly registered to the OpInfo.

An alternative would be to make auto_element_wise smart enough to actually handle the in-place cases (we have 4 cases total now where we manually copy_ after doing auto_element_wise), but that requires a few more changes.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/85634
Approved by: https://github.com/albanD
2022-09-27 15:35:24 +00:00
686555b663 [maskedtensor] port torch/_masked into torch/masked (#85515)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/85515
Approved by: https://github.com/cpuhrsch
2022-09-26 23:41:13 +00:00
a531a604a0 Support BF16ImmPtr (#84041)
- Support BF16 immediate values by converting them to uint16. The behavior is the same as for BF16 tensors.
- Enable BF16 test cases.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/84041
Approved by: https://github.com/ZolotukhinM
2022-09-24 11:58:43 +00:00
604487f239 OpInfo for Slice (#85554)
This is based on wconstab's tests from #84680

Technically, slice is covered by the __getitem__ opinfo, but it is
easier to debug/test on a narrower internal function that only
uses this functionality and not other advanced indexing stuff.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/85554
Approved by: https://github.com/mruberry, https://github.com/wconstab
2022-09-23 22:01:32 +00:00
bc6dc8d271 [fix] composite compliance: cumprod, _masked.cumprod, linalg.vander (#85330)
Ref: #69991
Pull Request resolved: https://github.com/pytorch/pytorch/pull/85330
Approved by: https://github.com/zou3519
2022-09-23 21:40:07 +00:00
253ffbf28b Exposing native _scaled_dot_product_attention to torch.nn (#85044)
# Summary
This exposes the _scaled_dot_product_attention function to Python in the nn namespace. It is still underscored because the API for args and kwargs is still in flux for the next few weeks; it will eventually land as a prototype feature.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/85044
Approved by: https://github.com/cpuhrsch
2022-09-22 16:30:16 +00:00
56a41b5998 [composite compliance] ctc_loss (#84752)
Ref: #69991

I have mixed feelings about adding new (private) operators. Backend writers will have to override them as well.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/84752
Approved by: https://github.com/zou3519
2022-09-22 00:21:11 +00:00
764cba6848 add Python ref for isreal (#85361)
Dipping my toes into prims waters

Pull Request resolved: https://github.com/pytorch/pytorch/pull/85361
Approved by: https://github.com/IvanYashchuk, https://github.com/mruberry
2022-09-21 18:53:34 +00:00
35943f30cb Reference implementation for torch.Tensor.sum_to_size (#85338)
New ref: `torch._refs.sum_to_size`.

View consistency validation is disabled because the ref returns a view instead of returning the input.
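
For context, a tiny usage example of the eager op this ref mirrors (not code from the PR):
```python
import torch

x = torch.randn(2, 3, 4)
y = x.sum_to_size(1, 3, 1)  # reduces dims 0 and 2 so the result has the requested size
assert y.shape == (1, 3, 1)
torch.testing.assert_close(y, x.sum(dim=(0, 2), keepdim=True))
```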
Pull Request resolved: https://github.com/pytorch/pytorch/pull/85338
Approved by: https://github.com/mruberry
2022-09-21 18:12:52 +00:00
0217a8d049 Revert "[fix] composite compliance: cumprod, _masked.cumprod, linalg.vander (#85330)"
This reverts commit d3dec8097b847fc46755ef06ea6ff90eebc846eb.

Reverted https://github.com/pytorch/pytorch/pull/85330 on behalf of https://github.com/dagitses due to a PR this is based on got reverted, rebase and reland
2022-09-21 18:02:50 +00:00
2a88f1b2d8 Land "Make ceil,floor,round,trunc handle integers" (#85144)
PR to land https://github.com/pytorch/pytorch/pull/78480, as Rohit does
not work in the PyTorch project anymore
Pull Request resolved: https://github.com/pytorch/pytorch/pull/85144
Approved by: https://github.com/ngimel, https://github.com/mruberry
2022-09-21 17:23:47 +00:00
563b065f5a [fix] rrelu, rrelu_, & RReLU when lower bound > upper bound (#84996)
Fixes #83160

cc @kshitij12345 @albanD
Pull Request resolved: https://github.com/pytorch/pytorch/pull/84996
Approved by: https://github.com/mruberry, https://github.com/albanD
2022-09-21 13:57:16 +00:00
308b26fe4d Add nvFuser support for transpose (#84629)
`torch._refs.t`, `torch._refs.transpose`, and `torch._refs.permute` should all work now with the nvFuser executor. They also work with graphs processed by AOT Autograd, as these functions are registered to the aten->ref mapping via the "register_decomposition" decorator:
07d398fb26/torch/_refs/__init__.py (L3125-L3126)
07d398fb26/torch/_refs/__init__.py (L3143-L3144)
07d398fb26/torch/_refs/__init__.py (L2548-L2549)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/84629
Approved by: https://github.com/ngimel
2022-09-21 12:45:15 +00:00
2f4a517d67 Ported matmul compositeimplicitautograd impl into core (#85239)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/85239
Approved by: https://github.com/ezyang, https://github.com/lezcano
2022-09-21 09:25:24 +00:00
a3dc338ee1 Revert "Exposing native _scaled_dot_product_attention to torch.nn (#85044)"
This reverts commit 9fdd8a8b7f171be70ea3bd4724c38852ef292d73.

Reverted https://github.com/pytorch/pytorch/pull/85044 on behalf of https://github.com/huydhn due to This breaks CUDA 10.2 in trunk. We are deprecating CUDA 10.2, but it is still here in the meantime
2022-09-21 08:34:51 +00:00
9fdd8a8b7f Exposing native _scaled_dot_product_attention to torch.nn (#85044)
# Summary
This exposes the _scaled_dot_product_attention function to Python in the nn namespace. It is still underscored because the API for args and kwargs is still in flux for the next few weeks; it will eventually land as a prototype feature.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/85044
Approved by: https://github.com/cpuhrsch
2022-09-21 03:09:08 +00:00
b9b27f7664 Added Tensor.to overloads to torch._refs.to (#84802)
Fixes #84264

Pull Request resolved: https://github.com/pytorch/pytorch/pull/84802
Approved by: https://github.com/IvanYashchuk, https://github.com/mruberry
2022-09-20 18:52:02 +00:00
d3dec8097b [fix] composite compliance: cumprod, _masked.cumprod, linalg.vander (#85330)
Ref: #69991
Pull Request resolved: https://github.com/pytorch/pytorch/pull/85330
Approved by: https://github.com/zou3519
2022-09-20 18:18:39 +00:00
d17b144e65 Adding multigammaln ref and fix arange (#85153)
Partially based on https://github.com/pytorch/pytorch/pull/83662.

I'll help land this one, as Rob does not work in the PyTorch project
anymore

I removed the data-dependent check for the args, as data dependencies
are bad for many reasons (and it was failing when the input has NaNs).

It also registers arange as a decomposition, and fixes the naming of its
args.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/85153
Approved by: https://github.com/mruberry, https://github.com/ngimel
2022-09-20 17:52:56 +00:00
9c1a6a522d Make ones and zeros's ref accepts variadic size argument (#85117)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/85117
Approved by: https://github.com/ngimel, https://github.com/lezcano
2022-09-20 16:41:30 +00:00
a4dca9822d [composite compliance] prod (#81969)
Ref: #69991

Also fixes #82644 (fix similar to #81617)

For CompositeCompliance, we can't use `item` to choose a special fast-path when Tensor is a Subclass. Instead we always dispatch to the slower but safer implementation.
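
A generic sketch of the pattern this avoids (hypothetical helper, not the actual prod code): a composite op that calls `item` to pick a fast path cannot work for subclasses that carry no concrete data, so the compliant version stays in tensor ops.
```python
import torch

def scaled_sum(x: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    # Fast path: reading the scalar out with .item() is only safe for plain tensors;
    # subclasses (e.g. ones holding no real data) cannot produce a Python number.
    if type(scale) is torch.Tensor:
        return x.sum() * scale.item()
    # Subclass-safe path: slower, but stays in tensor ops end to end.
    return x.sum() * scale
```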
Pull Request resolved: https://github.com/pytorch/pytorch/pull/81969
Approved by: https://github.com/zou3519
2022-09-20 08:03:36 +00:00
8c952db13a Fix segfault case for torch.ormqr (#85278)
Correct behavior is to raise an error for `tau.shape[-1] > input.shape[-1]`.

Fixes https://github.com/pytorch/pytorch/issues/85218
Pull Request resolved: https://github.com/pytorch/pytorch/pull/85278
Approved by: https://github.com/Lezcano, https://github.com/malfet, https://github.com/ngimel
2022-09-19 19:31:18 +00:00
555bb6cdb8 Check that groups is > 0 in _convolution op (#85111) (#85248)
`_convolution` will raise an error if it is called with groups <= 0
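
A hedged repro sketch of the new check (error text paraphrased, not quoted from the PR):
```python
import torch

x = torch.randn(1, 4, 8, 8)
w = torch.randn(8, 4, 3, 3)
try:
    torch.nn.functional.conv2d(x, w, groups=0)  # routed through _convolution
except RuntimeError as e:
    print(e)  # groups <= 0 is now rejected up front (exact message may differ)
```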

Signed-off-by: Thytu <valentin.de-matos@epitech.eu>

Fixes #85111

Side note : If I need to do it elsewhere, let me know 🙂
Pull Request resolved: https://github.com/pytorch/pytorch/pull/85248
Approved by: https://github.com/Lezcano, https://github.com/malfet
2022-09-19 18:49:09 +00:00
7234eb06f7 Revert "Land "Make ceil,floor,round,trunc handle integers" (#85144)"
This reverts commit b27eb8d377fc8ac267fdaed7f95a03d609764604.

Reverted https://github.com/pytorch/pytorch/pull/85144 on behalf of https://github.com/clee2000 due to broke slow tests in trunk  ex https://ossci-raw-job-status.s3.amazonaws.com/log/8433956087
2022-09-19 18:46:35 +00:00
b27eb8d377 Land "Make ceil,floor,round,trunc handle integers" (#85144)
PR to land https://github.com/pytorch/pytorch/pull/78480, as Rohit does
not work in the PyTorch project anymore
Pull Request resolved: https://github.com/pytorch/pytorch/pull/85144
Approved by: https://github.com/ngimel, https://github.com/mruberry
2022-09-19 17:21:48 +00:00
3a51b557ef Added docs and opinfo for narrow_copy (#84493)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/84493
Approved by: https://github.com/amjames, https://github.com/ngimel, https://github.com/mruberry
2022-09-19 14:28:25 +00:00
d561aa944b Adds normal prim, randn reference, and randn OpInfo (#85128)
This PR extends prims support for random operations by adding `prims.normal` and `refs.randn`. Note that in the future we may not want to model draws from distributions as their own prims.

`prims.normal` accepts a shape and the mean and standard deviation of a normal distribution as numbers. This is distinct from `torch.normal` which takes two tensors so every generated datapoint can be drawn from a normal distribution with its own mean and standard deviation. To address this @ngimel and I expect to add `prims.normal_with_tensors`. The current `prims.normal` could be implemented using `prims.normal_with_tensors`, but we expect the case of two numbers is much more common, and that executors will likely want to specialize for it, anyway.
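
For context, the distinction above expressed with existing eager ops (an illustration, not the new prim's signature):
```python
import torch

# torch.normal can take tensors, so every element gets its own mean/std:
mean = torch.arange(4.0)
std = torch.full((4,), 0.1)
per_element = torch.normal(mean, std)

# The common case the new prim targets: one scalar mean/std plus a shape,
# which today is typically spelled as randn(shape) * std + mean.
scalar_case = torch.randn(4) * 0.1 + 2.0
```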

In a follow-up PR I plan to add `refs.randn_like`, `prims.normal_with_tensors` (as mentioned above), and `refs.normal`.

While writing this PR I noticed the following issues:

- https://github.com/pytorch/pytorch/issues/85123
- https://github.com/pytorch/pytorch/issues/85121

The latter is prohibiting some testing.

In future PRs I plan to add a prim for changing layout, add support for pinned memory, and improve support for testing tensor creation operators, likely with a TensorCreationOpInfo class.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/85128
Approved by: https://github.com/ngimel
2022-09-19 10:32:41 +00:00
5dd9610e9d Refs and decompositions for index_{add,copy,select,fill} (#85002)
As per title
Pull Request resolved: https://github.com/pytorch/pytorch/pull/85002
Approved by: https://github.com/ngimel
2022-09-17 19:57:34 +00:00
98b8ef99e1 Add refs for sinc and sgn (#85142)
This PR supersedes https://github.com/pytorch/pytorch/pull/80171

This does not add the ref for `special.sinc` as I was getting some
errors. This should be added to https://github.com/pytorch/pytorch/pull/84957
(cc @nkaretnikov)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/85142
Approved by: https://github.com/ngimel, https://github.com/mruberry
2022-09-17 06:09:13 +00:00
e33b464ffc Revert "Refs and decompositions for index_{add,copy,select,fill} (#85002)"
This reverts commit 2f0b3de443dd8d4477d70c5a56fa14496d1eebe3.

Reverted https://github.com/pytorch/pytorch/pull/85002 on behalf of https://github.com/huydhn due to Broke trunk slow tests
2022-09-17 04:26:04 +00:00
2f0b3de443 Refs and decompositions for index_{add,copy,select,fill} (#85002)
As per title
Pull Request resolved: https://github.com/pytorch/pytorch/pull/85002
Approved by: https://github.com/ngimel
2022-09-16 23:59:35 +00:00
a9258eba8e [Testing] Port bernoulli and multinomial to ErrorInputs. (#74683)
Hi,
The PR aims to port `bernoulli` and `multinomial` to error inputs. Thanks!

cc: @kshitij12345! :)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/74683
Approved by: https://github.com/kshitij12345, https://github.com/mruberry
2022-09-16 21:24:09 +00:00
776e0fe756 Revert "Make ones and zeros's ref accepts variadic size argument (#85117)"
This reverts commit 7e5616c9ff6347913d98627c60e39f72dce558e3.

Reverted https://github.com/pytorch/pytorch/pull/85117 on behalf of https://github.com/ZainRizvi due to Failed trunk
2022-09-16 21:06:24 +00:00