Commit Graph

164 Commits

619029e892 [easy] Small rendering fix in Tensor.module_load doc (#130489)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/130489
Approved by: https://github.com/janeyx99
2024-07-12 22:12:53 +00:00
a5f816df18 Add more dtypes to __cuda_array_interface__ (#129621)
`__cuda_array_interface__` was missing some unsigned integer dtypes as well as BF16.

numba doesn't support BF16, so I skip tests for that one.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/129621
Approved by: https://github.com/lezcano
2024-07-09 10:47:19 +00:00
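A quick way to observe the change (a hedged sketch, not from the PR; assumes a CUDA build recent enough to have the limited unsigned-integer dtype support):

```
import torch

if torch.cuda.is_available():
    t = torch.zeros(4, dtype=torch.uint16, device="cuda")
    # The interface dict now reports the unsigned dtype
    # (typestr "<u2" per the CUDA Array Interface spec).
    print(t.__cuda_array_interface__["typestr"])
```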
8a5fda0377 added type hints for __contains__ (#129653)
- Fixes #129646
- Added test in test/typing/reveal/tensor_constructors.py

Pull Request resolved: https://github.com/pytorch/pytorch/pull/129653
Approved by: https://github.com/ezyang
2024-06-30 11:49:11 +00:00
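For context, the runtime behavior the hints describe (a minimal sketch):

```
import torch

t = torch.tensor([1, 2, 3])
print(2 in t)                # True; dispatches to Tensor.__contains__
print(torch.tensor(4) in t)  # False; 0-dim tensors are accepted too
```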
83bb9b7c53 [BE] explicitly export subpackage torch.utils (#128342)
Resolves #126401

Pull Request resolved: https://github.com/pytorch/pytorch/pull/128342
Approved by: https://github.com/Skylion007
ghstack dependencies: #127707
2024-06-13 04:39:16 +00:00
dd143d44cc [BE] enable UFMT for top-level files torch/*.py (#127707)
Part of #123062

- #123062

Pull Request resolved: https://github.com/pytorch/pytorch/pull/127707
Approved by: https://github.com/ezyang
2024-06-12 20:15:05 +00:00
afe15d2d2f Flip default value for mypy disallow_untyped_defs [3/11] (#127840)
See #127836 for details.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/127840
Approved by: https://github.com/oulgen
2024-06-08 18:28:01 +00:00
03005bb655 Improve the clarity of the torch.Tensor.backward doc (#127201)
Improve the clarity of the torch.Tensor.backward doc, particularly with respect to the `gradient` argument.
Reference https://pytorch.org/tutorials/beginner/blitz/autograd_tutorial.html,
```
We need to explicitly pass a gradient argument in Q.backward() because it is a vector. gradient is a tensor of the same shape as Q, and it represents the gradient of Q w.r.t. itself
```

@janeyx99 feel free to assign to the corresponding reviewers, thanks
Co-authored-by: Jeffrey Wan <soulitzer@gmail.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/127201
Approved by: https://github.com/soulitzer
2024-05-28 19:25:51 +00:00
34910f87f0 [BE]: Update ruff to v0.4.4 (#125031)
Update ruff version to 0.4.4. This version mostly has bugfixes for the new parser and also updates the f-string rule to be able to apply more fixes.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/125031
Approved by: https://github.com/albanD, https://github.com/malfet
2024-05-12 20:02:37 +00:00
c82fcb7b30 Add testing and fix weights_only load for quantized types and nn.Parameters with python attrs (#124330)
Adds the following to the allowed globals for the `weights_only` unpickler:
- [x] `torch._utils._rebuild_qtensor` and qtensor-related types
- [x] `torch._utils._rebuild_parameter_with_state` (used when deserializing a parameter that has user-defined attributes like `Param.foo`)

The remaining rebuild functions that have not been allowlisted are

- [x] `torch._utils._rebuild_wrapper_subclass` (allowlisted in above PR)
- [ ] `torch._utils._rebuild_device_tensor_from_numpy`
- [ ] `torch._utils._rebuild_xla_tensor` (legacy)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/124330
Approved by: https://github.com/albanD
2024-04-23 04:13:26 +00:00
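A hedged sketch of the round trip this enables (the attribute name `foo` is illustrative):

```
import io

import torch
import torch.nn as nn

p = nn.Parameter(torch.randn(2, 2))
p.foo = "user-defined attribute"  # serialized via _rebuild_parameter_with_state

buf = io.BytesIO()
torch.save(p, buf)
buf.seek(0)
# With the rebuild function allowlisted, weights_only=True no longer rejects this.
loaded = torch.load(buf, weights_only=True)
print(loaded.foo)
```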
5f5778476a rename ort to maia (#123265)
Fixes #123264

Pull Request resolved: https://github.com/pytorch/pytorch/pull/123265
Approved by: https://github.com/albanD
2024-04-23 00:33:25 +00:00
c81c9ba472 Disallow {FakeTensor,FunctionalTensor}.data_ptr (#122514)
This PR:
- disallows FakeTensor.data_ptr when it is called inside PT2 or fx tracing.
- disallows FunctionalTensor.data_ptr (python FunctionalTensor is only used in
  PT2)

The motivation behind this is that the leading cause of segfaults when
using custom ops with PT2 is calling .data_ptr on FunctionalTensor or
FakeTensor.

This change is BC-breaking. If your code broke as a result of this, it's
because there was a bug in it (these data pointers should never be
accessed!). You can either fix the bug (recommended) or get the previous
behavior back with:
```
from torch._subclasses.fake_tensor import FakeTensor
from torch._subclasses.functional_tensor import FunctionalTensor
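# `tensor` here is whichever tensor your code was querying.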

data_ptr = 0 if isinstance(tensor, (FakeTensor, FunctionalTensor)) else tensor.data_ptr()
```

Test Plan:
- existing tests

Differential Revision: [D55366199](https://our.internmc.facebook.com/intern/diff/D55366199)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/122514
Approved by: https://github.com/ezyang, https://github.com/albanD, https://github.com/yifuwang, https://github.com/kurtamohler
2024-03-26 23:55:42 +00:00
8a5a377190 Move doc links to point to main (#121823)
The previous links were pointing to an outdated branch

Command: `find . -type f -exec sed -i "s:docs/master:docs/main:g" {} + `

Pull Request resolved: https://github.com/pytorch/pytorch/pull/121823
Approved by: https://github.com/albanD, https://github.com/malfet
2024-03-15 19:49:37 +00:00
4b3903379a Add assign argument to torch.Tensor.module_load (#121158)
Make `torch.__future__.get_swap_module_params_on_conversion() == True` account for the `assign` argument to `nn.Module.load_state_dict`.

As when `torch.__future__.set_swap_module_params_on_conversion()` is `False`, `assign=True` means that we do not incur a `self.copy_(other)`, and the properties of `other` are preserved.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/121158
Approved by: https://github.com/albanD
ghstack dependencies: #121157
2024-03-06 01:32:06 +00:00
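A hedged sketch of the behavior described above (module and shapes are illustrative):

```
import torch
import torch.nn as nn

torch.__future__.set_swap_module_params_on_conversion(True)

m = nn.Linear(2, 2)
sd = {"weight": torch.randn(2, 2), "bias": torch.randn(2)}
# With assign=True there is no self.copy_(other); the loaded entries keep
# the properties (and storage) of the state-dict tensors.
m.load_state_dict(sd, assign=True)
print(m.weight.data_ptr() == sd["weight"].data_ptr())  # expected True here
```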
bfa71b523d add complex32 to v3_dtypes (#120388)
Fixes [#120290](https://github.com/pytorch/pytorch/issues/120290)
Fixes https://github.com/pytorch/pytorch/issues/73502

Uses `v3_dtypes` and `torch._utils._rebuild_tensor_v3` to handle `torch.save(complex32)`.

result:
![image](https://github.com/pytorch/pytorch/assets/37650440/18b6cbb3-fb3f-4855-9d48-374014647988)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/120388
Approved by: https://github.com/albanD
2024-02-28 02:32:29 +00:00
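The round trip this fixes, roughly (a minimal sketch):

```
import torch

x = torch.zeros(4, dtype=torch.complex32)  # a.k.a. torch.chalf
torch.save(x, "chalf.pt")
y = torch.load("chalf.pt")
print(y.dtype)  # torch.complex32, rebuilt via _rebuild_tensor_v3
```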
3372aa51b4 Integrate swap_tensors into nn.Module.load_state_dict (#117913)
Added a `torch.Tensor` method that defines how to transform `other`, a value in the state dictionary, so that it can be loaded into `self`, a param/buffer in an `nn.Module`, before swapping via `torch.utils.swap_tensors`:
* `param.module_load(sd[key])`

This method can be overridden using `__torch_function__`.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/117913
Approved by: https://github.com/albanD
2024-02-09 22:32:29 +00:00
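Poking at the hook directly (a hedged sketch; `module_load` produces the value that `load_state_dict` would then swap in via `torch.utils.swap_tensors`):

```
import torch
import torch.nn as nn

torch.__future__.set_swap_module_params_on_conversion(True)

m = nn.Linear(3, 3)
incoming = torch.randn(3, 3)  # stands in for a state-dict value

# Defines how `incoming` is transformed before being swapped into m.weight;
# tensor subclasses can customize this via __torch_function__.
new_weight = m.weight.module_load(incoming)
print(type(new_weight), new_weight.shape)
```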
01abb5af21 additional support for float8_e4m3fnuz and _e5m2fnuz (#115214)
Follow up to #107586.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/115214
Approved by: https://github.com/peterbell10, https://github.com/malfet
2024-01-22 18:33:41 +00:00
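Minimal usage sketch (op coverage for these dtypes is limited, so compute in a wider dtype):

```
import torch

x = torch.randn(4)
y = x.to(torch.float8_e4m3fnuz)
z = x.to(torch.float8_e5m2fnuz)
print(y.dtype, z.dtype)
print(y.to(torch.float32))  # round-tripped values, within float8 precision
```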
b637fdc8b3 Revert "additional support for float8_e4m3fnuz and _e5m2fnuz (#115214)"
This reverts commit 74e13624998f2a4de29bce73a949d7f0339ec04e.

Reverted https://github.com/pytorch/pytorch/pull/115214 on behalf of https://github.com/PaliC due to breaking internal builds ([comment](https://github.com/pytorch/pytorch/pull/115214#issuecomment-1900815152))
2024-01-19 17:35:04 +00:00
74e1362499 additional support for float8_e4m3fnuz and _e5m2fnuz (#115214)
Follow up to #107586.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/115214
Approved by: https://github.com/peterbell10
2024-01-19 00:50:18 +00:00
3f9e9ecfe4 Fix torch.detach doc-string (#115850)
Fixes https://github.com/pytorch/pytorch/issues/98976

Pull Request resolved: https://github.com/pytorch/pytorch/pull/115850
Approved by: https://github.com/albanD
2023-12-22 20:04:33 +00:00
b5c4b1d9fe Make Float8 types serializable (#114662)
This finally breaks the FC (forward-compatibility) promise on new dtypes by serializing the untyped storage together with the tensor dtype:

- Add `_rebuild_tensor_v3` that takes an extra dtype argument
- In `Tensor.__reduce_ex__` serialize tensor using untyped storage for
  v3_dtypes (which are at the moment limited to float8 dtypes)

Test plan: `python -c "import torch;x=torch.arange(10).to(dtype=torch.float8_e4m3fn);torch.save(x, 'pt.pt');print(torch.load('pt.pt'))"`

Fixes https://github.com/pytorch/pytorch/issues/114634

Pull Request resolved: https://github.com/pytorch/pytorch/pull/114662
Approved by: https://github.com/ngimel
2023-11-29 23:23:23 +00:00
a3b859fc67 Drop dynamo-specific type hints on Tensor in favor of type-ignores (#113720)
Per [this][1] discussion, plus some offline discussion. The summary:
@albanD considers the core PyTorch types like Tensor to be extremely
brittle, and does not think the risk of adding these typed attributes is
worth it.

@eellison mentioned that we could use `WeakTensorKeyDictionary` instead.
However, based on the sparse usage of these bonus attributes, I think
that would be overkill. So I've opted to go with a few more type-ignore
comments instead.

[1]: https://github.com/pytorch/pytorch/pull/113610#discussion_r1392907367

Pull Request resolved: https://github.com/pytorch/pytorch/pull/113720
Approved by: https://github.com/ezyang, https://github.com/albanD, https://github.com/eellison
ghstack dependencies: #113534, #113610
2023-11-16 01:54:00 +00:00
d00c983b63 [dynamo] Make {testing,debug_utils,utils}.py pass follow_imports typechecking (#113519)
Notes:

* `debug_insert_nops` in testing.py was passing `None` to the compiler_fn
parameter of `OutputGraph`, hence the modifications there.
* I added `disable-error-code="method-assign"` to debug_utils.py as it
does several such assignments. I guess mypy doesn't like it because such
assignments make code near-impossible to typecheck safely.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/113519
Approved by: https://github.com/Skylion007
ghstack dependencies: #113413, #113518
2023-11-11 22:15:46 +00:00
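For reference, the kind of pattern that needs the file-level override (a hedged illustration, not the actual debug_utils.py code):

```
# mypy: disable-error-code="method-assign"

class Runner:
    def step(self) -> int:
        return 1

def traced_step(self: "Runner") -> int:
    print("stepping")
    return 1

# Without the directive above, mypy flags this with [method-assign]:
# replacing a method at runtime defeats safe typechecking.
Runner.step = traced_step
```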
767ce2b81c [dynamo] Make decorators.py pass follow-import typechecking (#113304)
I am trying to turn on `follow_imports=silent` for MYPYNOFOLLOW.
However, this requires a huge number of changes, so I am breaking it
down to a per-file basis.

Unfortunately, we will not be able to turn on `follow_imports` until all
files are fixed, so there is no way to prevent regressions in the meantime.
I therefore hope to get these fixes in as fast as possible.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/113304
Approved by: https://github.com/Skylion007
2023-11-09 21:55:49 +00:00
51a38380d1 Fix torch.load(..., weights_only=True) for NT (#112516)
Found when looking into #112509
Pull Request resolved: https://github.com/pytorch/pytorch/pull/112516
Approved by: https://github.com/soulitzer
2023-11-02 14:41:04 +00:00
320ac546ed Clarify difference between share_memory and from_file (#111856)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/111856
Approved by: https://github.com/albanD
ghstack dependencies: #111688
2023-11-01 03:25:09 +00:00
3693777a86 Pickle support for NT (#110219)
Fixes #104198
Pull Request resolved: https://github.com/pytorch/pytorch/pull/110219
Approved by: https://github.com/cpuhrsch
2023-09-29 15:30:06 +00:00
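The round trip this enables (a minimal sketch):

```
import pickle

import torch

nt = torch.nested.nested_tensor([torch.randn(2), torch.randn(3)])
data = pickle.dumps(nt)       # previously unsupported for nested tensors
restored = pickle.loads(data)
print(restored.is_nested)     # True
```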
09c598745c Rename torch._C._TensorBase to TensorBase (#109940)
I have gone ahead and renamed the type `torch._C._TensorBase` to the non-private class name `TensorBase`.
The changes also keep `torch._C._TensorBase` as an alias to the new type, both in the C++ code (70458768fb/torch/csrc/autograd/python_variable.cpp (L2196-L2197)) and in the corresponding `__init__.pyi.in` file (70458768fb/torch/_C/__init__.pyi.in (L1522)).

Fixes #109438

Pull Request resolved: https://github.com/pytorch/pytorch/pull/109940
Approved by: https://github.com/ezyang
2023-09-25 19:10:22 +00:00
8a7a6867b9 [PyTorch][Tensor] Introduce tensor.dim_order (#106835)
Summary:
This is a stride-based attribute for a tensor, available in Python.

This can help inspect tensors generated using `torch.empty_permuted(..., physical_layout, ...)`, where physical_layout should match the dim_order returned here (`empty_permuted` will be renamed to use dim_order as the parameter name in the future). It also helps the ExecuTorch export pipeline implement dim_order-based tensors.

Differential Revision: D48134476

Pull Request resolved: https://github.com/pytorch/pytorch/pull/106835
Approved by: https://github.com/ezyang
2023-08-25 00:06:03 +00:00
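A minimal sketch of the pairing with `empty_permuted` described above:

```
import torch

# NCHW sizes with a channels-last physical layout.
t = torch.empty_permuted((2, 3, 4, 5), (0, 2, 3, 1))
print(t.dim_order())  # (0, 2, 3, 1): dims ordered from largest stride to smallest
```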
6e71ad0509 Add tensor post accumulate grad hook API (#107063)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/107063
Approved by: https://github.com/albanD, https://github.com/soulitzer
2023-08-24 00:19:35 +00:00
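A hedged usage sketch (applying an update inside the hook, one documented motivation for the API):

```
import torch

w = torch.randn(3, requires_grad=True)

def apply_update(param):
    # Runs right after param.grad has been accumulated during backward.
    param.data.add_(param.grad, alpha=-0.1)
    param.grad = None

handle = w.register_post_accumulate_grad_hook(apply_update)
(w * w).sum().backward()  # hook fires here; w is updated and grad cleared
handle.remove()
```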
432fce4e0d Revert "Add tensor post accumulate grad hook API (#107063)"
This reverts commit 3f655277d44909e0770e77e1b4fe1c9b0f39d7b9.

Reverted https://github.com/pytorch/pytorch/pull/107063 on behalf of https://github.com/ZainRizvi due to Diff train weirdness. Need to temporarily revert this PR and will right land it soon afterwards ([comment](https://github.com/pytorch/pytorch/pull/107063#issuecomment-1690799057))
2023-08-24 00:12:34 +00:00
221daeb1a7 Fix deepcopy for tensor with MTIA device key. (#107427)
Summary: Tensors with the MTIA device type don't have storage, and we need to treat them the same as other tensors that don't have storage.

Test Plan: CI tests.

Differential Revision: D48456004

Pull Request resolved: https://github.com/pytorch/pytorch/pull/107427
Approved by: https://github.com/cx-yin, https://github.com/ezyang
2023-08-23 20:47:36 +00:00
3f655277d4 Add tensor post accumulate grad hook API (#107063)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/107063
Approved by: https://github.com/albanD, https://github.com/soulitzer
2023-08-22 15:15:57 +00:00
4cc1745b13 [BE] f-stringify torch/ and scripts (#105538)
This PR is a follow-up in the pyupgrade series, converting more strings to f-strings using `flynt`.

- https://docs.python.org/3/reference/lexical_analysis.html#f-strings
- https://pypi.org/project/flynt/

Command used:

```
flynt torch/ -ll 120
flynt scripts/ -ll 120
flynt tools/ -ll 120
```

and excluded `collect_env.py`

Pull Request resolved: https://github.com/pytorch/pytorch/pull/105538
Approved by: https://github.com/ezyang, https://github.com/malfet
2023-07-21 19:35:24 +00:00
79c5e33349 [BE] Enable ruff's UP rules and autoformat nn/ mps/ and torch/ (#105436)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/105436
Approved by: https://github.com/malfet, https://github.com/albanD
2023-07-21 07:38:46 +00:00
08cbfb2a58 Avoid tensor creation and use scalar overload (#104264)
I would expect this to preserve the behavior, but there might be weird edge cases; @mruberry might know.

The aim is to fix https://github.com/pytorch/pytorch/pull/104254 (and make `1 ** t` capturable via CUDA graphs).
Pull Request resolved: https://github.com/pytorch/pytorch/pull/104264
Approved by: https://github.com/zou3519
2023-07-12 18:11:27 +00:00
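The idea, roughly (a hedged sketch): route `scalar ** tensor` through the scalar overload instead of first materializing the scalar as a tensor.

```
import torch

t = torch.tensor([2.0, 3.0])
out = torch.pow(1, t)  # scalar-base overload: no tensor is created for `1`
print(out)             # tensor([1., 1.])
print(1 ** t)          # Tensor.__rpow__ takes the same path after this change
```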
872fdb329b This extra message would have helped with Wav2Vec2 debugging. (#103002)
Signed-off-by: Edward Z. Yang <ezyang@meta.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/103002
Approved by: https://github.com/janeyx99, https://github.com/anijain2305, https://github.com/voznesenskym, https://github.com/malfet
2023-06-06 04:28:16 +00:00
39b04370db Preserve coalesce state in sparse COO tensor serialization (#102647)
Fixes #101186

Also, resolves the "serialization to preserve coalesced-ness" part in https://github.com/pytorch/pytorch/issues/73479

Pull Request resolved: https://github.com/pytorch/pytorch/pull/102647
Approved by: https://github.com/mikaylagawarecki
2023-06-03 01:37:52 +00:00
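The round trip this fixes (a minimal sketch):

```
import torch

i = torch.tensor([[0, 1, 1], [2, 0, 2]])
v = torch.tensor([3.0, 4.0, 5.0])
s = torch.sparse_coo_tensor(i, v, (2, 3)).coalesce()

torch.save(s, "coo.pt")
loaded = torch.load("coo.pt")
print(loaded.is_coalesced())  # True: coalesced-ness now survives serialization
```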
66f6e0e605 [CUDA][DLPack] Handle legacy default streams for DLPack conversion (#101318)
It seems that some legacy default stream logic (e.g., present in a8ff647e42/torch/utils/dlpack.py (L114)) is not handled on the potential receiving end in `torch/_tensor.py`.

Open to suggestions on how to make the test case less clunky, as this was the combination we arrived at after discovering flakiness in alternate versions.

Thanks to Olga Andreeva for surfacing this issue and providing a repro.

CC @Aidyn-A @ngimel

Pull Request resolved: https://github.com/pytorch/pytorch/pull/101318
Approved by: https://github.com/ngimel
2023-05-24 16:14:50 +00:00
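The user-visible path being exercised (a minimal round trip; the stream handling itself happens inside the conversion):

```
import torch
from torch.utils.dlpack import from_dlpack, to_dlpack

if torch.cuda.is_available():
    x = torch.randn(4, device="cuda")
    y = from_dlpack(to_dlpack(x))  # the receiving end handles legacy default streams
    print(torch.equal(x, y))       # True
```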
bafa2c4724 Change 'w.r.t.' to 'wrt' in function docstrings to fix doc rendering (#100028)
Fixes #72428 according to decision reached in comments.

I've left other instances of `w.r.t.` intact (e.g., in parameter/return descriptions, in comments, etc.) because there were many and I didn't want to go out of scope. That being said, I'm happy to change those as well if we'd prefer the consistency!

I've also fixed a typo that I came across while grepping for instances.

Will update with screenshots once docs are built.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/100028
Approved by: https://github.com/albanD
2023-04-25 23:53:26 +00:00
79c9e82e27 Fix flake8 lint errors reported by ruff - take 2 (#99798)
Replaces #99784. This PR is pure autofix.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/99798
Approved by: https://github.com/Skylion007, https://github.com/kit1980
2023-04-23 23:09:51 +00:00
419ad49e65 Make Tensor.__contains__ accept SymInt/Float/Bool. (#98933)
Fixes https://github.com/pytorch/pytorch/issues/98870

Signed-off-by: Edward Z. Yang <ezyang@meta.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/98933
Approved by: https://github.com/albanD, https://github.com/Skylion007
2023-04-12 19:16:33 +00:00
53c9bc8c68 Add DLPack support for XPU backend by mapping to kDLOneAPI in DLPack … (#94968)
# Motivation
The DLPack device type kDLOneAPI stands for Unified Shared Memory allocated on a oneAPI device. The corresponding PyTorch backend type is XPU.
This adds support for exporting/importing a PyTorch XPU tensor as a DLPack tensor with the kDLOneAPI device type.

# Solution
1. Update the DLPack protocol to v0.7.
2. Add XPU hooks to map between the ATen device and the DLPack device using the address value and device information.

# Additional Context
Reopen (#82867)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/94968
Approved by: https://github.com/kit1980
2023-03-30 04:32:15 +00:00
2ea097071a fix device type bug for custom device (#97213)
Fixes #ISSUE_NUMBER
Supports custom renamed devices. @bdhirsh, please review my changes.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/97213
Approved by: https://github.com/bdhirsh, https://github.com/kit1980
2023-03-27 18:36:47 +00:00
4a5ce921a0 Add HPU to compatible shallow copy list and remove lazy HPU changes (#94673)
Fixes #ISSUE_NUMBER

Pull Request resolved: https://github.com/pytorch/pytorch/pull/94673
Approved by: https://github.com/wconstab
2023-02-14 17:15:25 +00:00
5b1cedacde [BE] [2/3] Rewrite super() calls in functorch and torch (#94588)
Rewrite Python built-in class `super()` calls. Only non-semantic changes should be applied.

- #94587
- #94588
- #94592

Also, methods with only a `super()` call are removed:

```diff
class MyModule(nn.Module):
-   def __init__(self):
-       super().__init__()
-
    def forward(self, ...):
        ...
```

Cases where the rewrite would change semantics are kept unchanged, e.g.:

f152a79be9/caffe2/python/net_printer.py (L184-L190)

f152a79be9/test/test_jit_fuser_te.py (L2628-L2635)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/94588
Approved by: https://github.com/ezyang, https://github.com/albanD
2023-02-10 21:16:33 +00:00
fba13d94a1 Remove deprecated torch.symeig (#70988)
The time has come to remove deprecated linear algebra related functions. This PR removes `torch.symeig`.

- [x] XLA PR: https://github.com/pytorch/xla/pull/4498

Pull Request resolved: https://github.com/pytorch/pytorch/pull/70988
Approved by: https://github.com/lezcano, https://github.com/kit1980, https://github.com/malfet
2023-01-31 11:59:11 +00:00
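For code still calling the removed function, the replacement is `torch.linalg.eigh` (migration sketch; note that `eigh` always returns eigenvectors and takes `UPLO` instead of `upper=`):

```
import torch

A = torch.randn(4, 4)
A = A + A.T  # symmetric input

# Before (removed):
#   e, v = torch.symeig(A, eigenvectors=True, upper=True)
# After:
e, v = torch.linalg.eigh(A, UPLO="U")
print(e.shape, v.shape)
```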
acdd462b1a Revert "Remove deprecated torch.symeig (#70988)"
This reverts commit d70ed68162521341060b06985620cdbef04a8fa9.

Reverted https://github.com/pytorch/pytorch/pull/70988 on behalf of https://github.com/kit1980 due to Failing XLA tests, forward fix unsuccessful
2023-01-24 19:03:40 +00:00
d70ed68162 Remove deprecated torch.symeig (#70988)
The time has come to remove deprecated linear algebra related functions. This PR removes `torch.symeig`.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/70988
Approved by: https://github.com/lezcano, https://github.com/kit1980
2023-01-23 22:51:40 +00:00
88366a9075 Document hooks ordering behavior in the autograd note (#91667)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/91667
Approved by: https://github.com/albanD
2023-01-18 00:20:13 +00:00
b32b81a0c5 Make torch.split take symint as arg (#91724)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/91724
Approved by: https://github.com/voznesenskym
2023-01-07 00:00:03 +00:00
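The kind of call this unblocks under dynamic shapes (a hedged sketch using today's torch.compile front end; the split size becomes a SymInt during tracing):

```
import torch

@torch.compile(dynamic=True)
def halves(x):
    # x.size(0) // 2 is a SymInt while tracing with dynamic shapes.
    return torch.split(x, x.size(0) // 2)

a, b = halves(torch.arange(8.0))
print(a.shape, b.shape)  # torch.Size([4]) torch.Size([4])
```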