ce2903080c
Add sparse compressed fake tensor support ( #120920 )
...
As in the title.
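A rough sketch of what this enables, assuming the internal `FakeTensorMode` entry point (its import path is private and may change):
```python
import torch
from torch._subclasses.fake_tensor import FakeTensorMode

# Build a real sparse CSR tensor, then mirror it as a fake tensor
# that carries layout/shape/dtype but no real storage.
csr = torch.tensor([[1., 0.], [0., 2.]]).to_sparse_csr()
mode = FakeTensorMode()
fake_csr = mode.from_tensor(csr)
print(fake_csr.layout)  # torch.sparse_csr
```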
Pull Request resolved: https://github.com/pytorch/pytorch/pull/120920
Approved by: https://github.com/ezyang
2024-03-04 14:38:45 +00:00
b8e6ca6f76
Add sparse compressed meta tensor support ( #120707 )
...
As in the title.
Replaces https://github.com/pytorch/pytorch/pull/120498 and https://github.com/pytorch/pytorch/pull/120562
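A minimal sketch of what this enables, assuming the standard factory-function path:
```python
import torch

# Construct a sparse CSR tensor directly on the meta device;
# indices and values carry shape/dtype but no real storage.
crow_indices = torch.tensor([0, 2, 4], device="meta")
col_indices = torch.tensor([0, 1, 0, 1], device="meta")
values = torch.empty(4, device="meta")
csr = torch.sparse_csr_tensor(crow_indices, col_indices, values, size=(2, 2))
print(csr.device, csr.layout)  # meta torch.sparse_csr
```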
Pull Request resolved: https://github.com/pytorch/pytorch/pull/120707
Approved by: https://github.com/ezyang
ghstack dependencies: #120703
2024-03-01 13:28:47 +00:00
8a32a07856
Revert "Add meta device support to sparse compressed tensors ( #120498 )"
...
This reverts commit 5d71ba688563ef491bb28d47c493ec6fc7791da2.
Reverted https://github.com/pytorch/pytorch/pull/120498 on behalf of https://github.com/zou3519 due to broke CI ([comment](https://github.com/pytorch/pytorch/pull/120498#issuecomment-1964491999 ))
2024-02-26 15:59:36 +00:00
5d71ba6885
Add meta device support to sparse compressed tensors ( #120498 )
...
As in the title.
Unblocks https://github.com/pytorch/pytorch/pull/117907#discussion_r1499251745
Pull Request resolved: https://github.com/pytorch/pytorch/pull/120498
Approved by: https://github.com/ezyang
2024-02-25 16:50:17 +00:00
4a37f57c69
Add batched sparse CSR/CSC/BSR/BSC to sparse COO conversion support ( #116206 )
...
As in the title.
Fixes https://github.com/pytorch/pytorch/issues/104868
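A minimal sketch of the batched conversion, assuming `Tensor.to_sparse()` as the COO entry point:
```python
import torch

dense = torch.randn(2, 3, 4)         # leading batch dimension of size 2
batched_csr = dense.to_sparse_csr()  # batched CSR (equal nnz per batch here)
coo = batched_csr.to_sparse()        # batched CSR -> sparse COO
print(coo.layout)                    # torch.sparse_coo
```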
Pull Request resolved: https://github.com/pytorch/pytorch/pull/116206
Approved by: https://github.com/amjames , https://github.com/lezcano , https://github.com/cpuhrsch
2024-01-07 19:42:02 +00:00
2a87ab4508
Refactor some tests by using TEST_CUDA & TEST_MULTIGPU instead ( #116083 )
...
As stated in https://github.com/pytorch/pytorch/pull/116014#discussion_r1430510759, refactor the related tests to use the existing TEST_CUDA and TEST_MULTIGPU constants.
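A sketch of the pattern, assuming the constants come from `torch.testing._internal.common_cuda` as elsewhere in the test suite:
```python
import unittest

import torch
from torch.testing._internal.common_cuda import TEST_CUDA, TEST_MULTIGPU

class ExampleTest(unittest.TestCase):
    @unittest.skipIf(not TEST_CUDA, "CUDA is not available")
    def test_single_gpu(self):
        torch.ones(1, device="cuda")

    @unittest.skipIf(not TEST_MULTIGPU, "requires at least two GPUs")
    def test_multi_gpu(self):
        torch.ones(1, device="cuda:1")
```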
Pull Request resolved: https://github.com/pytorch/pytorch/pull/116083
Approved by: https://github.com/fduwjj
2024-01-03 08:53:59 +00:00
3fe437b24b
[BE]: Update flake8 to v6.1.0 and fix lints ( #116591 )
...
Updates flake8 to v6.1.0 and fixes a few lints using sed and some ruff tooling.
- Replace `assert(0)` with `raise AssertionError()`
- Remove extraneous parentheses, i.e.:
  - `assert(a == b)` -> `assert a == b`
  - `if(x > y or y < z):` -> `if x > y or y < z:`
  - `return('...')` -> `return '...'`
Co-authored-by: Nikita Shulga <2453524+malfet@users.noreply.github.com >
Pull Request resolved: https://github.com/pytorch/pytorch/pull/116591
Approved by: https://github.com/albanD , https://github.com/malfet
2024-01-03 06:04:44 +00:00
3a4fe835cc
Fixed segfault when trying to permute empty tensor ( #116335 )
...
Fixes #116325 .
Fixed unchecked access to first element of `dims` when permuting an empty tensor. Added test to prevent regressions.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/116335
Approved by: https://github.com/Skylion007
2023-12-23 23:14:28 +00:00
194d57dae7
Add values backward support for sparse CSR, CSC, BSR, and BSC tensors ( #115586 )
...
Fixes https://github.com/pytorch/pytorch/issues/107286
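A minimal sketch of the now-differentiable path (a hypothetical toy example, not from the PR):
```python
import torch

dense = torch.tensor([[1., 0.], [0., 2.]], dtype=torch.double)
csr = dense.to_sparse_csr().requires_grad_(True)
csr.values().sum().backward()  # values() now participates in autograd
print(csr.grad)
```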
Pull Request resolved: https://github.com/pytorch/pytorch/pull/115586
Approved by: https://github.com/cpuhrsch , https://github.com/albanD
2023-12-14 23:09:13 +00:00
193f87857e
[BC breaking] Remove check_sparse_nnz argument of gradcheck ( #115658 )
...
As in the title, per the deprecation plan.
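A migration sketch, mirroring the gradcheck example from #96095 below; `check_sparse_nnz` duplicated the `masked` argument (see #97187 below):
```python
import torch

x = torch.eye(3, dtype=torch.double).to_sparse().requires_grad_(True)

# Before: torch.autograd.gradcheck(fn, (x,), check_sparse_nnz=True)
torch.autograd.gradcheck(
    lambda t: torch.sparse.sum(t, [0]).to_dense(masked_grad=True),
    (x,),
    masked=True,
)
```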
Pull Request resolved: https://github.com/pytorch/pytorch/pull/115658
Approved by: https://github.com/cpuhrsch , https://github.com/soulitzer
2023-12-13 17:34:30 +00:00
670eb83573
Enable test_sparse_addmm for crossref tests ( #115536 )
...
Fixes https://github.com/pytorch/pytorch/issues/97284
Pull Request resolved: https://github.com/pytorch/pytorch/pull/115536
Approved by: https://github.com/cpuhrsch
2023-12-12 17:26:40 +00:00
f98b0f3ebc
Add bfloat16 support to torch.sparse.addmm for CPU ( #115535 )
...
Fixes https://github.com/pytorch/pytorch/issues/73145 .
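A minimal sketch on CPU; `mat1` is the sparse COO operand of `torch.sparse.addmm(mat, mat1, mat2)`:
```python
import torch

mat = torch.randn(3, 3, dtype=torch.bfloat16)
mat1 = torch.randn(3, 3, dtype=torch.bfloat16).to_sparse()
mat2 = torch.randn(3, 3, dtype=torch.bfloat16)
out = torch.sparse.addmm(mat, mat1, mat2)  # bfloat16 now works on CPU
```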
Pull Request resolved: https://github.com/pytorch/pytorch/pull/115535
Approved by: https://github.com/cpuhrsch
2023-12-12 13:26:33 +00:00
d90d67a146
Added a check to prevent accessing blocksize during Tensor.to_sparse … ( #114905 )
...
…conversion if empty. The main problem was that blocksize is an `optional<ArrayRef>`, so a `.has_value()` check succeeds even when the contained `ArrayRef` is empty.
Fixes #114865 .
Pull Request resolved: https://github.com/pytorch/pytorch/pull/114905
Approved by: https://github.com/malfet
2023-12-01 12:36:15 +00:00
0bd4d1f4ab
Add sparse tensor support to dataloader. ( #112842 )
...
Fixes https://github.com/pytorch/pytorch/issues/106837
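A rough sketch, assuming the default collate function now batches sparse samples:
```python
import torch
from torch.utils.data import DataLoader, Dataset

class SparseDataset(Dataset):
    def __len__(self):
        return 8

    def __getitem__(self, idx):
        return torch.eye(3).to_sparse()  # each sample is a sparse COO tensor

loader = DataLoader(SparseDataset(), batch_size=4)
batch = next(iter(loader))
print(batch.shape, batch.layout)  # torch.Size([4, 3, 3]) torch.sparse_coo
```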
Pull Request resolved: https://github.com/pytorch/pytorch/pull/112842
Approved by: https://github.com/cpuhrsch , https://github.com/gokulavasan
2023-11-19 16:05:27 +00:00
26b5e27ace
Add Half support for cummax, cummin, cumprod, logcumsumexp, and prod on CPU ( #112132 )
...
Add Half support for cummax, cummin, cumprod, logcumsumexp, and prod on CPU.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/112132
Approved by: https://github.com/cpuhrsch
2023-11-05 12:31:38 +00:00
0a26e5fd8f
Use 'device' argument in test_sparse.py::TestSparseAnyCUDA::test_as_sparse_gradcheck_* ( #111584 )
...
The "device" argument was missing, so "test_sparse.py::TestSparseAnyCUDA::test_as_sparse_gradcheck_*_cuda" always ran on the default device ("cpu") unless another default torch device had been configured beforehand.
This fix will likely surface a number of issues on various devices that were previously missed.
It should also fix the ROCm CI jobs failing with "##[error]The action has timed out." and speed up test execution.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/111584
Approved by: https://github.com/soulitzer
2023-10-24 00:03:50 +00:00
1f20531939
fall back to eager on NotImplementedError ( #107863 )
...
Follow-up to https://github.com/pytorch/pytorch/pull/107710 :
Help dynamo fall back to eager when compiling unimplemented numpy constructs:
- arrays of strings
- (arg){min, max} for complex types
- various arguments typed as NotImplemented (`np.ones(4, order="F")`, etc.)
- numpy functions which torch._numpy does not implement
To test, run (we do not implement arrays of strings)
```
import torch
import numpy as np
@torch.compile(fullgraph=False)
def fn():
    return np.asarray(["L", "U"])
```
and observe that it compiles with `fullgraph=False` but fails with `fullgraph=True`.
Fixes https://github.com/pytorch/pytorch/issues/107970
Pull Request resolved: https://github.com/pytorch/pytorch/pull/107863
Approved by: https://github.com/ezyang , https://github.com/lezcano
2023-09-07 21:22:20 +00:00
c5ad44be1d
Add torch.sparse.as_sparse_gradcheck decorator of gradcheck that allows gradcheck input function to receive and return sparse tensors ( #107150 )
...
Compared to #104848, this PR goes a step further: when the as_sparse_gradcheck decorator is applied to `torch.autograd.gradcheck`, the resulting callable is equivalent to `torch.autograd.gradcheck` with the extra feature of supporting functions that can take sparse tensors as inputs and/or return sparse tensors.
At the same time, the underlying call to `torch.autograd.gradcheck` operates on strided tensors only. This means that torch/autograd/gradcheck.py can be cleaned up by removing the code that deals with sparse tensors.
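A minimal usage sketch (the wrapped function is a hypothetical example):
```python
import torch

gradcheck = torch.sparse.as_sparse_gradcheck(torch.autograd.gradcheck)

x = torch.eye(3, dtype=torch.double).to_sparse().requires_grad_(True)
# The wrapped gradcheck accepts functions with sparse inputs/outputs,
# while the underlying gradcheck only ever sees strided tensors.
gradcheck(lambda t: t.to_dense(masked_grad=False), (x,))
```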
Pull Request resolved: https://github.com/pytorch/pytorch/pull/107150
Approved by: https://github.com/albanD , https://github.com/amjames , https://github.com/cpuhrsch
ghstack dependencies: #107638 , #107777
2023-08-26 07:24:31 +00:00
e4b38b9ce9
Support torch.sparse_mask on strided input with sparse CSR, CSC, BSR, and BSC mask. ( #107777 )
...
While `input.sparse_mask(mask)` can be defined as `input.mul(ones_like(mask))`, implementing this definition leads to a chicken-and-egg problem because the multiplication of dense and sparse_compressed tensors relies on the `sparse_mask` support.
This PR implements `sparse_mask` support for sparse compressed masks using utility functions from sparse compressed tensor conversions support.
Fixes https://github.com/pytorch/pytorch/issues/107373
Fixes https://github.com/pytorch/pytorch/issues/107370
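A minimal sketch:
```python
import torch

inp = torch.randn(3, 3)              # strided input
mask = torch.eye(3).to_sparse_csr()  # sparse compressed mask
out = inp.sparse_mask(mask)          # now supported
print(out.layout)                    # torch.sparse_csr
```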
Pull Request resolved: https://github.com/pytorch/pytorch/pull/107777
Approved by: https://github.com/amjames , https://github.com/cpuhrsch
ghstack dependencies: #107638
2023-08-26 07:24:31 +00:00
fe3309b4b8
Add optional is_coalesced argument to sparse coo tensor factory function. ( #107638 )
...
Resolves https://github.com/pytorch/pytorch/issues/107097
After this PR, instead of
```python
torch.sparse_coo_tensor(indices, values, size)._coalesced_(is_coalesced)
```
(that does not work in the autograd context, see #107097 ), use
```python
torch.sparse_coo_tensor(indices, values, size, is_coalesced=is_coalesced)
```
All sparse coo factory functions that take indices as input support the `is_coalesced` argument.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/107638
Approved by: https://github.com/cpuhrsch
2023-08-26 07:24:29 +00:00
a816aa785b
Implement autograd support for sparse compressed tensor constructors ( #107384 )
...
Fixes https://github.com/pytorch/pytorch/issues/107126
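A minimal sketch, assuming gradients flow back to the `values` argument of the constructor:
```python
import torch

crow_indices = torch.tensor([0, 2, 4])
col_indices = torch.tensor([0, 1, 0, 1])
values = torch.tensor([1., 2., 3., 4.], requires_grad=True)

csr = torch.sparse_csr_tensor(crow_indices, col_indices, values, size=(2, 2))
csr.to_dense(masked_grad=False).sum().backward()
print(values.grad)
```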
Pull Request resolved: https://github.com/pytorch/pytorch/pull/107384
Approved by: https://github.com/cpuhrsch
ghstack dependencies: #107447
2023-08-21 20:26:39 +00:00
017499b078
Update reduction_ops groupings to include primtorch types ( #107338 )
...
Fixes https://github.com/pytorch/pytorch/issues/107335 . The skips were updated for the _ref ops to match those for eager mode where necessary. Part of the breakdown of https://github.com/pytorch/pytorch/pull/104489 .
Pull Request resolved: https://github.com/pytorch/pytorch/pull/107338
Approved by: https://github.com/ezyang
2023-08-19 02:09:11 +00:00
5b7b9e7896
Update binary_ufuncs groupings to include primtorch types ( #107419 )
...
Fixes #107335 . The skips were updated for the _ref ops to match those for eager mode where necessary. Part of the breakdown of https://github.com/pytorch/pytorch/pull/104489 .
Pull Request resolved: https://github.com/pytorch/pytorch/pull/107419
Approved by: https://github.com/ezyang
2023-08-18 20:45:36 +00:00
e108f33299
Update distutils.Version to packaging.version due to the deprecation … ( #107207 )
...
Update distutils.Version to packaging.version due to the deprecation warning:
```python
/root/Git.d/pytorch/pytorch/torch/testing/_internal/common_methods_invocations.py:17136: DeprecationWarning: distutils Version classes are deprecated. Use packaging.version instead.
active_if=TEST_SCIPY and LooseVersion(scipy.__version__) < "1.4.0"),
/root/Git.d/pytorch/pytorch/torch/testing/_internal/common_methods_invocations.py:17138: DeprecationWarning: distutils Version classes are deprecated. Use packaging.version instead.
active_if=TEST_SCIPY and LooseVersion(scipy.__version__) < "1.4.0"),
/root/Git.d/pytorch/pytorch/torch/testing/_internal/common_methods_invocations.py:17140: DeprecationWarning: distutils Version classes are deprecated. Use packaging.version instead.
active_if=TEST_SCIPY and LooseVersion(scipy.__version__) < "1.4.0"),
```
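A sketch of the replacement pattern, assuming `packaging` is available in the test environment:
```python
import scipy
from packaging.version import Version

# Replaces the deprecated LooseVersion comparison:
old_scipy = Version(scipy.__version__) < Version("1.4.0")
```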
Pull Request resolved: https://github.com/pytorch/pytorch/pull/107207
Approved by: https://github.com/soulitzer
2023-08-17 11:19:44 +00:00
01069ad4be
sparse.mm.backward: fix for non-contiguous grad values on CPU ( #106127 )
...
Fixes https://github.com/pytorch/pytorch/issues/102493 .
The problem was that the backward implementation assumed inputs to be contiguous.
This might supersede https://github.com/pytorch/pytorch/pull/104520 .
Pull Request resolved: https://github.com/pytorch/pytorch/pull/106127
Approved by: https://github.com/cpuhrsch
2023-07-28 01:25:00 +00:00
73e1455327
[BE] Enable ruff's UP rules and autoformat test/ ( #105434 )
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/105434
Approved by: https://github.com/albanD
2023-07-19 20:36:06 +00:00
fc2f87b281
Add semi-structured sparse conversions ( #103830 )
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/103830
Approved by: https://github.com/amjames , https://github.com/jcaip , https://github.com/cpuhrsch
2023-07-13 21:09:09 +00:00
437bc5b1b7
sparse_mask: backward support for sparse lhs (take 2) ( #104341 )
...
This is a copy of https://github.com/pytorch/pytorch/pull/95165 with some bug fixes.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/104341
Approved by: https://github.com/albanD , https://github.com/pearu , https://github.com/amjames
2023-07-03 14:12:44 +00:00
7274582390
Revert "sparse_mask: backward support for sparse lhs ( #95165 )"
...
This reverts commit f090fdf3b49164679fb6316e9ae15e0c4fb3c9eb.
Reverted https://github.com/pytorch/pytorch/pull/95165 on behalf of https://github.com/huydhn due to Sorry for reverting this. I think one of the tests, test_sparse.py::TestSparseCUDA::test_sparse_mask_backward_cuda_complex128, is failing on slow gradcheck (f090fdf3b4)
([comment](https://github.com/pytorch/pytorch/pull/95165#issuecomment-1604696109 ))
2023-06-23 18:40:15 +00:00
f090fdf3b4
sparse_mask: backward support for sparse lhs ( #95165 )
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/95165
Approved by: https://github.com/pearu , https://github.com/cpuhrsch
2023-06-23 12:27:27 +00:00
ab8fc41e2f
Support bfloat16 dtype for CUTLASS-based semi-structured sparsity ( #103978 )
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/103978
Approved by: https://github.com/cpuhrsch
2023-06-22 15:53:27 +00:00
09fdea8564
Fix autograd issue with identity conversions ( #92022 )
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/92022
Approved by: https://github.com/pearu , https://github.com/mtaaooby , https://github.com/amjames , https://github.com/cpuhrsch
2023-06-21 21:23:03 +00:00
8fc687f7ee
Add activation functions (ReLU and SiLU for now) for structured sparse linear operator ( #101339 )
...
Differential Revision: [D46453476](https://our.internmc.facebook.com/intern/diff/D46453476 )
Pull Request resolved: https://github.com/pytorch/pytorch/pull/101339
Approved by: https://github.com/cpuhrsch
2023-06-16 17:24:59 +00:00
2f893d04c8
Implement adding bias vector into structured sparse linear operator ( #100881 )
...
Differential Revision: [D46453477](https://our.internmc.facebook.com/intern/diff/D46453477 )
Pull Request resolved: https://github.com/pytorch/pytorch/pull/100881
Approved by: https://github.com/cpuhrsch , https://github.com/malfet
2023-06-15 16:16:09 +00:00
45401ef745
Enable float16 and complex32 support for sparse CSR elementwise multiplication operation. ( #100394 )
...
As in the title. In addition, the PR adds float16 addcmul support for the CPU device.
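A minimal sketch on CPU:
```python
import torch

a = torch.randn(3, 3, dtype=torch.float16).to_sparse_csr()
out = a * a  # float16 CSR elementwise multiplication

t = torch.randn(4, dtype=torch.float16)
torch.addcmul(t, t, t)  # float16 addcmul, now supported on CPU
```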
Pull Request resolved: https://github.com/pytorch/pytorch/pull/100394
Approved by: https://github.com/amjames , https://github.com/cpuhrsch
2023-06-14 14:42:39 +00:00
cbe270d233
Fix zeros_like for sparse tensors with batch dimensions. Add opinfo-based tests to like-functions. ( #101215 )
...
Fixes #101078
Pull Request resolved: https://github.com/pytorch/pytorch/pull/101215
Approved by: https://github.com/cpuhrsch
2023-06-13 16:02:10 +00:00
2e2a74670d
torch.sparse.softmax: allow negative dim ( #102172 )
...
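A minimal sketch:
```python
import torch

s = torch.eye(3).to_sparse()
out = torch.sparse.softmax(s, dim=-1)  # negative dim now accepted
```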
Pull Request resolved: https://github.com/pytorch/pytorch/pull/102172
Approved by: https://github.com/cpuhrsch
2023-05-24 19:43:47 +00:00
a76c1af351
Revert "Implement adding bias vector into structured sparse linear operator ( #100881 )"
...
This reverts commit c3a893c659bebf0e5b62452a751c4e6ab3dc5b2d.
Reverted https://github.com/pytorch/pytorch/pull/100881 on behalf of https://github.com/izaitsevfb due to breaks internal builds, see D45972633 ([comment](https://github.com/pytorch/pytorch/pull/100881#issuecomment-1553621418 ))
2023-05-18 20:47:02 +00:00
eb9ac9c156
Revert "Add activation functions (ReLU and SiLU for now) for structured sparse linear operator ( #101339 )"
...
This reverts commit bfb3941ad8aaf0af159c2bec3cf1cbec1488f335.
Reverted https://github.com/pytorch/pytorch/pull/101339 on behalf of https://github.com/izaitsevfb due to Depends on #100881 , which has to be reverted due to internal build breakage. ([comment](https://github.com/pytorch/pytorch/pull/101339#issuecomment-1553618216 ))
2023-05-18 20:42:44 +00:00
bfb3941ad8
Add activation functions (ReLU and SiLU for now) for structured sparse linear operator ( #101339 )
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/101339
Approved by: https://github.com/cpuhrsch
2023-05-18 01:53:18 +00:00
c3a893c659
Implement adding bias vector into structured sparse linear operator ( #100881 )
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/100881
Approved by: https://github.com/cpuhrsch
2023-05-17 05:46:22 +00:00
65b15be04c
Fix incorrect sparse_dim in COO.zero_() and in binary operations with zero-sized COO operands ( #98292 )
...
Fixes https://github.com/pytorch/pytorch/issues/97627
Pull Request resolved: https://github.com/pytorch/pytorch/pull/98292
Approved by: https://github.com/nikitaved , https://github.com/cpuhrsch , https://github.com/amjames
2023-05-11 19:05:34 +00:00
a8c2cd1039
Add CUTLASS-based MM for structured sparse linear operator ( #100485 )
...
Pull Request resolved: https://github.com/pytorch/pytorch/pull/100485
Approved by: https://github.com/cpuhrsch
2023-05-09 21:05:15 +00:00
92a7640b76
Add mul tests with sparse sample inputs ( #100393 )
...
This PR implements sparse sample inputs and error inputs for mul OpInfo.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/100393
Approved by: https://github.com/amjames , https://github.com/cpuhrsch
2023-05-09 16:13:14 +00:00
687afeb686
[dynamo][numpy] Add NumpyTensorVariable to translate ndarray attribute calls to tensor attributes ( #95849 )
...
Issue: #93684
# Problem
Reduce graph breaks when dynamo compiles python functions containing numpy functions and ndarray operations.
# Design (as I know it)
* Use torch_np.ndarray (a wrapper around tensor) to back a `VariableTracker`: `NumpyTensorVariable`.
* Translate all attribute and method calls on ndarray to their torch_np.ndarray equivalents.
This PR adds `NumpyTensorVariable` and supports:
1. tensor to ndarray, ndarray to tensor
2. numpy functions such as numpy.meshgrid()
3. ndarray attributes such as `itemsize`, `stride`
The next PR will handle returning `np.ndarray` and add support for ndarray methods.
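A rough sketch of code this lets dynamo trace, based on the capabilities listed above (a hypothetical toy function):
```python
import numpy as np
import torch

@torch.compile
def fn(x):
    a = x.numpy()             # 1. tensor -> ndarray
    g, _ = np.meshgrid(a, a)  # 2. numpy function call
    t = torch.from_numpy(g)   # 1. ndarray -> tensor
    return t * a.itemsize     # 3. ndarray attribute access

fn(torch.randn(4))
```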
Pull Request resolved: https://github.com/pytorch/pytorch/pull/95849
Approved by: https://github.com/ezyang
2023-04-27 16:18:35 +00:00
a1074ddf51
Enable cadd_sparse for BFloat16 on CPU ( #96767 )
...
Enable the **cadd_sparse** operation for BFloat16 on CPU to support BFloat16 operations in GNN libraries.
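A minimal sketch, assuming cadd_sparse backs sparse tensor addition on CPU:
```python
import torch

a = torch.eye(3, dtype=torch.bfloat16).to_sparse()
b = torch.eye(3, dtype=torch.bfloat16).to_sparse()
c = torch.add(a, b)  # BFloat16 sparse addition on CPU
```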
Pull Request resolved: https://github.com/pytorch/pytorch/pull/96767
Approved by: https://github.com/jgong5 , https://github.com/cpuhrsch
2023-04-14 19:50:49 +00:00
2fddcf0fc0
[CUDA][CUDA 11] Remove more CUDA 11 version checks ( #92934 )
...
Removes stragglers missed in previous CUDA version < 11.0 cleanup PRs.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/92934
Approved by: https://github.com/ngimel
2023-03-30 19:49:52 +00:00
9d5ac03b9a
Deprecate gradcheck check_sparse_nnz argument as duplicate of masked argument ( #97187 )
...
As in the title.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/97187
Approved by: https://github.com/soulitzer
2023-03-22 14:11:03 +00:00
679dec847e
Use is_available instead of device_count to check for CUDA availability ( #97043 )
...
Some tests incorrectly use the number of GPU devices (`torch.cuda.device_count() > 0`) to check for CUDA availability instead of the default `torch.cuda.is_available()` call. This makes them more brittle in the face of infra flakiness on G5 runners using A10G, for example [test_pytorch_np](https://hud.pytorch.org/failure/FAILED%20test_tensorboard.py%3A%3ATestTensorBoardPyTorchNumpy%3A%3Atest_pytorch_np%20-%20RuntimeError%3A%20No%20CUDA%20GPUs%20are%20available ).
The underlying problem is that GPU devices can crash on these runners. While the root cause is unclear and we will try upgrading to a new NVIDIA driver https://github.com/pytorch/pytorch/pull/96904 to see if it helps, we can also make these tests more resilient by using the correct check so that they are skipped properly when the GPU crashes.
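The pattern being fixed, sketched with the test named above:
```python
import unittest

import torch

class TestTensorBoardPyTorchNumpy(unittest.TestCase):
    # before: @unittest.skipIf(torch.cuda.device_count() == 0, ...)
    @unittest.skipIf(not torch.cuda.is_available(), "CUDA is not available")
    def test_pytorch_np(self):
        ...
```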
Pull Request resolved: https://github.com/pytorch/pytorch/pull/97043
Approved by: https://github.com/clee2000
2023-03-18 00:39:42 +00:00
2abcafcfd8
Add masked_grad kw argument to to_dense ( #96095 )
...
As in the title.
The `masked_grad` kw argument is required for `to_dense` backward to distinguish the expected semantics of sparse tensors. `masked_grad=True` means that the `to_dense` backward will apply a mask to the returned gradient where the mask is defined by the input indices. The default semantics implies `masked_grad==True` for BC but see the [comment](https://github.com/pytorch/pytorch/pull/96095/files#diff-d4df180433a09071e891d552426911c227b30ae9b8a8e56da31046e7ecb1afbeR501-R513 ) in `to_dense_backward`.
As a consequence, existing code that is run through the autograd engine must replace `.to_dense()` calls with `.to_dense(masked_grad=False)`. For example,
```python
torch.autograd.gradcheck(lambda x: torch.sum(x, [0]).to_dense())
torch.autograd.gradcheck(lambda x: torch.sparse.sum(x, [0]).to_dense())
```
(recall that gradcheck uses `masked=False` by default) must be updated to
```python
torch.autograd.gradcheck(lambda x: torch.sum(x, [0]).to_dense(masked_grad=False))
torch.autograd.gradcheck(lambda x: torch.sparse.sum(x, [0]).to_dense(masked_grad=True), masked=True)
```
Fixes https://github.com/pytorch/pytorch/issues/95550
Pull Request resolved: https://github.com/pytorch/pytorch/pull/96095
Approved by: https://github.com/cpuhrsch
2023-03-16 21:38:11 +00:00