Remove SparseAdam weird allowance of raw Tensor input (#127081)

This completes the deprecation started in https://github.com/pytorch/pytorch/pull/114425. It's been 6 months, and I'm fairly certain no one is going to yell at me, as this code path is not really used.

------

# BC Breaking note

As of this PR, SparseAdam is consistent with the rest of our optimizers: it only accepts containers of Tensors/Parameters/param groups, fully completing the deprecation of the raw-Tensor path. Previously, the SparseAdam constructor allowed a raw Tensor as the `params` argument. Now, code like the following raises the same error as every other optim: "params argument given to the optimizer should be an iterable of Tensors or dicts"

```
import torch
param = torch.rand(16, 32)
optimizer = torch.optim.SparseAdam(param)
```

Instead, replace the last line with
```
optimizer = torch.optim.SparseAdam([param])
```
so that it no longer errors.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/127081
Approved by: https://github.com/soulitzer
Author: Jane Xu
Date: 2024-05-24 06:14:48 -07:00
Committed by: PyTorch MergeBot
Parent: 29a1f62f23
Commit: 665637714f
2 changed files with 4 additions and 22 deletions


```diff
@@ -962,13 +962,6 @@ def optim_error_inputs_func_sparseadam(device, dtype):
     error_inputs = get_error_inputs_for_all_optims(device, dtype)
     if str(device) == "cpu":
-        # SparseAdam raises a warning and not an error for the first entry. We
-        # update it here:
-        error_inputs[0].error_type = FutureWarning
-        error_inputs[
-            0
-        ].error_regex = "Passing in a raw Tensor as ``params`` to SparseAdam"
         error_inputs += [
             ErrorOptimizerInput(
                 OptimizerInput(
```