pytorch/torch/_numpy/_binary_ufuncs_impl.py
Edward Z. Yang 9bce208dfb Replace follow_imports = silent with normal (#118414)
This is a lot of files changed! Don't panic! Here's how it works:

* Previously, we set `follow_imports = silent` in our mypy.ini configuration. Per https://mypy.readthedocs.io/en/stable/running_mypy.html#follow-imports, this means that whenever mypy follows an import to a module that is not in the list of files to be typechecked, it typechecks that module as normal but suppresses all errors reported in it.
* When mypy is run inside lintrunner, the list of files is precisely the files covered by the glob in .lintrunner.toml, minus the files in the exclude list.
* The top-level directive `# mypy: ignore-errors` instructs mypy to typecheck the file as normal, but ignore all errors (see the sketch after this list).
* Therefore, setting `follow_imports = normal` is equivalent to the old behavior, provided we put `# mypy: ignore-errors` on all files that were previously excluded from the file list.
* Having done this, we can remove the exclude list from .lintrunner.toml, since the exclusion from typechecking is now baked into the files themselves.
* torch/_dynamo and torch/_inductor were previously in the exclude list because they are covered by the separate MYPYINDUCTOR configuration. It is not OK to mark these as `# mypy: ignore-errors`, as that would impede typechecking under the alternate configuration. So they are temporarily being checked twice, with errors suppressed in those files since the two configurations are not quite the same. I plan to unify the configurations, so this is only a temporary state.
* There were a few straggler type errors after these changes, which I fixed as needed; there weren't many.
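
For illustration, here is a minimal sketch of what the directive looks like in a suppressed file (the function below is hypothetical, not from this PR):

```
# mypy: ignore-errors

# Under follow_imports = normal, mypy typechecks this module whenever it is
# reached via an import, but the directive above suppresses every error.


def bad_return() -> int:
    return "oops"  # a type error (str is not int), silenced by the directive
```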

In the future, to start typechecking a file, just remove the `# mypy: ignore-errors` directive from the top of the file.

The codemod was done with this script authored by GPT-4:

```
import glob

# Glob patterns for the files that used to be in the mypy exclude list.
exclude_patterns = [
    ...
]

for pattern in exclude_patterns:
    for filepath in glob.glob(pattern, recursive=True):
        if filepath.endswith('.py'):
            # Prepend the directive: read the whole file, rewind to the
            # start, and write the directive followed by the original
            # contents.
            with open(filepath, 'r+') as f:
                content = f.read()
                f.seek(0, 0)
                f.write('# mypy: ignore-errors\n\n' + content)
```
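
The `r+` open with `seek(0, 0)` rewrites the file in place; since the new contents are strictly longer than the old, no explicit truncation is needed. One caveat of the script as shown: it does not check whether a file already begins with a `# mypy:` directive, so rerunning it would stack duplicate headers.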

Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/118414
Approved by: https://github.com/thiagocrepaldi, https://github.com/albanD
2024-01-27 02:44:11 +00:00


# mypy: ignore-errors

"""Export torch work functions for binary ufuncs, rename/tweak to match numpy.
This listing is further exported to public symbols in the `torch._numpy/_ufuncs.py` module.
"""

import torch

from torch import (  # noqa: F401
    add,  # noqa: F401
    arctan2,  # noqa: F401
    bitwise_and,  # noqa: F401
    bitwise_left_shift as left_shift,  # noqa: F401
    bitwise_or,  # noqa: F401
    bitwise_right_shift as right_shift,  # noqa: F401
    bitwise_xor,  # noqa: F401
    copysign,  # noqa: F401
    divide,  # noqa: F401
    eq as equal,  # noqa: F401
    float_power,  # noqa: F401
    floor_divide,  # noqa: F401
    fmax,  # noqa: F401
    fmin,  # noqa: F401
    fmod,  # noqa: F401
    gcd,  # noqa: F401
    greater,  # noqa: F401
    greater_equal,  # noqa: F401
    heaviside,  # noqa: F401
    hypot,  # noqa: F401
    lcm,  # noqa: F401
    ldexp,  # noqa: F401
    less,  # noqa: F401
    less_equal,  # noqa: F401
    logaddexp,  # noqa: F401
    logaddexp2,  # noqa: F401
    logical_and,  # noqa: F401
    logical_or,  # noqa: F401
    logical_xor,  # noqa: F401
    maximum,  # noqa: F401
    minimum,  # noqa: F401
    multiply,  # noqa: F401
    nextafter,  # noqa: F401
    not_equal,  # noqa: F401
    pow as power,  # noqa: F401
    remainder,  # noqa: F401
    remainder as mod,  # noqa: F401
    subtract,  # noqa: F401
    true_divide,  # noqa: F401
)

from . import _dtypes_impl, _util


# work around torch limitations w.r.t. numpy
def matmul(x, y):
    # work around:
    # - RuntimeError: expected scalar type Int but found Double
    # - RuntimeError: "addmm_impl_cpu_" not implemented for 'Bool'
    # - RuntimeError: "addmm_impl_cpu_" not implemented for 'Half'
    dtype = _dtypes_impl.result_type_impl(x, y)
    is_bool = dtype == torch.bool
    is_half = (x.dtype == torch.float16 or y.dtype == torch.float16) and (
        x.is_cpu or y.is_cpu
    )

    work_dtype = dtype
    if is_bool:
        work_dtype = torch.uint8
    if is_half:
        work_dtype = torch.float32

    x = _util.cast_if_needed(x, work_dtype)
    y = _util.cast_if_needed(y, work_dtype)

    result = torch.matmul(x, y)

    if work_dtype != dtype:
        result = result.to(dtype)

    return result


# a stub implementation of divmod, should be improved after
# https://github.com/pytorch/pytorch/issues/90820 is fixed in pytorch
def divmod(x, y):
    return x // y, x % y
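
As a quick illustration of the casting workaround in `matmul` above (a hypothetical example, not part of the file):

```
import torch

a = torch.ones(2, 2, dtype=torch.bool)
b = torch.ones(2, 2, dtype=torch.bool)

# torch.matmul(a, b) on CPU raises the RuntimeError cited in the comments:
# "addmm_impl_cpu_" not implemented for 'Bool'.
# The wrapper instead computes in uint8, then casts the result back:
result = matmul(a, b)  # a tensor of dtype torch.bool
```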