Files
pytorch/torch/_classes.py
Yida Wang 4ea6a3aa74 Fix issues with printing certain torch modules (#62447)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/54420

When I tested on master with the testing code below, there were multiple objects tracked by the garbage collector that could not be printed.

Testing code:
```
import torch
import gc
import os
import sys

print(torch.__version__)

a = torch.rand(10)

print(a)

objects = gc.get_objects()

for i in range(len(objects)):
   print(objects[i])
```

### 1
```
print(torch.classes)
```

As SplitInfinity mentioned in the GitHub issue, the solution here is to set `__file__` for `torch.classes` to something. Similar to [_ops.py](https://github.com/pytorch/pytorch/blob/master/torch/_ops.py#L69), where `__file__` is set to `_ops.py`, we can set `__file__` for `torch.classes` to `_classes.py`.
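
For illustration, here is a minimal, self-contained sketch of that pattern (not the actual torch source; `_FakeClasses` is a made-up stand-in): because `__getattr__` is only invoked when normal attribute lookup fails, a class-level `__file__` keeps `print()`/`repr()` from routing the `__file__` probe through the lazy namespace machinery.

```python
import types

class _FakeClasses(types.ModuleType):
    # Class-level __file__: ordinary attribute lookup now succeeds, so the
    # custom __getattr__ below is never consulted for "__file__".
    __file__ = '_classes.py'

    def __init__(self):
        super().__init__('torch.classes')

    def __getattr__(self, name):
        # Only reached for attributes that don't already exist.
        namespace = types.ModuleType('torch.classes.' + name)
        setattr(self, name, namespace)
        return namespace

fake = _FakeClasses()
print(fake)           # <module 'torch.classes' from '_classes.py'>
print(fake.__file__)  # '_classes.py', served without touching __getattr__
```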

### 2
```
print(torch._ops.ops.quantized)
print(torch._ops.ops.atan)
```

When we try to print these two modules, Python calls `_OpNamespace.__getattr__` with `op_name` set to `__file__`. This becomes a problem when `torch._C._jit_get_operation(qualified_op_name)` [(link)](https://github.com/pytorch/pytorch/blob/master/torch/_ops.py#L60) tries to look up an actual op by that name on the native C++ side.

Only when we access the attribute of an actual op, e.g. `print(torch._ops.ops.quantized.elu)`, does `op_name` become a proper op name (e.g. `elu`).

My current solution is to return a hardcoded string (i.e. `"torch.ops"`) if `op_name` is `"__file__"`.
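
A minimal sketch of that guard, with a stand-in for the real C++ lookup (the upstream change lives in `torch/_ops.py` and may differ in detail; `_FakeOpNamespace` and `_lookup_op` are illustrative names):

```python
import types

def _lookup_op(qualified_op_name):
    # Stand-in for torch._C._jit_get_operation: unknown ops raise.
    raise RuntimeError(f'No such operator {qualified_op_name}')

class _FakeOpNamespace(types.ModuleType):
    def __init__(self, name):
        super().__init__('torch.ops.' + name)
        self.name = name

    def __getattr__(self, op_name):
        # "__file__" is a reserved attribute probed by print()/repr(),
        # not a real op, so short-circuit instead of querying C++.
        if op_name == '__file__':
            return 'torch.ops'
        qualified_op_name = f'{self.name}::{op_name}'
        return _lookup_op(qualified_op_name)

ns = _FakeOpNamespace('quantized')
print(ns)  # the repr's "__file__" probe now gets 'torch.ops' instead of raising
```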

Pull Request resolved: https://github.com/pytorch/pytorch/pull/62447

Reviewed By: saketh-are

Differential Revision: D30234654

Pulled By: yidawang-oss

fbshipit-source-id: de43a8f599739c749fb3307eea015cc61f1da60e
2021-08-11 09:40:41 -07:00

52 lines
1.7 KiB
Python

import types

import torch._C


class _ClassNamespace(types.ModuleType):
    def __init__(self, name):
        super(_ClassNamespace, self).__init__('torch.classes' + name)
        self.name = name

    def __getattr__(self, attr):
        proxy = torch._C._get_custom_class_python_wrapper(self.name, attr)
        if proxy is None:
            raise RuntimeError(f'Class {self.name}.{attr} not registered!')
        return proxy


class _Classes(types.ModuleType):
    __file__ = '_classes.py'

    def __init__(self):
        super(_Classes, self).__init__('torch.classes')

    def __getattr__(self, name):
        namespace = _ClassNamespace(name)
        setattr(self, name, namespace)
        return namespace

    @property
    def loaded_libraries(self):
        return torch.ops.loaded_libraries

    def load_library(self, path):
        """
        Loads a shared library from the given path into the current process.

        The library being loaded may run global initialization code to register
        custom classes with the PyTorch JIT runtime. This allows dynamically
        loading custom classes. For this, you should compile your class
        and the static registration code into a shared library object, and then
        call ``torch.classes.load_library('path/to/libcustom.so')`` to load the
        shared object.

        After the library is loaded, it is added to the
        ``torch.classes.loaded_libraries`` attribute, a set that may be inspected
        for the paths of all libraries loaded using this function.

        Args:
            path (str): A path to a shared library to load.
        """
        torch.ops.load_library(path)


# The classes "namespace"
classes = _Classes()
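
For context, a hedged usage sketch of the namespace above; the library path, `my_classes`, and `MyStackClass` are placeholder names for a custom class registered on the C++ side via `torch::class_`:

```python
import torch

# Runs the shared object's static registration code (placeholder path).
torch.classes.load_library("build/libcustom_class.so")
print(torch.classes.loaded_libraries)  # set of paths loaded so far

# Attribute access is lazy: _Classes.__getattr__ creates the "my_classes"
# namespace, and _ClassNamespace.__getattr__ asks torch._C for the Python
# wrapper of the registered class (raising if it was never registered).
stack = torch.classes.my_classes.MyStackClass(["hello"])
```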