Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30168
The previous implementation of `clone` in `script::Module` copied both the module instance and the
class type. After we enabled type sharing in https://github.com/pytorch/pytorch/pull/26666, we also
need a function that clones only the instance and shares the underlying class type.
Test Plan:
tbd
Imported from OSS
Differential Revision: D18631324
fbshipit-source-id: dbadcf19695faee0f755f45093b24618c047b9d1
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29826
After save/load, we lose concrete type information. So if you tried to
script something that contained a loaded ScriptModule as a submodule,
the following sequence happened:
1. During ConcreteType inference, the loaded submodule got a new
inferred type.
2. But it already had a type! So there was a type mismatch.
To fix this, we should generate a ConcreteModuleType directly from the loaded
submodule's type (similar to what we do for interfaces). This makes sense
too: the ConcreteModuleType should be empty, since all the "sugaredness"
was stripped out during the save/load process.
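A minimal reproduction sketch of the scenario (the module definitions are illustrative):
```
import io
import torch

class Sub(torch.nn.Module):
    def forward(self, x):
        return x + 1

class Wrapper(torch.nn.Module):
    def __init__(self, sub):
        super().__init__()
        self.sub = sub  # a ScriptModule that went through save/load

    def forward(self, x):
        return self.sub(x)

buf = io.BytesIO()
torch.jit.save(torch.jit.script(Sub()), buf)
buf.seek(0)
loaded = torch.jit.load(buf)

# Previously, scripting Wrapper re-inferred a fresh type for the loaded
# submodule and hit a mismatch with the type it already carries.
scripted = torch.jit.script(Wrapper(loaded))
```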
Test Plan: Imported from OSS
Differential Revision: D18575009
Pulled By: suo
fbshipit-source-id: 4d329b7e9b7e7624f459e50092e35ab0ab813791
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29825
We made `ModuleInfo` a union initially to represent the idea that a
submodule could either be a regular module or a module interface.
This PR represents module interfaces as a ConcreteModuleType with no
info (e.g. no "sugaredness"), and with the interface type as the
underlying `jitType_`. This has the effect of reducing the special
casing around adding/maintaining module info.
Test Plan: Imported from OSS
Differential Revision: D18575011
Pulled By: suo
fbshipit-source-id: 53e297b39aa1a03bcdadd795ff225aa68fec9d70
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29824
We have two distinct phases/uses for ConcreteModuleType:
1. We are building it up and using it to check whether we can
reuse JIT types. (RawConcreteModuleType)
2. We are using it to satisfy ModuleValue::attr queries.
(ConcreteModuleType)
These types share an underlying `ConcreteModuleTypeData` which
actually stores the relevant info.
Previously they were the same type because I was lazy, but it's been the
source of a bug. So split them to formalize the differing invariants for
the two phases.
Test Plan: Imported from OSS
Differential Revision: D18575010
Pulled By: suo
fbshipit-source-id: 3e4ebcd36e78b947150d8f0dbb74ecccad23e7c4
Summary:
Fix for https://github.com/pytorch/pytorch/issues/21545
We were silently giving wrong semantics previously:
Python behavior:
```
def test(x=[]):
    x.append(1)
    return len(x)

print(test())  # 1
print(test())  # 2
```
By checking at the python layer, we prevent any new models from serializing this behavior but do not break existing serialized models.
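A sketch of what the new check rejects (the exact error message is illustrative):
```
import torch
from typing import List

def fn(x: List[int] = []):  # mutable default argument
    x.append(1)
    return len(x)

# Scripting now raises at the Python layer instead of silently compiling
# with semantics that differ from Python's.
try:
    torch.jit.script(fn)
except Exception as e:
    print(e)  # complains about the mutable default for `x`
```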
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29833
Differential Revision: D18513168
Pulled By: eellison
fbshipit-source-id: 6fe73f28e1f9d39dedeaf67a04718089d14401a1
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29499
This changes how DataParallel and trace module creation works so that
we no longer need to mutate Module class after it has been created.
The only remaining usages of the register_* functions are now inside C++
tests.
Test Plan: Imported from OSS
Differential Revision: D18413652
Pulled By: zdevito
fbshipit-source-id: f039e5400cd016632768be4547892f6a69645c20
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29432
This removes a lot of the private methods on torch._C.ScriptModule
and instead uses slot_dict_impl views
to implement _parameters, _buffers, and _modules in nn.Module.
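Roughly, each view behaves like a live dict over one kind of slot on the C++ module. A hypothetical Python sketch of the idea (slot_dict_impl itself is a pybind/C++ type; the getattr/setattr calls on the underlying module are assumptions):
```
from collections.abc import MutableMapping

class SlotDictView(MutableMapping):
    """Hypothetical stand-in for a slot_dict_impl view."""

    def __init__(self, cpp_module, names):
        self._m = cpp_module  # the underlying torch._C.ScriptModule
        self._names = names   # names of one slot kind, e.g. parameters

    def __getitem__(self, name):
        if name not in self._names:
            raise KeyError(name)
        return self._m.getattr(name)  # assumed binding

    def __setitem__(self, name, value):
        self._m.setattr(name, value)  # assumed binding
        self._names.add(name)

    def __delitem__(self, name):
        raise RuntimeError("cannot delete slots on a script module")

    def __iter__(self):
        return iter(self._names)

    def __len__(self):
        return len(self._names)
```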
A followup PR should also remove the _register_attribute,
_register_module, and _register_parameter methods, but this requires
more refactoring of the way tracing creates modules and replication
for data parallel works.
Test Plan: Imported from OSS
Differential Revision: D18387963
Pulled By: zdevito
fbshipit-source-id: f10d47afeb30c1e05d704ae5ac4166830933125c
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/28828
This updates torch::script::Module to more closely match the behavior
of nn.Module. In particular, it implements the (optionally recursive)
iterators that retrieve submodules, parameters, and buffers and makes
their names match the python versions.
This also removes the individual accessors for Parameter, Module, Buffer, etc.
and replaces them with a single `attr` function which is equivalent to
writing `a.foo` in Python (`setattr` emulates `a.foo = v`).
As we build out the user-facing API for TorchScript values this will end
up matching how an attribute is accessed on general objects.
This PR preserves the python bindings for script::Module by emulating the
old API at the binding level. A followup will clean up the usage to more
directly match the C++ API.
Test Plan: Imported from OSS
Differential Revision: D18197611
Pulled By: zdevito
fbshipit-source-id: 7ee4dcbb258605d1c988314b05d938423f1ccee5
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/28409
This PR enables submodule swapping via module interfaces. A user can
declare a submodule as a module interface type in the ScriptModule;
during compilation we record the module interface type in the
ModuleInfo of the ConcreteModuleType, the associated JIT type carries
the correct ModuleInterfaceType, and the CppModule gets the correct module list.
Because we keep the module interface type in the type system, the graph
is not inlined when we call Module::attr; instead it uses
prim::CallMethod to call the method. This allows us to swap in another
ScriptModule that meets the same module interface, and module swapping
is only allowed through the module interface approach.
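A sketch of the user-facing pattern this enables (the module definitions are illustrative):
```
import torch

@torch.jit.interface
class Backbone(torch.nn.Module):
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        pass

class Triple(torch.nn.Module):
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * 3

class AddOne(torch.nn.Module):
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + 1

class Model(torch.nn.Module):
    backbone: Backbone  # declared with the module interface type

    def __init__(self):
        super().__init__()
        self.backbone = Triple()

    def forward(self, x):
        # compiled as prim::CallMethod on the interface, not inlined
        return self.backbone.forward(x)

m = torch.jit.script(Model())
print(m(torch.ones(2)))  # tensor([3., 3.])
# Swap in any ScriptModule that meets the same interface.
m.backbone = torch.jit.script(AddOne())
print(m(torch.ones(2)))  # tensor([2., 2.])
```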
Test Plan: Imported from OSS
Reviewed By: driazati
Differential Revision: D18284309
fbshipit-source-id: 2cb843e4b75fa3fcd8c6020832a81014dbff4f03
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/28408
This enables an interface to be defined on an nn.Module, and InterfaceType
now has an is_module_ field to distinguish a module interface from a
normal interface (similar to how ClassType distinguishes modules from
TorchScript classes).
A module interface can be assigned any ScriptModule with compatible
method schemas. A normal object that is not a ScriptModule cannot be
assigned to a module interface, and doing so explicitly is an error.
Assigning a ScriptModule to a class interface makes it available only in
attribute_list, not module_list. More details on the subtyping
relationship are documented in jit_type.h.
If you declare a module interface inside an nn.Module that is being
compiled to a ScriptModule, our internal compilation behaves as follows:
1. ConcreteModuleType records it as a module attribute and adds it to
the attributes_ list.
2. The JitType created from the ConcreteModuleType records it as an
attribute and pre-generates the slot. The slot is still marked
EntityType::MODULE to make sure the JitType records it as a module
slot.
3. cpp_module also registers it as a Module, since the Slot type is the
source of truth.
Since the JitType records it as an attribute and stores its type, it
behaves the same way class interface attributes behave now. This means
the submodule assigned to this module interface is not inlined into the
graph the way a normal `Module::attr` access is; instead we generate an
interface CallMethod, which allows us to later swap it with another
ScriptModule that implicitly implements this module interface.
Test Plan: Imported from OSS
Differential Revision: D18284311
fbshipit-source-id: e0b8f6e8c34b2087fab337a969e5ea3fb37ec209
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/28569
Previously, the inclusion of function attributes would "poison" a
ConcreteModuleType, because we did not have a way of checking whether
they are actually the same function. This PR uses the Python function
object to perform that check. This improves our ability to reuse JIT
types between modules.
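A sketch of the reuse this enables; the `_type()` comparison assumes the internal binding and is illustrative, not a stable API:
```
import torch

def scale(x: torch.Tensor, factor: float) -> torch.Tensor:
    return x * factor

class M(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.fn = scale  # a plain function stored as an attribute

    def forward(self, x):
        return self.fn(x, 2.0)

# Both instances hold the same Python function object, so their
# ConcreteModuleTypes now compare equal and one JIT type is reused.
a, b = torch.jit.script(M()), torch.jit.script(M())
assert a._c._type() == b._c._type()  # illustrative internal check
```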
Also this PR fixes a bug where we weren't properly adding modules as
attributes when converting from ConcreteType -> JIT type (we were adding
them after the fact--another reason to switch from using `register_x` to
`set_x` during module construction, which is on my to-do list after
this).
Fixes https://github.com/pytorch/pytorch/issues/28559
Test Plan: Imported from OSS
Differential Revision: D18111331
Pulled By: suo
fbshipit-source-id: ec2cccf832d3ddd4cd4d28fe19cb265f1275325a
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/28189
This makes it a separate createNamed function. The existing API resulted
in poor usage in fbcode, which in turn caused bugs in TorchScript programs.
Test Plan: Imported from OSS
Differential Revision: D17970220
Pulled By: zdevito
fbshipit-source-id: 59b082a726f56bec1c8d10d410db829f4aa271ea
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/28242
There is no reason to have it in a general API of Module/Method - it's
just another graph pass. It was there because some time ago modules were
not first class and all graphs were lowered. After that changed, this
API was added for easier transition, but now we don't need it anymore.
Test Plan: Imported from OSS
Differential Revision: D17986724
Pulled By: ZolotukhinM
fbshipit-source-id: 279a1ec450cd8fac8164ee581515b09f1d755630
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/28129
The previous PR in the stack removed the need to order classes/functions
or have correct import statements. This resolved circular dependency issues
that can arise when class constructors like ModuleList put new instances
of themselves in a common namespace.
This PR changes our export format to no longer produce this information.
By doing so we can make the logic significantly simpler, since we just
keep track of an individual PythonPrint object per file.
Notes:
* PythonPrint was changed to manage its own stream/list of ranges. It
was doing this anyway internally; this just makes the API clearer.
* Since we are changing the serialization format, I also removed op_version_set.
It is now replaced with the VERSION number that is written in the zip archive.
This further simplifies the code emission process.
* A test of op_version_set was removed since there is no longer any behavior
to test.
Test Plan: Imported from OSS
Differential Revision: D17961610
Pulled By: zdevito
fbshipit-source-id: ada362c4ca34d05393a1a7e799c94785ab9d9825
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27260
This PR has the following changes:
- Slot class is removed. In all use cases except `lower_graph` we really
just needed the attribute name and thus having an extra layer of
abstraction through Slot only made the code harder to understand.
- get_parameters, get_attributes, get_modules, and get_slots now return
a list of <name, item> pairs instead of a list of Slots.
Differential Revision: D17728910
Test Plan: Imported from OSS
Pulled By: ZolotukhinM
fbshipit-source-id: 94781611752dd88e7fddfe8b8e0252d6ec32ba68
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26572
Combined with isinstance specialization, this allows a degree of polymorphism
in functions without needing to use our weirder overload hacks.
We do not define any operators on Any, so the only thing you can do with it
is to put it in containers or type refine it using an isinstance check.
Any is restricted from appearing in non-argument position because we
cannot restore type tags if it ends up as a field in a class.
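A minimal example of the allowed usage, with `Any` in argument position refined by `isinstance`:
```
import torch
from typing import Any

@torch.jit.script
def describe(x: Any) -> int:
    # No operators are defined on Any; isinstance refines the type first.
    if isinstance(x, int):
        return x + 1
    elif isinstance(x, torch.Tensor):
        return x.dim()
    return 0

print(describe(3))                 # 4
print(describe(torch.ones(2, 2)))  # 2
```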
Test Plan: Imported from OSS
Differential Revision: D17530643
Pulled By: zdevito
fbshipit-source-id: f06f78ce84819f7773953a492f3d4c49219ee94c
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26787
A follow-up PR will remove the need to issue import statements
or write classes in order, since they are no longer needed.
This change allows the same PythonPrint class
to be used for an entire file, which that patch will need.
Test Plan: Imported from OSS
Differential Revision: D17566440
Pulled By: zdevito
fbshipit-source-id: 1ee896da0cdfe6a003298e1d4b0238403b9ed6dd
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26666
Changes:
- Introduce a `ConcreteModuleType` concept. This acts both as the key into the type
cache, and as the source of truth for `ModuleValue::attr` queries. It needs
to do both jobs because that's how we ensure correctness (if the types are
different, it's because `ModuleValue::attr` would return different things).
- Now `recursive_script` will first construct a `ConcreteModuleType` and search for a
pre-existing type before starting compilation.
- All previous paths to creating a `ScriptModule` (including inheriting from
`ScriptModule`) are now rewritten to go through `create_script_module`, so
that we have only a single place where construction happens.
Behavioral changes:
- Big change to `torch.jit.ScriptModule` inheritance: all attributes are now
recursively scripted if possible, matching recursive scripting semantics.
This makes it hard to keep something from being scripted (for example, a
Python submodule). Possibly we'll need an `ignore()` type thing for
attributes. In particular, this adds `self.training` to *every* ScriptModule, since
it's present on every `nn.Module`.
- I believe this change is transparent to existing users of the inheritance API: if you had an unscriptable attribute that you never used, there is no error. In some cases we will create new attributes (even if they are unused), which will increase serialized model size from before. See the sketch below.
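A sketch of the resulting caching behavior (the `_type()` comparison is an illustrative internal check, not a stable API):
```
import torch

class M(torch.nn.Module):
    def forward(self, x):
        return x + 1

a = torch.jit.script(M())
b = torch.jit.script(M())
# Equivalent modules hit the ConcreteModuleType cache and share one JIT type.
assert a._c._type() == b._c._type()  # illustrative internal check
# `training` now exists on every ScriptModule, as on every nn.Module.
print(a.training)  # True
```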
Test Plan: Imported from OSS
Differential Revision: D17551196
Pulled By: suo
fbshipit-source-id: b476d1c9feb3ddfd63406d90989aaf9dfe890591
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27555
It is already under '_c' anyway.
Test Plan: Imported from OSS
Differential Revision: D17814333
Pulled By: ZolotukhinM
fbshipit-source-id: ca21649d553f6601be12828958a8077867d0e30e
Summary:
Most of this was old cruft left over from special handling of `training` before we had a `bool` type. This makes all modules have a `training` attribute that is true by default and removes all other special handling.
Fixes #26884
Pull Request resolved: https://github.com/pytorch/pytorch/pull/27109
Pulled By: driazati
Differential Revision: D17728129
fbshipit-source-id: 8ddc9fbb07a953dd05529538bfdd01ed88b5cb57
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26758
This PR changes the order in which we import classes and functions so
that it is no longer necessary for them to be defined in order in a file,
or for there to be proper import statements in the exported file.
Actually importing a function/class is now driven by the need to resolve
the entity during unpickling, type resolution, or value resolution.
While this should allow significant simplification to the code that
serializes classes, this work has not been done yet in order to avoid
inevitable forward compat issues in the transition period.
Notes:
* Individual functions have been replaced with a SourceImporter object
that exposes a resolveType method. This method loads the type if
it has not been loaded yet, potentially parsing (but not loading)
the file it exists in if that file hasn't been parsed yet.
* Some legacy functionality needed to be added as a method to this object
since the old format still used some of this logic for class resolution.
Test Plan: Imported from OSS
Differential Revision: D17558989
Pulled By: zdevito
fbshipit-source-id: 7eae3470bcbd388c4de463e3462d527776ed46c6
Summary:
When used as annotations on Python functions, `NamedTuple`s go through our Python annotation -> type mapping, which previously had no way of looking up `NamedTuple`s (they are created lazily by checking whether the type has certain properties, so the lookup is creating the `TupleType` from scratch). This PR threads through the necessary data to make them work.
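A small example of the now-working pattern:
```
import torch
from typing import NamedTuple

class Point(NamedTuple):
    x: torch.Tensor
    y: torch.Tensor

@torch.jit.script
def total(p: Point) -> torch.Tensor:
    return p.x + p.y

print(total(Point(torch.ones(2), torch.ones(2))))  # tensor([2., 2.])
```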
Fixes #26437
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26443
Pulled By: driazati
Differential Revision: D17486441
fbshipit-source-id: a6bbb543ff05a5abe61f1a7f68db9ecdb652b358
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25447
When we unpickle IValues, we lose type information for List[T]
and Dict[K, V]. We can restore this information using the static
type information contained in the top-level Module/Class type.
This ensures that even after serialization we can always get the
dynamic type of an ivalue using its type() method.
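A sketch of the round-trip this enables (the module and attribute are illustrative):
```
import io
import torch
from typing import Dict

class M(torch.nn.Module):
    table: Dict[str, int]

    def __init__(self):
        super().__init__()
        self.table = {"a": 1, "b": 2}

    def forward(self, key: str) -> int:
        return self.table[key]

buf = io.BytesIO()
torch.jit.save(torch.jit.script(M()), buf)
buf.seek(0)
loaded = torch.jit.load(buf)
# The unpickled dict comes back as Dict[str, int]: the static type on
# the module class restores the tags the pickle format drops.
print(loaded("b"))  # 2
```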
Test Plan: Imported from OSS
Differential Revision: D17127872
Pulled By: zdevito
fbshipit-source-id: 1ffb5e37a7c35c71ac9d3fb7b2edbc7ce3fbec72
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25740
Previously we had `omit_method_bodies`, `omit_attr_values` and
`omit_param_values`. They were called the same in the python bindings
and it was hard to remember their proper spelling. This PR changes them
to `code`, `attrs`, and `params`, which are much easier to remember. It
also flips their meaning - now they enable printing instead of disabling
it. I also changed the default values to 'print all' from 'print
nothing', as that's the most usual way of using it.
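Usage through the Python bindings after the rename (a sketch assuming `dump` is exposed on the internal `_c` object):
```
import torch

class M(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.weight = torch.nn.Parameter(torch.ones(2))

    def forward(self, x):
        return x * self.weight

scripted = torch.jit.script(M())
scripted._c.dump()  # print everything (the new default)
scripted._c.dump(code=True, attrs=False, params=False)  # just the code
```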
Test Plan: Imported from OSS
Differential Revision: D17217517
Pulled By: ZolotukhinM
fbshipit-source-id: fa56e478a732ffd685d885f11c9da0457cd03d16
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25227
Adds cases to NamedType serialization so that interfaces are written.
The implementation is similar to NamedTuples.
Test Plan: Imported from OSS
Differential Revision: D17066674
Pulled By: zdevito
fbshipit-source-id: fda5419260fad29e8c4ddb92de1d3447d621d982
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25258
This is the first commit in a series to add interfaces to the JIT.
Interfaces allow the specification, through a blank Python class, of an
abstract interface that can be used in type annotations for Script functions.
If a TorchScript class implements all the methods in the interface with
the appropriate types, then it is implicitly considered to implement
that interface.
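A sketch of the intended user-facing API (serialization and frontend support land in follow-ups; the `torch.jit.interface` decorator spelling here is an assumption about the eventual surface):
```
import torch

@torch.jit.interface
class Doubler(object):
    def run(self, x: torch.Tensor) -> torch.Tensor:
        pass

@torch.jit.script
class ScaleBy2(object):
    def __init__(self):
        self.factor = 2.0

    def run(self, x: torch.Tensor) -> torch.Tensor:
        return x * self.factor

@torch.jit.script
def call_through(d: Doubler, x: torch.Tensor) -> torch.Tensor:
    # ScaleBy2 never names Doubler; matching the methods is enough.
    return d.run(x)

print(call_through(ScaleBy2(), torch.ones(3)))  # tensor([2., 2., 2.])
```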
Follow-ups required:
* implementation of serialization
* implementation in the parser frontend
* better error reporting for explaining why a class does not meet an
interface specification.
Test Plan: Imported from OSS
Differential Revision: D17079963
Pulled By: zdevito
fbshipit-source-id: a9986eeba2d4fdedd0064ce7d459c0251480a5a0
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/24259
Follow-up to https://github.com/pytorch/pytorch/pull/23886: add the same overload API specified in PEP 484 to module methods, to reduce the friction of adding method overloads that was brought up in #23266.
The usage is:
```
@torch.jit.overload
def add(self, y: int) -> int: ...
@torch.jit.overload
def add(self, y: float) -> float: ...

def add(self, y):
    ...
```
Test Plan: Imported from OSS
Differential Revision: D16921304
Pulled By: eellison
fbshipit-source-id: 784e2f26f7ca9a330a434a603c86b53725c3dc71
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/24284
This PR finishes the unification of all Tensor types into a single object.
ProfiledTensorType is renamed to TensorType and the old TensorType is
deleted.
Notes:
* Fixes bug in merge for VaryingShape by changing its representation to an
optional list of optional ints.
* Removes ProfiledTensorType::create(type) invocations that can now
simply be expect calls on tensor type.
Test Plan: Imported from OSS
Differential Revision: D16794034
Pulled By: zdevito
fbshipit-source-id: 10362398d0bb166d0d385d74801e95d9b87d9dfc
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/24282
This moves a test from Python to cpp, and in doing so lets us clean up a
bunch of otherwise unused code.
Test Plan: Imported from OSS
Differential Revision: D16800562
Pulled By: suo
fbshipit-source-id: ebc29bb81f4fb2538081fa309ead1739980f1093
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/24281
These are not just classes anymore, rename
Test Plan: Imported from OSS
Differential Revision: D16800564
Pulled By: suo
fbshipit-source-id: 8b8d508944c26a8916fc7642df43f22583dfcf82
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/24077
This replaces all uses of DimensionedTensorType with ProfiledTensorType.
For places where we propagate shape information, we still follow the
dimension-only propagation rules, meaning that even if full size information
is known on inputs the outputs will only have dimension information.
This fixes several bugs in existing implementations that this change uncovered:
* requires_grad was not propagated correctly across loops
* requires_grad on ProfiledTensorType returned false when requires_grad
information was unknown but the conservative result is true
* some equality code on ProfiledTensorType contained bugs.
Test Plan: Imported from OSS
Reviewed By: suo
Differential Revision: D16729581
Pulled By: zdevito
fbshipit-source-id: bd9f823c1c6b1d06a236a1b5b2b2fcdf0245edce
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/23846
This moves a test from Python to cpp, and in doing so lets us clean up a
bunch of otherwise unused code.
Test Plan: Imported from OSS
Differential Revision: D16684390
Pulled By: suo
fbshipit-source-id: fca81ca14d1ac9e4d6b47ae5eecaa42b38d69147
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/23845
These are not just classes anymore, rename
Test Plan: Imported from OSS
Differential Revision: D16684391
Pulled By: suo
fbshipit-source-id: af0024c0b7fbcca68785ec3fc6dc288ec46a1b84
Summary:
I accidentally removed this in a merge, breaking a test. Fix for master
Pull Request resolved: https://github.com/pytorch/pytorch/pull/24003
Differential Revision: D16707108
Pulled By: eellison
fbshipit-source-id: 8b59f46e7932b88a7ae246a261c4daf17f23995f
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/23886
This is a series of PRs that will allow us to support adding [padding to conv](https://github.com/pytorch/pytorch/pull/22484) and also reduce the friction of adding method overloads that was brought up in https://github.com/pytorch/pytorch/pull/23266.
Support for overloaded functions following the specification in [PEP 484](https://www.python.org/dev/peps/pep-0484/#function-method-overloading).
The usage is:
```
@torch.jit.overload
def add(x: int, y: int) -> int: ...
@torch.jit.overload
def add(x: float, y: float) -> float: ...

def add(x, y):
    return x + y
```
Follow-up PRs:
- Add the same API for methods
- A couple of cleanups for functions:
  - don't require default params to be specified on the overload as well
  - potentially error if an invocation could match multiple overloads; right now it just chooses the first one (mypy currently does the same thing)
Test Plan: Imported from OSS
Differential Revision: D16694863
Pulled By: eellison
fbshipit-source-id: f94f2933bc1c97fa58f31846acfe962b0630068c