Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/64678
We know we only want one declaration, so let's not create an excess std::vector (and thus a heap allocation) for that.
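A sketch of the pattern (hypothetical `Decl`/`parseDecl` names; the real change is in the TorchScript frontend):
```cpp
#include <string>
#include <vector>

struct Decl { std::string name; };  // hypothetical stand-in for the parsed declaration

// Before: returning a vector for a single declaration still pays for a
// heap allocation for the vector's buffer.
std::vector<Decl> parseDecls() { return {Decl{"forward"}}; }

// After: when exactly one declaration is expected, return it by value
// and skip the vector (and its allocation) entirely.
Decl parseDecl() { return Decl{"forward"}; }

int main() {
  return parseDecl().name == parseDecls()[0].name ? 0 : 1;
}
```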
ghstack-source-id: 138036978
Test Plan: CI
Reviewed By: dhruvbird, tugsbayasgalan
Differential Revision: D30813785
fbshipit-source-id: c67e0100cdef5d894282939fb6d39a57309bc240
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/52881
**This PR adds:**
1. logic to parse complex constants (complex literals of the form `bj`)
2. logic to parse complex lists
3. support for complex constructors: `complex(tensor/int/float/bool, tensor/int/float/bool)`
4. limited operator support:
- `add`, `sub`, `mul`, `torch.tensor`, `torch.as_tensor` (see the sketch below)
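As a rough end-to-end illustration (a minimal sketch; the `torch::jit::compile` plumbing and the exact printed format are assumptions, not part of this PR):
```cpp
#include <torch/jit.h>  // torch::jit::compile
#include <iostream>

int main() {
  // TorchScript source exercising the new support: a complex literal of
  // the form `bj`, the complex(float, float) constructor, and add/mul.
  auto cu = torch::jit::compile(R"JIT(
def fn(re: float, im: float) -> complex:
    z = complex(re, im)
    return z * z + 3j
)JIT");
  // (1+2j)*(1+2j) + 3j == -3+7j
  auto result = cu->get_function("fn")({1.0, 2.0});
  std::cout << result << "\n";
  return 0;
}
```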
**Follow-up work:**
1. Add complex support for unary and other registered ops.
2. Support the complex constructor with a string input (this is supported in Python eager mode).
3. Test all emitXYZ functions in `ir_emitter.cpp` (currently only emitConst and emitValueToTensor are tested), e.g., test loops.
4. ONNX doesn't support complex tensors, so we should error out with a clear and descriptive error message.
Test Plan: Imported from OSS
Reviewed By: bdhirsh
Differential Revision: D27245059
Pulled By: anjali411
fbshipit-source-id: af043b5159ae99a9cc8691b5a8401503fa8d6f05
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/50229
`fastmod -m 'cast(<((at|c10)::)?\w+Type>\(\)\s*)->' 'castRaw${1}->'`
Presuming it builds, this is a safe change: the result of `cast()` was never saved anywhere, so we don't need the extra `shared_ptr` and can use a raw pointer instead.
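In miniature (a toy sketch; `cast`/`castRaw` here are stand-ins for the real methods on JIT types):
```cpp
#include <iostream>
#include <memory>

struct Base { virtual ~Base() = default; };
struct TensorType : Base { int scalar_type = 7; };

// Stand-in for cast<T>(): returns a new shared_ptr, bumping the refcount.
std::shared_ptr<TensorType> cast(const std::shared_ptr<Base>& p) {
  return std::dynamic_pointer_cast<TensorType>(p);
}
// Stand-in for castRaw<T>(): returns a raw pointer, no refcount traffic.
TensorType* castRaw(const std::shared_ptr<Base>& p) {
  return dynamic_cast<TensorType*>(p.get());
}

int main() {
  std::shared_ptr<Base> t = std::make_shared<TensorType>();
  // Before: creates and destroys a temporary shared_ptr just for one access.
  std::cout << cast(t)->scalar_type << "\n";
  // After: the result is used immediately and never stored, so a raw
  // pointer is just as safe and skips the atomic refcount operations.
  std::cout << castRaw(t)->scalar_type << "\n";
}
```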
ghstack-source-id: 120769170
Test Plan: CI
Reviewed By: SplitInfinity
Differential Revision: D25837494
fbshipit-source-id: 46319100dc0dfc78f6d2b45148207f83481f2ada
Summary:
Reland of https://github.com/pytorch/pytorch/pull/35061; removed
the get-qualified-type-name magic from debug strings to work around an
MSVC 2017 bug.
Main points of the new API:
- You can register implementations (impl) without having to specify a schema.
- Registrations are commutative, so no matter what order your static
initializers run, you end up with the same end result (see the sketch below).
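A sketch of what this buys, using the `torch::import`/`def`/`impl` names from this PR (the exact signatures and return types here are assumptions):
```cpp
// Hypothetical translation unit A: registers only an implementation.
// No schema is required at this point; the dispatcher holds the impl
// until a def lands.
static auto registry_a = torch::import("myns")
    .impl("myns::add_one", [](at::Tensor t) { return t + 1; });

// Hypothetical translation unit B: registers the schema (the def).
// Whether this static initializer runs before or after the one above,
// the dispatcher ends up in the same state: registrations are commutative.
static auto registry_b = torch::import("myns")
    .def("myns::add_one(Tensor t) -> Tensor");
```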
op_registration_test.cpp contains a reasonably comprehensive accounting
of the available API surface.
How does this implementation proceed? The basic concept is to relax the
internal invariants of Dispatcher data structures to allow the
possibility that a FunctionSchema is not specified in an Operator.
- DispatchKeyExtractor has an uninitialized state where it doesn't look
for dispatch keys in any arguments of the stack. It can have a
schema (de)registered to itself post facto with
registerSchema/unregisterSchema.
- DispatchTable has a new constructor taking only an OperatorName for
the uninitialized state. It can have a schema (de)registered to itself
post facto with registerSchema/unregisterSchema
- OperatorDef maintains counts of both defs as well as defs_and_impls.
defs_and_impls keeps track of the outstanding impl registrations; you
may have impl registrations but no defs. If there are no defs (no
schema), the operator is not returned by findSchema. A new
findOperatorByName function unconditionally returns the OperatorHandle
even if there's no schema. OperatorHandle::hasSchema can be used
to check if the operator has a schema (see the sketch after this list).
- Replaced 'registerKernel' with 'registerImpl', which is the new
interface for directly registering kernels without a schema.
- Because 'registerImpl' no longer requires an OperatorHandle, change
'registerDef' to only return a RegistrationHandleRAII. This is marginally
less efficient (since we're doing two hash table lookups on a registration
now), but this won't matter in the long term, and probably doesn't
matter now either.
- Rename registerBackendFallbackKernel to registerFallback (this exposed
a bunch of places where we're improperly directly interfacing with Dispatcher;
we need to add this capability to the true public API)
- All code-generated internal registrations are switched to use the new
API. This includes VariableType registrations (which previously
weren't converted) and the mobile autograd stuff
- Switch the new-style def()/impl() APIs to interact directly with Dispatcher,
rather than indirecting through the old API
- We deleted alias analysis kind merging entirely. As a nod to BC, it's
possible to define a full schema with alias analysis kind, and then
later do another full schema def with missing alias analysis kind, but
the opposite direction is not allowed. We can remove this entirely
following the plan at https://github.com/pytorch/pytorch/issues/35040
- Schema matching is moved inside the dispatcher, because we might not
be able to immediately schema match at the point of an impl() (because
we don't have the schema yet). To do this, we store the inferred
function schema inside a KernelEntry, so we can check it when we get
the real schema.
- Registered kernel functions now store a debug string which
can be used to more easily identify them. Tests use this to
distinguish between multiple distinct registrations; regular
invocations get only very basic information.
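For instance, the impl-only state is observable along these lines (a sketch based on the names above; the exact lookup signatures are assumptions):
```cpp
auto& dispatcher = c10::Dispatcher::singleton();
// findSchema() only returns operators that have a def (i.e. a schema);
// findOperatorByName() returns a handle even for impl-only operators.
auto op = dispatcher.findOperatorByName({"myns::add_one", ""});
if (op.has_value() && !op->hasSchema()) {
  // Impl registrations exist, but no def has supplied a schema yet.
}
```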
Because we need our static initializers to work no matter what order
they're run, the testing strategy on this PR is quite involved.
The general concept:
- Bind a (very gimped) version of the dispatcher API from Python,
so that we can easily write a more complex testing harness
using expect tests.
- For each series of registrations we want to test, exhaustively
test every possible permutation of registrations (and
deregistrations), and show that the intermediate states
agree no matter what path is taken (see the miniature after this list).
- Intermediate states are rendered using a new dumpState()
debugging method that prints the internal state of the
dispatcher. This method may be generally useful for people
who want to see what's in the dispatcher.
- Simultaneously, add a new invariant testing function which
checks that the internal invariants of the dispatcher are
upheld (so we don't have to print internal implementation
details of the dispatcher)
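The permutation idea in miniature (an illustrative C++ toy, not the real harness, which lives in test/test_dispatch.py):
```cpp
#include <algorithm>
#include <cassert>
#include <set>
#include <string>
#include <vector>

// Toy "dispatcher": registrations land in a set, so the final state is
// independent of registration order -- the commutativity under test.
std::string dumpState(const std::vector<std::string>& regs) {
  std::set<std::string> state(regs.begin(), regs.end());
  std::string out;
  for (const auto& r : state) out += r + "\n";
  return out;
}

int main() {
  std::vector<std::string> regs = {"def foo", "impl foo/cpu", "impl foo/cuda"};
  std::sort(regs.begin(), regs.end());
  const std::string expected = dumpState(regs);
  // Run every permutation of the registrations; the rendered state must
  // agree no matter which order the "static initializers" ran in.
  do {
    assert(dumpState(regs) == expected);
  } while (std::next_permutation(regs.begin(), regs.end()));
}
```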
The testing framework found a few bugs in development. For example,
here is a case where we registered schema too early, before checking
if it was valid:
```
Traceback (most recent call last):
File "test/test_dispatch.py", line 164, in test_def_impl_schema_mismatch
], raises=True)
File "test/test_dispatch.py", line 135, in commute
results=results, raises=raises)
File "test/test_dispatch.py", line 83, in run_permutation
.format(ctor_order[:i], op_ix))
File "test/test_dispatch.py", line 59, in check_invariants
.format(expected_provenance, actual_provenance)
AssertionError: 'name[16 chars]ema: (none)\ncatchall: boxed unboxed :: (Tenso[18 chars]0)\n' != 'name[16 chars]ema: test::foo(Tensor x, Tensor y) -> (Tensor)[53 chars]0)\n'
name: test::foo
- schema: (none)
+ schema: test::foo(Tensor x, Tensor y) -> (Tensor)
catchall: boxed unboxed :: (Tensor _0) -> (Tensor _0)
: expected from running ctors (1,); actual from running ctors (1,) and then failing to run ctor 0 (did this failure leave the dispatcher in a wedged state? it shouldn't!)
```
There are also C++ smoketests for the API. These tests comprehensively
cover the C++ API surface of the new operator registration API, but
don't check very hard if the API does the right thing (that's what
test_dispatch.py is for).
Some miscellaneous changes which could have been split into other
PRs, but I was too lazy to do so:
- Add torch::jit::parseName (mirroring parseSchema/parseSchemaOrName)
- Add cloneWithName functionality to FunctionSchema
- Unconditionally generate schema registration, even when type_method_dispatch
is a dict. The one exception is for manual registrations....
- Add fallback, CppFunction::makeFallthrough and
CppFunction::makeFromBoxedFunction to public API of op_registration, so we can
stop calling internal registerImpl directly
- Add new syntax sugar dispatch_autograd for registering autograd kernels
- Minor OperatorName cleanup, storing OperatorName in DispatchTable
and defining operator<< on OperatorName
- Refactored the op registration API to take FunctionSchema directly.
We now do namespacing by post facto fixing up the OperatorName
embedded in FunctionSchema. This also means that you can
now do torch::import("ns1").def("ns2::blah") and have the ns2
override ns1 (although maybe this is not the correct behavior.)
- New torch::schema public API, for attaching alias analysis kind
annotations (see the sketch below). This meant we had to template up some
function signatures which previously took const char*. There's now a nice
comment explaining this strategy.
- torch::import now takes std::string which means we can use
the namespacing from Python
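Usage of that annotation API presumably looks something like this (a sketch; the exact argument spelling and the AliasAnalysisKind value chosen here are assumptions):
```cpp
// def() normally takes a plain schema string; torch::schema() wraps one
// together with an explicit AliasAnalysisKind so the annotation travels
// with the schema instead of being merged in later.
torch::import("myns").def(torch::schema(
    "myns::add_one(Tensor t) -> Tensor",
    c10::AliasAnalysisKind::PURE_FUNCTION));
```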
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/35629
Differential Revision: D20724551
Pulled By: ezyang
fbshipit-source-id: befa46a1affb4ec4ae1fb39e3564a63695a6ca41
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/35061
Main points of the new API:
- You can register implementations (impl) without having to specify a schema.
- Registrations are commutative, so no matter what order your static
initializers run, you end up with the same end result.
op_registration_test.cpp contains a reasonably comprehensive accounting
of the available API surface.
How does this implementation proceed? The basic concept is to relax the
internal invariants of Dispatcher data structures to allow the
possibility that a FunctionSchema is not specified in an Operator.
- DispatchKeyExtractor has an uninitialized state where it doesn't look
for dispatch keys in any arguments of the stack. It can have a
schema (de)registered to itself post facto with
registerSchema/unregisterSchema.
- DispatchTable has a new constructor taking only an OperatorName for
the uninitialized state. It can have a schema (de)registered to itself
post facto with registerSchema/unregisterSchema
- OperatorDef maintains counts of both defs as well as defs_and_impls.
defs_and_impls keeps track of the outstanding impl registrations; you
may have impl registrations but no defs. If there are no defs (no
schema), the operator is not returned by findSchema. A new
findOperatorByName function unconditionally returns the OperatorHandle
even if there's no schema. OperatorHandle::hasSchema can be used
to check if the operator has a schema.
- Replaced 'registerKernel' with 'registerImpl', which is the new
interface for directly registering kernels without a schema.
- Because 'registerImpl' no longer requires an OperatorHandle, change
'registerDef' to only return a RegistrationHandleRAII. This is marginally
less efficient (since we're doing two hash table lookups on a registration
now), but this won't matter in the long term, and probably doesn't
matter now either.
- Rename registerBackendFallbackKernel to registerFallback (this exposed
a bunch of places where we're improperly directly interfacing with Dispatcher;
we need to add this capability to the true public API)
- All code-generated internal registrations are switched to use the new
API. This includes VariableType registrations (which previously
weren't converted) and the mobile autograd stuff
- Switch the new-style def()/impl() APIs to interact directly with Dispatcher,
rather than indirecting through the old API
- We deleted alias analysis kind merging entirely. As a nod to BC, it's
possible to define a full schema with alias analysis kind, and then
later do another full schema def with missing alias analysis kind, but
the opposite direction is not allowed. We can remove this entirely
following the plan at https://github.com/pytorch/pytorch/issues/35040
- Schema matching is moved inside the dispatcher, because we might not
be able to immediately schema match at the point of an impl() (because
we don't have the schema yet). To do this, we store the inferred
function schema inside a KernelEntry, so we can check it when we get
the real schema.
- Registered kernel functions now store a debug string which
can be used to more easily identify them. There's some best-effort
stuff based on __FUNCSIG__, but this is only really
capable of reporting types, not function symbols (see the sketch
below). Tests use this to distinguish between multiple distinct
registrations.
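The best-effort debug string presumably amounts to something like this (a sketch; the helper name is hypothetical):
```cpp
// Capture a human-readable description of the kernel's type at
// registration time. MSVC spells this __FUNCSIG__; GCC/Clang use
// __PRETTY_FUNCTION__. Instantiated in a template, it reports the
// kernel's type, but not which function symbol was registered.
template <typename KernelType>
const char* kernelDebugString() {
#if defined(_MSC_VER)
  return __FUNCSIG__;
#else
  return __PRETTY_FUNCTION__;
#endif
}
```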
Because we need our static initializers to work no matter what order
they're run, the testing strategy on this PR is quite involved.
The general concept:
- Bind a (very gimped) version of the dispatcher API from Python,
so that we can easily write a more complex testing harness
using expect tests.
- For each series of registrations we want to test, exhaustively
test every possible permutation of registrations (and
deregistrations), and show that the intermediate states
agree no matter what path is taken.
- Intermediate states are rendered using a new dumpState()
debugging method that prints the internal state of the
dispatcher. This method may be generally useful for people
who want to see what's in the dispatcher.
- Simultaneously, add a new invariant testing function which
checks that the internal invariants of the dispatcher are
upheld (so we don't have to print internal implementation
details of the dispatcher)
The testing framework found a few bugs in development. For example,
here is a case where we registered schema too early, before checking
if it was valid:
```
Traceback (most recent call last):
File "test/test_dispatch.py", line 164, in test_def_impl_schema_mismatch
], raises=True)
File "test/test_dispatch.py", line 135, in commute
results=results, raises=raises)
File "test/test_dispatch.py", line 83, in run_permutation
.format(ctor_order[:i], op_ix))
File "test/test_dispatch.py", line 59, in check_invariants
.format(expected_provenance, actual_provenance)
AssertionError: 'name[16 chars]ema: (none)\ncatchall: boxed unboxed :: (Tenso[18 chars]0)\n' != 'name[16 chars]ema: test::foo(Tensor x, Tensor y) -> (Tensor)[53 chars]0)\n'
name: test::foo
- schema: (none)
+ schema: test::foo(Tensor x, Tensor y) -> (Tensor)
catchall: boxed unboxed :: (Tensor _0) -> (Tensor _0)
: expected from running ctors (1,); actual from running ctors (1,) and then failing to run ctor 0 (did this failure leave the dispatcher in a wedged state? it shouldn't!)
```
There are also C++ smoketests for the API. These tests comprehensively
cover the C++ API surface of the new operator registration API, but
don't check very hard if the API does the right thing (that's what
test_dispatch.py is for).
Some miscellaneous changes which could have been split into other
PRs, but I was too lazy to do so:
- Add torch::jit::parseName (mirroring parseSchema/parseSchemaOrName)
- Add cloneWithName functionality to FunctionSchema
- Unconditionally generate schema registration, even when type_method_dispatch
is a dict. The one exception is for manual registrations....
- Add fallback, CppFunction::makeFallthrough and
CppFunction::makeFromBoxedFunction to public API of op_registration, so we can
stop calling internal registerImpl directly
- Add new syntax sugar dispatch_autograd for registering autograd kernels
- Minor OperatorName cleanup, storing OperatorName in DispatchTable
and defining operator<< on OperatorName
- Refactored the op registration API to take FunctionSchema directly.
We now do namespacing by post facto fixing up the OperatorName
embedded in FunctionSchema. This also means that you can
now do torch::import("ns1").def("ns2::blah") and have the ns2
override ns1 (although maybe this is not the correct behavior.)
- New torch::schema public API, for attaching alias analysis kind
annotations. This meant we had to template up some function
signatures which previously took const char*. There's now a nice
comment explaining this strategy.
- torch::import now takes std::string which means we can use
the namespacing from Python
Signed-off-by: Edward Z. Yang <ezyang@fb.com>
Test Plan: Imported from OSS
Differential Revision: D20680520
Pulled By: ezyang
fbshipit-source-id: 5d39a28e4ec7c73fe4b1fb2222e865ab65e188f5
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/35115
This commit runs the newly added tools/clang_format.py on the JIT
codebase and includes all of the formatting changes thus produced.
Testing:
Ran the script, CI.
Test Plan: Imported from OSS
Reviewed By: eellison
Differential Revision: D20568523
Pulled By: SplitInfinity
fbshipit-source-id: e09bdb982ccf090eecfb7c7b461b8d0681eef82b
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34515
Once upon a time we thought this was necessary. In reality it is not, so
removing it.
For backcompat, our public interface (defined in `api/`) still has
typedefs to the old `script::` names (see the sketch below).
There was only one collision: `Pass` as a `Stmt` and `Pass` as a graph
transform. I renamed one of them.
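The backcompat shim presumably amounts to aliases along these lines (illustrative; the real set of aliased names is larger):
```cpp
// In the public headers under api/, the old script:: names survive as
// aliases so existing user code keeps compiling.
namespace torch {
namespace jit {
namespace script {
using Module = ::torch::jit::Module;                    // illustrative
using CompilationUnit = ::torch::jit::CompilationUnit;  // illustrative
} // namespace script
} // namespace jit
} // namespace torch
```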
Test Plan: Imported from OSS
Differential Revision: D20353503
Pulled By: suo
fbshipit-source-id: 48bb911ce75120a8c9e0c6fb65262ef775dfba93