[jit] Reduce refcounting of Types (#65345)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/65345

FooType::get() can return a const reference. Inconveniently, converting a shared_ptr<FooType> to a shared_ptr<Type> requires a copy and a refcount bump, so to take full advantage of this in unshapedType() we need isSubtypeOf() to accept a const Type&. That is good practice anyway: don't require a shared_ptr if you don't need to take ownership.
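For context, here is a minimal sketch of the pattern this change enables (hypothetical stand-ins, not the real c10::Type declarations): the type singleton is returned by const reference, and isSubtypeOf() takes a const Type& so call sites can dereference the singleton instead of copying a shared_ptr.

#include <memory>

// Illustrative only; names and layout do not match the real c10::Type API.
struct Type {
  virtual ~Type() = default;
  // Accepting const Type& lets callers pass a dereferenced singleton
  // without any shared_ptr copy or atomic refcount bump.
  virtual bool isSubtypeOf(const Type& rhs) const { return this == &rhs; }
};

struct TensorType : Type {
  // Returning a const reference to the cached singleton avoids the
  // refcount bump that returning shared_ptr<TensorType> by value incurs.
  static const std::shared_ptr<TensorType>& get() {
    static const std::shared_ptr<TensorType> instance =
        std::make_shared<TensorType>();
    return instance;
  }
};

bool isTensorLike(const Type& t) {
  // Dereference the singleton; no conversion from shared_ptr<TensorType>
  // to shared_ptr<Type> (which would require a copy) is needed.
  return t.isSubtypeOf(*TensorType::get());
}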
ghstack-source-id: 140044165

Test Plan:
CI

perf shows c10::unshapedType's share of static runtime startup time decreased from 2.8% to 2.2%, though I expect the change to be generally beneficial.

Reviewed By: hlu1

Differential Revision: D31027361

fbshipit-source-id: 676feb81db9f74ad7b8651d8774f4ecb4cfa6ab8
Author: Scott Wolchok
Date: 2021-10-08 09:01:42 -07:00
Committed by: Facebook GitHub Bot
Parent: 1ae468a484
Commit: 2d885ab73d
69 changed files with 421 additions and 405 deletions


@@ -145,7 +145,7 @@ TEST(IRParserTest, InferredTypeIsTensor) {
 graph(%a):
   return (%a))IR",
       &*graph);
-  AT_ASSERT(graph->inputs()[0]->type()->isSubtypeOf(TensorType::get()));
+  AT_ASSERT(graph->inputs()[0]->type()->isSubtypeOf(*TensorType::get()));
 }
 TEST(IRParserTest, ValueReuse) {
@@ -260,7 +260,7 @@ TEST(IRParserTest, FileCheck) {
   return (%a))IR";
   parseIR(text, &*graph);
-  AT_ASSERT(graph->inputs()[0]->type()->isSubtypeOf(TensorType::get()));
+  AT_ASSERT(graph->inputs()[0]->type()->isSubtypeOf(*TensorType::get()));
   torch::jit::testing::FileCheck().run(text, *graph);
 }