pytorch/torch/csrc/jit/runtime/custom_operator.h
Scott Wolchok 8fc1064b7f [PyTorch] Reduce code size of register_prim_ops.cpp (#61494)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/61494

Creating a constexpr array and then looping over it is much cheaper than emitting a function call per item.
ghstack-source-id: 136639302
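
For illustration, a minimal self-contained sketch of the code-size technique the summary describes: move the per-operator data into one constexpr table and register through a single loop, instead of emitting a separate registration call per item. All names here (OpEntry, registerOp, kOps) are hypothetical stand-ins, not the actual register_prim_ops.cpp code:

#include <cstdio>

using OpFn = void (*)();

struct OpEntry {
  const char* name;
  OpFn fn;
};

void fooImpl() { std::puts("foo"); }
void barImpl() { std::puts("bar"); }

// Hypothetical registrar standing in for the real registration call.
void registerOp(const char* name, OpFn fn) {
  std::printf("registering %s\n", name);
  fn();
}

// Before: one registerOp(...) call emitted per operator, each with its own
// argument-setup code in the object file.
// After: the per-operator data lives in one constexpr table, and a single
// loop provides the only call site.
constexpr OpEntry kOps[] = {
    {"aten::foo", fooImpl},
    {"aten::bar", barImpl},
};

int main() {
  for (const OpEntry& e : kOps) {
    registerOp(e.name, e.fn);
  }
}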

Test Plan:
fitsships

Buildsizebot some mobile apps to check size impact.

Reviewed By: dhruvbird, iseeyuan

Differential Revision: D29646977

fbshipit-source-id: 6144999f6acfc4e5dcd659845859702051344d88
2021-08-27 12:56:35 -07:00


#pragma once
#include <ATen/core/op_registration/op_registration.h>
#include <ATen/core/stack.h>
#include <torch/csrc/jit/runtime/operator.h>

namespace torch {
namespace jit {

/// Registration class for new operators. Effectively calls
/// `torch::jit::registerOperator` for every supplied operator, but allows doing
/// so in the global scope when a `RegisterOperators` object is assigned to a
/// static variable.
/// Note: This is *not* the custom operator API. If you want to register custom
/// operators, take a look at torch::RegisterOperators.
struct TORCH_API RegisterOperators {
  RegisterOperators() = default;

  /// Registers a vector of already created `Operator`s.
  /// Each element is optional so that selective operator registration can
  /// hand in nullopt for filtered-out ops; null entries are simply skipped,
  /// which keeps the existing all-non-null usage backward compatible.
  explicit RegisterOperators(std::vector<c10::optional<Operator>> operators) {
    for (c10::optional<Operator>& o : operators) {
      if (o) {
        // Only non-null entries survive selective registration.
        registerOperator(std::move(o.value()));
      }
    }
  }
};

} // namespace jit
} // namespace torch
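
For context, a hedged usage sketch of the static-registration pattern the class comment describes. The exact `Operator` constructor arguments (schema string, stack-based implementation, alias-analysis kind) have varied across PyTorch versions, so treat the signature below as an assumption rather than the definitive API:

#include <torch/csrc/jit/runtime/custom_operator.h>

// Assumed Operator constructor shape (schema string, stack function,
// alias analysis kind); it has changed across PyTorch versions.
static auto registry = torch::jit::RegisterOperators({
    torch::jit::Operator(
        "myns::double_it(Tensor x) -> Tensor",
        [](torch::jit::Stack& stack) {
          at::Tensor x = torch::jit::pop(stack).toTensor();
          torch::jit::push(stack, x + x);
        },
        c10::AliasAnalysisKind::FROM_SCHEMA),
});

Because the constructor takes std::vector<c10::optional<Operator>>, a selective build can substitute c10::nullopt for filtered-out entries and the constructor's loop skips them.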