Files
pytorch/torch/csrc/jit/mobile/interpreter.h
Martin Yuan 7fc06ea541 Bytecode export flow (#25187)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25187

The bytecode export flow: dump the bytecode format for the lightweight interpreter.
* The bytecode is generated without input-spec optimization, so it is more generic (input-independent), with no obvious performance degradation expected (to be tested).
* Main API: torch::jit::script::Module::save(filename, extra_files, bool bytecode_format = false); a usage sketch follows this list.
* Both bytecode and module object are exported in pickle format.
    * The module object (in data.pkl) is the same as the original JIT model.
    * The serializer depends only on pickle (no protobuf or JSON).
    * The major functionality is forked into ScriptModuleSerializer2::serialize().
    * The test loader is test_bc_export.cpp.
* Simple APIs are added in Code and its implementation to get necessary information (instructions, operators and constants).
* Since there's no dependency on graph/node, GetAttr is promoted from an operator to a first-class instruction (https://github.com/pytorch/pytorch/pull/25151).
* Some definitions (instructions, writeArchive, etc) that are shared by full JIT and bytecode are pulled out of the local namespace (https://github.com/pytorch/pytorch/pull/25148).
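
A minimal usage sketch of the save API above, assuming the signature quoted in the list (the file paths and the namespace of ExtraFilesMap are illustrative, not confirmed by this commit):

    #include <torch/script.h>

    int main() {
      // Load an existing TorchScript module ("model.pt" is a placeholder path).
      torch::jit::script::Module module = torch::jit::load("model.pt");

      // Re-save it with bytecode_format = true to emit the bytecode for the
      // lightweight interpreter in addition to the regular module object.
      torch::jit::script::ExtraFilesMap extra_files;
      module.save("model_bytecode.pt", extra_files, /*bytecode_format=*/true);
      return 0;
    }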

The output layout looks like:

* One folder per method.
    * In each method folder (for example, forward/):
        * bytecode.pkl: instructions and operators
        * constants{.pkl,/}: the constant list, stored in constants.pkl; if any constants are tensors, their binary tensor files go in the constants/ folder.
* data{.pkl,/}: the module object, with binary tensor files in the data/ folder; the same as in TorchScript.
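
For a module with a single forward method, the exported archive would therefore look roughly like this (the archive name is hypothetical):

    model_bytecode.pt
    ├── forward/
    │   ├── bytecode.pkl     (instructions and operators)
    │   ├── constants.pkl    (constant list)
    │   └── constants/       (binary tensor files, if any constants are tensors)
    ├── data.pkl             (module object)
    └── data/                (binary tensor files for the module object)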

Test Plan: Imported from OSS

Differential Revision: D17076411

fbshipit-source-id: 46eb298e7320d1e585b0101effc0fcfd09219046
2019-09-25 16:35:45 -07:00


#pragma once

#include <ATen/core/ivalue.h>
#include <ATen/core/operator_name.h>
#include <ATen/core/dispatch/Dispatcher.h>
#include <torch/csrc/jit/instruction.h>

namespace torch {
namespace jit {
namespace mobile {

using Stack = std::vector<c10::IValue>;

// The bytecode of a single method: a flat list of instructions plus the
// operator and constant tables that the instructions index into.
struct Code {
  std::vector<Instruction> instructions_;
  std::vector<c10::OperatorName> op_names_;
  std::vector<c10::optional<c10::OperatorHandle>> operators_;
  std::vector<c10::IValue> constants_;
  size_t register_size_; // Aggregated output size.
};

// Executes a Code object against a stack of inputs/outputs.
struct InterpreterState {
  TORCH_API explicit InterpreterState(std::shared_ptr<Code> code);
  TORCH_API bool run(Stack& stack);

 private:
  std::shared_ptr<Code> code_;
  c10::IValue& reg(size_t reg);
  std::vector<c10::IValue> registers_;
};

} // namespace mobile
} // namespace jit
} // namespace torch
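
A hypothetical sketch of driving the interpreter declared above; in practice the Code object is populated by the bytecode loader rather than filled in by hand:

    #include <torch/csrc/jit/mobile/interpreter.h>
    #include <ATen/ATen.h>

    // Runs a method whose bytecode has already been loaded into `code`
    // (instructions_, op_names_, operators_, constants_, register_size_).
    void run_method(std::shared_ptr<torch::jit::mobile::Code> code) {
      torch::jit::mobile::Stack stack;
      stack.emplace_back(at::ones({2, 2})); // example input tensor
      torch::jit::mobile::InterpreterState state(std::move(code));
      state.run(stack); // on return, the stack holds the method's outputs
    }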