pytorch/torch
Han Qi (qihqi) fed12ff680 [BE][flatbuffer] Remove code duplications and refactor (#79184)
Summary:
Remove code duplication in import.cpp / export_modules.cpp such that
1. only one copy of the switching logic (detect flatbuffer / is_flatbuffer) remains;
2. whether flatbuffer support is compiled in is detected at runtime, so no more macros (a sketch of such a runtime check follows the commit metadata below).

This also inverts the dependency: import.cpp no longer depends on flatbuffer_loader.cpp; instead, flatbuffer_loader.cpp depends on import.cpp.

Differential Revision: D36926217

Pull Request resolved: https://github.com/pytorch/pytorch/pull/79184
Approved by: https://github.com/zhxchen17
2022-06-20 16:37:38 +00:00
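
As a rough sketch of the consolidated runtime check this commit describes — all names are illustrative rather than the actual PyTorch symbols, and the byte-level identifier check ("PTMF" at bytes 4..7) is an assumption about the mobile flatbuffer schema:

```cpp
#include <cstddef>
#include <cstring>
#include <stdexcept>

// Assumed detail: a flatbuffer stores its 4-byte file identifier right after
// the root offset, i.e. at bytes 4..7 of the buffer.
inline bool is_flatbuffer(const char* data, size_t size) {
  return size >= 8 && std::memcmp(data + 4, "PTMF", 4) == 0;
}

// Instead of an #ifdef, the flatbuffer translation unit registers its loader
// at static-initialization time. If that translation unit was never linked
// in, the pointer stays null and we can report a clear runtime error.
using FlatbufferLoaderFn = void* (*)(const char* data, size_t size);
static FlatbufferLoaderFn g_flatbuffer_loader = nullptr;

void register_flatbuffer_loader(FlatbufferLoaderFn fn) {
  g_flatbuffer_loader = fn;
}

void* load_zip_archive(const char* /*data*/, size_t /*size*/) {
  // Legacy pickle/zip loading path; elided in this sketch.
  return nullptr;
}

// The single copy of the switching logic (void* stands in for the loaded
// module type).
void* load_module(const char* data, size_t size) {
  if (is_flatbuffer(data, size)) {
    if (g_flatbuffer_loader == nullptr) {
      throw std::runtime_error(
          "module is a flatbuffer, but this build has no flatbuffer support");
    }
    return g_flatbuffer_loader(data, size);
  }
  return load_zip_archive(data, size);
}
```

Registering the loader from the flatbuffer translation unit itself is one way to get the inverted dependency described above: the import code only ever sees the function pointer, never the flatbuffer code.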

Note [TH abstraction violation]
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

TH/THC provide some .hpp headers, which are proper C++ headers rather than
C headers.  These headers do double duty: they are installed alongside the
public headers, but their contents are internal implementation details that
external clients should largely not use.

Ideally, we would not install these headers at all; instead, you should
use public functions (in headers like `THTensor.h`, NOT `THTensor.hpp`)
to manipulate these structs.  However, there are a few places
in torch/csrc where we violate this abstraction.  They are marked with
a pointer to this note.  Each of those sites will have to be refactored
when we refactor the guts of THTensor and related structures.
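
As a hypothetical illustration of the distinction (the accessor follows the old TH C API naming convention; actual call sites in torch/csrc differ):

```cpp
#include <cstdint>
#include <TH/THTensor.h>  // public C API header: fine for clients to use

// Going through the public accessor keeps THTensor's layout opaque, so this
// call site survives a refactor of the struct's internals.
int64_t first_dim_size(THFloatTensor* t) {
  return THFloatTensor_size(t, 0);
}

// The abstraction violation this note describes would instead include
// TH/THTensor.hpp and poke at the struct's fields directly, e.g.
//   t->size[0]
// which silently breaks once THTensor's guts are refactored.
```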