This PR:
- moved some model generation scripts to a central place (MobileNetV2, Android test model)
- updated the scripts so these models can run on Android (Java doesn't support certain scalar types as return values)
- updated the model generation script to take arguments (generate models for Android/iOS; checked-in or on-the-fly models)
- added Android instrumentation tests for these new models
After this change, the Android instrumentation test runs 35 models, which cover 91% of production ops. The coverage information can be found in this file: https://github.com/pytorch/pytorch/blob/master/test/mobile/model_test/coverage.yaml
Note that these models are checked in for the backward compatibility check (to ensure they can still run with newer PyTorch versions).
The script generates models for mobile tests. For each model we have a "checked-in" version and an "on-the-fly" version. The "on-the-fly" version is generated during the test and should not be committed to the repo. The "checked-in" version is used for the backward compatibility check.
Note that Android only supports checked-in models right now; iOS can test both (in another PR).
- use 'python gen_test_model.py android-test' to generate on-the-fly models for Android
- use 'python gen_test_model.py ios-test' to generate on-the-fly models for iOS
- use 'python gen_test_model.py android' to generate checked-in models for Android
- use 'python gen_test_model.py ios' to generate checked-in models for iOS
- use 'python gen_test_model.py <model_name_no_suffix>' to update the given checked-in model
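For reference, a minimal sketch of how an instrumentation test can load and run one of these generated models with the lite interpreter. The model path and shapes are placeholders, not the actual test code; the loader and Tensor/IValue calls are from the org.pytorch package:
```java
import org.pytorch.IValue;
import org.pytorch.LiteModuleLoader;
import org.pytorch.Module;
import org.pytorch.Tensor;

public class GeneratedModelSmokeTest {
  public static void main(String[] args) {
    // Placeholder path; the instrumentation tests bundle the checked-in
    // models as assets and copy them to disk before loading.
    Module module = LiteModuleLoader.load("/data/local/tmp/mobilenet_v2.ptl");

    // MobileNetV2 expects a 1x3x224x224 float input.
    float[] data = new float[1 * 3 * 224 * 224];
    Tensor input = Tensor.fromBlob(data, new long[] {1, 3, 224, 224});

    IValue output = module.forward(IValue.from(input));
    Tensor outputTensor = output.toTensor();
    System.out.println("output shape: " + java.util.Arrays.toString(outputTensor.shape()));
  }
}
```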
Pull Request resolved: https://github.com/pytorch/pytorch/pull/74793
Approved by: https://github.com/kit1980
Added tests for the lite interpreter. By default run_tests.sh uses the lite interpreter, unless BUILD_LITE_INTERPRETER=0 is set manually.
Also fixed the model generation script for the Android instrumentation test, and fixed the README.
Verified that the tests pass for both full JIT and the lite interpreter. Also tested on an emulator and a real device using different ABIs.
Lite interpreter
```
./scripts/build_pytorch_android.sh x86
./android/run_tests.sh
```
Full JIT
```
BUILD_LITE_INTERPRETER=0 ./scripts/build_pytorch_android.sh x86
BUILD_LITE_INTERPRETER=0 ./android/run_tests.sh
```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/72736
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40785
The main goal of this change is to support creating Tensors from a data blob in NHWC (ChannelsLast) memory format.
ChannelsLast is supported only for 4-dim tensors; this is enforced on the LibTorch side. I have not added asserts on the Java side, in case this limitation is changed in the future, and to avoid double asserts.
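As a usage sketch (assuming the org.pytorch.MemoryFormat enum and a fromBlob overload that accepts it; the exact signature may differ from this commit):
```java
import java.nio.FloatBuffer;
import org.pytorch.MemoryFormat;
import org.pytorch.Tensor;

public class ChannelsLastExample {
  public static void main(String[] args) {
    // NHWC blob for a 1x3x4x4 tensor: 48 floats laid out as N, H, W, C.
    FloatBuffer blob = Tensor.allocateFloatBuffer(1 * 3 * 4 * 4);
    // ... fill blob with image data ...

    // The logical sizes stay NCHW; the memory-format flag tells LibTorch
    // that the underlying blob is laid out channels-last.
    Tensor t = Tensor.fromBlob(blob, new long[] {1, 3, 4, 4}, MemoryFormat.CHANNELS_LAST);
  }
}
```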
Additional changes in `aten/src/ATen/templates/Functions.h`:
`from_blob` creates an `at::empty({0}, options)` tensor first and sets its Storage with sizes and strides afterwards.
But since ChannelsLast is only for 4-dim tensors, that creation fails, as dim==1.
I've added a `zero_sizes()` function that returns `{0, 0, 0, 0}` for ChannelsLast and ChannelsLast3d, so the initial empty tensor is 4-dimensional.
Test Plan: Imported from OSS
Reviewed By: dreiss
Differential Revision: D22396244
Pulled By: IvanKobzarev
fbshipit-source-id: 02582d748a554e0f859aefe71cd2c1e321fb8979
Summary:
TL;DR: initial commit of the Android Java/JNI wrapper for the TorchScript C++ API.
The main idea is to provide a Java interface for Android developers to use TorchScript modules.
The Java API tries to mirror the semantics of the C++ and Python TorchScript APIs.
org.pytorch.Module (wrapper of torch::jit::script::Module)
- static Module load(String path)
- IValue forward(IValue... inputs)
- IValue runMethod(String methodName, IValue... inputs)
org.pytorch.Tensor (semantics of at::Tensor)
- newFloatTensor(long[] dims, float[] data)
- newFloatTensor(long[] dims, FloatBuffer data)
- newIntTensor(long[] dims, int[] data)
- newIntTensor(long[] dims, IntBuffer data)
- newByteTensor(long[] dims, byte[] data)
- newByteTensor(long[] dims, ByteBuffer data)
org.pytorch.IValue (semantics of at::IValue)
- static factory methods to create TorchScript-supported types
Examples of API usage can be found in PytorchInstrumentedTests.java:
```java
Module module = Module.load(path);
IValue input = IValue.tensor(Tensor.newByteTensor(new long[] {1}, Tensor.allocateByteBuffer(1)));
IValue output = module.forward(input);
Tensor outputTensor = output.getTensor();
```
Thread safety:
The API is not thread-safe; all synchronization must be done on the caller side.
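For illustration, a minimal caller-side locking sketch; this wrapper class is hypothetical and not part of the API:
```java
import org.pytorch.IValue;
import org.pytorch.Module;

// Hypothetical wrapper: serializes all forward() calls on a single lock so
// that one Module instance can be shared safely between threads.
public class SynchronizedModule {
  private final Module module;
  private final Object lock = new Object();

  public SynchronizedModule(Module module) {
    this.module = module;
  }

  public IValue forward(IValue... inputs) {
    synchronized (lock) {
      return module.forward(inputs);
    }
  }
}
```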
Mutability:
The org.pytorch.Tensor buffer is a DirectBuffer with native byte order; tensors can be created with static factory methods taking a DirectBuffer.
At the moment org.pytorch.Tensor does not hold an at::Tensor on the JNI side; it holds long[] dimensions, a type, and a DirectByteBuffer blobData.
Input tensors are mutable (they can be modified and reused for the next inference); the values in the buffer are read at the moment of the Module#forward or Module#runMethod call.
The buffers of input tensors are used directly by the input at::Tensor.
Output is copied from the output at::Tensor and is immutable.
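A small sketch of reusing one direct input buffer across inference calls, using the factory methods listed above (the model path and shapes are placeholders):
```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;
import org.pytorch.IValue;
import org.pytorch.Module;
import org.pytorch.Tensor;

public class BufferReuseExample {
  public static void main(String[] args) {
    Module module = Module.load("/data/local/tmp/model.pt"); // placeholder path

    // Direct buffer with native byte order, as input tensors require.
    FloatBuffer data =
        ByteBuffer.allocateDirect(4 * 4) // 4 floats * 4 bytes each
            .order(ByteOrder.nativeOrder())
            .asFloatBuffer();
    Tensor input = Tensor.newFloatTensor(new long[] {4}, data);

    for (int step = 0; step < 2; step++) {
      // Mutate the same buffer in place; values are read at forward() time.
      for (int i = 0; i < 4; i++) {
        data.put(i, step + i);
      }
      IValue output = module.forward(IValue.tensor(input));
      Tensor outputTensor = output.getTensor();
    }
  }
}
```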
Dependencies:
The JNI level is implemented using the fbjni library, which was developed at Facebook and has already been used and open-sourced in several open-source projects.
It is added to the repo as a submodule from a personal account, so that the submodule can be switched once fbjni is open-sourced separately.
ghstack-source-id: b39c848359a70d717f2830a15265e4aa122279c0
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25084
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25105
Reviewed By: dreiss
Differential Revision: D16988107
Pulled By: IvanKobzarev
fbshipit-source-id: 41ca7c9869f8370b8504c2ef8a96047cc16516d4