Commit Graph

13 Commits

aef820926c Add some tests for 3d channels last (#118283)
Part of a multi-PR effort to fix #59168.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/118283
Approved by: https://github.com/albanD
2024-01-30 01:26:47 +00:00
f70844bec7 Enable UFMT on a bunch of low traffic Python files outside of main files (#106052)
Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Pull Request resolved: https://github.com/pytorch/pytorch/pull/106052
Approved by: https://github.com/albanD, https://github.com/Skylion007
2023-07-27 01:01:17 +00:00
8d45f555d7 [BE] [1/3] Rewrite super() calls in caffe2 and benchmarks (#94587)
Rewrite calls to the Python built-in `super()`; only non-semantic changes should be applied.

- #94587
- #94588
- #94592

Also, methods with only a `super()` call are removed:

```diff
class MyModule(nn.Module):
-   def __init__(self):
-       super().__init__()
-
    def forward(self, ...):
        ...
```

Cases where the rewrite would change the semantics are kept unchanged, e.g.:

f152a79be9/caffe2/python/net_printer.py (L184-L190)

f152a79be9/test/test_jit_fuser_te.py (L2628-L2635)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/94587
Approved by: https://github.com/ezyang
2023-02-11 18:19:48 +00:00
33b9726e6b Revert "add model test for Android"
This reverts commit 91ef3c82615d6ede05d5b86f1bd5571ea95e4ef1.

Reverted https://github.com/pytorch/pytorch/pull/74793 on behalf of https://github.com/seemethere
2022-03-29 22:08:22 +00:00
91ef3c8261 add model test for Android
This PR:

- moved some model generation scripts to a central place (MobileNet v2, Android test model)
- updated the scripts so these models can run on Android (Java doesn't support certain scalar types as return values)
- updated the model generation script to take arguments (generate models for Android/iOS, checked-in or on-the-fly models)
- added Android instrumentation tests for these new models

After this change, the Android instrumentation test runs 35 models, which cover 91% of production ops. The coverage information can be found in this file: https://github.com/pytorch/pytorch/blob/master/test/mobile/model_test/coverage.yaml

Note that these models are checked in for backward-compatibility checks (to ensure they can still run with newer PyTorch versions).

The script generates models for the mobile tests. For each model there is a "checked-in" version
and an "on-the-fly" version. The "on-the-fly" version is generated during the test run and
should not be committed to the repo; the "checked-in" version is used for backward-compatibility checks.

Note that Android only supports checked-in models right now; iOS can test both (covered in another PR).

Use `python gen_test_model.py android-test` to generate on-the-fly models for Android.
Use `python gen_test_model.py ios-test` to generate on-the-fly models for iOS.
Use `python gen_test_model.py android` to generate checked-in models for Android.
Use `python gen_test_model.py ios` to generate checked-in models for iOS.
Use `python gen_test_model.py <model_name_no_suffix>` to update a given checked-in model.
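For reference, a minimal Java sketch (not from this PR) of the kind of smoke check the instrumentation tests perform on a generated model; the model path, input shape, and class/method names are hypothetical, and the calls use the pytorch_android API (Module.load, Tensor.fromBlob, IValue.from):

```
import org.pytorch.IValue;
import org.pytorch.Module;
import org.pytorch.Tensor;

// Hypothetical smoke test for one generated model; the real instrumentation
// tests bundle the checked-in models as assets and drive them per coverage.yaml.
public class GeneratedModelSmokeTest {
  static void smokeTest(String modelPath) {
    Module module = Module.load(modelPath);               // load a generated model from disk
    float[] blob = new float[1 * 3 * 224 * 224];          // dummy NCHW input
    Tensor input = Tensor.fromBlob(blob, new long[] {1, 3, 224, 224});
    IValue output = module.forward(IValue.from(input));   // only checks that forward() runs
  }
}
```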

Pull Request resolved: https://github.com/pytorch/pytorch/pull/74793
Approved by: https://github.com/kit1980
2022-03-29 04:46:14 +00:00
99bcadced4 improve android instrumentation test and update README
Added tests for the lite interpreter. By default, run_tests.sh uses the lite interpreter unless BUILD_LITE_INTERPRETER=0 is set manually.

Also fixed the model generation script for the Android instrumentation tests and updated the README.

Verified the tests pass for both the full JIT and the lite interpreter. Also tested on an emulator and a real device using different ABIs.

Lite interpreter
```
./scripts/build_pytorch_android.sh x86
./android/run_tests.sh
```

Full JIT
```
BUILD_LITE_INTERPRETER=0 ./scripts/build_pytorch_android.sh x86
BUILD_LITE_INTERPRETER=0 ./android/run_tests.sh
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/72736
2022-02-22 08:05:33 +00:00
1022443168 Revert D30279364: [codemod][lint][fbcode/c*] Enable BLACK by default
Test Plan: revert-hammer

Differential Revision: D30279364 (b004307252)

Original commit changeset: c1ed77dfe43a

fbshipit-source-id: eab50857675c51e0088391af06ec0ecb14e2347e
2021-08-12 11:45:01 -07:00
b004307252 [codemod][lint][fbcode/c*] Enable BLACK by default
Test Plan: manual inspection & sandcastle

Reviewed By: zertosh

Differential Revision: D30279364

fbshipit-source-id: c1ed77dfe43a3bde358f92737cd5535ae5d13c9a
2021-08-12 10:58:35 -07:00
09eefec627 Clean up some type annotations in android (#49944)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/49944

Upgrades type annotations from Python 2 to Python 3.

Test Plan: Sandcastle tests

Reviewed By: xush6528

Differential Revision: D25717539

fbshipit-source-id: c621e2712e87eaed08cda48eb0fb224f6b0570c9
2021-01-07 15:42:55 -08:00
c40e3f9f98 [android][jni] Support Tensor MemoryFormat in java wrappers (#40785)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/40785

The main goal of this change is to support creating Tensors from a blob specified in NHWC (ChannelsLast) format.

ChannelsLast is supported only for 4-dim tensors; this is enforced on the LibTorch side. I have not added asserts on the Java side, both to avoid duplicate asserts and in case this limitation changes in the future.

Additional changes in `aten/src/ATen/templates/Functions.h`:

`from_blob` first creates an `at::empty({0}, options)` tensor and then sets its Storage with the given sizes and strides.

But since ChannelsLast is only for 4-dim tensors, that initial creation fails because dim==1.

I've added a `zero_sizes()` function that returns `{0, 0, 0, 0}` for ChannelsLast and ChannelsLast3d.
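For illustration, a minimal Java sketch of the intended usage; it assumes the `Tensor.fromBlob` overload that takes a `MemoryFormat` and the `memoryFormat()` accessor (names follow the current pytorch_android API, not text quoted from this PR):

```
import org.pytorch.MemoryFormat;
import org.pytorch.Tensor;

// Create a tensor whose backing blob is laid out as NHWC (ChannelsLast).
// ChannelsLast requires a 4-dim tensor; this is enforced on the LibTorch side.
public class ChannelsLastExample {
  public static void main(String[] args) {
    long[] shape = {1, 3, 4, 4};                 // N, C, H, W
    float[] blob = new float[1 * 3 * 4 * 4];     // values already in NHWC order
    Tensor nhwc = Tensor.fromBlob(blob, shape, MemoryFormat.CHANNELS_LAST);
    System.out.println(nhwc.memoryFormat());     // expected: CHANNELS_LAST
  }
}
```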

Test Plan: Imported from OSS

Reviewed By: dreiss

Differential Revision: D22396244

Pulled By: IvanKobzarev

fbshipit-source-id: 02582d748a554e0f859aefe71cd2c1e321fb8979
2020-09-03 17:01:35 -07:00
f7ba68e1f7 Support IValue string type (#26517)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26517

Support IValue string kind

Added 2 instrumented tests and regenerated test.pt.
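As a quick illustration of the new kind, a hedged Java sketch; method names follow the current pytorch_android API (IValue.from / isString / toStr), and the factory/accessor names at the time of this commit may have differed:

```
import org.pytorch.IValue;

// Wrap and unwrap a java.lang.String as an IValue of string kind.
public class IValueStringExample {
  public static void main(String[] args) {
    IValue value = IValue.from("hello from java");
    if (value.isString()) {
      System.out.println(value.toStr());   // unwraps the String payload
    }
  }
}
```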

# Test plan
Start an Android emulator, then run:
```
cd ./android/
gradle pytorch_android:cAT
```
tests passed

# Nits
Moved the method IValue#getBool() so that the accessors follow the order: bool, long, double, string.

Test Plan: Imported from OSS

Differential Revision: D17513683

Pulled By: IvanKobzarev

fbshipit-source-id: d328f25772b61f54fb6fd3b2afacde3d7372f25c
2019-09-20 17:29:42 -07:00
56245ffe05 Fix python lints for generate_test_torchscripts.py (#25107)
Summary:
Fix lints, checked with flake8
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25107

Reviewed By: zrphercule

Differential Revision: D16991296

Pulled By: IvanKobzarev

fbshipit-source-id: 5b69d716e3c458dc2cfe5b668a390c7272b1c74f
2019-08-23 11:37:23 -07:00
d62bca9792 jni-java wrapper for pytorchScript api (#25084)
Summary:
TL;DR: initial commit of the Android Java/JNI wrapper for the TorchScript C++ API.

The main idea is to provide a Java interface for Android developers to use TorchScript modules.
The Java API tries to mirror the semantics of the C++ and Python TorchScript APIs.

org.pytorch.Module (wrapper of torch::jit::script::Module)
 - static Module load(String path)
 - IValue forward(IValue... inputs)
 - IValue runMethod(String methodName, IValue... inputs)

org.pytorch.Tensor (semantics of at::Tensor)
 - newFloatTensor(long[] dims, float[] data)
 - newFloatTensor(long[] dims, FloatBuffer data)

 - newIntTensor(long[] dims, int[] data)
 - newIntTensor(long[] dims, IntBuffer data)

 - newByteTensor(long[] dims, byte[] data)
 - newByteTensor(long[] dims, ByteBuffer data)

org.pytorch.IValue (semantics of at::IValue)
 - static factory methods to create TorchScript-supported types

Examples of API usage can be found in PytorchInstrumentedTests.java:

```
Module module = Module.load(path);
IValue input = IValue.tensor(Tensor.newByteTensor(new long[]{1}, Tensor.allocateByteBuffer(1)));
IValue output = module.forward(input);
Tensor outputTensor = output.getTensor();
```

Thread safety:
The API is not thread-safe; all synchronization must be done on the caller side.

Mutability:
The org.pytorch.Tensor buffer is a DirectBuffer with native byte order; tensors can be created with static factory methods specifying a DirectBuffer.
At the moment org.pytorch.Tensor does not hold an at::Tensor on the JNI side; it has: long[] dimensions, type, DirectByteBuffer blobData.

Input tensors are mutable (they can be modified and reused for the next inference);
the values in the buffer are read at the moment of the Module#forward or Module#runMethod call.
The buffers of input tensors are used directly by the input at::Tensor.

Output is copied from the output at::Tensor and is immutable.
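A short sketch of this contract using the factory methods listed above; `allocateFloatBuffer` is assumed here by analogy with `allocateByteBuffer` from the example and is not quoted from this PR:

```
import java.nio.FloatBuffer;
import org.pytorch.IValue;
import org.pytorch.Module;
import org.pytorch.Tensor;

// The same DirectBuffer-backed input tensor can be mutated and reused
// between inferences; values are read at the moment of forward().
public class ReuseInputExample {
  static void runTwice(Module module) {
    FloatBuffer data = Tensor.allocateFloatBuffer(4);
    Tensor input = Tensor.newFloatTensor(new long[] {4}, data);
    IValue first = module.forward(IValue.tensor(input));

    data.put(0, 42.0f);                       // mutate the shared buffer
    IValue second = module.forward(IValue.tensor(input));
  }
}
```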

Dependencies:
The JNI level is implemented using the fbjni library, which was developed at Facebook
and has already been used and open-sourced in several open-source projects. It is added
to the repo as a submodule from a personal account so that the submodule can be switched
once fbjni is open-sourced separately.

ghstack-source-id: b39c848359a70d717f2830a15265e4aa122279c0
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25084
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25105

Reviewed By: dreiss

Differential Revision: D16988107

Pulled By: IvanKobzarev

fbshipit-source-id: 41ca7c9869f8370b8504c2ef8a96047cc16516d4
2019-08-23 10:42:44 -07:00