254 Commits

14c1ab049d [Codemod][FBSourceGoogleJavaFormatLinter] Daily arc lint --take GOOGLEJAVAFORMAT
Reviewed By: zertosh

Differential Revision: D20415422

fbshipit-source-id: 860f8dd9dce0a2420792bafb7d3e58bd883ab7e4
2020-03-13 06:27:03 -07:00
c235be42dd [jit] kill script namespace (#34515)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34515

Once upon a time we thought this was necessary. In reality it is not, so
removing it.

For backcompat, our public interface (defined in `api/`) still has
typedefs to the old `script::` names.

There was only one collision: `Pass` as a `Stmt` and `Pass` as a graph
transform. I renamed one of them.
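
A minimal sketch of what such backcompat aliases look like (illustrative, not the exact contents of `api/`):

```cpp
#include <torch/script.h>

namespace torch {
namespace jit {
namespace script {
// Deprecated aliases: code written against the old script:: names keeps
// compiling while new code uses torch::jit:: directly.
using Module = torch::jit::Module;
using CompilationUnit = torch::jit::CompilationUnit;
} // namespace script
} // namespace jit
} // namespace torch
```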

Test Plan: Imported from OSS

Differential Revision: D20353503

Pulled By: suo

fbshipit-source-id: 48bb911ce75120a8c9e0c6fb65262ef775dfba93
2020-03-11 23:32:48 -07:00
7aca9afdfb [pytorch] remove boilerplate setQEngine() from PyTorch mobile predictors (#34556)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34556

According to
https://github.com/pytorch/pytorch/pull/34012#discussion_r388581548,
this `at::globalContext().setQEngine(at::QEngine::QNNPACK);` call isn't
really necessary for mobile.

In Context.cpp, the last available QEngine is selected when the engine isn't
set explicitly. The OSS mobile prebuild should only include the QNNPACK
engine, so the default behavior is already the desired behavior.

It makes a difference only when USE_FBGEMM is set, but that should be off
for both the OSS mobile build and the internal mobile build.
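
A paraphrase of that selection logic (a sketch; names follow Context.cpp loosely, not verbatim):

```cpp
#include <ATen/Context.h>

// Paraphrased from Context.cpp: if setQEngine() was never called, fall
// back to the last engine this build supports. An OSS mobile prebuild
// compiles in only QNNPACK, so that is what the default resolves to.
at::QEngine resolveQEngine(const c10::optional<at::QEngine>& explicitly_set) {
  if (explicitly_set.has_value()) {
    return explicitly_set.value();
  }
  const auto& supported = at::globalContext().supportedQEngines();
  return supported.back();
}
```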

Test Plan: Imported from OSS

Differential Revision: D20374522

Pulled By: ljk53

fbshipit-source-id: d4e437a03c6d4f939edccb5c84f02609633a0698
2020-03-11 00:55:14 -07:00
9a5e9d8cec [pytorch][mobile] change mobile build scripts to build PyTorch by default (#34203)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/34203

Currently, CMake and the mobile build scripts still build libcaffe2 by
default. To build PyTorch mobile, users have to set the environment variable
BUILD_PYTORCH_MOBILE=1 or the CMake option BUILD_CAFFE2_MOBILE=OFF.

PyTorch mobile has been released for a while. It's about time to change
CMake and build scripts to build libtorch by default.

Changed the caffe2 CI job to build libcaffe2 by setting the
BUILD_CAFFE2_MOBILE=1 environment variable. Only found Android CI for
libcaffe2 - did we ever have iOS CI for libcaffe2?

Test Plan: Imported from OSS

Differential Revision: D20267274

Pulled By: ljk53

fbshipit-source-id: 9d997032a599c874d62fbcfc4f5d4fbf8323a12e
2020-03-05 23:40:47 -08:00
dbe850af5b [jit] do the code reorg (#33851)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/33851

Rationale and context described in #33828.

Script to reproduce the move:
https://gist.github.com/suo/16cbefaaeb67ca5a7c6caffd49b7f6e9
ghstack-source-id: 99079645

Test Plan: Make sure CI passes

Reviewed By: jamesr66a

Differential Revision: D20133869

fbshipit-source-id: 390e9241a9c85366d9005c492ac31f10aa96488e
2020-02-27 13:02:51 -08:00
6aecfd1e80 Mobile Backend: NHWC memory layout + XNNPACK integration. (#33722)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/33722

In order to improve CPU performance on floating-point models on mobile, this PR introduces a new CPU backend for mobile that implements the most common mobile operators with NHWC memory layout support through integration with XNNPACK.

XNNPACK itself, and this codepath, are currently only included in the build, but the actual integration is gated with USE_XNNPACK preprocessor guards.  This preprocessor symbol is intentionally not passed on to the compiler, so as to enable this rollout in multiple stages in follow-up PRs.  This changeset will build XNNPACK as part of the build if the identically named USE_XNNPACK CMake variable, which defaults to ON, is enabled, but will not actually expose or enable this code path in any other way.
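
The gating follows the usual preprocessor pattern; a sketch with illustrative function names, not the verbatim source:

```cpp
#include <ATen/ATen.h>

// Illustrative stand-ins for the two code paths.
at::Tensor xnnpack_convolution2d(const at::Tensor& in, const at::Tensor& w);
at::Tensor native_convolution2d(const at::Tensor& in, const at::Tensor& w);

at::Tensor convolution2d(const at::Tensor& input, const at::Tensor& weight) {
#ifdef USE_XNNPACK
  // Only taken once USE_XNNPACK is actually passed to the compiler.
  return xnnpack_convolution2d(input, weight);
#else
  // Until then, the existing native implementation stays in effect.
  return native_convolution2d(input, weight);
#endif
}
```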

Furthermore, it is worth pointing out that in order to efficiently map models to these operators, some front-end method of exposing this backend to the user is needed.  The less efficient implementation would be to hook these operators into their corresponding native implementations, provided that a series of XNNPACK-specific conditions are met, much like how NNPACK is integrated with PyTorch today.

Having said that, while the above implementation is still expected to outperform NNPACK based on the benchmarks I ran, it would leave a considerable gap between the performance achieved and the maximum performance potential XNNPACK enables, as it does not provide a way to compute one-time operations once and factor them out of the innermost forward() loop.

The more optimal solution, and one we will decide on soon, would involve either providing a JIT pass that maps nn operators onto these newly introduced operators while allowing one-time calculations to be factored out, much like quantized mobile models.  Alternatively, new eager-mode modules could be introduced that directly call into these implementations, either through c10 or some other mechanism, also allowing op creation to be decoupled from op execution.

This PR does not include any of the front-end changes mentioned above.  Neither does it include the mobile threadpool unification present in the original https://github.com/pytorch/pytorch/issues/30644.  Furthermore, this codepath seems to be faster than NNPACK in a good number of use cases, which could potentially allow us to remove NNPACK from aten to make the codebase a little simpler, provided there is widespread support for such a move.

Regardless, these changes will be introduced gradually and in a more controlled way in subsequent PRs.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/32509

Test Plan:
Build: CI
Functionality: Not exposed

Reviewed By: dreiss

Differential Revision: D20069796

Pulled By: AshkanAliabadi

fbshipit-source-id: d46c1c91d4bea91979ea5bd46971ced5417d309c
2020-02-24 21:58:56 -08:00
039dc90854 Revert D19521853: [pytorch][PR] Mobile Backend: NHWC memory layout + XNNPACK integration.
Test Plan: revert-hammer

Differential Revision:
D19521853

Original commit changeset: 99a1fab31d0e

fbshipit-source-id: 76dfc1f481797ba2386997533cf19957637687d6
2020-02-23 22:07:19 -08:00
941b42428a Mobile Backend: NHWC memory layout + XNNPACK integration. (#32509)
Summary:
In order to improve CPU performance on floating-point models on mobile, this PR introduces a new CPU backend for mobile that implements the most common mobile operators with NHWC memory layout support through integration with XNNPACK.

XNNPACK itself, and this codepath, are currently only included in the build, but the actual integration is gated with USE_XNNPACK preprocessor guards.  This preprocessor symbol is intentionally not passed on to the compiler, so as to enable this rollout in multiple stages in follow-up PRs.  This changeset will build XNNPACK as part of the build if the identically named USE_XNNPACK CMake variable, which defaults to ON, is enabled, but will not actually expose or enable this code path in any other way.

Furthermore, it is worth pointing out that in order to efficiently map models to these operators, some front-end method of exposing this backend to the user is needed.  The less efficient implementation would be to hook these operators into their corresponding **native** implementations, provided that a series of XNNPACK-specific conditions are met, much like how NNPACK is integrated with PyTorch today.

Having said that, while the above implementation is still expected to outperform NNPACK based on the benchmarks I ran, it would leave a considerable gap between the performance achieved and the maximum performance potential XNNPACK enables, as it does not provide a way to compute one-time operations once and factor them out of the innermost forward() loop.

The more optimal solution, and one we will decide on soon, would involve either providing a JIT pass that maps nn operators onto these newly introduced operators while allowing one-time calculations to be factored out, much like quantized mobile models.  Alternatively, new eager-mode modules could be introduced that directly call into these implementations, either through c10 or some other mechanism, also allowing op creation to be decoupled from op execution.

This PR does not include any of the front-end changes mentioned above.  Neither does it include the mobile threadpool unification present in the original https://github.com/pytorch/pytorch/issues/30644.  Furthermore, this codepath seems to be faster than NNPACK in a good number of use cases, which could potentially allow us to remove NNPACK from aten to make the codebase a little simpler, provided there is widespread support for such a move.

Regardless, these changes will be introduced gradually and in a more controlled way in subsequent PRs.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/32509

Reviewed By: dreiss

Differential Revision: D19521853

Pulled By: AshkanAliabadi

fbshipit-source-id: 99a1fab31d0ece64961df074003bb852c36acaaa
2020-02-23 19:08:42 -08:00
b28a834813 [codemod][lint][fbcode] Apply google-java-format
Test Plan: Sandcastle. Visual inspection.

Reviewed By: scottrice

Differential Revision: D19878711

fbshipit-source-id: be56f70b35825140676be511903e5274d1808f25
2020-02-13 12:14:14 -08:00
bc2e05a398 Update Docs for building PyTorch for Android.
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/32578

Reviewed By: ljk53

Differential Revision: D19588904

Pulled By: dreiss

fbshipit-source-id: 2934752b9c5b94f2f141417669d8385be44d703b
2020-01-30 17:12:03 -08:00
eab99ab08e [android] fbjni DoNotStrip annotation for oss native methods (#32567)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/32567

This is a first change to support ProGuard.
Even if these methods might never be called from Java, we register them at the JNI level, and this registration will fail if the methods are stripped.

Adding DoNotStrip to all native methods that are registered in OSS.

Together with the consumerProguardFiles integration in fbjni, which prevents ProGuard from stripping DoNotStrip methods, this will fix errors with ProGuard on.
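
The failure comes from the JNI-level registration; with fbjni it looks roughly like this (class and method names illustrative, not the verbatim pytorch_jni code):

```cpp
#include <fbjni/fbjni.h>

// Illustrative sketch: the JNI side registers native methods by name.
// If ProGuard strips the matching Java declaration, this registration
// throws at library load time, even for methods Java never calls.
class NativePeerJni : public facebook::jni::HybridClass<NativePeerJni> {
 public:
  static constexpr auto kJavaDescriptor = "Lorg/pytorch/NativePeer;";

  void forward() {}

  static void registerNatives() {
    registerHybrid({
        makeNativeMethod("forward", NativePeerJni::forward),
    });
  }

 private:
  friend HybridBase;
};
```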

Test Plan: Imported from OSS

Differential Revision: D19624684

Pulled By: IvanKobzarev

fbshipit-source-id: cd7d9153e9f8faf31c99583cede4adbf06bab507
2020-01-29 11:52:53 -08:00
e4f43bf7a5 Set rpath for JNI library on Mac (#32247)
Summary:
Without this, dlopen won't look in the proper directory for dependencies
(like libtorch and fbjni).
Pull Request resolved: https://github.com/pytorch/pytorch/pull/32247

Test Plan:
Built libpytorch_jni.dylib on Mac, replaced the one from the libtorch
nightly, and was able to run the Java demo.

Differential Revision: D19501498

Pulled By: dreiss

fbshipit-source-id: 13ffdff9622aa610f905d039f951ee9a3fdc6b23
2020-01-21 11:30:39 -08:00
7e3c438913 Renaming IValue List functions (#32093)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/32093

toGenericListRef -> toListRef
isGenericList -> isList
toGenericList -> toList
toXListRef -> toXVector
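
A small usage sketch of the renamed accessors:

```cpp
#include <ATen/core/ivalue.h>

void consume(const c10::IValue& value) {
  if (value.isList()) {                            // was isGenericList()
    c10::List<c10::IValue> list = value.toList();  // was toGenericList()
    for (size_t i = 0; i < list.size(); ++i) {
      c10::IValue element = list.get(i);           // toListRef() for a ref
      // ...
    }
  }
}
```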

Test Plan: Imported from OSS

Reviewed By: suo

Differential Revision: D19369767

Pulled By: zdevito

fbshipit-source-id: 4f0078f95b83e6586524c03f7bcf206722fdd9ae
2020-01-17 15:17:45 -08:00
104b2c610b Tensor prep from image in native (#31426)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/31426

Tensor conversion from a YUV image is moved to native code, with optimizations to eliminate branching inside the loop, avoid variable declarations in the loop body, and use fewer ops.
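
A sketch of the branch-free inner-loop idea. The fixed-point coefficients are the standard YUV420-to-RGB ones, and `yuv420ToArgb` plus the planar layout are illustrative; this is not the exact kernel from the PR:

```cpp
#include <algorithm>
#include <cstdint>

// All variable declarations hoisted out of the loop; clamping is done via
// min/max instead of if/else branches in the body.
void yuv420ToArgb(const uint8_t* yPlane, const uint8_t* uPlane,
                  const uint8_t* vPlane, int width, int height,
                  int32_t* argb) {
  int y, u, v, r, g, b, ci;
  for (int row = 0; row < height; ++row) {
    for (int col = 0; col < width; ++col) {
      ci = (row / 2) * (width / 2) + (col / 2);  // 2x2-subsampled chroma
      y = yPlane[row * width + col];
      u = uPlane[ci] - 128;
      v = vPlane[ci] - 128;
      r = y + (1436 * v >> 10);             // ~ y + 1.402 * v
      g = y - ((352 * u + 731 * v) >> 10);  // ~ y - 0.344*u - 0.714*v
      b = y + (1815 * u >> 10);             // ~ y + 1.772 * u
      argb[row * width + col] = static_cast<int32_t>(
          0xff000000 | (std::min(255, std::max(0, r)) << 16) |
          (std::min(255, std::max(0, g)) << 8) |
          std::min(255, std::max(0, b)));
    }
  }
}
```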

Perf stats from local devices, measuring conversion of a 320x240 camera image to a 1x3x224x224 tensor.
Legend:
Java - current Java impl
JavaOpt - current Java impl plus the same optimizations (no if/else in the loop, variables declared outside the loop, inlining, etc.)
C - C impl

```
Nexus 5
JavaOpt N:25 avg:119.24 min: 87 max:177 p10:102 p25:105 p50:115 p75:127 p90:150
      C N:25 avg: 17.24 min: 14 max: 39 p10: 14 p25: 15 p50: 15 p75: 16 p90: 23
   Java N:25 avg:139.96 min: 70 max:214 p10: 89 p25:110 p50:139 p75:173 p90:181
avg C vs JavaOpt 6.91x

Pixel 3 XL
JavaOpt N:19 avg: 16.11 min: 12 max: 19 p10: 14 p25: 15 p50: 16 p75: 18 p90: 19
      C N:19 avg:  5.79 min:  3 max: 10 p10:  4 p25:  5 p50:  6 p75:  6 p90:  9
   Java N:19 avg: 16.21 min: 12 max: 20 p10: 14 p25: 15 p50: 16 p75: 18 p90: 20
avg C vs JavaOpt 2.78x

Full build with 4 abis inside:
Pixel 3 XL
JavaOpt N:25 avg: 18.84 min: 16 max: 24 p10: 16 p25: 17 p50: 18 p75: 20 p90: 22
      C N:25 avg:  7.96 min:  5 max: 10 p10:  7 p25:  7 p50:  8 p75:  9 p90:  9
avg C vs JavaOpt 2.36x
```

Test Plan: Imported from OSS

Differential Revision: D19165429

Pulled By: IvanKobzarev

fbshipit-source-id: 3b54e545f6fbecbc5bb43216aca81061e70bd369
2020-01-15 17:10:00 -08:00
de5821d291 Torchscript print to logcat (#31456)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/31456

External request https://discuss.pytorch.org/t/jit-android-debugging-the-model/63950

By default, the TorchScript print function goes to stdout, which is not visible in logcat on Android.
This change propagates it to logcat.
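
A minimal sketch of the redirection; the handler-registration hook is paraphrased, and only `__android_log_print` is taken as-is from the NDK:

```cpp
#include <android/log.h>
#include <string>

// Route TorchScript print() output to logcat instead of stdout.
// A handler like this gets registered with the JIT's print hook.
void logcatPrintHandler(const std::string& str) {
  __android_log_print(ANDROID_LOG_INFO, "pytorch-print", "%s", str.c_str());
}
```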

Test Plan: Imported from OSS

Differential Revision: D19171405

Pulled By: IvanKobzarev

fbshipit-source-id: f9c88fa11d90bb386df9ed722ec9345fc6b25a34
2020-01-15 16:44:56 -08:00
4daa3dedbe Fix IValue.isList
Summary: I think this was wrong before?

Test Plan: Not sure.

Reviewed By: IvanKobzarev

Differential Revision: D19221358

fbshipit-source-id: 27e675cac15dde29e026305f4b4e6cc774e15767
2020-01-07 16:33:36 -08:00
1b4d3d5748 Properly return data from non-contiguous tensors in Java
Summary:
These were returning incorrect data before.  Now we make a contiguous copy
before converting to Java.  Exposing raw data to the user might be faster in
some cases, but it's not clear that it's worth the complexity and code size.
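
A minimal sketch of the copy-before-convert step (illustrative, assuming a float tensor for brevity; not the verbatim JNI code):

```cpp
#include <ATen/ATen.h>
#include <vector>

// contiguous() returns the tensor itself when it is already contiguous and
// a compact copy otherwise, so the flat buffer handed to Java always
// matches the logical element order.
std::vector<float> toJavaBuffer(const at::Tensor& tensor) {
  at::Tensor src = tensor.contiguous();
  const float* data = src.data_ptr<float>();
  return std::vector<float>(data, data + src.numel());
}
```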

Test Plan: New unit test.

Reviewed By: IvanKobzarev

Differential Revision: D19221361

fbshipit-source-id: 22ecdad252c8fd968f833a2be5897c5ae483700c
2020-01-07 16:33:31 -08:00
2d6a2c898c Support tensors with a storage offset in Java (#31584)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/31584

These were returning incorrect data before.
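
For context: a view with a nonzero storage offset shares its base's storage, so reading naively from the start of the storage returns the wrong elements (a small illustration):

```cpp
#include <ATen/ATen.h>

void example() {
  at::Tensor base = at::arange(10);                // storage holds 0..9
  at::Tensor view = base.slice(/*dim=*/0, /*start=*/2, /*end=*/6);
  // view.storage_offset() == 2; the correct data is 2,3,4,5, but reading
  // from the storage's start would yield 0,1,2,3.
  at::Tensor safe = view.contiguous();             // honors the offset
}
```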

Test Plan: New unit test.

Reviewed By: IvanKobzarev

Differential Revision: D19221360

fbshipit-source-id: b3f01de086857027f8e952a1c739f60814a57acd
2020-01-07 16:33:26 -08:00
6d1fa8296b Support tensors with empty shape in Java
Summary: These are valid tensors.

Test Plan: New unit test.

Reviewed By: IvanKobzarev

Differential Revision: D19221362

fbshipit-source-id: fa9af2fc539eb7381627b3d473241a89859ef2ba
2020-01-07 16:33:21 -08:00
346a349111 Update all instances of 1.4.0 -> 1.5.0 (#31785)
Summary:
Done with:

```
❯ sed -i 's/1\.4\.0/1.5.0/g' $(find -type f -not -path "./third_party/*")
```

This was previously done in separate commits, but it would be beneficial to bump all included projects within this repository at the same time.

Old bumps for reference:
* [iOS]Update Cocoapods to 1.4.0: https://github.com/pytorch/pytorch/pull/30326
* [android] Change nightly builds version to 1.4.0-SNAPSHOT: https://github.com/pytorch/pytorch/pull/27381
* Roll master to 1.4.0: https://github.com/pytorch/pytorch/pull/27374

Signed-off-by: Eli Uriegas <eliuriegas@fb.com>
Pull Request resolved: https://github.com/pytorch/pytorch/pull/31785

Differential Revision: D19277925

Pulled By: seemethere

fbshipit-source-id: f72ad082f0566004858c9374879f4b1bee169f9c
2020-01-07 08:00:17 -08:00
492ca46e71 Fix androidTest - exclude host tests from it
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/31522

Test Plan: Imported from OSS

Reviewed By: dreiss

Differential Revision: D19200861

Pulled By: IvanKobzarev

fbshipit-source-id: a6024f3013398f9e0d237e06c984a20493d42f11
2020-01-06 11:29:46 -08:00
c808eed04a Nightly dimension, input shape in gradle (#30195)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30195

1. Added flavorDimensions 'build' local/nightly
to be able to test the latest nightlies

```
cls && gradle clean test_app:installMobNet2QuantNightlyDebug -PABI_FILTERS=x86 --refresh-dependencies && adb shell am start -n org.pytorch.testapp.mobNet2Quant/org.pytorch.testapp.MainActivity
```

 2. To be able to change the whole model setup by editing only `test_app/build.gradle`:
 inlined model asset file names into `build.gradle`

Extracted the input tensor shape into `build.gradle` (BuildConfig)

Test Plan: Imported from OSS

Differential Revision: D18893394

Pulled By: IvanKobzarev

fbshipit-source-id: 1fae9989d6f4b02afb42f8e26d0f3261d7ca929b
2019-12-20 16:08:04 -08:00
3a19980b78 Tensor class created from java does not call native methods
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/31520

Test Plan: Imported from OSS

Reviewed By: iseeyuan

Differential Revision: D19199477

Pulled By: IvanKobzarev

fbshipit-source-id: ba51454586a9385dba4ab73936f907346e0105d1
2019-12-20 14:40:54 -08:00
35b249769d Exclude lite interpreter Java files from OSS host build
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/31204

Test Plan: Imported from OSS

Differential Revision: D19200610

Pulled By: dreiss

fbshipit-source-id: 0cf41c99b4c2604afc2dccfebbea213c0e1f9638
2019-12-20 13:32:27 -08:00
930d0751e6 Java Tensor hybrid, owns at::Tensor, no memcopy for java outputs. (#30501)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30501

**Motivation**:
In the current state, the output of a libtorch Module forward/runMethod is memcopied to a Java ByteBuffer, which is allocated (at least in some versions of Android) on the Java heap. That could lead to intensive garbage collection.

**Change**:
The output Java tensor becomes the owner of the output at::Tensor and keeps it alive (as the `pytorch_jni::TensorHybrid::tensor_` field) until the Java part is destroyed by GC. For that, org.pytorch.Tensor becomes a 'hybrid' class in fbjni naming and holds the member field `HybridData mHybridData;`.

If construction starts from the Java side, the Java constructors of the subclasses call `this.mHybridData = super.initHybrid();` to initialize the C++ part (`at::Tensor tensor_`). (We need all the fields initialized, which is why `mHybridData` is not declared final but is treated as final.)

If construction starts from the C++ side, the C++ part is initialized from the provided at::Tensor with `makeCxxInstance(std::move(tensor))` and passed to the Java method `org.pytorch.Tensor#nativeNewTensor` as the parameter `HybridData hybridData`, which holds a native pointer to the C++ side.

In that case the `initHybrid()` method is not called; instead a parallel set of subclass constructors is used, which stores `hybridData` in `mHybridData`.
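
In fbjni terms, the hybrid pattern looks roughly like this (a sketch, not the verbatim pytorch_jni code; `newFromTensor` is an illustrative name):

```cpp
#include <ATen/ATen.h>
#include <fbjni/fbjni.h>

class TensorHybrid : public facebook::jni::HybridClass<TensorHybrid> {
 public:
  static constexpr auto kJavaDescriptor = "Lorg/pytorch/Tensor;";

  explicit TensorHybrid(at::Tensor tensor) : tensor_(std::move(tensor)) {}

  // C++-initiated path: wrap an output tensor; the returned HybridData is
  // handed to org.pytorch.Tensor#nativeNewTensor on the Java side.
  static facebook::jni::local_ref<jhybriddata> newFromTensor(
      at::Tensor tensor) {
    return makeCxxInstance(std::move(tensor));
  }

 private:
  friend HybridBase;
  // Owned by the Java object: stays alive until the Java Tensor is GC'd,
  // so no memcopy of the output into a Java-heap ByteBuffer is needed.
  at::Tensor tensor_;
};
```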

Renaming:
`JTensor` -> `TensorHybrid`

Removed method:
`JTensor::newAtTensorFromJTensor(JTensor)` becomes trivial `TensorHybrid->cthis()->tensor()`

Test Plan: Imported from OSS

Differential Revision: D18893320

Pulled By: IvanKobzarev

fbshipit-source-id: df94775d2a010a1ad945b339101c89e2b79e0f83
2019-12-15 21:36:20 -08:00
701e05dcbb Buck test targets: Robolectric, instrumentation
Summary:
Buck targets for robolectric and instrumentation tests for pytorch android:
```
buck test fbsource//fbandroid/mode/server //xplat/caffe2/android:test_host
```
```
buck test //xplat/caffe2/android:test_instrumentation
```
For both:
```
buck test fbsource//fbandroid/mode/server //xplat/caffe2/android:pytorch
```

Models in assets:
`pt_android_test_asset` - creates a Buck target that can be included in both Robolectric and instrumentation tests; it contains an asset, created as a separate file from the provided TorchScript sources, using the latest libtorch binaries.

`pt_gen_test_asset_bin` does that tracing; usage format:
```
generate_test_asset input_file.jit output_file.py
```

Example of test-host setup for users of pytorch android:
robolectric tests:

```
load("fbsource//xplat/caffe2:pt_defs.bzl", "pt_android_test_asset", "pt_predictor_binary", "PT_ANDRIOID_TEST_HOST_JNI_DEPS")

pt_android_test_asset(
    name = "test_asset",
    src = "test_asset.jit",
    asset_name = "test_asset.pt",
)

robolectric3_test(
    name = "example_test_host",
    srcs = [...],
    jni_deps = PT_ANDRIOID_TEST_HOST_JNI_DEPS,
    deps = [
        ":pytorch_common",
        ":test_asset",
        "//fbandroid/java/com/facebook/soloader/annotation:annotation",
        "//fbandroid/java/com/facebook/testing/robolectric/v3:v3",
        "//fbandroid/libraries/soloader/java/com/facebook/soloader:soloader",
        "//fbandroid/third-party/java/robolectric3/robolectric:robolectric",
    ],
)
```

COMMON_LINKER_FLAGS = ["-Wl,--no-as-needed"] cannot be applied on macOS

Test Plan:
```
[twsvcscm@od0187.atn1 /data/sandcastle/boxes/fbsource (b416b20a)]$ buck test fbsource//fbandroid/mode/server //xplat/caffe2/android:pytorch
Parsing buck files: finished in 7.2 sec
Creating action graph: finished in 0.7 sec
Building: finished in 11.9 sec (100%) 791/791 jobs, 0 updated
  Total time: 19.9 sec
Testing: finished in 11.0 sec (30 PASS/0 FAIL)
RESULTS FOR //xplat/caffe2/android:test_host //xplat/caffe2/android:test_instrumentation
PASS     159ms 15 Passed   0 Skipped   0 Failed   org.pytorch.PytorchHostTests
PASS     152ms 15 Passed   0 Skipped   0 Failed   org.pytorch.PytorchInstrumentedTests (localhost:31930)
TESTS PASSED
```

OSS changes test:
```
gradle -p android pytorch_android:cAT passes
```

Reviewed By: dreiss

Differential Revision: D18799005

fbshipit-source-id: 881609826a837efebc8526aee40355c5a62947d0
2019-12-14 20:29:52 -08:00
065685180d Loading module from android asset (#30378)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30378

Loading a module directly from Android assets. An iteration on https://github.com/pytorch/pytorch/pull/30109
Loading Module:
```
mModule = AndroidUtils.loadModuleFromAsset(assetName, getAssets());
```

`org.pytorch.AndroidUtils` is excluded from pytorch_jni host build

Testing:
test_app module load switched to this approach and works fine
```
gradle test_app:installMobNet2QuantDebug -PABI_FILTERS=x86 && adb shell am start -n org.pytorch.testapp.mobNet2Quant/org.pytorch.testapp.MainActivity
```

Test Plan: Imported from OSS

Differential Revision: D18893269

Pulled By: IvanKobzarev

fbshipit-source-id: a7c73776f40e9c67bef233da05db56cc6efbe76a
2019-12-14 20:29:37 -08:00
f7c92f60ba Fix typo in filename to align with class name
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/31235

Test Plan: Imported from OSS

Differential Revision: D19001793

Pulled By: IvanKobzarev

fbshipit-source-id: ae7f410be6b3c291f1feb3027b5b4a6b7ce15ab3
2019-12-12 23:16:29 -08:00
db90a5b992 Switch to open sourced fbjni (#30175)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30175

fbjni was open sourced, and its Java part is published as 'com.facebook.fbjni:fbjni-java-only:0.0.3';
switching to it.
We still need the fbjni submodule inside the repo (which already points to https://github.com/facebookincubator/fbjni) for .so linking.

**Packaging changes**:
Before this, `libfbjni.so` came from the pytorch_android_fbjni dependency; since we also linked fbjni in `pytorch_android/CMakeLists.txt`, it was also built in pytorch_android but excluded from publishing. Because we had two copies of libfbjni.so, there was a hack to exclude it when publishing and to resolve the duplication locally.
```
        if (rootProject.isPublishing()) {
            exclude '**/libfbjni.so'
        } else {
            pickFirst '**/libfbjni.so'
        }
```

After this change, libfbjni.so is packaged inside the pytorch_android.aar artifact and we no longer need this Gradle logic.

I will update the README in a separate PR, after landing the previous README PR (https://github.com/pytorch/pytorch/pull/30128), to avoid conflicts.

Test Plan: Imported from OSS

Differential Revision: D18982235

Pulled By: IvanKobzarev

fbshipit-source-id: 5097df2557858e623fa480625819a24a7e8ad840
2019-12-12 20:05:22 -08:00
ca8cb3241a Expose setNumThreads to android api (#31205)
Summary:
PR https://github.com/pytorch/pytorch/pull/31033 was unlanded due to a macOS build failure:
https://app.circleci.com/jobs/github/pytorch/pytorch/3916388

This PR changes `setNumThreads` to be Android-only and moves it to the separate class `org.pytorch.PytorchAndroid` as a static function, which fits better since it has a global effect.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/31205

Reviewed By: dreiss

Differential Revision: D18977250

Pulled By: IvanKobzarev

fbshipit-source-id: 4995859808af498c82933c4db52bd7c7dfae90e5
2019-12-12 18:57:27 -08:00
c0bcfd0445 Revert D18923167: Expose setNumThreads to android api
Test Plan: revert-hammer

Differential Revision:
D18923167

Original commit changeset: 8d98c2edbff4

fbshipit-source-id: 7db37cff298c511d0dd9eb373811c769e4a73be9
2019-12-12 09:23:58 -08:00
6225443009 Expose setNumThreads to android api (#31033)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/31033

Intention:
There are requests from users to control the number of threads from the Android side:
https://discuss.pytorch.org/t/android-pytorch-forward-method-running-in-a-separate-thread-slow-down-ui-thread/63516/2
https://discuss.pytorch.org/t/threading-of-model-pytorch-android/62490/2

At the moment `setNumThreads` is placed in `org.pytorch.Module`, but this method changes the global thread pool size; in the future we will move it to a separate class to mirror the Python binding structure, which has torch.set_num_threads().
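
On the native side this boils down to the global ATen thread-pool knob, the same one `torch.set_num_threads()` turns in Python (a minimal sketch):

```cpp
#include <ATen/Parallel.h>

// Global effect: resizes the intra-op thread pool for the whole process,
// which is why a static method on a separate class fits better than an
// instance method on Module.
void setNumThreads(int numThreads) {
  at::set_num_threads(numThreads);
}
```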

Test Plan: Imported from OSS

Differential Revision: D18923167

Pulled By: IvanKobzarev

fbshipit-source-id: 8d98c2edbff42e9b673509672dce3f2dd03a923e
2019-12-11 14:20:14 -08:00
38986e1dea Split libtorch.so back into libtorch_{cpu,cuda,hip} (#30315)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30315

The new structure is that libtorch_cpu contains the bulk of our
code, and libtorch depends on libtorch_cpu and libtorch_cuda.
This is a reland of https://github.com/pytorch/pytorch/pull/29731 but
I've extracted all of the prep work into separate PRs which can be
landed before this one.

Some things of note:

* torch/csrc/cuda/nccl.cpp was added to the wrong list of SRCS, now fixed (this didn't matter before because previously they were all in the same library)
* The dummy file for libtorch was brought back from the dead; it was previously deleted in #20774
* In an initial version of the patch, I forgot to make torch_cuda explicitly depend on torch_cpu. This led to some very odd errors, most notably "bin/blob_test: hidden symbol `_ZNK6google8protobuf5Arena17OnArenaAllocationEPKSt9type_infom' in lib/libprotobuf.a(arena.cc.o) is referenced by DSO"
* A number of places in Android/iOS builds have to add torch_cuda explicitly as a library, as they do not have transitive dependency calculation working correctly
* I had to make torch_cpu/torch_cuda caffe2_interface_library targets so that they get whole-archive linked into torch when you statically link. And I had to do this in an *exported* fashion because torch needs to depend on torch_cpu_library. In the end I exported everything and removed the redefinition in the Caffe2Config.cmake. I am not too sure why the old code did it that way in the first place, but switching it doesn't seem to have broken anything.
* There are some uses of `__HIP_PLATFORM_HCC__` still in `torch_cpu` code, so I had to apply it to that library too (UGH). This manifests as a failure when trying to run the CUDA fuser. It doesn't really matter substantively right now because we still in-place HIPify, but it would be good to fix eventually. This was a bit difficult to debug because of an unrelated HIP bug, see https://github.com/ROCm-Developer-Tools/HIP/issues/1706

Fixes #27215 (as our libraries are smaller), and executes on
part of the plan in #29235.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Test Plan: Imported from OSS

Differential Revision: D18790941

Pulled By: ezyang

fbshipit-source-id: 01296f6089d3de5e8365251b490c51e694f2d6c7
2019-12-04 08:04:57 -08:00
bc2e6d10fa Back out "Revert D17908478: Switch PyTorch/Caffe2 to C++14"
Summary: Original commit changeset: 775d2e29be0b

Test Plan: CI

Reviewed By: mruberry

Differential Revision: D18775520

fbshipit-source-id: a350b3f86b66d97241f208786ee67e9a51172eac
2019-12-03 14:33:43 -08:00
a2ed50c920 Revert D17908478: Switch PyTorch/Caffe2 to C++14
Test Plan: revert-hammer

Differential Revision:
D17908478

Original commit changeset: 6e340024591e

fbshipit-source-id: 775d2e29be0bc3a0db64f164c8960c44d4877d5d
2019-11-27 14:57:05 -08:00
d0acc9c085 Switch PyTorch/Caffe2 to C++14 (#30406)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30406

ghstack-source-id: 94642238

Test Plan: waitforsandcastle

Differential Revision: D17908478

fbshipit-source-id: 6e340024591ec2c69521668022999df4a33b4ddb
2019-11-27 10:47:31 -08:00
5ada5363fc GenericDict/List type use unshapedType() (#30428)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30428

Reported issue https://discuss.pytorch.org/t/incomprehensible-behaviour/61710

Steps to reproduce:

```
class WrapRPN(nn.Module):
    def __init__(self):
        super().__init__()

    def forward(self, features):
        # type: (Dict[str, Tensor]) -> int
        return 0
```

```
#include <torch/script.h>

int main() {
  torch::jit::script::Module module = torch::jit::load("dict_str_tensor.pt");

  torch::Tensor tensor = torch::rand({2, 3});
  at::IValue ivalue{tensor};
  c10::impl::GenericDict dict{c10::StringType::get(),ivalue.type()};
  dict.insert("key", ivalue);
  module.forward({dict});
}
```

The ValueType of `c10::impl::GenericDict` is taken from the first specified element, as `ivalue.type()`.
It then fails the type check `!value.type()->isSubtypeOf(argument.type())` in `function_schema_inl.h`,
because `DictType::isSubtypeOf` requires equal KeyType and ValueType, while the `TensorType`s differ.

Fix:
Use c10::unshapedType when creating a generic List/Dict.
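
A sketch of the corrected construction from the repro above; only the value type passed to the dict changes:

```cpp
// Erase the shape-specialized TensorType so the dict's value type matches
// the schema's unshaped Tensor type during the isSubtypeOf() check.
c10::impl::GenericDict dict{
    c10::StringType::get(), c10::unshapedType(ivalue.type())};
dict.insert("key", ivalue);
```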

Test Plan: Imported from OSS

Differential Revision: D18717189

Pulled By: IvanKobzarev

fbshipit-source-id: 1e352a9c776a7f7e69fd5b9ece558f1d1849ea57
2019-11-26 17:38:36 -08:00
e9cc4a5942 Add @DoNotStrip to nativeNewTensor method. (#30472)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30472

Add DoNotStrip to nativeNewTensor method.
ghstack-source-id: 94596624

Test Plan:
Triggered build on diff for automation_fbandroid_fallback_release.

buck install -r fb4a

Tested BI cloaking using pytext lite interpreter.

Observe that logs are sent to the Scuba table:

{F223408345}

Reviewed By: linbinyu

Differential Revision: D18709087

fbshipit-source-id: 74fa7a0665640c294811a50913a60ef8d6b9b672
2019-11-26 12:16:33 -08:00
ab5774547a Add info about transitive dependencies in case of using local aars (#30128)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30128

Preview: https://github.com/pytorch/pytorch/tree/gh/IvanKobzarev/23/head/android

Based on users issue: https://discuss.pytorch.org/t/android-somethings-went-wrong-with-pytorch-android-1-4-0-snapshot/61009/3

Test Plan: Imported from OSS

Differential Revision: D18702658

Pulled By: IvanKobzarev

fbshipit-source-id: 14928baccd58ddbe633fad03038271d8333c4b49
2019-11-26 06:53:40 -08:00
20dfae4099 Fix crashes when C++ cannot find a Java class through JNI (#30390)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30390

Fix crashes when C++ cannot find a Java class through JNI.
ghstack-source-id: 94499644

Test Plan: buck install -r fb4a

Reviewed By: ljk53

Differential Revision: D18667992

fbshipit-source-id: aa1b19c6dae39d46440f4a3e691054f7f8b1d42e
2019-11-25 14:51:23 -08:00
90cb1e67ff Fix exception message in Java Tensor
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/30205

Test Plan: Imported from OSS

Reviewed By: linbinyu

Differential Revision: D18653568

Pulled By: dreiss

fbshipit-source-id: a5fcb809eba641a7fbd0e99e835eceeb248e680c
2019-11-22 12:04:49 -08:00
f5ef3a6fb6 disable JIT optimizer in Android wrapper for mobile custom build (#30285)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30285

PR #30144 introduced a custom build script to tailor the build to specific
models. It requires a list of all potentially used ops at build time.

Some JIT optimization passes can transform the IR by replacing operators;
e.g., the decompose pass can replace aten::addmm with aten::mm if the
coefficients are 1.

Disabling these optimization passes ensures that the list of ops we dump
from the model is the list of ops that are actually needed.

Test Plan: - rerun the test on PR #30144 to verify the raw list without aten::mm works.

Differential Revision: D18652777

Pulled By: ljk53

fbshipit-source-id: 084751cb9a9ee16d8df7e743e9e5782ffd8bc4e3
2019-11-22 00:25:04 -08:00
e5fc86130a Remove unnecessary linker flags from JNI host build (#30206)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30206

- --whole-archive isn't needed because we link libtorch as a dynamic
  dependency, rather than a static one.
- --gc-sections isn't necessary because most (all?) of the code in our
  JNI library is used (and we're not statically linking libtorch).
  Removing this one is useful because it's not supported by lld.

Test Plan:
Built on Linux.  Library size was unchanged.
Upcoming diff enables Mac JNI build.

Differential Revision: D18653500

Pulled By: dreiss

fbshipit-source-id: 49ce46fb86a775186f803ada50445b4b2acb54a8
2019-11-21 20:10:06 -08:00
352731bd6e Revert D18632773: Split libtorch.so back into libtorch_{cpu,cuda,hip}
Test Plan: revert-hammer

Differential Revision:
D18632773

Original commit changeset: ea717c81e0d7

fbshipit-source-id: 18601439f9f81c9f389020e5a0e4e04adb21772d
2019-11-21 15:01:09 -08:00
ec30d9028a Split libtorch.so back into libtorch_{cpu,cuda,hip} (#29731)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29731

The new structure is that libtorch_cpu contains the bulk of our
code, and libtorch depends on libtorch_cpu and libtorch_cuda.

Some subtleties about the patch:
- There were a few functions that crossed CPU-CUDA boundary without API macros. I just added them, easy enough. An inverse situation was aten/src/THC/THCTensorRandom.cu where we weren't supposed to put API macros directly in a cpp file.
- DispatchStub wasn't getting all of its symbols related to static members on DispatchStub exported properly. I tried a few fixes but in the end I just moved everyone off using DispatchStub to dispatch CUDA/HIP (so they just use normal dispatch for those cases.) Additionally, there were some mistakes where people incorrectly were failing to actually import the declaration of the dispatch stub, so added includes for those cases.
- torch/csrc/cuda/nccl.cpp was added to the wrong list of SRCS, now fixed (this didn't matter before because previously they were all in the same library)
- The dummy file for libtorch was brought back from the dead; it was previously deleted in #20774
- In an initial version of the patch, I forgot to make torch_cuda explicitly depend on torch_cpu. This led to some very odd errors, most notably "bin/blob_test: hidden symbol `_ZNK6google8protobuf5Arena17OnArenaAllocationEPKSt9type_infom' in lib/libprotobuf.a(arena.cc.o) is referenced by DSO"
- A number of places in Android/iOS builds have to add torch_cuda explicitly as a library, as they do not have transitive dependency calculation working correctly. This situation also happens with custom C++ extensions.
- There's a ROCm compiler bug where extern "C" on functions is not respected. There's a little workaround to handle this.
- Because I was too lazy to check if HIPify was converting TORCH_CUDA_API into TORCH_HIP_API, I just made it so HIP build also triggers the TORCH_CUDA_API macro. Eventually, we should translate and keep the nature of TORCH_CUDA_API constant in all cases.

Fixes #27215 (as our libraries are smaller), and executes on
part of the plan in #29235.

Signed-off-by: Edward Z. Yang <ezyang@fb.com>

Test Plan: Imported from OSS

Differential Revision: D18632773

Pulled By: ezyang

fbshipit-source-id: ea717c81e0d7554ede1dc404108603455a81da82
2019-11-21 11:27:33 -08:00
fd74a19aa4 apply clang format -i (#30180)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30180

Just applying `clang-format -i` to not mix it with other changes

Test Plan: Imported from OSS

Differential Revision: D18627473

Pulled By: IvanKobzarev

fbshipit-source-id: ed341e356fea31b8515de29d5ea2ede07e8b66a2
2019-11-20 16:46:43 -08:00
1b26e3ff6d Make the fbjni Gradle build obey the ABI_FILTERS parameter
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/30135

Test Plan: Imported from OSS

Differential Revision: D18610031

Pulled By: IvanKobzarev

fbshipit-source-id: 7dd8240b71e9f6d77f723243991cd1b5c9984df6
2019-11-19 20:09:48 -08:00
8e3486de81 No debug symbols in release Android builds (#30123)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30123

In Groovy, any non-empty string, including `'false'`, evaluates as boolean `true`.

That's why, even with this in `gradle.properties`:
```
nativeLibsDoNotStrip=false
```
the branch `if (nativeLibsDoNotStrip)` was always taken.

Test Plan: Imported from OSS

Differential Revision: D18606907

Pulled By: IvanKobzarev

fbshipit-source-id: c10140e775624294c732e78ae3c41e05c7c9ad92
2019-11-19 16:44:56 -08:00
26dabad5a4 Add LiteModule java class for lite interpreter. (#30061)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30061
Create the INativePeer interface and move the NativePeer class out of Module.java. Create LiteModuleLoader and LiteNativePeer.java for the lite interpreter binding.
ghstack-source-id: 94169187

Reviewed By: dreiss

Differential Revision: D18511688

fbshipit-source-id: 1a69c94b28c8a02631f53079ca7ddcaa57eca38f
2019-11-18 19:53:20 -08:00
4f94aed8a3 Reformatting module class. (#29957)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/29957

Reformatting module class.
ghstack-source-id: 94058645

Test Plan: buck build xplat/caffe2/android:pytorch

Reviewed By: iseeyuan

Differential Revision: D18548185

fbshipit-source-id: 8c1f5cbf491d42915e091e6245b4f308eb162f93
2019-11-18 18:39:29 -08:00