Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/54042
Pull Request resolved: https://github.com/pytorch/pytorch/pull/53881
1. Fix the position_weighted optimizer: the position-weighted layer uses the default optimizer, but its gradient is actually a GradientSlice, which causes problems if we do not handle it properly in the new optimizer. The solution is to use SparseAdagrad when the gradient is a GradientSlice.
2. Optimizer implementations v1 and v2: use first momentum with/without bias correction.
3. Also implement decoupled weight decay in the new optimizer.
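A minimal sketch of the dispatch in (1), using Caffe2's `core.GradientSlice` and the public `Adagrad`/`SparseAdagrad` op schemas; the helper name is hypothetical and not part of this diff:
```
# Hypothetical helper illustrating fix (1): fall back to SparseAdagrad
# whenever the parameter's gradient is a GradientSlice (as for the
# position_weighted layer), instead of the dense default update.
from caffe2.python import core

def apply_adagrad_update(net, param, moment, grad, lr, epsilon=1e-8):
    if isinstance(grad, core.GradientSlice):
        # Sparse path: only the rows referenced by grad.indices are updated.
        net.SparseAdagrad(
            [param, moment, grad.indices, grad.values, lr],
            [param, moment],
            epsilon=epsilon,
        )
    else:
        net.Adagrad([param, moment, grad, lr], [param, moment], epsilon=epsilon)
```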
Test Plan:
buck test //caffe2/caffe2/fb/dper/layer_models/tests/split_1:sparse_nn_test_2 -- test_mlp_optimization
buck test //caffe2/caffe2/python:optimizer_test -- TestDecayAdagrad
buck test //caffe2/caffe2/python/operator_test:decay_adagrad_test
ctr_mbl_feed workflow: f255731660
oc workflow: f255739503
Reviewed By: 0x10cxR1
Differential Revision: D26839668
fbshipit-source-id: 2b6881c1a88540ef5766be40f5e80001257e2199
Summary:
The `2to3` tool has a `future` fixer you can target specifically to remove these; the `caffe2` directory has the most redundant imports:
```2to3 -f future -w caffe2```
Pull Request resolved: https://github.com/pytorch/pytorch/pull/45033
Reviewed By: seemethere
Differential Revision: D23808648
Pulled By: bugra
fbshipit-source-id: 38971900f0fe43ab44a9168e57f2307580d36a38
Summary: Use the newly added counter op in sparse adagrad
Reviewed By: chocjy, ellie-wen
Differential Revision: D19221100
fbshipit-source-id: d939d83e3b5b3179f57194be2e8864d0fbbee2c1
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/36399
Added a Caffe2 Python wrapper and unit test for the STORM C++ operator.
Test Plan:
All newly added unit tests passed using "buck test //caffe2/caffe2/python:optimizer_test -- TestStorm"
{F233644598}
Reviewed By: chocjy
Differential Revision: D18841013
fbshipit-source-id: f692bc18412839db140202ec9a971e556db0e54f
Summary: We added a Caffe2 Python wrapper and unit test for the SparseRAdam C++ operator.
Test Plan:
Unit test is constructed following the design pattern of [Wngrad optimizer](https://our.intern.facebook.com/intern/diff/D8655724/). Test passed smoothly.
buck test //caffe2/caffe2/python:optimizer_test -- TestSparseRAdam
Test result:
{F221144048}
Reviewed By: wx1988
Differential Revision: D18330650
fbshipit-source-id: e0f4724c2b616b665e2a0fe2e5c3430696cca7ee
Summary:
The goal of this PR is to unify the CUDA and HIP device types in the Caffe2 Python front end.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14221
Differential Revision: D13148564
Pulled By: bddppq
fbshipit-source-id: ef9bd2c7d238200165f217097ac5727e686d887b
* [GanH]: two_task_discriminator
as titled
and adds label smoothing
* [Dper2] Simplified UI options needed for blob magnitude visualization
* [GanH]: fix tags
as titled
* Added type and shape inference for GatherRange operator
This helps with type / shape inference when using this operator in layers.
Also just a nice-to-have in general.
* Demonstrate Caffe2 exception handling with StoreHandlerTimeoutError in Python
We'd like to catch and recover from certain Caffe2 net exceptions. Use this diff to demonstrate a pattern of registering a pybind exception mapping and catching in Python using caffe2::StoreHandlerTimeoutException.
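A hedged sketch of the catch-and-recover pattern described above; the exception class name follows the title (StoreHandlerTimeoutError), and it is matched by name here so the snippet does not assume a particular import path for the binding:
```
# Hedged sketch of catch-and-recover around a Caffe2 net run; the exception
# name follows the title above and is matched by name, not imported.
from caffe2.python import workspace

def run_net_with_retry(net_name, retries=3):
    for attempt in range(retries):
        try:
            workspace.RunNet(net_name)
            return
        except Exception as e:
            if type(e).__name__ != "StoreHandlerTimeoutError" or attempt == retries - 1:
                raise
            # Timed out waiting on the store handler: retry the rendezvous.
```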
* Bind Gloo IoException to IoError in Python
Allow peer failure handling and recovery using an exception based mechanism. This diff registers gloo::IoException with pybind.
* [GanH]: add label smoothing to softmax with loss
as titled
* [C2] Enable LARS in Adagrad and hook it to DPER
* [DPER] Don't pass LayerModelHelper in create_trainer_nodes
Since we're planning to get rid of it eventually and I want to get access to
NetDef only interface ASAP - I'm looking towards removing all references to
LMH, where we don't really need them.
* fix bugs in LambdaRankNdcgOp
The loss and gradient in LambdaRankNdcgOp are incorrect: the loss should be the negative log of the probabilities instead of the log.
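An illustrative NumPy restatement of the sign fix (not the operator code itself):
```
# p_ij = sigmoid(s_i - s_j); loss_ij = -|dNDCG_ij| * log(p_ij), i.e. the
# loss is the *negative* log of the probability and is therefore >= 0.
import numpy as np

def pairwise_lambdarank_loss(score_diff, dndcg):
    prob = 1.0 / (1.0 + np.exp(-score_diff))
    return -np.abs(dndcg) * np.log(prob)
```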
* Restrict thread pool on iOS to only big cores
Historically, iPhones exposed only one type of core, and the Caffe2 thread pool used all of them.
However, the iPhone 8 / iPhone X exposes 2 big + 4 LITTLE cores. As our thread pool doesn't support work stealing or other forms of load balancing, the fast cores end up waiting for the slow ones, so it may be better to restrict execution to only the 2 fast cores, as we do on Android.
* Remove SparseLength Sum/WeightedSum/Mean operators with fp16 engine
* make clang happy and get fewer warnings
* [Personalization] Support add_output_schema() in layer_model_helper
Problem:
Currently the output_schema of sparse_nn can only be set once. https://fburl.com/efth5zer.
Solution:
For flexibility, we want to add fields to output_schema incrementally.
Plan:
Wrap the change of `model._output_schema` into a new function `add_output_schema()` for adding additional output_schema.
Callsite:
The add_output_schema() should be called instead at https://fburl.com/efth5zer
Reference:
The newly added `add_output_schema()` will be similar to `add_loss()` in https://fburl.com/t2ii8njh
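A hedged sketch of the proposed helper (to live in layer_model_helper.py); the name follows the plan above, and the exact implementation may differ:
```
# Merge a new field into the existing output schema instead of overwriting
# it, so output_schema can be extended incrementally.
from caffe2.python import schema

def add_output_schema(self, name, value):
    assert isinstance(value, schema.Field), "value must be a schema.Field"
    self._output_schema = self._output_schema + schema.Struct((name, value))
```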
* [C2] Implement Layer-wise Adaptive Rate Scaling (LARS)
* [C2] Implement Layer-wise Adaptive Rate Scaling (LARS)
* add unit test for Lars
* set default value for lars to be None
* remove lars for subclasses of SgdOptimizer
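A NumPy sketch of the LARS local-learning-rate idea from the items above; this is the formula from the LARS paper, and the Caffe2 Lars op may parameterize the rescale factor differently:
```
# local_lr = ||w|| / (||dw|| + offset * ||w||), computed per layer/parameter.
import numpy as np

def lars_local_lr(w, dw, offset=0.002, eps=1e-9):
    w_norm = np.linalg.norm(w)
    g_norm = np.linalg.norm(dw)
    return w_norm / (g_norm + offset * w_norm + eps)
```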
Summary:
Add `RmsPropOptimizer` to `optimizer.py` so RMSProp can be used as an optimizer.
`RmsPropOptimizer` uses `RmsPropOp` to update the gradient and `MomentumSGDUpdateOp` to update the model parameters.
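A hedged sketch of the two-op pattern described above; blob names are illustrative and the input/output orderings follow the public op schemas, not necessarily the optimizer's internal code:
```
from caffe2.python import core

net = core.Net("rmsprop_update")
grad, ms, mom, lr, param = "grad", "mean_squares", "momentum", "lr", "param"

# RmsProp rescales the gradient using a running average of squared gradients.
net.RmsProp([grad, ms, mom, lr], [grad, ms, mom],
            decay=0.9, momentum=0.0, epsilon=1e-5)
# MomentumSGDUpdate then applies the adjusted step to the parameters in place.
net.MomentumSGDUpdate([grad, mom, lr, param], [grad, mom, param])
```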
Differential Revision: D6118279
fbshipit-source-id: e38b8380ff74c1d1bb1e87fc300b6b55e32cd2e0
Summary: Add support for SparseMomentumSGDUpdate and tests for momentum SGD in both dense and sparse cases
Reviewed By: akyrola
Differential Revision: D6234834
fbshipit-source-id: 9848c29ea06794ef35f1ebaff0f5e81eac4f4db9
Summary: According to GitHub issue #1168, the agreement between the Caffe2 and NumPy YellowFin models in the tests is not good enough in some environments. Results were very close on my machine, but GitHub's Travis failed on some tests, which I later disabled. Therefore the difference doesn't come from logical differences but from loss of precision on some machines. It is safe to disable the equivalency test since equivalency was already tested once.
Reviewed By: akyrola
Differential Revision: D5777049
fbshipit-source-id: c249a205d94b52c3928c37481f15227d500aafd0
Summary:
Replaced std::copysign(x) with (x > 0 ? 1 : -1).
std::copysign is not available on some Android platforms, which was detected in GitHub's Travis tests:
"/home/travis/build/caffe2/caffe2/caffe2/sgd/yellowfin_op.cc:57:23: error: 'copysign' is not a member of 'std'"
Reviewed By: akyrola
Differential Revision: D5756384
fbshipit-source-id: 56bc220d2c6216ff45b9cc47ed02aebf6ad439a5
Summary: Disabling the YellowFin test that does not pass in Travis. The difference comes from numerical precision; the test passes on my CPU / math libraries. Decide whether to merge it.
Reviewed By: Yangqing
Differential Revision: D5754144
fbshipit-source-id: b6ed6628f962d6904a8d522f0cf4080d7878acad
Summary:
Added YellowFin optimizer to Caffe2.
This implementation is different from the original: it has separate alpha and mu for each parameter and uses a different version of momentum SGD.
Tests / benchmarks for the optimizer are still to be done, and some refactoring is needed before pushing. This is nevertheless a working version.
Reviewed By: akyrola
Differential Revision: D5652689
fbshipit-source-id: c10dc0424f47c3051b454aede1d121902cb759a8
Summary: While there is currently support for scaling the base learning rate when loading the model, there is no support for scaling the base learning rate during training. This is needed for LATTE's seq2seq translation models, as the learning schedule is not predefined and is modified at runtime.
Reviewed By: jhcross
Differential Revision: D5701391
fbshipit-source-id: ae3bec45f238db1a2be7af9c04d720067e9095d5
Summary: Moved the code for global-norm-based gradient clipping from fb-specific workflows (seq2seq) to the open-source Caffe2 optimizer library
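A NumPy restatement of global-norm clipping for reference; the diff implements the same logic with Caffe2 ops inside the optimizer library:
```
import numpy as np

def clip_by_global_norm(grads, clip_norm):
    # Scale all gradients by clip_norm / global_norm when the global norm
    # exceeds clip_norm; otherwise leave them unchanged (scale == 1).
    global_norm = np.sqrt(sum(float(np.sum(g * g)) for g in grads))
    scale = clip_norm / max(global_norm, clip_norm)
    return [g * scale for g in grads]
```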
Reviewed By: jhcross
Differential Revision: D5637453
fbshipit-source-id: 7e73c9a1c97c28a152c188467b27a6449f79242e
Summary:
This fixes the test by querying how many optimizer instances have already been created.
Because OSS tests don't run in isolation, the number of previously created optimizer instances may already be nonzero.
Reviewed By: akyrola
Differential Revision: D5462433
Tags: easy
fbshipit-source-id: 7a9ab4fe5345f5d5138abb461ba7a990d9ace840
Summary:
Fix case when optimizer isn't called within a device scope context.
Fix OptimizerContext lr blob names
Reviewed By: volkhin
Differential Revision: D5421046
fbshipit-source-id: 186a0d05f40d4442c5ba5736084626da73a0c0f1
Summary: This diff adds an optimizer field to param_info, together with the associated implementation in modelhelper and brew, to set an optimizer for each individual parameter.
Reviewed By: kennyhorror
Differential Revision: D5385432
fbshipit-source-id: 5d682f9d1ab077e04a5d76a24d71470f4e64fc92
Summary:
Add add_weight_decay to optimizer + test.
In D5142973 I accidentally removed weight decay from resnet50 trainer, so this restores it.
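A hedged usage sketch of the restored helper, with illustrative hyperparameters; the call is made alongside building the optimizer, as in the resnet50 trainer:
```
from caffe2.python import model_helper, optimizer

model = model_helper.ModelHelper(name="resnet_like_trainer")
# ... build the forward/backward nets here ...
optimizer.add_weight_decay(model, 1e-4)   # restores the dropped weight decay
optimizer.build_sgd(model, base_learning_rate=0.1, momentum=0.9,
                    policy="step", stepsize=1, gamma=0.999)
```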
Reviewed By: asaadaldien
Differential Revision: D5173594
fbshipit-source-id: c736d8955eddff151632ae6be11afde0883f7531
Summary:
Adds support for generating and training pfp16 models. Added SGD optimizer for multi-precision trainers and a new callback to data_parallel_model in order to help multi-precision models keep their different copies of parameters in sync during training.
Closes https://github.com/caffe2/caffe2/pull/697
Differential Revision: D5159712
Pulled By: salexspb
fbshipit-source-id: 60a889494d2e2f4df1d720331e19f638c5eb95cc
Summary:
hankun is using the optimizer but has a mixed set of GPU and CPU operators. Currently this won't work with the optimizer, since it adds optimizers for all parameters in the current device scope. But we can actually infer the device a param belongs to by looking at the device option in the param_init_net.
Added a test as well.
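A minimal sketch of the inference idea (the actual change is in the optimizer code; the helper name here is illustrative): find the op in param_init_net that produces the parameter and reuse its device_option.
```
def infer_param_device(param_name, param_init_net):
    # Look for the initializer op whose output is this parameter and reuse
    # its device_option; fall back to the current device scope otherwise.
    for op in param_init_net.Proto().op:
        if str(param_name) in op.output:
            return op.device_option
    return None
```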
Reviewed By: salexspb
Differential Revision: D5133652
fbshipit-source-id: ad8689d75ac1f5c78981bae1b6978fe91e40ef0f
Summary:
1. Adds a function to return auxiliary parameters for each optimizer. This function can be used to serialize the optimizers so that they can be recovered.
2. Fixes the bug that the iteration blob is not incremented by exactly one in each iteration: with k parameters using the Adam learning-rate optimizer, the original implementation incremented the iteration blob by k.
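A hedged sketch of the shared-iteration idea behind fix (2), not the actual diff: all parameters share a single iteration blob incremented by one Iter op per step, instead of each parameter bumping it separately.
```
from caffe2.python import core

def get_or_create_iteration_blob(model):
    # `model` is assumed to be a ModelHelper-like object with
    # param_init_net and net attributes.
    iteration = core.BlobReference("optimizer_iteration")
    if not model.param_init_net.BlobIsDefined(iteration):
        model.param_init_net.ConstantFill(
            [], iteration, shape=[1], value=0, dtype=core.DataType.INT64
        )
        # A single Iter op => +1 per iteration regardless of #params.
        model.net.Iter([iteration], [iteration])
    return iteration
```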
Reviewed By: azzolini
Differential Revision: D4872397
fbshipit-source-id: d86711feedda2ba83af5f2a18141b06a6a473733
Summary:
These are all essentially no-op changes which allow for nose-style (or pytest-style) test discovery.
With this patch, you can use any of these methods to discover and run tests under `caffe2/python`:
```
python -m unittest discover -p '*test*.py' caffe2/python/
python -m nose caffe2/python/
python -m pytest caffe2/python/
```
Future work:
* Get all of the tests to pass
* Some seem to be testing operations which don't have GPU implementations
* I get a segfault unless I set `CUDA_VISIBLE_DEVICES=0`
* Some tests are flaky
* Allow test discovery throughout the whole project (e.g. the `experiments/` dir)
Closes https://github.com/caffe2/caffe2/pull/199
Reviewed By: pietern
Differential Revision: D4704504
Pulled By: Yangqing
fbshipit-source-id: 8f5687ec9c8aa873dfaff30dbf44272bc38a206b
Summary:
The current optimizer code in c2/python has the following issues:
(1) the optimizers in sgd.py cannot be configured per param blob;
(2) sgd.py is a bad file name; optimizer.py is a better name;
(3) layer_model_helper.py has another set of optimizer code (which does support per-param-blob optimizers).
This diff does the following:
(1) creates optimizer objects so that we can configure a per-param-blob optimizer; these are also compatible with the existing optimizer code;
(2) makes the new optimizer code much more modular;
(3) moves the optimizer code to a file with a better name (optimizer.py);
(4) replaces the optimizer imports in the existing code.
To do in next diffs:
(1) optimizers with structured parameters for dper2
(2) get rid of the optimizer code in layer_model_helper.py
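A hedged illustration of the per-param-blob configuration this refactor enables; constructor arguments follow optimizer.py, but how the objects are attached to individual params (e.g. in the layer code) is elided here:
```
from caffe2.python.optimizer import AdagradOptimizer, SgdOptimizer

# Separate optimizer objects that can be assigned to different param blobs,
# e.g. a dense SGD update and a sparse Adagrad update.
dense_optim = SgdOptimizer(base_learning_rate=0.01)
sparse_optim = AdagradOptimizer(alpha=0.05, epsilon=1e-4)
```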
Reviewed By: salexspb
Differential Revision: D4609013
fbshipit-source-id: 2e2d6dfa8685d10498f89069157453d9feca3f27