Mirror of https://github.com/pytorch/pytorch.git, synced 2025-10-21 13:44:15 +08:00
Use intrusive_ptr in Storage; replace unique_ptr<Storage> with Storage (#10488)
Summary:
```
Use intrusive_ptr in Storage; replace unique_ptr<Storage> with Storage

This patch makes two major changes:

- It replaces the use of Retainable in Storage with a new implementation
  based on intrusive_ptr. This is necessary because Caffe2 will be using
  this class to implement intrusive_ptrs, and we need to line these up for
  the merge. One good thing about the new implementation is that the
  default copy/move constructors/assignment operators and destructor work
  automatically, instead of needing to be hardcoded into Storage/Tensor.

- It replaces all places where we returned std::unique_ptr<Storage> with
  Storage, collapsing a double indirection that is no longer necessary now
  that we have correctly working copy/move constructors.

I didn't initially want to do step (2), but it was very important to
eliminate all bare uses of new Storage and new StorageImpl, and making this
API change was the most straightforward way to do so.

HOW TO FIX YOUR CODE IN THE NEW API

- You no longer need to dereference the result of tensor.storage() to pass
  it to set_. So, instead of:

      x.set_(*y.storage());

  just write:

      x.set_(y.storage());

- If you were accessing methods on StorageImpl via the pImpl() method, you
  must use the dot operator to call pImpl(). Better yet, just drop pImpl;
  we now have method forwarding. So, instead of:

      storage->pImpl()->data();

  just do:

      storage->data();
      // storage.pImpl()->data() works too, but is not recommended

- storage->getDevice() is no more; use storage->device().index() instead.

MISC CODE UPDATES

- retain, release, weak_retain, weak_release and weak_lock are now
  reimplemented using the "blessed API", and renamed to make it clearer
  that their use is discouraged.

- nvcc OS X and general OS X portability improvements to intrusive_ptr.

- A new comment in intrusive_ptr describes how stack-allocated
  intrusive_ptr_targets work differently from heap-allocated ones created
  by c10::make_intrusive.

CAVEAT EMPTOR

- THStorage_weakRetain used to work on strong pointers, but it NO LONGER
  works with intrusive_ptr. You must reclaim the raw strong pointer into a
  real strong pointer, construct a weak pointer from it, and then release
  both the strong and weak pointers. See StorageSharing.cpp for an example.
```

Pull Request resolved: https://github.com/pytorch/pytorch/pull/10488
Reviewed By: gchanan
Differential Revision: D9306134
Pulled By: ezyang
fbshipit-source-id: 02d58ef62dab8e4da6131e1a24834a65c21048e2
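For reference, the sketch below walks through the intrusive_ptr lifecycle that Storage now relies on: creating a refcounted object with c10::make_intrusive, leaning on the automatically generated copy/move operations, and performing the CAVEAT EMPTOR reclaim/weak/release dance. This is a minimal illustration, not code from the patch: the Buffer class is hypothetical, and the header path and exact weak_intrusive_ptr spellings are assumed from the c10 interface in current PyTorch and may differ in the tree this commit targets; the authoritative weak-pointer example is StorageSharing.cpp in the patch.

```cpp
// Minimal sketch of the intrusive_ptr "blessed API", under the assumptions
// stated above. Buffer is a hypothetical stand-in for StorageImpl.
#include <c10/util/intrusive_ptr.h>

#include <cstddef>
#include <iostream>

struct Buffer : c10::intrusive_ptr_target {
  explicit Buffer(std::size_t n) : size(n) {}
  std::size_t size;
};

int main() {
  // Heap-allocate through make_intrusive. Stack-allocated intrusive_ptr_targets
  // behave differently, as the new comment in intrusive_ptr explains.
  c10::intrusive_ptr<Buffer> strong = c10::make_intrusive<Buffer>(16);

  // Copy/move now work out of the box; no hand-written retain/release.
  c10::intrusive_ptr<Buffer> alias = strong;
  std::cout << "strong refs: " << strong.use_count() << "\n";  // prints 2

  // The CAVEAT EMPTOR dance: given a raw pointer that owns one strong
  // reference (what THStorage_weakRetain used to be handed), reclaim it into
  // a real strong pointer, build a weak pointer from it, then release the
  // strong reference back to the caller.
  Buffer* raw = alias.release();  // raw now carries one strong reference
  auto reclaimed = c10::intrusive_ptr<Buffer>::reclaim(raw);
  c10::weak_intrusive_ptr<Buffer> weak(reclaimed);  // register a weak reference
  raw = reclaimed.release();                        // hand the strong ref back

  // A weak pointer can later be upgraded; lock() returns a null handle if the
  // object has already been released.
  if (auto locked = weak.lock()) {
    std::cout << "still alive, size = " << locked->size << "\n";
  }

  // Reclaim the strong reference we handed back above so it is balanced when
  // `last` goes out of scope.
  auto last = c10::intrusive_ptr<Buffer>::reclaim(raw);
  (void)last;
  return 0;
}
```

Under the old Retainable scheme each of these steps needed hand-written retain/release calls; the point of the patch is that the handle types do the bookkeeping themselves.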
Committed by: Facebook Github Bot
Parent: abb209ef25
Commit: 19031c68dc
```diff
@@ -110,7 +110,7 @@ TEST_CASE("rnn") {
     LSTM model(2, 2);
     for (auto& v : model->parameters()) {
       float size = v->numel();
-      auto p = static_cast<float*>(v->storage()->pImpl()->data());
+      auto p = static_cast<float*>(v->storage().data());
       for (size_t i = 0; i < size; i++) {
         p[i] = i / size;
       }
@@ -118,7 +118,7 @@ TEST_CASE("rnn") {

     auto x = torch::empty({3, 4, 2}, torch::requires_grad());
     float size = x.numel();
-    auto p = static_cast<float*>(x.storage()->pImpl()->data());
+    auto p = static_cast<float*>(x.storage().data());
     for (size_t i = 0; i < size; i++) {
       p[i] = (size - i) / size;
     }
```