Files
pytorch/c10/core/UndefinedTensorImpl.cpp
Laith Sakka d0a9629435 [do not revert] Compute contiguity symbolically to avoid dde, and introduce c++ sym_is_contiguous (#155590)
When we compute contiguity for a tensor with dynamic shapes, we proceed as follows (sketched below):
1) Try to compute it without guarding.
2) If all shapes are hinted, compute it, potentially adding guards.
3) If any input is not hinted, compute it symbolically.
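A minimal toy sketch of that fallback order (illustrative types and placeholders only, not the actual c10 implementation):

 #include <cstdint>
 #include <optional>
 #include <vector>

 // Toy stand-ins (not c10 types): a dimension either carries a concrete hint
 // or is unbacked.
 using MaybeHinted = std::optional<int64_t>;
 enum class Contiguity { True, False, Symbolic };

 Contiguity compute_contiguity(const std::vector<MaybeHinted>& sizes) {
   // 1) Try to decide without guarding: e.g. a 0-dim tensor is contiguous
   //    regardless of its strides.
   if (sizes.empty()) {
     return Contiguity::True;
   }
   // 2) If every dimension is hinted, compute the answer concretely,
   //    potentially installing guards on those hints.
   bool all_hinted = true;
   for (const auto& s : sizes) {
     if (!s.has_value()) {
       all_hinted = false;
     }
   }
   if (all_hinted) {
     return Contiguity::True; // placeholder for the guarded size/stride check
   }
   // 3) Otherwise defer: build a symbolic answer (a SymBool in PyTorch) that
   //    the caller can evaluate or resolve later with guard_or_false.
   return Contiguity::Symbolic;
 }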

sym_is_contiguous returns a SymBool that can then either be evaluated or passed to guard_or_false to avoid data-dependent errors.

For example:
 bool is_contiguous = input.sym_is_contiguous().guard_or_false(__FILE__, __LINE__);
is_contiguous_or_false is a helper function that wraps exactly this pattern.
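A hedged call-site sketch of how this is used (assuming sym_is_contiguous is exposed on at::Tensor as in the example above; the fast and generic paths are placeholders, not code from this PR):

 #include <ATen/core/Tensor.h>

 // Take the dense fast path only when contiguity is provable; if it cannot be
 // decided for unbacked sizes, guard_or_false answers false and we fall back
 // instead of raising a data-dependent error.
 void process(const at::Tensor& input) {
   bool is_contiguous =
       input.sym_is_contiguous().guard_or_false(__FILE__, __LINE__);
   if (is_contiguous) {
     // fast path: treat the input as densely packed
   } else {
     // generic path: handle arbitrary strides
   }
 }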

In this PR I only handle default contiguity; a follow-up will add support for other memory formats such as channels_last.
We use this pattern in several locations in this PR to avoid DDEs.
Differential Revision: [D77183032](https://our.internmc.facebook.com/intern/diff/D77183032)

Pull Request resolved: https://github.com/pytorch/pytorch/pull/155590
Approved by: https://github.com/ezyang
2025-07-01 21:39:38 +00:00

52 lines
1.5 KiB
C++

#include <c10/core/UndefinedTensorImpl.h>
#include <c10/util/Exception.h>

namespace c10 {

// should this use the globalContext? Can it get a context passed in somehow?
UndefinedTensorImpl::UndefinedTensorImpl()
    : TensorImpl(DispatchKey::Undefined, caffe2::TypeMeta(), std::nullopt) {
  set_storage_access_should_throw();
  // TODO: accessing the sizes on an undefined tensor is not meaningful
  // and should error too, but empirically it does not!
  set_custom_sizes_strides(SizesStridesPolicy::CustomStrides);
}

c10::SymBool UndefinedTensorImpl::sym_is_contiguous_custom(
    MemoryFormat format) const {
  return is_contiguous_default(format);
}

IntArrayRef UndefinedTensorImpl::strides_custom() const {
  TORCH_CHECK(false, "strides() called on an undefined Tensor");
}

SymIntArrayRef UndefinedTensorImpl::sym_strides_custom() const {
  TORCH_CHECK(false, "sym_strides() called on an undefined Tensor");
}

#ifdef DEBUG
bool UndefinedTensorImpl::has_storage() const {
  TORCH_INTERNAL_ASSERT_DEBUG_ONLY(
      !storage_, "UndefinedTensorImpl assumes that storage_ is never set");
  return false;
}
#endif

void UndefinedTensorImpl::set_storage_offset(int64_t) {
  TORCH_CHECK(false, "set_storage_offset() called on an undefined Tensor");
}

const char* UndefinedTensorImpl::tensorimpl_type_name() const {
  return "UndefinedTensorImpl";
}

#ifdef _WIN32
UndefinedTensorImpl& UndefinedTensorImpl::getInstance() {
  static UndefinedTensorImpl instance;
  return instance;
}
#else
UndefinedTensorImpl UndefinedTensorImpl::_singleton;
#endif

} // namespace c10