torch.btrifact for tensors with greater than 3 dimensions (#14964)
Summary:

Motivation:
- Earlier, `torch.btrifact` could not handle tensors with greater than 3 dimensions. This is because of the check:

  > AT_CHECK(THTensor_(nDimension)(a) == 3, "expected 3D tensor, got size: ", a->sizes());

What is in this PR?:
- Move `btrifact` to ATen
- Remove relation to TH/THC
- Handle tensors with more than three dimensions
- Tests
- Docs modifications: added a note about the non-pivoting variant [blocked due to old magma-cuda binaries]

Pull Request resolved: https://github.com/pytorch/pytorch/pull/14964

Differential Revision: D14405106

Pulled By: soumith

fbshipit-source-id: f051f5d6aaa45f85836a2867176c065733563184
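A minimal usage sketch (not part of the commit) of what the change enables, assuming the `torch.btrifact` API of this era, which returns the packed LU factors together with a pivots tensor; the batch shape below is purely illustrative:

```python
import torch

# Illustrative 4-D input: a (2, 3) batch of 4x4 matrices.
# Before this change, btrifact only accepted exactly 3-D inputs.
A = torch.randn(2, 3, 4, 4)

# Pivoted LU factorization (pivot=True is the default).
A_LU, pivots = torch.btrifact(A)

print(A_LU.shape)    # expected: torch.Size([2, 3, 4, 4])
print(pivots.shape)  # expected: torch.Size([2, 3, 4])
```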
Committed by: Facebook Github Bot
Parent: b161ac9634
Commit: f268370b42
@@ -5411,6 +5411,11 @@ Batch LU factorization.
 Returns a tuple containing the LU factorization and pivots. Pivoting is done if
 :attr:`pivot` is set.
 
+.. note::
+    LU factorization with :attr:`pivot` = ``False`` is not available for CPU, and attempting
+    to do so will throw an error. However, LU factorization with :attr:`pivot` = ``False`` is
+    available for CUDA.
+
 Arguments:
     A (Tensor): the tensor to factor
     pivot (bool, optional): controls whether pivoting is done
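A sketch (again an assumption, not from the commit) of what the note added in this hunk means in practice: the non-pivoting variant is expected to error on CPU and to be accepted only for CUDA tensors.

```python
import torch

A = torch.randn(3, 4, 4)

# The docstring note says pivot=False is unavailable on CPU; an error
# (assumed here to surface as a RuntimeError) is expected.
try:
    torch.btrifact(A, pivot=False)
except RuntimeError as err:
    print("CPU, pivot=False:", err)

# On CUDA (backed by MAGMA), the non-pivoting variant is documented to work.
if torch.cuda.is_available():
    A_LU, pivots = torch.btrifact(A.cuda(), pivot=False)
```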