41 Commits

3806e9767b Refactor out headeronly ArrayRef (#164991)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/164991
Approved by: https://github.com/swolchok
2025-10-17 18:32:39 +00:00
dd1b6621bc Remove C10_DEPRECATED references in c10 (#151058)
Summary:
Revives https://github.com/pytorch/pytorch/pull/138406, but limits the scope to files in c10.

Summary from the original PR,
```
Looking in the code I see

// NB: __cplusplus doesn't work for MSVC, so for now MSVC always uses
// the "__declspec(deprecated)" implementation and not the C++14
// "[[deprecated]]" attribute. We tried enabling "[[deprecated]]" for C++14 on
// MSVC, but ran into issues with some older MSVC versions.
But looking at the MSVC C++ support table I see that the [[deprecated]] attribute is supported as of MSVC 2015 and that the vast majority of C++17 features became supported in MSVC 2015 or later.

Since PyTorch is C++17 now, I infer that PyTorch must not support versions of MSVC earlier than MSVC 2015, so the versions of MSVC supported by PyTorch must support [[deprecated]].

Therefore, since we are finished deprecating old MSVCs we can deprecate C10_DEPRECATED.
```

Test Plan: CI

Differential Revision: D72762767

Pull Request resolved: https://github.com/pytorch/pytorch/pull/151058
Approved by: https://github.com/r-barnes
2025-06-12 13:38:03 +00:00
1c63612567 Fix & unit test for c10::ArrayRef constructed from user-defined types (#139758)
Fixes #139391

Pull Request resolved: https://github.com/pytorch/pytorch/pull/139758
Approved by: https://github.com/ezyang
2024-11-06 04:23:05 +00:00
42994234a6 std::value/std::type -> std::_v/std::_t (#138746)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/138746
Approved by: https://github.com/cyyever, https://github.com/malfet
2024-10-26 20:59:24 +00:00
dbf0fa811a Remove C10_HOST_CONSTEXPR_EXCEPT_WIN_CUDA and CONSTEXPR_EXCEPT_WIN_CUDA (#138479)
BC linter suppressed due to removal of `tools/linter/adapters/constexpr_linter.py`
Pull Request resolved: https://github.com/pytorch/pytorch/pull/138479
Approved by: https://github.com/eqy, https://github.com/malfet
2024-10-24 07:51:05 +00:00
fc9093c3d2 Revert "Remove C10_DEPRECATED (#138406)"
This reverts commit 70ec86d7542d461ff6f01ba1a1c9a4f38637af8e.

Reverted https://github.com/pytorch/pytorch/pull/138406 on behalf of https://github.com/wdvr due to failing internal tests - see D64714374 ([comment](https://github.com/pytorch/pytorch/pull/138406#issuecomment-2429912896))
2024-10-22 18:00:41 +00:00
70ec86d754 Remove C10_DEPRECATED (#138406)
Looking in the code I see
```
// NB: __cplusplus doesn't work for MSVC, so for now MSVC always uses
// the "__declspec(deprecated)" implementation and not the C++14
// "[[deprecated]]" attribute. We tried enabling "[[deprecated]]" for C++14 on
// MSVC, but ran into issues with some older MSVC versions.
```
But looking at the [MSVC C++ support table](https://learn.microsoft.com/en-us/cpp/overview/visual-cpp-language-conformance?view=msvc-170) I see that the `[[deprecated]]` attribute is supported as of MSVC 2015 and that the vast majority of C++17 features became supported in MSVC 2015 _or later_.

Since PyTorch is C++17 now, I infer that PyTorch must not support versions of MSVC earlier than MSVC 2015, so the versions of MSVC supported by PyTorch must support `[[deprecated]]`.

Therefore, since we are finished deprecating old MSVCs, we can deprecate `C10_DEPRECATED`.
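As a hedged sketch (illustrative macro name, not the actual c10/util/Deprecated.h contents), the old pattern and its standard replacement look roughly like this:

```cpp
// Old style: a macro that expands to a compiler-specific spelling.
#if defined(_MSC_VER)
#define MY_DEPRECATED __declspec(deprecated)
#else
#define MY_DEPRECATED __attribute__((deprecated))
#endif

MY_DEPRECATED int old_api();  // macro-based deprecation

// New style: the standard C++14 attribute works on every compiler PyTorch supports.
[[deprecated("use new_api() instead")]] int old_api_v2();
int new_api();
```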

Pull Request resolved: https://github.com/pytorch/pytorch/pull/138406
Approved by: https://github.com/cyyever, https://github.com/malfet
2024-10-21 20:57:27 +00:00
279ddfc6ee Add type check for dilation in torch.quantized_max_pool3d() (#137845)
Fixes #136716

repro:

```python
import torch

input = torch.randn([1, 1, 1, 1, 1])
input = torch.quantize_per_tensor(input, 0.1, 10, torch.qint32)
torch.quantized_max_pool3d(input, (1, 1, 1), (1, 1, 1), (0, 0, 0), (-3, 1, 1)) # crash

input = torch.randn([1, 1, 1, 1, 1])
input = torch.quantize_per_tensor(input, 0.1, 10, torch.qint32)
result = torch.nn.functional.max_pool3d(input, (1, 1, 1), (1, 1, 1), (0, 0, 0), (-3, 1, 1))  # crash
```

result:

```
RuntimeError: Expected dilation >= 1
```
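A minimal sketch of the kind of input validation this adds (illustrative helper, not the exact ATen code):

```cpp
#include <cstdint>

#include <c10/util/ArrayRef.h>
#include <c10/util/Exception.h>

// Reject non-positive dilation values up front so malformed arguments produce
// a clear error instead of crashing inside the pooling kernel.
static void check_dilation(c10::ArrayRef<int64_t> dilation) {
  for (const auto d : dilation) {
    TORCH_CHECK(d >= 1, "Expected dilation >= 1");
  }
}
```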

Pull Request resolved: https://github.com/pytorch/pytorch/pull/137845
Approved by: https://github.com/albanD
2024-10-21 16:15:57 +00:00
ed327876f5 [codemod] c10:optional -> std::optional (#126135)
Generated by running the following from PyTorch root:
```
find . -regex ".*\.\(cpp\|h\|cu\|hpp\|cc\|cxx\)$" | grep -v "build/" | xargs -n 50 -P 4 perl -pi -e 's/c10::optional/std::optional/'
```

`c10::optional` is just an alias for `std::optional`. This removes usages of that alias in preparation for eliminating it entirely.
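In essence, the alias being removed is just (simplified sketch, not the full c10/util/Optional.h):

```cpp
#include <optional>

namespace c10 {
template <typename T>
using optional = std::optional<T>;  // thin alias onto the standard type
}  // namespace c10
```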

Pull Request resolved: https://github.com/pytorch/pytorch/pull/126135
Approved by: https://github.com/Skylion007, https://github.com/malfet, https://github.com/albanD, https://github.com/aaronenyeshi
2024-05-14 19:35:51 +00:00
cyy
ad507789d1 [Reland] [11/N] Enable clang-tidy warnings on c10/util/*.h (#116751)
Reland of #116353 with C++ diagnostic macros restored.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/116751
Approved by: https://github.com/albanD
2024-01-08 11:07:58 +00:00
1c69d0bdb5 Revert "[11/N] Enable clang-tidy warnings on c10/util/*.h (#116353)"
This reverts commit 37aae5932c26c3729d68b6ebdf00e618fe229b1c.

Reverted https://github.com/pytorch/pytorch/pull/116353 on behalf of https://github.com/izaitsevfb due to Reverting, breaks internal builds: error: implicit conversion from 'long long' to 'float' may lose precision [-Werror,-Wimplicit-int-float-conversion] ([comment](https://github.com/pytorch/pytorch/pull/116353#issuecomment-1876045800))
2024-01-03 22:22:11 +00:00
cyy
37aae5932c [11/N] Enable clang-tidy warnings on c10/util/*.h (#116353)
This PR enables clang-tidy coverage on c10/util/*.h
Pull Request resolved: https://github.com/pytorch/pytorch/pull/116353
Approved by: https://github.com/albanD
2023-12-30 14:38:39 +00:00
cyy
9a0c217a0a [9/N] Fixes clang-tidy warnings in c10/util/*.h (#116185)
Continued work to clean headers in c10/util.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/116185
Approved by: https://github.com/Skylion007
2023-12-22 09:35:44 +00:00
cyy
1544c37520 [7/N] Fixes clang-tidy warnings in c10/{core,util}/*.h (#115495)
This PR continues to fix clang-tidy warnings for headers in c10/core and c10/util.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/115495
Approved by: https://github.com/malfet
2023-12-19 02:14:30 +00:00
cyy
e4f3e5434f [Reland] Elimates c10::guts::to_string (#108748)
Reland of PR #108480, after relanding another blocking PR.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/108748
Approved by: https://github.com/huydhn
2023-09-07 13:35:17 +00:00
8da04e023e Revert "Eliminate c10::guts::to_string (#108480)"
This reverts commit 4146be192ead477360a2763c5005e46a9485c3bf.

Reverted https://github.com/pytorch/pytorch/pull/108480 on behalf of https://github.com/huydhn due to Sorry for reverting this, but this is needed to keep trunk green after https://github.com/pytorch/pytorch/pull/108479 was reverted.  Both will need to be relanded ([comment](https://github.com/pytorch/pytorch/pull/108480#issuecomment-1707067595))
2023-09-05 18:04:53 +00:00
cyy
4146be192e Eliminate c10::guts::to_string (#108480)
This PR replaces c10::guts::to_string with std::to_string. The major part of the changes is using void* as the optimizer state key, since the string form is needed only for serialization and pointers are more efficient hash keys than strings.
Some other guts functions in the affected source files are also replaced.
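A hedged sketch of the two changes described above (illustrative names, not the actual optimizer code):

```cpp
#include <string>
#include <unordered_map>

struct ParamState {
  double step = 0;
};

// Before (conceptually): state keyed by a string built from the parameter,
// e.g. state[c10::guts::to_string(param_ptr)] -- now std::to_string where a
// string is genuinely needed (serialization).
// After: key directly on the parameter's address; hashing a pointer is cheaper
// than building and hashing a string.
std::unordered_map<void*, ParamState> state;

void touch(void* param_ptr) {
  state[param_ptr].step += 1;
}
```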
Pull Request resolved: https://github.com/pytorch/pytorch/pull/108480
Approved by: https://github.com/Skylion007
2023-09-04 08:12:53 +00:00
c9f4f01981 Add security guards to avoid crashes in torch::jit module (#102156)
Hi!

I've been fuzzing different pytorch modules with [sydr-fuzz](https://github.com/ispras/oss-sydr-fuzz/tree/master/projects/pytorch), and found multiple crashes in the torch::jit::load() function.

All of the errors can be reproduced with the provided Docker setup: [Dockerfile](https://github.com/ispras/oss-sydr-fuzz/tree/master/projects/pytorch).

### Crash in torch/csrc/jit/unpickler.cpp:1075

[crash-1f59083b8396c5b62b4705c7556e68f129e833b1.zip](https://github.com/pytorch/pytorch/files/11552947/crash-1f59083b8396c5b62b4705c7556e68f129e833b1.zip)

```asan
    "#0  0x00007ffff7a5600b in raise () from /lib/x86_64-linux-gnu/libc.so.6",
    "#1  0x00007ffff7a35859 in abort () from /lib/x86_64-linux-gnu/libc.so.6",
    "#2  0x00007ffff7ce3911 in ?? () from /lib/x86_64-linux-gnu/libstdc++.so.6",
    "#3  0x00007ffff7cef38c in ?? () from /lib/x86_64-linux-gnu/libstdc++.so.6",
    "#4  0x00007ffff7cef3f7 in std::terminate() () from /lib/x86_64-linux-gnu/libstdc++.so.6",
    "#5  0x00007ffff7cef6a9 in __cxa_throw () from /lib/x86_64-linux-gnu/libstdc++.so.6",
    "#6  0x00007ffff7ce6326 in std::__throw_length_error(char const*) () from /lib/x86_64-linux-gnu/libstdc++.so.6",
    "#7  0x00007ffff7d87edc in std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >::_M_create(unsigned long&, unsigned long) () from /lib/x86_64-linux-gnu/libstdc++.so.6",
    "#8  0x00007ffff7d88880 in std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >::reserve(unsigned long) () from /lib/x86_64-linux-gnu/libstdc++.so.6",
    "#9  0x000000000ea52931 in torch::jit::Unpickler::readBytes[abi:cxx11](unsigned long) (this=this@entry=0x7fffffffac10, length=length@entry=8358680908539635837) at /pytorch/torch/csrc/jit/serialization/unpickler.cpp:1075",
    "#10 0x000000000ea4c3a0 in torch::jit::Unpickler::readInstruction (this=0x7fffffff90d0) at /pytorch/torch/csrc/jit/serialization/unpickler.cpp:355",
    "#11 0x000000000ea49eb8 in torch::jit::Unpickler::run (this=0x7fffffffac10) at /pytorch/torch/csrc/jit/serialization/unpickler.cpp:251",
    "#12 0x000000000ea49b12 in torch::jit::Unpickler::parse_ivalue (this=0x7fffffffac10) at /pytorch/torch/csrc/jit/serialization/unpickler.cpp:204",
    "#13 0x000000000e960a9f in torch::jit::readArchiveAndTensors(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, c10::optional<std::function<c10::StrongTypePtr (c10::QualifiedName const&)> >, c10::optional<std::function<c10::intrusive_ptr<c10::ivalue::Object, c10::detail::intrusive_target_default_null_type<c10::ivalue::Object> > (c10::StrongTypePtr, c10::IValue)> >, c10::optional<c10::Device>, caffe2::serialize::PyTorchStreamReader&, c10::Type::SingletonOrSharedTypePtr<c10::Type> (*)(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&), std::shared_ptr<torch::jit::DeserializationStorageContext>) (archive_name=..., pickle_prefix=..., tensor_prefix=..., type_resolver=..., obj_loader=..., device=..., stream_reader=..., type_parser=<optimized out>, storage_context=...) at /pytorch/torch/csrc/jit/serialization/import_read.cpp:53",
    "#14 0x000000000e8ef599 in torch::jit::(anonymous namespace)::ScriptModuleDeserializer::readArchive (this=0x7fffffffbc60, archive_name=...) at /pytorch/torch/csrc/jit/serialization/import.cpp:184",
    "#15 0x000000000e8eb886 in torch::jit::(anonymous namespace)::ScriptModuleDeserializer::deserialize (this=<optimized out>, device=..., extra_files=..., restore_shapes=<optimized out>) at /pytorch/torch/csrc/jit/serialization/import.cpp:287",
    "#16 0x000000000e8e9cc5 in torch::jit::import_ir_module (cu=..., in=..., device=..., extra_files=..., load_debug_files=<optimized out>, restore_shapes=<optimized out>) at /pytorch/torch/csrc/jit/serialization/import.cpp:386",
    "#17 0x000000000e8f37bf in torch::jit::import_ir_module (cu=..., in=..., device=..., load_debug_files=<optimized out>) at /pytorch/torch/csrc/jit/serialization/import.cpp:322",
    "#18 0x000000000e8f615a in torch::jit::load (in=..., device=..., load_debug_files=<optimized out>) at /pytorch/torch/csrc/jit/serialization/import.cpp:482",
    "#19 0x00000000005c2d61 in LLVMFuzzerTestOneInput (data=<optimized out>, size=1663) at /load.cc:42",
    "#20 0x00000000005c2a8e in ExecuteFilesOnyByOne (argc=2, argv=0x7fffffffc6b8, callback=callback@entry=0x5c2ae0 <LLVMFuzzerTestOneInput(uint8_t const*, size_t)>) at /AFLplusplus/utils/aflpp_driver/aflpp_driver.c:255",
    "#21 0x00000000005c2899 in LLVMFuzzerRunDriver (argcp=argcp@entry=0x7fffffffc5b4, argvp=argvp@entry=0x7fffffffc5b8, callback=0x5c2ae0 <LLVMFuzzerTestOneInput(uint8_t const*, size_t)>) at /AFLplusplus/utils/aflpp_driver/aflpp_driver.c:364",
    "#22 0x00000000005c2459 in main (argc=2, argv=0x7fffffffc6b8) at /AFLplusplus/utils/aflpp_driver/aflpp_driver.c:300"

```

### Crash in torch/csrc/jit/unpickler.cpp:386

[crash-2e9923de375c393e700e8c0441f0ebe8252ca364.zip](https://github.com/pytorch/pytorch/files/11552950/crash-2e9923de375c393e700e8c0441f0ebe8252ca364.zip)

```asan
    "#0  0x00007ffff7a5600b in raise () from /lib/x86_64-linux-gnu/libc.so.6",
    "#1  0x00007ffff7a35859 in abort () from /lib/x86_64-linux-gnu/libc.so.6",
    "#2  0x00007ffff7ce3911 in ?? () from /lib/x86_64-linux-gnu/libstdc++.so.6",
    "#3  0x00007ffff7cef38c in ?? () from /lib/x86_64-linux-gnu/libstdc++.so.6",
    "#4  0x00007ffff7cef3f7 in std::terminate() () from /lib/x86_64-linux-gnu/libstdc++.so.6",
    "#5  0x00007ffff7cef6a9 in __cxa_throw () from /lib/x86_64-linux-gnu/libstdc++.so.6",
    "#6  0x00007ffff7ce6326 in std::__throw_length_error(char const*) () from /lib/x86_64-linux-gnu/libstdc++.so.6",
    "#7  0x0000000000670aff in std::vector<c10::IValue, std::allocator<c10::IValue> >::reserve (this=this@entry=0x7fffffff9750, __n=__n@entry=18446744073709551614) at /usr/include/c++/10/bits/vector.tcc:70",
    "#8  0x000000000ea4d5cd in torch::jit::Unpickler::readInstruction (this=0x7fffffffac10) at /pytorch/torch/csrc/jit/serialization/unpickler.cpp:386",
    "#9  0x000000000ea49eb8 in torch::jit::Unpickler::run (this=0x7fffffffac10) at /pytorch/torch/csrc/jit/serialization/unpickler.cpp:251",
    "#10 0x000000000ea49b12 in torch::jit::Unpickler::parse_ivalue (this=0x7fffffffac10) at /pytorch/torch/csrc/jit/serialization/unpickler.cpp:204",
    "#11 0x000000000e960a9f in torch::jit::readArchiveAndTensors(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, c10::optional<std::function<c10::StrongTypePtr (c10::QualifiedName const&)> >, c10::optional<std::function<c10::intrusive_ptr<c10::ivalue::Object, c10::detail::intrusive_target_default_null_type<c10::ivalue::Object> > (c10::StrongTypePtr, c10::IValue)> >, c10::optional<c10::Device>, caffe2::serialize::PyTorchStreamReader&, c10::Type::SingletonOrSharedTypePtr<c10::Type> (*)(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&), std::shared_ptr<torch::jit::DeserializationStorageContext>) (archive_name=..., pickle_prefix=..., tensor_prefix=..., type_resolver=..., obj_loader=..., device=..., stream_reader=..., type_parser=<optimized out>, storage_context=...) at /pytorch/torch/csrc/jit/serialization/import_read.cpp:53",
    "#12 0x000000000e8ef599 in torch::jit::(anonymous namespace)::ScriptModuleDeserializer::readArchive (this=0x7fffffffbc60, archive_name=...) at /pytorch/torch/csrc/jit/serialization/import.cpp:184",
    "#13 0x000000000e8eb886 in torch::jit::(anonymous namespace)::ScriptModuleDeserializer::deserialize (this=<optimized out>, device=..., extra_files=..., restore_shapes=<optimized out>) at /pytorch/torch/csrc/jit/serialization/import.cpp:287",
    "#14 0x000000000e8e9cc5 in torch::jit::import_ir_module (cu=..., in=..., device=..., extra_files=..., load_debug_files=<optimized out>, restore_shapes=<optimized out>) at /pytorch/torch/csrc/jit/serialization/import.cpp:386",
    "#15 0x000000000e8f37bf in torch::jit::import_ir_module (cu=..., in=..., device=..., load_debug_files=<optimized out>) at /pytorch/torch/csrc/jit/serialization/import.cpp:322",
    "#16 0x000000000e8f615a in torch::jit::load (in=..., device=..., load_debug_files=<optimized out>) at /pytorch/torch/csrc/jit/serialization/import.cpp:482",
    "#17 0x00000000005c2d61 in LLVMFuzzerTestOneInput (data=<optimized out>, size=5498) at /load.cc:42",
    "#18 0x00000000005c2a8e in ExecuteFilesOnyByOne (argc=2, argv=0x7fffffffc6b8, callback=callback@entry=0x5c2ae0 <LLVMFuzzerTestOneInput(uint8_t const*, size_t)>) at /AFLplusplus/utils/aflpp_driver/aflpp_driver.c:255",
    "#19 0x00000000005c2899 in LLVMFuzzerRunDriver (argcp=argcp@entry=0x7fffffffc5b4, argvp=argvp@entry=0x7fffffffc5b8, callback=0x5c2ae0 <LLVMFuzzerTestOneInput(uint8_t const*, size_t)>) at /AFLplusplus/utils/aflpp_driver/aflpp_driver.c:364",
    "#20 0x00000000005c2459 in main (argc=2, argv=0x7fffffffc6b8) at /AFLplusplus/utils/aflpp_driver/aflpp_driver.c:300"
```

### Crash in torch/csrc/jit/serialization/source_range_serialization.cpp:211

[crash-5598d386057152f606bfa69d85605499e8852625.zip](https://github.com/pytorch/pytorch/files/11552952/crash-5598d386057152f606bfa69d85605499e8852625.zip)

```asan
    "#0  torch::jit::ConcreteSourceRangeUnpickler::unpickle (this=0x99b8d80) at /pytorch/torch/csrc/jit/serialization/source_range_serialization.cpp:211",
    "#1  0x0000000004042566 in torch::jit::ConcreteSourceRangeUnpickler::findSourceRangeThatGenerated (this=0x99aa1c0, range=...) at /pytorch/torch/csrc/jit/serialization/source_range_serialization.cpp:229",
    "#2  0x00000000007b5cc8 in torch::jit::Source::findSourceRangeThatGenerated (this=<optimized out>, range=...) at /pytorch/torch/csrc/jit/frontend/source_range.cpp:144",
    "#3  torch::jit::SourceRange::findSourceRangeThatGenerated (this=0x7fffffffa650) at /pytorch/torch/csrc/jit/frontend/source_range.h:384",
    "#4  torch::jit::SourceRange::highlight (this=0x7fffffffa650, out=...) at /pytorch/torch/csrc/jit/frontend/source_range.cpp:149",
    "#5  0x00000000007a0e74 in torch::jit::Lexer::expected (this=this@entry=0x99979a0, what=..., t=...) at /pytorch/torch/csrc/jit/frontend/lexer.h:461",
    "#6  0x000000000079fcaa in torch::jit::Lexer::lexRaw (this=this@entry=0x99979a0, whitespace_token=false) at /pytorch/torch/csrc/jit/frontend/lexer.h:552",
    "#7  0x000000000079fd23 in torch::jit::Lexer::lex (this=this@entry=0x99979a0) at /pytorch/torch/csrc/jit/frontend/lexer.h:487",
    "#8  0x00000000007a1da1 in torch::jit::Lexer::next (this=this@entry=0x99979a0) at /pytorch/torch/csrc/jit/frontend/lexer.h:436",
    "#9  0x0000000003bff6a8 in torch::jit::Lexer::nextIf (this=0x99979a0, kind=330) at /pytorch/torch/csrc/jit/frontend/lexer.h:444",
    "#10 torch::jit::ParserImpl::parseReturnAnnotation (this=this@entry=0x99979a0) at /pytorch/torch/csrc/jit/frontend/parser.cpp:703",
    "#11 0x0000000003bfd500 in torch::jit::ParserImpl::parseDecl (this=this@entry=0x99979a0) at /pytorch/torch/csrc/jit/frontend/parser.cpp:729",
    "#12 0x0000000003bfb725 in torch::jit::ParserImpl::parseFunction (this=this@entry=0x99979a0, is_method=true) at /pytorch/torch/csrc/jit/frontend/parser.cpp:755",
    "#13 0x0000000003bfdc28 in torch::jit::ParserImpl::parseStmt (this=this@entry=0x99979a0, in_class=<optimized out>) at /pytorch/torch/csrc/jit/frontend/parser.cpp:599",
    "#14 0x0000000003bfd8dd in torch::jit::ParserImpl::parseStatements (this=this@entry=0x99979a0, expect_indent=<optimized out>, in_class=<optimized out>) at /pytorch/torch/csrc/jit/frontend/parser.cpp:697",
    "#15 0x0000000003bfc4ba in torch::jit::ParserImpl::parseClass (this=0x99979a0) at /pytorch/torch/csrc/jit/frontend/parser.cpp:747",
    "#16 0x0000000003bfaddc in torch::jit::Parser::parseClass (this=<optimized out>) at /pytorch/torch/csrc/jit/frontend/parser.cpp:812",
    "#17 0x0000000004008e2d in torch::jit::SourceImporterImpl::parseSourceIfNeeded (this=this@entry=0x95d41f0, qualifier=...) at /pytorch/torch/csrc/jit/serialization/import_source.cpp:182",
    "#18 0x0000000004008ab7 in torch::jit::SourceImporterImpl::findNamedType (this=this@entry=0x95d41f0, name=...) at /pytorch/torch/csrc/jit/serialization/import_source.cpp:135",
    "#19 0x000000000400d010 in torch::jit::SourceImporterImpl::resolveType (this=0x95d41f0, name=..., loc=...) at /pytorch/torch/csrc/jit/serialization/import_source.cpp:261",
    "#20 0x0000000003c20821 in torch::jit::ScriptTypeParser::parseTypeFromExpr (this=this@entry=0x7fffffffb658, expr=...) at /pytorch/torch/csrc/jit/frontend/script_type_parser.cpp:238",
    "#21 0x0000000003c20acc in torch::jit::ScriptTypeParser::parseType (this=0x7fffffffb658, str=...) at /pytorch/torch/csrc/jit/frontend/script_type_parser.cpp:312",
    "#22 0x0000000004019416 in torch::jit::SourceImporter::loadType (this=<optimized out>, name=...) at /pytorch/torch/csrc/jit/serialization/import_source.cpp:786",
    "#23 0x0000000003ff365e in torch::jit::(anonymous namespace)::ScriptModuleDeserializer::readArchive(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)::$_0::operator()(c10::QualifiedName const&) const (this=<optimized out>, qn=...) at /pytorch/torch/csrc/jit/serialization/import.cpp:146",
    "#24 std::__invoke_impl<c10::StrongTypePtr, torch::jit::(anonymous namespace)::ScriptModuleDeserializer::readArchive(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)::$_0&, c10::QualifiedName const&>(std::__invoke_other, torch::jit::(anonymous namespace)::ScriptModuleDeserializer::readArchive(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)::$_0&, c10::QualifiedName const&) (__f=..., __args=...) at /usr/include/c++/10/bits/invoke.h:60",
    "#25 std::__invoke_r<c10::StrongTypePtr, torch::jit::(anonymous namespace)::ScriptModuleDeserializer::readArchive(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)::$_0&, c10::QualifiedName const&>(torch::jit::(anonymous namespace)::ScriptModuleDeserializer::readArchive(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)::$_0&, c10::QualifiedName const&) (__fn=..., __args=...) at /usr/include/c++/10/bits/invoke.h:113",
    "#26 std::_Function_handler<c10::StrongTypePtr (c10::QualifiedName const&), torch::jit::(anonymous namespace)::ScriptModuleDeserializer::readArchive(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&)::$_0>::_M_invoke(std::_Any_data const&, c10::QualifiedName const&) (__functor=..., __args=...) at /usr/include/c++/10/bits/std_function.h:291",
    "#27 0x000000000404e5c4 in std::function<c10::StrongTypePtr (c10::QualifiedName const&)>::operator()(c10::QualifiedName const&) const (this=0x7fffffffbf28, __args=...) at /usr/include/c++/10/bits/std_function.h:622",
    "#28 torch::jit::Unpickler::readGlobal (this=this@entry=0x7fffffffbd50, module_name=..., class_name=...) at /pytorch/torch/csrc/jit/serialization/unpickler.cpp:820",
    "#29 0x0000000004049ce5 in torch::jit::Unpickler::readInstruction (this=this@entry=0x7fffffffbd50) at /pytorch/torch/csrc/jit/serialization/unpickler.cpp:496",
    "#30 0x00000000040497a8 in torch::jit::Unpickler::run (this=0x7fffffffbd50) at /pytorch/torch/csrc/jit/serialization/unpickler.cpp:251",
    "#31 0x00000000040494f9 in torch::jit::Unpickler::parse_ivalue (this=0x99aa1c0) at /pytorch/torch/csrc/jit/serialization/unpickler.cpp:204",
    "#32 0x00000000040075f8 in torch::jit::readArchiveAndTensors(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, c10::optional<std::function<c10::StrongTypePtr (c10::QualifiedName const&)> >, c10::optional<std::function<c10::intrusive_ptr<c10::ivalue::Object, c10::detail::intrusive_target_default_null_type<c10::ivalue::Object> > (c10::StrongTypePtr, c10::IValue)> >, c10::optional<c10::Device>, caffe2::serialize::PyTorchStreamReader&, c10::Type::SingletonOrSharedTypePtr<c10::Type> (*)(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&), std::shared_ptr<torch::jit::DeserializationStorageContext>) (archive_name=..., pickle_prefix=..., tensor_prefix=..., type_resolver=..., obj_loader=..., device=..., stream_reader=..., type_parser=0x0, storage_context=...) at /pytorch/torch/csrc/jit/serialization/import_read.cpp:53",
    "#33 0x0000000003ff3545 in torch::jit::(anonymous namespace)::ScriptModuleDeserializer::readArchive (this=this@entry=0x7fffffffc2b8, archive_name=...) at /pytorch/torch/csrc/jit/serialization/import.cpp:184",
    "#34 0x0000000003fed8bf in torch::jit::(anonymous namespace)::ScriptModuleDeserializer::deserialize (this=this@entry=0x7fffffffc2b8, device=device@entry=..., extra_files=..., restore_shapes=220) at /pytorch/torch/csrc/jit/serialization/import.cpp:287",
    "#35 0x0000000003febb0f in torch::jit::import_ir_module (cu=..., in=..., device=..., device@entry=..., extra_files=..., load_debug_files=true, restore_shapes=<optimized out>) at /pytorch/torch/csrc/jit/serialization/import.cpp:386",
    "#36 0x0000000003feb7a1 in torch::jit::import_ir_module (cu=..., in=..., device=..., device@entry=..., load_debug_files=false) at /pytorch/torch/csrc/jit/serialization/import.cpp:322",
    "#37 0x0000000003ff015a in torch::jit::load (in=..., device=device@entry=..., load_debug_files=true) at /pytorch/torch/csrc/jit/serialization/import.cpp:482",
    "#38 0x00000000004a1655 in LLVMFuzzerTestOneInput (data=0x981a680 \"PK\\003\\004\", size=1609) at /load.cc:42",
    "#39 0x00000000004a1dbf in main ()"
```

### Segmentation fault in /pytorch/aten/src/ATen/core/ivalue.h:526

[crash-9bd059c1ae85ab9cdb41d786932214d942baa189.zip](https://github.com/pytorch/pytorch/files/11552956/crash-9bd059c1ae85ab9cdb41d786932214d942baa189.zip)

```asan
    "==8528==ERROR: AddressSanitizer: SEGV on unknown address (pc 0x00000e55d97e bp 0x7fffffffb4d0 sp 0x7fffffffb360 T0)",
    "==8528==The signal is caused by a READ memory access.",
    "==8528==Hint: this fault was caused by a dereference of a high value address (see register values below).  Disassemble the provided pc to learn which register was used.",
    "    #0 0xe55d97e in c10::IValue::isTuple() const /pytorch/aten/src/ATen/core/ivalue.h:526:26",
    "    #1 0xe55d97e in torch::distributed::rpc::GloballyUniqueId::fromIValue(c10::IValue const&) /pytorch/torch/csrc/distributed/rpc/types.cpp:60:3",
    "    #2 0xe4b04fb in torch::distributed::rpc::ScriptRemoteCall::fromIValues(std::vector<c10::IValue, std::allocator<c10::IValue> >&) /pytorch/torch/csrc/distributed/rpc/script_remote_call.cpp:33:20",
    "    #3 0xe4b1ed5 in torch::distributed::rpc::ScriptRemoteCall::fromMessage(torch::distributed::rpc::Message const&) /pytorch/torch/csrc/distributed/rpc/script_remote_call.cpp:80:10",
    "    #4 0xe55f8a0 in torch::distributed::rpc::deserializeRequest(torch::distributed::rpc::Message const&) /pytorch/torch/csrc/distributed/rpc/utils.cpp:108:14",
    "    #5 0x6120a8 in LLVMFuzzerTestOneInput /message_deserialize.cc:192:27",
    "    #6 0x535de1 in fuzzer::Fuzzer::ExecuteCallback(unsigned char const*, unsigned long) /llvm-project-llvmorg-14.0.6/compiler-rt/lib/fuzzer/FuzzerLoop.cpp:611:15",
    "    #7 0x51fcec in fuzzer::RunOneTest(fuzzer::Fuzzer*, char const*, unsigned long) /llvm-project-llvmorg-14.0.6/compiler-rt/lib/fuzzer/FuzzerDriver.cpp:324:6",
    "    #8 0x525a3b in fuzzer::FuzzerDriver(int*, char***, int (*)(unsigned char const*, unsigned long)) /llvm-project-llvmorg-14.0.6/compiler-rt/lib/fuzzer/FuzzerDriver.cpp:860:9",
    "    #9 0x54eff2 in main /llvm-project-llvmorg-14.0.6/compiler-rt/lib/fuzzer/FuzzerMain.cpp:20:10",
    "    #10 0x7ffff7a37082 in __libc_start_main (/lib/x86_64-linux-gnu/libc.so.6+0x24082) (BuildId: 1878e6b475720c7c51969e69ab2d276fae6d1dee)",
    "    #11 0x51a60d in _start (/message_deserialize_fuzz+0x51a60d)",
    "",
    "AddressSanitizer can not provide additional info.",
    "SUMMARY: AddressSanitizer: SEGV /pytorch/aten/src/ATen/core/ivalue.h:526:26 in c10::IValue::isTuple() const",
    "==8528==ABORTING"
```
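All of these crashes boil down to sizes read from untrusted input being fed straight into allocations or indexing; a minimal sketch of the kind of guard that prevents this (illustrative only, not the actual unpickler code):

```cpp
#include <cstddef>
#include <stdexcept>
#include <string>

// Validate an untrusted length against what the stream can actually provide
// before reserving memory for it.
std::string readBytesChecked(const char* data, size_t remaining, size_t length) {
  if (length > remaining) {
    throw std::runtime_error("truncated or corrupt archive: declared length too large");
  }
  return std::string(data, length);  // safe: length is bounded by the input size
}
```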
Pull Request resolved: https://github.com/pytorch/pytorch/pull/102156
Approved by: https://github.com/ezyang
2023-05-27 04:23:01 +00:00
a34a9c3471 Perf: Apply more clang-tidy fixups to torch headers (#91445)
Applies some more fixes to headers that may have been missed before, for performance optimization. cc @jgong5 @mingfeima @XiaobingSuper @sanchitintel @ashokei @jingxu10 @EikanWang @ezyang since this is more in the series of the clang-tidy fixups.

This PR fixes 3 main issues:
1. Use emplacement more in headers.
2. Avoid unnecessary copies and use const references where possible.
3. Default special member functions where possible, to make them potentially trivial and more readable.

There is also one change in this PR that tries to prevent unnecessary math promotion; the rest of those changes are in another PR. A short illustrative sketch of the first three items follows.
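A small illustrative sketch of the first three kinds of change (hypothetical type, not a real torch header):

```cpp
#include <string>
#include <utility>
#include <vector>

struct Item {
  std::string name;

  // 3. Default special member functions where possible: clearer, and keeps the
  //    type as trivial as it can be.
  Item() = default;
  explicit Item(std::string n) : name(std::move(n)) {}
};

// 2. Take a const reference instead of copying the vector argument.
void process(const std::vector<Item>& items);

void build(std::vector<Item>& items) {
  // 1. Emplace to construct in place instead of push_back(Item("example")).
  items.emplace_back("example");
}
```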
Pull Request resolved: https://github.com/pytorch/pytorch/pull/91445
Approved by: https://github.com/ezyang
2022-12-29 23:43:45 +00:00
69e048b090 List of SymInt rebase on master
Fixes #ISSUE_NUMBER

Pull Request resolved: https://github.com/pytorch/pytorch/pull/75115
Approved by: https://github.com/ezyang
2022-04-20 02:09:55 +00:00
d026057bb3 [PyTorch] Update SmallVector from LLVM (#69110)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/69110

I pasted the current LLVM code, reapplied the modifications listed in the code comments, and caught a few more in the diff/build process. The trivially-copyable detection is different now; if gcc builds fail, we will try reverting to C10_IS_TRIVIALLY_COPYABLE or copying what LLVM is doing.

The motivation for this change is that, as noted in an existing comment, C10_IS_TRIVIALLY_COPYABLE did the wrong thing for std::unique_ptr, which caused problems with D32454856 / #68412.
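A standalone illustration of the detection problem mentioned above (not the SmallVector internals):

```cpp
#include <memory>
#include <type_traits>

// std::unique_ptr owns memory and must not be byte-copied, so any "is
// trivially copyable" check that says otherwise lets a SmallVector-style
// container memcpy owning elements -- which is exactly the bug described.
static_assert(!std::is_trivially_copyable<std::unique_ptr<int>>::value,
              "unique_ptr is not trivially copyable");
static_assert(std::is_trivially_copyable<int*>::value,
              "raw pointers, by contrast, are");
```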

ghstack-source-id: 145327773

Test Plan: CI

Reviewed By: bhosmer, mruberry

Differential Revision: D32733017

fbshipit-source-id: 9452ab90328e3fdf457aad23a26f2f6835b0bd3d
2021-12-10 11:57:19 -08:00
6707dfeefb Remove 9.2 related macros for CONSTEXPR (#65066)
Summary:
Removes C10_HOST_CONSTEXPR_EXCEPT_CUDA92 references in the code

Pull Request resolved: https://github.com/pytorch/pytorch/pull/65066

Reviewed By: driazati

Differential Revision: D31022520

Pulled By: janeyx99

fbshipit-source-id: f02cdc6caba5b48405575242921f5845ff18f729
2021-09-17 17:31:20 -07:00
65f33ec85c Follow-up fix for compilation error on CUDA92 (#60287)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/60287

Follow up of #60017

Test Plan: Imported from OSS

Reviewed By: gchanan

Differential Revision: D29236208

Pulled By: ejguan

fbshipit-source-id: f1acf9630b45fea8cbdf7d64e47661643d0a52b8
2021-06-21 13:29:11 -07:00
691183bb74 Fix compile failure on CUDA92 (#60017)
Summary:
Fixes https://github.com/pytorch/pytorch/issues/60016

For CUDA 9.2:
- OptionalBase was not checking `is_arrayref`
- constexpr functions apparently cannot raise exceptions on CUDA 9.2

Pull Request resolved: https://github.com/pytorch/pytorch/pull/60017

Reviewed By: malfet

Differential Revision: D29139515

Pulled By: ejguan

fbshipit-source-id: 4f4f6d9fe6a5f2eadf913de0a9781cc9f2e6ac6f
2021-06-16 12:23:08 -07:00
1798ff02e4 [PyTorch] Optimize c10::optional<ArrayRef<T>> for size (#59333)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/59333

Code comment should explain this in sufficient detail. In brief, making it 16 bytes should get it to be passed in registers.
ghstack-source-id: 130631329

Test Plan: Updated optional_test and added static_assert in Optional.cpp.
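A hedged sketch of the kind of size reasoning and static_assert referred to above (illustrative stand-in type, not the actual Optional.cpp code):

```cpp
#include <cstddef>
#include <cstdint>
#include <optional>

// On a typical 64-bit ABI an ArrayRef is {pointer, size} = 16 bytes. A
// specialized optional can reuse a null data pointer as the "empty" state and
// stay at 16 bytes, small enough to be passed in registers.
struct FakeArrayRef {
  const int64_t* data = nullptr;
  size_t length = 0;
};

static_assert(sizeof(FakeArrayRef) == 16, "pointer + size on a 64-bit target");
// A generic optional pays for a separate engaged flag plus padding:
static_assert(sizeof(std::optional<FakeArrayRef>) > sizeof(FakeArrayRef),
              "generic optional is strictly larger");
```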

Reviewed By: ezyang

Differential Revision: D28843027

fbshipit-source-id: 3029f05e03a9f04ca7337962e7770cdeb9a608d9
2021-06-07 11:35:17 -07:00
2a78f6376c TensorIterator: Reduce serial_for_each static overhead (#58909)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/58909

Test Plan: Imported from OSS

Reviewed By: mruberry

Differential Revision: D28776507

Pulled By: ngimel

fbshipit-source-id: 4f0283d03b26aa5785b687b78d77e6b0efcbaf65
2021-05-30 21:08:54 -07:00
44cc873fba [PyTorch] Autoformat c10 (#56830)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/56830

Opt into formatting on GitHub and format everything. This is a trial run before turning on formatting for more and eventually all of the codebase.

Test Plan: CI

Reviewed By: zertosh

Differential Revision: D27979080

fbshipit-source-id: a80f0c48691c08ae8ca0af06377b87e6a2351151
2021-04-30 21:23:28 -07:00
be51de4047 Minor doc improvement(?) on ArrayRef::slice (#50541)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/50541

I found the current phrasing to be confusing

Test Plan: N/A

Reviewed By: ngimel

Differential Revision: D25909205

fbshipit-source-id: 483151d01848ab41d57b3f3b3775ef69f1451dcf
2021-01-14 18:09:34 -08:00
c21f89970f Remove c++14-conditional constexpr (#30916)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30916

These macros said "make it constexpr if we're in C++14". Since we're now always C++14, we can just say "constexpr" instead.
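A hedged sketch of what such a macro looked like (illustrative name, not the exact c10 definition):

```cpp
// Conditionally constexpr, back when C++14 support could not be assumed.
#if __cplusplus >= 201402L
#define MY_CPP14_CONSTEXPR constexpr
#else
#define MY_CPP14_CONSTEXPR
#endif

MY_CPP14_CONSTEXPR int twice(int x) { return 2 * x; }  // before

// With the toolchain floor now at C++14, plain constexpr does the job.
constexpr int twice_now(int x) { return 2 * x; }  // after
```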
ghstack-source-id: 96369584

Test Plan: waitforsandcastle

Differential Revision: D18869635

fbshipit-source-id: f41751e4e26fad6214ec3a98db2d961315fd73ff
2020-01-07 16:40:11 -08:00
647569e546 get rid of choco install (#30897)
Summary:
7zip and cmake are part of the base image, so there is no need to re-install them. Removing the install step makes build/test more stable.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30897

Differential Revision: D19232961

Pulled By: mingbowan

fbshipit-source-id: fa3bbd1325839a2a977bf13fdbd97fda43793b8d
2019-12-27 13:12:04 -08:00
f0243ea712 Use [[deprecated]] instead of C10_DEPRECATED (#30918)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/30918

This is a C++14 feature we can use now
ghstack-source-id: 95811482

Test Plan: waitforsandcastle

Differential Revision: D18869636

fbshipit-source-id: b5b3d78b61b6ceb2deda509131f8502e95b1d057
2019-12-17 15:21:34 -08:00
5c67b01467 Switch internal CUDA build to C++14 (#26757)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/26757

This doesn't switch any open source builds or CI.
The internal fbcode build is C++17 already for quite some time, but in CUDA code, we had it restricted to C++11.
This diff changes that to C++14.

Because this doesn't change anything open source, the risk of this is low.
ghstack-source-id: 90728524

Test Plan: waitforsandcastle

Differential Revision: D17558142

fbshipit-source-id: 9cfd47e38e71d5a2fdae2f535c01f281bf007d9a
2019-09-26 14:57:21 -07:00
e8cc1fddb7 Fix cpp_extensions test failures with GCC 9.1 from ArrayRef(initializer_list) (#25384)
Summary:
These are test failures due to `-Werror` in `test/cpp_extensions/setup.py` that look like:
```
$ python test/run_test.py -i cpp_extensions
Test executor: ['/home/rgommers/anaconda3/envs/pytorch-gcc91/bin/python']
Running test_cpp_extensions ... [2019-08-29 02:19:03.421117]
running install
running build
running build_py
creating build
creating build/lib.linux-x86_64-3.6
creating build/lib.linux-x86_64-3.6/torch_test_cpp_extension
copying torch_test_cpp_extension/__init__.py -> build/lib.linux-x86_64-3.6/torch_test_cpp_extension
running build_ext
building 'torch_test_cpp_extension.cpp' extension
creating build/temp.linux-x86_64-3.6
gcc -pthread -B /home/rgommers/anaconda3/envs/pytorch-gcc91/compiler_compat -Wl,--sysroot=/ -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -I/home/rgommers/code/pytorch/torch/include -I/home/rgommers/code/pytorch/torch/include/torch/csrc/api/include -I/home/rgommers/code/pytorch/torch/include/TH -I/home/rgommers/code/pytorch/torch/include/THC -I/home/rgommers/anaconda3/envs/pytorch-gcc91/include/python3.6m -c extension.cpp -o build/temp.linux-x86_64-3.6/extension.o -g -Werror -DTORCH_API_INCLUDE_EXTENSION_H -DTORCH_EXTENSION_NAME=cpp -D_GLIBCXX_USE_CXX11_ABI=1 -std=c++11
cc1plus: warning: command line option ‘-Wstrict-prototypes’ is valid for C/ObjC but not for C++
In file included from /home/rgommers/code/pytorch/torch/include/c10/core/MemoryFormat.h:5,
                 from /home/rgommers/code/pytorch/torch/include/ATen/core/Tensor.h:5,
                 from /home/rgommers/code/pytorch/torch/include/ATen/Tensor.h:2,
                 from /home/rgommers/code/pytorch/torch/include/ATen/Context.h:4,
                 from /home/rgommers/code/pytorch/torch/include/ATen/ATen.h:5,
                 from /home/rgommers/code/pytorch/torch/include/torch/csrc/api/include/torch/types.h:3,
                 from /home/rgommers/code/pytorch/torch/include/torch/csrc/api/include/torch/data/dataloader_options.h:4,
                 from /home/rgommers/code/pytorch/torch/include/torch/csrc/api/include/torch/data/dataloader/base.h:3,
                 from /home/rgommers/code/pytorch/torch/include/torch/csrc/api/include/torch/data/dataloader/stateful.h:3,
                 from /home/rgommers/code/pytorch/torch/include/torch/csrc/api/include/torch/data/dataloader.h:3,
                 from /home/rgommers/code/pytorch/torch/include/torch/csrc/api/include/torch/data.h:3,
                 from /home/rgommers/code/pytorch/torch/include/torch/csrc/api/include/torch/all.h:4,
                 from /home/rgommers/code/pytorch/torch/include/torch/extension.h:4,
                 from extension.cpp:1:
/home/rgommers/code/pytorch/torch/include/c10/util/ArrayRef.h: In instantiation of ‘constexpr c10::ArrayRef<T>::ArrayRef(const std::initializer_list<_Tp>&) [with T = long int]’:
/home/rgommers/code/pytorch/torch/include/c10/core/TensorImpl.h:1464:34:   required from here
/home/rgommers/code/pytorch/torch/include/c10/util/ArrayRef.h:103:39: error: initializing ‘c10::ArrayRef<long int>::Data’ from ‘std::initializer_list<long int>::begin’ does not extend the lifetime of the underlying array [-Werror=init-list-lifetime]
  103 |       : Data(Vec.begin() == Vec.end() ? static_cast<T*>(nullptr) : Vec.begin()),
/home/rgommers/code/pytorch/torch/include/c10/util/ArrayRef.h: In instantiation of ‘constexpr c10::ArrayRef<T>::ArrayRef(const std::initializer_list<_Tp>&) [with T = unsigned char]’:
/home/rgommers/code/pytorch/torch/include/ATen/NativeFunctions.h:47:1:   required from here
/home/rgommers/code/pytorch/torch/include/c10/util/ArrayRef.h:103:39: error: initializing ‘c10::ArrayRef<unsigned char>::Data’ from ‘std::initializer_list<unsigned char>::begin’ does not extend the lifetime of the underlying array [-Werror=init-list-lifetime]
/home/rgommers/code/pytorch/torch/include/c10/util/ArrayRef.h: In instantiation of ‘constexpr c10::ArrayRef<T>::ArrayRef(const std::initializer_list<_Tp>&) [with T = signed char]’:
/home/rgommers/code/pytorch/torch/include/ATen/NativeFunctions.h:47:1:   required from here
/home/rgommers/code/pytorch/torch/include/c10/util/ArrayRef.h:103:39: error: initializing ‘c10::ArrayRef<signed char>::Data’ from ‘std::initializer_list<signed char>::begin’ does not extend the lifetime of the underlying array [-Werror=init-list-lifetime]
/home/rgommers/code/pytorch/torch/include/c10/util/ArrayRef.h: In instantiation of ‘constexpr c10::ArrayRef<T>::ArrayRef(const std::initializer_list<_Tp>&) [with T = short int]’:
/home/rgommers/code/pytorch/torch/include/ATen/NativeFunctions.h:47:1:   required from here
/home/rgommers/code/pytorch/torch/include/c10/util/ArrayRef.h:103:39: error: initializing ‘c10::ArrayRef<short int>::Data’ from ‘std::initializer_list<short int>::begin’ does not extend the lifetime of the underlying array [-Werror=init-list-lifetime]
/home/rgommers/code/pytorch/torch/include/c10/util/ArrayRef.h: In instantiation of ‘constexpr c10::ArrayRef<T>::ArrayRef(const std::initializer_list<_Tp>&) [with T = int]’:
/home/rgommers/code/pytorch/torch/include/ATen/NativeFunctions.h:47:1:   required from here
/home/rgommers/code/pytorch/torch/include/c10/util/ArrayRef.h:103:39: error: initializing ‘c10::ArrayRef<int>::Data’ from ‘std::initializer_list<int>::begin’ does not extend the lifetime of the underlying array [-Werror=init-list-lifetime]
/home/rgommers/code/pytorch/torch/include/c10/util/ArrayRef.h: In instantiation of ‘constexpr c10::ArrayRef<T>::ArrayRef(const std::initializer_list<_Tp>&) [with T = float]’:
/home/rgommers/code/pytorch/torch/include/ATen/NativeFunctions.h:47:1:   required from here
/home/rgommers/code/pytorch/torch/include/c10/util/ArrayRef.h:103:39: error: initializing ‘c10::ArrayRef<float>::Data’ from ‘std::initializer_list<float>::begin’ does not extend the lifetime of the underlying array [-Werror=init-list-lifetime]
/home/rgommers/code/pytorch/torch/include/c10/util/ArrayRef.h: In instantiation of ‘constexpr c10::ArrayRef<T>::ArrayRef(const std::initializer_list<_Tp>&) [with T = double]’:
/home/rgommers/code/pytorch/torch/include/ATen/NativeFunctions.h:47:1:   required from here
/home/rgommers/code/pytorch/torch/include/c10/util/ArrayRef.h:103:39: error: initializing ‘c10::ArrayRef<double>::Data’ from ‘std::initializer_list<double>::begin’ does not extend the lifetime of the underlying array [-Werror=init-list-lifetime]
/home/rgommers/code/pytorch/torch/include/c10/util/ArrayRef.h: In instantiation of ‘constexpr c10::ArrayRef<T>::ArrayRef(const std::initializer_list<_Tp>&) [with T = bool]’:
/home/rgommers/code/pytorch/torch/include/ATen/NativeFunctions.h:47:1:   required from here
/home/rgommers/code/pytorch/torch/include/c10/util/ArrayRef.h:103:39: error: initializing ‘c10::ArrayRef<bool>::Data’ from ‘std::initializer_list<bool>::begin’ does not extend the lifetime of the underlying array [-Werror=init-list-lifetime]
/home/rgommers/code/pytorch/torch/include/c10/util/ArrayRef.h: In instantiation of ‘constexpr c10::ArrayRef<T>::ArrayRef(const std::initializer_list<_Tp>&) [with T = c10::Half]’:
/home/rgommers/code/pytorch/torch/include/ATen/NativeFunctions.h:47:1:   required from here
/home/rgommers/code/pytorch/torch/include/c10/util/ArrayRef.h:103:39: error: initializing ‘c10::ArrayRef<c10::Half>::Data’ from ‘std::initializer_list<c10::Half>::begin’ does not extend the lifetime of the underlying array [-Werror=init-list-lifetime]
/home/rgommers/code/pytorch/torch/include/c10/util/ArrayRef.h: In instantiation of ‘constexpr c10::ArrayRef<T>::ArrayRef(const std::initializer_list<_Tp>&) [with T = c10::BFloat16]’:
/home/rgommers/code/pytorch/torch/include/ATen/NativeFunctions.h:47:1:   required from here
/home/rgommers/code/pytorch/torch/include/c10/util/ArrayRef.h:103:39: error: initializing ‘c10::ArrayRef<c10::BFloat16>::Data’ from ‘std::initializer_list<c10::BFloat16>::begin’ does not extend the lifetime of the underlying array [-Werror=init-list-lifetime]
cc1plus: all warnings being treated as errors
error: command 'gcc' failed with exit status 1
Traceback (most recent call last):
  File "test/run_test.py", line 438, in <module>
    main()
  File "test/run_test.py", line 430, in main
    raise RuntimeError(message)
RuntimeError: test_cpp_extensions failed!
```

The warnings look valid, the code isn't guaranteed to work (although in practice it does seem to).  Using `std::begin` keeps the underlying array for the `initializer_list` from going out of scope.

Note that the same warning is reported in https://github.com/pytorch/vision/issues/1173#issuecomment-517308733 (Cc ShahriarSS)
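A minimal illustration of the lifetime issue the warning points at (toy type, not the real ArrayRef):

```cpp
#include <cstddef>
#include <initializer_list>

struct IntRef {
  const int* data;
  size_t len;
  // Dangerous pattern: storing il.begin() does not extend the lifetime of the
  // temporary array backing the std::initializer_list.
  IntRef(const std::initializer_list<int>& il)
      : data(il.begin()), len(il.size()) {}
};

IntRef make() {
  // The {1, 2, 3} backing array dies at the end of this full-expression,
  // so the returned IntRef dangles.
  return IntRef({1, 2, 3});
}
```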
Pull Request resolved: https://github.com/pytorch/pytorch/pull/25384

Differential Revision: D17113146

Pulled By: ezyang

fbshipit-source-id: 477c414481fb3664a8cb92728f4111e6317b309e
2019-09-10 09:09:52 -07:00
8b02522b93 Avoid copy in ArrayRef<->vector comparison (#22218)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/22218

-

Differential Revision: D15990763

fbshipit-source-id: 53c98f915fadc8a65aea896c80292d5804d967a4
2019-06-26 13:36:21 -07:00
b527e48588 Use c10::List (#21177)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21177

- Integrate c10::ListPtr into IValue and the c10 dispatcher.
- Streamline conversion to/from IValue. Before, we had IValue::to<> and kernel_functor.h had its own ivalue_to_arg_type and return_type_to_ivalue. They are now unified. Also, this means that nested types like Dicts of Lists of Optional of Dict of ... do work as expected now

Differential Revision: D15476433

fbshipit-source-id: bde9df80df20091aa8e6ae17ba7e90abd149b954
2019-06-12 13:58:24 -07:00
5b33698776 Fix build error in c10 on Windows (#21005)
Summary:
Targets https://github.com/pytorch/pytorch/issues/20635#issuecomment-496265510
Reference:
1. https://docs.microsoft.com/en-us/cpp/preprocessor/predefined-macros?view=vs-2015#microsoft-specific-predefined-macros
2. https://docs.microsoft.com/en-us/cpp/cpp/deprecated-cpp?view=vs-2019
Pull Request resolved: https://github.com/pytorch/pytorch/pull/21005

Differential Revision: D15543134

Pulled By: ezyang

fbshipit-source-id: f32709b018a7de651cb31575fc6117bfc4dd3bd1
2019-06-03 09:53:24 -07:00
73a97387c1 Replace AT_CHECK with TORCH_CHECK [shard 9/10]
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/20435

Reviewed By: jerryzh168

Differential Revision: D15318877

fbshipit-source-id: 4d83571187ea14a604fef83ac355d328b46d93e1
2019-05-15 08:05:59 -07:00
ce969c0bc4 Add tests for argument types (#19290)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/19290

Add test cases for the supported argument types
And TODOs for some unsupported ones that we might want to support.

Reviewed By: dzhulgakov

Differential Revision: D14931920

fbshipit-source-id: c47bbb295a54ac9dc62569bf5c273368c834392c
2019-04-18 17:20:13 -07:00
1fc05bd285 Mark IntList as deprecated; add C10_DEPRECATED_USING (#16824)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16824

There was a big wooly yak getting the deprecated macros to work.
Gory details are in Deprecated.h
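A hedged sketch of the kind of deprecated alias this enables (illustrative namespace, not the exact Deprecated.h macro):

```cpp
#include <cstdint>

#include <c10/util/ArrayRef.h>

namespace demo {
using IntArrayRef = c10::ArrayRef<int64_t>;

// The old name keeps compiling but warns at every use site.
using IntList [[deprecated("Use IntArrayRef instead")]] = IntArrayRef;
}  // namespace demo
```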

Reviewed By: smessmer

Differential Revision: D13978429

fbshipit-source-id: f148e5935ac36eacc481789d22c7a9443164fe95
2019-02-13 08:51:20 -08:00
4404762d7d Rename IntList to IntArrayRef. (#16751)
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/16751

This was made more complicated by the fact that ivalue::IntList is a thing. So I had to fix all of the sites where we were referring to IValue post facto.

The following codemods were run, in this order:

```
codemod -m -d . --extensions cc,cpp,cu,cuh,h,hpp,py,cwrap,yaml,in IntList IntArrayRef
codemod -m -d . --extensions cc,cpp,cu,cuh,h,hpp,py,cwrap,yaml,in IntArrayRef::create IntList::create
codemod -m -d . --extensions cc,cpp,cu,cuh,h,hpp,py,cwrap,yaml,in ivalue::IntArrayRef ivalue::IntList
codemod -m -d . --extensions cc,cpp,cu,cuh,h,hpp,py,cwrap,yaml,in Tag::IntArrayRef Tag::IntList
codemod -m -d . --extensions cc,cpp,cu,cuh,h,hpp,py,cwrap,yaml,in isIntArrayRef isIntList
codemod -m -d . --extensions cc,cpp,cu,cuh,h,hpp,py,cwrap,yaml,in toIntArrayRef toIntList
codemod -m -d . --extensions cc,cpp,cu,cuh,h,hpp,py,cwrap,yaml,in 'Shared<IntArrayRef>' 'Shared<IntList>'
codemod -m -d . --extensions cc,cpp,cu,cuh,h,hpp,py,cwrap,yaml,in 'intrusive_ptr<IntArrayRef>' 'intrusive_ptr<IntList>'
```

Some manual fixups were done afterwards; they can be reviewed separately
at https://github.com/pytorch/pytorch/pull/16752

Reviewed By: dzhulgakov

Differential Revision: D13954363

fbshipit-source-id: b5c40aacba042402155a2f5a229fa6db7992ac64
2019-02-05 14:54:34 -08:00
0478d32cb8 Move AlignOf, SmallVector and ArrayRef to c10.
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/13916

Reviewed By: smessmer

Differential Revision: D13046722

fbshipit-source-id: 1583d3170d60e22f0a535cd1fd56bdf928186f5d
2018-11-14 11:13:16 -08:00