Mirror of https://github.com/pytorch/pytorch.git (synced 2025-10-20 12:54:11 +08:00)
pytorch/test/cpp/aoti_inference (branch: main)
Latest commit: 2291199e9b by Mu-Chu Lee
[AOTInductor] Use CudaCachingAllocator for memory allocation (#162893)

Summary:
Use c10::CudaCachingAllocator for AOTInductor's initial constant buffer allocation.

Test Plan:
Activate test under test/cpp/aoti_inference/test.cpp

Pull Request resolved: https://github.com/pytorch/pytorch/pull/162893
Approved by: https://github.com/desertfire
2025-09-17 17:08:20 +00:00
Files:

aoti_custom_class.cpp — [AOTI] Fix #140546 and support AOTI package load for Intel GPU. (#140664) — 2024-12-10 05:05:08 +00:00
aoti_custom_class.h — …
CMakeLists.txt — [AOTI] Fix AOT inductor CMake build dependency order (#157557) — 2025-07-04 14:33:36 +00:00
compile_model.py — [AOTI] Fix test_aoti_inference CPU build issue (#134675) — 2024-08-28 17:42:19 +00:00
generate_lowered_cpu.py — [AOTInductor] Add standalone test for compilation from ExportedProgram (#142327) — 2024-12-10 06:50:09 +00:00
standalone_compile.sh — [AOTInductor] Add standalone test for compilation from ExportedProgram (#142327) — 2024-12-10 06:50:09 +00:00
standalone_test.cpp — [AOTInductor] Add standalone test for compilation from ExportedProgram (#142327) — 2024-12-10 06:50:09 +00:00
test.cpp — [AOTInductor] Use CudaCachingAllocator for memory allocation (#162893) — 2025-09-17 17:08:20 +00:00
test.py — [AOTInductor] Add test for enabling CUDACachingAllocator for AOTInductor's Weight (#159279) — 2025-07-29 02:52:10 +00:00