Mirror of https://github.com/pytorch/pytorch.git (synced 2025-10-20 21:14:14 +08:00)
Improve process_group_agent() serialization speed (#29785)
Summary: Pull Request resolved: https://github.com/pytorch/pytorch/pull/29785

TL;DR: This change improves process_group's serialization speed:

- Serialize_Tensor64: 12.38us -> 1.99us (~-84%)
- Deserialize_Tensor64: 33.89us -> 5.62us (~-84%)
- Serialize_Tensor1M: 525.74us -> 285.43us (~-45%)
- Deserialize_Tensor1M: 892.61us -> 273.68us (~-70%)

After speaking with the JIT team, we reached consensus that torch::save()/load() have fairly high overhead for RPC serialization; they are mostly intended for persistent on-disk data. In particular, for large tensors, 35% of the time is spent in CRC checking, even with the fb-side changes that substitute 40x-faster SSE-accelerated CRC checking. For small tensors, the zip-container overhead is considerable, as is the overhead of lexing/parsing an embedded Python text program for each RPC.

The JIT team encouraged us to use jit::pickler, with the WriteableTensorData way of outputting result tensors (not the default side-tensor table, and not pickling the actual tensor data). This pickles only some tensor metadata and gives us tensor blobs that we can mindlessly blit over the wire (they are copied to CPU memory if needed).

There is as yet no standardized container format for the pickled data (jit::pickle_save() is checked in, but it's experimental and no load function is provided yet), so they encouraged us to use something sensible for now and possibly revisit later. For now, I made the directory headers slightly HTTP-inspired.

Note that serialization is just one component of the pipeline, but that said, we also see reasonable reductions in end-to-end echo times (noisier):

- ProcessGroupAgent_Echo(Tensor_Small): 855.25us -> 492.65us (~-42%)
- ProcessGroupAgent_Echo(Tensor_1M): 10.82ms -> 6.94ms (~-35%)
- ProcessGroupAgent_Echo(Small_NoTensor): 688.82us -> 301.72us (~-56%)
- ProcessGroupAgent_Echo(1MB_NoTensor): 4.65ms -> 3.71ms (~-20%)

I moved the "wire serialization" logic to a separate file to assist with unit testing.
ghstack-source-id: 94694682

Test Plan:
buck test mode/dev-nosan caffe2/test/cpp/api:serialize
buck test mode/dev-nosan caffe2/test/...

Differential Revision: D18493938

fbshipit-source-id: 07ddfe87dbe56472bc944f7d070627052c94a8f4
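The wire-format idea from the commit message can be illustrated with a small stand-alone sketch. This is not PyTorch's actual implementation: plain Python `pickle` stands in for jit::pickler, and the `"name length"` header layout is a made-up, HTTP-inspired stand-in for the real directory headers. The point is the split: pickle only the small metadata, then append each raw tensor blob as-is so the receiver can blit it back without parsing.

```python
import pickle

def wire_serialize(meta: dict, blobs: list) -> bytes:
    # Pickle only the small metadata; raw buffers are appended verbatim.
    payload = pickle.dumps(meta)
    out = [b"meta %d\n" % len(payload), payload]  # "name length\n" header per section
    for i, blob in enumerate(blobs):
        out.append(b"tensor%d %d\n" % (i, len(blob)))
        out.append(blob)
    return b"".join(out)

def wire_deserialize(data: bytes):
    meta, blobs = None, []
    pos = 0
    while pos < len(data):
        nl = data.index(b"\n", pos)
        name, length = data[pos:nl].split(b" ")
        start = nl + 1
        chunk = data[start:start + int(length)]  # length-prefixed, so payload may contain '\n'
        if name == b"meta":
            meta = pickle.loads(chunk)
        else:
            blobs.append(chunk)
        pos = start + int(length)
    return meta, blobs

# Round-trip a fake "tensor": metadata dict plus a raw 24-byte buffer.
msg = wire_serialize({"dtype": "float32", "shape": [2, 3]}, [b"\x00" * 24])
meta, blobs = wire_deserialize(msg)
```

The real code pickles jit IValues and uses WriteableTensorData for the payload sections; this sketch only shows the header/payload layout concept and why large buffers avoid the pickler (and any CRC/zip machinery) entirely.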
Committed by Facebook GitHub Bot
parent 1350b99de4
commit f4e7e9039d
test/cpp/rpc/CMakeLists.txt (new file, 27 lines)
@@ -0,0 +1,27 @@
set(TORCH_RPC_TEST_DIR "${TORCH_ROOT}/test/cpp/rpc")
set(TORCH_RPC_TEST_SOURCES
  ${TORCH_ROOT}/test/cpp/common/main.cpp
  ${TORCH_RPC_TEST_DIR}/test_wire_serialization.cpp
)

add_executable(test_cpp_rpc ${TORCH_RPC_TEST_SOURCES})
target_include_directories(test_cpp_rpc PRIVATE ${ATen_CPU_INCLUDE})
target_link_libraries(test_cpp_rpc PRIVATE torch gtest)

if (USE_CUDA)
  target_link_libraries(test_cpp_rpc PRIVATE
    ${CUDA_LIBRARIES}
    ${CUDA_NVRTC_LIB}
    ${CUDA_CUDA_LIB}
    ${TORCH_CUDA_LIBRARIES})

  target_compile_definitions(test_cpp_rpc PRIVATE "USE_CUDA")
endif()

if (INSTALL_TEST)
  install(TARGETS test_cpp_rpc DESTINATION bin)
  # Install PDB files for MSVC builds
  if (MSVC AND BUILD_SHARED_LIBS)
    install(FILES $<TARGET_PDB_FILE:test_cpp_rpc> DESTINATION bin OPTIONAL)
  endif()
endif()