mirror of https://github.com/deepspeedai/DeepSpeed.git (synced 2025-10-21 08:43:50 +08:00)
While adding onebit optimizer support for XPU devices, we noticed that across different accelerators, the main difference in the implementation of `compressed_allreduce` lies in `packbits` and `unpackbits`: CUDA uses CuPy, and NPU uses torch_npu. Instead of replacing these with XPU-only functions, we provide a CompressedBackend that performs the `compressed_allreduce` work and lets users plug in their own packbits/unpackbits kernels; this gives a general path for all kinds of accelerators. In this PR, we:

1. Add CompressedBackend for OnebitAdam, OnebitLamb and ZeroOneAdam
2. Add an XPU implementation of packbits/unpackbits in SYCL, built by PackbitsBuilder
3. Add tests for onebit with CompressedBackend

---------

Co-authored-by: Olatunji Ruwase <olruwase@microsoft.com>
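To illustrate what packbits/unpackbits contribute to 1-bit compression, here is a minimal NumPy sketch (not the DeepSpeed API; `compress_signs` and `decompress_signs` are hypothetical helper names). Each element is reduced to its sign bit, and `packbits` stores eight such bits per byte, which is the bit-packing step that each accelerator backend (CuPy, torch_npu, or the SYCL kernel added here) implements with its own kernel.

```python
import numpy as np

def compress_signs(tensor_flat):
    # 1-bit compression: keep only the sign of each element.
    # packbits stores 8 sign flags per byte (32x smaller than float32).
    signs = tensor_flat > 0
    packed = np.packbits(signs)
    return packed, tensor_flat.size

def decompress_signs(packed, numel, scale=1.0):
    # unpackbits restores the flags; map {0, 1} -> {-1, +1} and rescale.
    signs = np.unpackbits(packed, count=numel).astype(np.float32)
    return (signs * 2.0 - 1.0) * scale

x = np.array([0.5, -1.2, 3.0, -0.1], dtype=np.float32)
packed, numel = compress_signs(x)
restored = decompress_signs(packed, numel)
# restored -> [1.0, -1.0, 1.0, -1.0]
```

In the real optimizers the error between the original tensor and its sign approximation is fed back into the next step (error compensation); this sketch shows only the pack/unpack round trip that the per-accelerator kernels provide.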
27 lines
626 B
Python
# Copyright (c) Microsoft Corporation.
# SPDX-License-Identifier: Apache-2.0

# DeepSpeed Team

from .builder import SYCLOpBuilder


class PackbitsBuilder(SYCLOpBuilder):
    BUILD_VAR = "DS_BUILD_PACK_BITS"
    NAME = "pack_bits"

    def __init__(self):
        super().__init__(name=self.NAME)

    def absolute_name(self):
        return f'deepspeed.ops.{self.NAME}_op'

    def sources(self):
        return ['csrc/xpu/packbits/packing.cpp']

    def include_paths(self):
        return ['csrc/xpu/includes']

    def cxx_args(self):
        args = super().cxx_args()
        return args + self.version_dependent_macros()