# Motivation
This PR enables `_int_mm` on Intel GPU. `_int_mm` is used by int8 quantization in torchao.

# Model Test Result
We ran meta-llama/Llama-3.1-8B-Instruct on Intel GPU and A100 using torchao int8 dynamic quantization, with the following configuration:
- Precision: torch.bfloat16
- Quantization configuration: Int8DynamicActivationInt8WeightConfig
- Dataset: wikitext

Result: the perplexity values on Intel GPU and A100 are 9.582953453063965 and 9.57755184173584, respectively.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/157769
Approved by: https://github.com/EikanWang, https://github.com/desertfire