Turn fbgemm off by default for pytorch (#14048)

Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/14048

Setting USE_FBGEMM to OFF by default until we figure out how to properly separate the AVX2 code; see [this issue](https://github.com/pytorch/pytorch/issues/13993). PyTorch can still be compiled with FBGEMM by setting USE_FBGEMM=ON.
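As an illustrative note (the exact invocation depends on how the build is driven): the flag can be flipped back on either at CMake configure time via `-DUSE_FBGEMM=ON`, or, when building through `setup.py`, by exporting `USE_FBGEMM=ON` in the environment so that the flag check added below picks it up.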

Reviewed By: jspark1105

Differential Revision: D13090454

fbshipit-source-id: 6e0e92612e4362a306e376df3dc33e8edeb066e9
commit f66cb02016 (parent f17b2fdf1b)
Author: Daya S Khudia, 2018-11-15 18:17:34 -08:00
Committed by: Facebook Github Bot

2 changed files with 8 additions and 2 deletions

@@ -84,7 +84,7 @@ option(CAFFE2_STATIC_LINK_CUDA "Statically link CUDA libraries" OFF)
 cmake_dependent_option(
   USE_CUDNN "Use cuDNN" ON
   "USE_CUDA" OFF)
-option(USE_FBGEMM "Use FBGEMM (quantized 8-bit server operators)" ON)
+option(USE_FBGEMM "Use FBGEMM (quantized 8-bit server operators)" OFF)
 option(USE_FFMPEG "Use ffmpeg" OFF)
 option(USE_GFLAGS "Use GFLAGS" ON)
 option(USE_GLOG "Use GLOG" ON)

@@ -1,6 +1,12 @@
 from .env import check_env_flag
-
+USE_FBGEMM = False
 if check_env_flag('NO_FBGEMM'):
     USE_FBGEMM = False
 else:
     USE_FBGEMM = True
+
+#Enable FBGEMM if explicitly enabled
+if check_env_flag('USE_FBGEMM'):
+    USE_FBGEMM = True
+else:
+    USE_FBGEMM = False
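
For context, the new logic hinges on `check_env_flag`, imported above via `from .env import check_env_flag`. Below is a minimal runnable sketch of the behavior, assuming the helper accepts the usual truthy spellings (an assumption; the real definition lives in the setup helpers package and is not shown in this diff):

```python
import os

# Sketch of the helper imported via `from .env import check_env_flag`;
# the accepted truthy spellings here are assumed, not taken from source.
def check_env_flag(name, default=''):
    return os.getenv(name, default).upper() in ['1', 'ON', 'YES', 'TRUE', 'Y']

# After this commit, FBGEMM stays off unless the environment opts in:
os.environ.pop('USE_FBGEMM', None)
print(check_env_flag('USE_FBGEMM'))   # False -> USE_FBGEMM remains False

os.environ['USE_FBGEMM'] = 'ON'
print(check_env_flag('USE_FBGEMM'))   # True  -> USE_FBGEMM becomes True
```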