PyTorch Benchmarks
This folder contains scripts that produce reproducible timings of various PyTorch features.
It also provides mechanisms to compare PyTorch with other frameworks.
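As a quick illustration of what a reproducible timing looks like, the sketch below uses torch.utils.benchmark.Timer to time a single matrix multiply. The shapes and min_run_time value are arbitrary choices for illustration and are not taken from any suite in this folder.

import torch
from torch.utils.benchmark import Timer

# Illustrative inputs; the shapes here are arbitrary.
a = torch.randn(1024, 1024)
b = torch.randn(1024, 1024)

timer = Timer(
    stmt="torch.mm(a, b)",
    globals={"a": a, "b": b, "torch": torch},
)

# blocked_autorange() keeps running the statement until at least min_run_time
# seconds of measurements have been collected, which gives more stable numbers
# than a single timing.
print(timer.blocked_autorange(min_run_time=1.0))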
Setup environment
Make sure you are on a machine with CUDA available, then install torchvision and PyTorch in the following order:
# Install torchvision. It comes with the pytorch stable release binary
conda install pytorch torchvision -c pytorch
# Install the latest pytorch master from source.
# It should supersede the installation from the release binary.
cd $PYTORCH_HOME
python setup.py build develop
# Check the pytorch installation version
python -c "import torch; print(torch.__version__)"
Benchmark List
Please refer to each subfolder for the individual benchmark suites.