mirror of
https://github.com/pytorch/pytorch.git
synced 2025-10-20 21:14:14 +08:00
Add more child links to benchmark readme (#104627)
Fixes #104625
Pull Request resolved: https://github.com/pytorch/pytorch/pull/104627
Approved by: https://github.com/drisspg
committed by PyTorch MergeBot
parent db1ac4e29b
commit 69c4314945
@@ -21,6 +21,13 @@ python -c "import torch; print(torch.__version__)"
## Benchmark List
Please refer to each subfolder to discover each benchmark suite. Links are provided where descriptions exist:
* [Fast RNNs](fastrnns/README.md)
* [Dynamo](dynamo/README.md)
* [Functional autograd](functional_autograd_benchmark/README.md)
* [Instruction counts](instruction_counts/README.md)
* [Operator](operator_benchmark/README.md)
* [Overrides](overrides_benchmark/README.md)
* [Sparse](sparse/README.md)
* [Tensor expression](tensorexpr/HowToRun.md)
@@ -24,7 +24,7 @@ For HF and TIMM models, the scripts already install the transformers and timm pa
## Runbook
### Basic Usage
There are a lot of flags in the benchmark runner, and it can be confusing to know which settings to use or what machine to run it on. In order to support apples-to-apples comparison, we have provided the following 'standard' settings in `runner.py`. This script is a wrapper over the common benchmarking infrastructure and simplifies the flags. We will continually update `runner.py` with the latest and most relevant compilers for training and inference. It also provides some graph utilities to visualize and compare results. Some of the example commands are:
**Inference Commands**
* Inference compilers on torchbench models - `python benchmarks/dynamo/runner.py --suites=torchbench --inference --dtypes=float16`
@@ -46,7 +46,7 @@ One could directly call `torchbench.py`, `huggingface.py` or `timm_models.py` wi
* TorchInductor CUDA Graphs Inference - `python benchmarks/dynamo/torchbench.py -dcuda --float32 -n50 --inductor --performance`
**Training Commands**
* TorchScript (with TorchDynamo capture) NVFuser Training - `python benchmarks/dynamo/torchbench.py --float32 -dcuda --training --nvfuser --speedup-dynamo-ts --performance`
* TorchInductor CUDA Graphs Training - `python benchmarks/dynamo/torchbench.py --float32 -dcuda --training --inductor --performance`
The above commands are for torchbench models. You can simply replace `torchbench.py` with `huggingface.py` for HF models, and `timm_models.py` for TIMM models.
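As a concrete illustration of that substitution, the TorchInductor training command above would become the following for the other two suites (paths assumed relative to the PyTorch repo root):

```shell
# Same flags as the torchbench command; only the suite script changes.
python benchmarks/dynamo/huggingface.py --float32 -dcuda --training --inductor --performance
python benchmarks/dynamo/timm_models.py --float32 -dcuda --training --inductor --performance
```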
@@ -1,6 +1,6 @@
# Benchmarking tool for the autograd API
This folder contains a set of self-contained scripts that allow you to benchmark autograd with different common models.
It is designed to run the benchmark before and after your change and will generate a table to share on the PR.
To do so, you can use `functional_autograd_benchmark.py` to run the benchmarks before your change (using as output `before.txt`) and after your change (using as output `after.txt`).
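A minimal sketch of that before/after workflow is below. The `--output` flag and the `compare.py` helper with `--before`/`--after` flags are assumptions; check the scripts' `--help` in this folder for the exact interface.

```shell
# Run the benchmarks on the baseline commit (flag name assumed):
python functional_autograd_benchmark.py --output before.txt
# ...apply and rebuild your change, then run again:
python functional_autograd_benchmark.py --output after.txt
# Generate the comparison table to paste into the PR (helper and flags assumed):
python compare.py --before before.txt --after after.txt
```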