This is one of a series of PRs to update us to PEP 585 (changing `Dict` -> `dict`, `List` -> `list`, etc.). Most of the PRs were completely automated with RUFF as follows:
Since the RUFF UP006 fix is considered "unsafe", we first need to enable unsafe fixes:
```
--- a/tools/linter/adapters/ruff_linter.py
+++ b/tools/linter/adapters/ruff_linter.py
@@ -313,6 +313,7 @@
"ruff",
"check",
"--fix-only",
+ "--unsafe-fixes",
"--exit-zero",
*([f"--config={config}"] if config else []),
"--stdin-filename",
```
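For context, after this change the adapter's ruff invocation looks roughly like the following. This is a minimal sketch, not the actual adapter code; the real `ruff_linter.py` carries more options and error handling, and `run_ruff_fix` is a hypothetical helper name:

```python
import subprocess

def run_ruff_fix(filename: str, source: bytes, config: str = "") -> bytes:
    # Sketch of the adapter's invocation: ruff reads the file from stdin
    # and, with --fix-only, writes the fixed source back to stdout.
    cmd = [
        "ruff",
        "check",
        "--fix-only",
        "--unsafe-fixes",  # newly added: permits "unsafe" fixes such as UP006
        "--exit-zero",
        *([f"--config={config}"] if config else []),
        "--stdin-filename",
        filename,
        "-",
    ]
    return subprocess.run(cmd, input=source, capture_output=True, check=True).stdout
```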
Then we need to tell RUFF to allow UP006 (once all of these PRs have landed, a final PR will make this change permanent):
```
--- a/pyproject.toml
+++ b/pyproject.toml
@@ -40,7 +40,7 @@
[tool.ruff]
-target-version = "py38"
+target-version = "py39"
line-length = 88
src = ["caffe2", "torch", "torchgen", "functorch", "test"]
@@ -87,7 +87,6 @@
"SIM116", # Disable Use a dictionary instead of consecutive `if` statements
"SIM117",
"SIM118",
- "UP006", # keep-runtime-typing
"UP007", # keep-runtime-typing
]
select = [
```
Finally, running `lintrunner -a --take RUFF` fixes up the deprecated uses.
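For illustration, this is the kind of rewrite UP006 performs (a made-up example, not code from this PR):

```python
# Before: deprecated typing aliases (pre-PEP 585)
from typing import Dict, List

def group_by_parity(xs: List[int]) -> Dict[int, List[int]]: ...

# After: builtin generics, valid at runtime on Python 3.9+
def group_by_parity(xs: list[int]) -> dict[int, list[int]]: ...
```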
Pull Request resolved: https://github.com/pytorch/pytorch/pull/145101
Approved by: https://github.com/bobrenjc93
This PR only adds execution of the benchmarks on the current PR and prints the results; follow-up diffs will add checking out head~1, running the benchmarks there, and comparing the two runs.
To access the results, go to the `pr_time_benchmarks` test and inspect the logs; you should see:
```
+ echo 'benchmark results on current PR: '
benchmark results on current PR:
+ cat /var/lib/jenkins/workspace/test/test-reports/pr_time_benchmarks_before.txt
update_hint_regression,instruction_count,27971461254
```
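Each result line is `name,metric,value`, so the planned comparison against a head~1 run reduces to parsing two such files. A minimal sketch of that idea (hypothetical helpers, not part of this PR):

```python
def parse_results(path: str) -> dict[tuple[str, str], int]:
    # Lines look like: update_hint_regression,instruction_count,27971461254
    results = {}
    with open(path) as f:
        for line in f:
            name, metric, value = line.strip().split(",")
            results[(name, metric)] = int(value)
    return results

def compare(before_path: str, after_path: str) -> None:
    before, after = parse_results(before_path), parse_results(after_path)
    for name, metric in sorted(before.keys() & after.keys()):
        delta = after[(name, metric)] - before[(name, metric)]
        print(f"{name} {metric}: {delta:+d} ({delta / before[(name, metric)]:+.2%})")
```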
Pull Request resolved: https://github.com/pytorch/pytorch/pull/131475
Approved by: https://github.com/ezyang
Summary:
This PR adds a `--mode` flag and a script to collect microbenchmarks into a single JSON file. I also added a version check, since benchmarks are expected to evolve; this also turned up a determinism bug in `init_from_variants`. (`set` iteration is not ordered, unlike `dict`.)
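The determinism issue comes down to iteration order; a minimal illustration (not the actual `init_from_variants` code):

```python
# dict preserves insertion order (guaranteed since Python 3.7), while set
# iteration order depends on hashing and can change between interpreter runs
# when hash randomization is on (the default for str keys).
variants = ["add", "mul", "matmul"]
print(list(dict.fromkeys(variants)))  # always: ['add', 'mul', 'matmul']
print(list(set(variants)))            # order is not guaranteed to be stable
```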
Pull Request resolved: https://github.com/pytorch/pytorch/pull/55428
Test Plan:
Run in CI
CC: ngimel wconstab ezyang bhosmer
Reviewed By: mruberry
Differential Revision: D27775284
Pulled By: robieta
fbshipit-source-id: c8c338fedbfb2860df207fe204212a0121ecb006
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/54652
This PR adds a fairly robust runner for the instruction count microbenchmarks. Key features are:
* Timeout and retry. (In rare cases, Callgrind will hang under heavy load; a minimal sketch of this logic follows the list.)
* Robust error handling and keyboard interrupt support.
* Benchmarks are pinned to cores. (Wall times still won't be great, but it's something.)
* Progress printouts, including a rough ETA.
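A minimal sketch of the timeout/retry and core-pinning idea (hypothetical, and far simpler than the actual runner):

```python
import subprocess

def run_pinned_with_retry(cmd, core, timeout_s=600, max_retries=3):
    # Pin the benchmark process to a single core and retry on hangs,
    # since Callgrind occasionally stalls under heavy load.
    pinned = ["taskset", "--cpu-list", str(core), *cmd]
    for attempt in range(1, max_retries + 1):
        try:
            return subprocess.run(
                pinned, timeout=timeout_s, capture_output=True, check=True
            )
        except subprocess.TimeoutExpired:
            print(f"Timed out (attempt {attempt}/{max_retries}); retrying.")
    raise RuntimeError(f"Benchmark timed out {max_retries} times: {cmd}")
```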
Test Plan: Imported from OSS
Reviewed By: pbelevich
Differential Revision: D27537823
Pulled By: robieta
fbshipit-source-id: 699ac907281d28bf7ffa08594253716ca40204ba
Summary:
Pull Request resolved: https://github.com/pytorch/pytorch/pull/53296
Part 1 of the instruction count microbenchmarks. This PR focuses on the benchmark definition machinery (though you can run `main.py` to see it in action). A summary of the system is given in the README.
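For context, the definitions ultimately drive Callgrind collection via `torch.utils.benchmark.Timer`. A standalone example of that underlying API (this assumes Valgrind is installed, and is not the definition machinery itself):

```python
from torch.utils.benchmark import Timer

timer = Timer(
    stmt="x + y",
    setup="import torch; x = torch.ones((4, 4)); y = torch.ones((4, 4))",
)
# Replays the statement under Callgrind and returns instruction counts,
# which are far more deterministic than wall-clock times.
stats = timer.collect_callgrind(number=100)
print(stats.counts(denoise=True))
```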
Test Plan: Imported from OSS
Reviewed By: ngimel
Differential Revision: D26907092
Pulled By: robieta
fbshipit-source-id: 0f61457b3ce89aa59a06bf1f0e7a74ccdbf17090