Fix typos under benchmarks, test, and tools directories (#87975)

This PR fixes typos in `.md` files under the benchmarks, test, and tools directories.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/87975
Approved by: https://github.com/kit1980
Kazuaki Ishizaki
2022-10-29 01:26:15 +00:00
committed by PyTorch MergeBot
parent 18f3db2963
commit 14d5f139d2
5 changed files with 7 additions and 7 deletions


@@ -158,7 +158,7 @@ Benchmark: resnext101_32x8d with batch size 32
```
This compares throughput between `bucket_cap_mb=25` (the default) and
-`bucket_cap_mb=1` on 8 DGX machines with V100 GPUs. It confims that
+`bucket_cap_mb=1` on 8 DGX machines with V100 GPUs. It confirms that
even for a relatively small model on machines with a very fast
interconnect (4x 100Gb InfiniBand per machine), it still pays off to
batch allreduce calls.
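As context for the hunk above (not part of the diff): `bucket_cap_mb` controls how many gradients DDP coalesces into one allreduce. The following is an illustrative, framework-free sketch of that greedy bucketing idea; `bucket_gradients` and its arguments are hypothetical names, not PyTorch API.

```python
# Hypothetical sketch of gradient bucketing, similar in spirit to DDP's
# bucket_cap_mb. Names and signature are illustrative, not PyTorch API.

def bucket_gradients(sizes_bytes, cap_mb=25):
    """Greedily group gradient sizes (bytes) into buckets of at most cap_mb MB.

    A single gradient larger than the cap still gets its own bucket.
    Each bucket corresponds to one batched allreduce call.
    """
    cap = cap_mb * 1024 * 1024
    buckets, current, current_size = [], [], 0
    for size in sizes_bytes:
        if current and current_size + size > cap:
            buckets.append(current)      # flush the full bucket
            current, current_size = [], 0
        current.append(size)
        current_size += size
    if current:
        buckets.append(current)
    return buckets

# Three 400 KB gradients: a 1 MB cap forces two allreduce calls,
# while the 25 MB default batches them into one.
kb = 1024
print(len(bucket_gradients([400 * kb] * 3, cap_mb=1)))   # 2
print(len(bucket_gradients([400 * kb] * 3, cap_mb=25)))  # 1
```

Fewer, larger buckets amortize per-call launch and network latency, which is why batching pays off even on fast interconnects.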


@@ -73,7 +73,7 @@ Timer(
```
Moreover, because `signature` is provided we know that creation of `x` and `w`
-is part of setup, and the overall comptation uses `x` and `w` to produce `y`.
+is part of setup, and the overall computation uses `x` and `w` to produce `y`.
As a result, we can derive TorchScript'd and AutoGrad variants as well. We can
deduce that a TorchScript model will take the form:


@@ -374,7 +374,7 @@ unary_ops_list = op_bench.op_list(
```
#### Part 2. Create Tensors and Add Computation
-In this example, both operators share the same input so we only need to implement one TorchBenchmakrBase subclass.
+In this example, both operators share the same input so we only need to implement one TorchBenchmarkBase subclass.
Every new subclass is required to implement 3 methods:
* `init` is used to create tensors and set the operator name and function. In this example, the parameters to `init` are `M`, `N`, and `op_func` which have been specified in the configurations.
* `forward` includes the operator to be tested and the computation based on the created tensors in `init`. Apart from `self`, the order of the arguments must match the entries specified in `self.inputs`.
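As context for the hunk above (not part of the diff): the `init`/`forward` contract can be sketched without PyTorch. Everything below uses hypothetical names (`BenchmarkBase`, `UnaryOpBenchmark`, plain nested lists standing in for tensors); the real base class is `operator_benchmark`'s `TorchBenchmarkBase`.

```python
# Framework-free sketch of the subclass contract described above.
# Hypothetical names; real code subclasses op_bench.TorchBenchmarkBase
# and creates tensors with torch.

class BenchmarkBase:
    def run(self):
        # The harness calls forward with the entries declared in
        # self.inputs, in order — mirroring the rule that forward's
        # argument order must match self.inputs.
        return self.forward(*self.inputs.values())

class UnaryOpBenchmark(BenchmarkBase):
    def init(self, M, N, op_func):
        # Create the input (a plain M x N nested list here) and record
        # the operator under test and its name.
        self.inputs = {"input_one": [[-1.0] * N for _ in range(M)]}
        self.op_func = op_func
        self.module_name = op_func.__name__

    def forward(self, input_one):
        # Apply the operator elementwise to the input created in init.
        return [[self.op_func(x) for x in row] for row in input_one]

bench = UnaryOpBenchmark()
bench.init(M=2, N=3, op_func=abs)
print(bench.run())  # [[1.0, 1.0, 1.0], [1.0, 1.0, 1.0]]
```

Because both benchmarked operators share the same `init`-created input, a single subclass parameterized by `op_func` covers them all.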


@@ -55,7 +55,7 @@ NOTE: currently Android simulator test does not generate on-the-fly models. Only
## Diagnose failed test
If the simulator test is falling, that means the current change will potentially break a production model. So be careful. The detailed error message can be found in test log. If the change has to be made, make sure it doesn't break existing production models, and update the failed test model as appropriate (see the next section).
-You can also run these tests locally, please see the insturction in android and ios folder. Remember to generate on-the-fly test models if you want to test it locally (but don't commit these models with _temp suffix).
+You can also run these tests locally, please see the instruction in android and ios folder. Remember to generate on-the-fly test models if you want to test it locally (but don't commit these models with _temp suffix).
```
python test/mobile/model_test/gen_test_model.py ios-test
```


@@ -51,7 +51,7 @@ Great, you are ready to run the code coverage tool for the first time! Start fro
```
python oss_coverage.py --run-only=atest
```
-This command will run `atest` binary in `build/bin/` folder and generate reoports over the entire *Pytorch* folder. You can find the reports in `profile/summary`. But you may only be interested in the `aten` folder, in this case, try:
+This command will run `atest` binary in `build/bin/` folder and generate reports over the entire *Pytorch* folder. You can find the reports in `profile/summary`. But you may only be interested in the `aten` folder, in this case, try:
```
python oss_coverage.py --run-only=atest --interest-only=aten
```
@@ -91,9 +91,9 @@ python oss_coverage.py --run-only=atest --interest-only=c10 --summary
**2. Run tests yourself**
-When you are developing a new feature, you may first run the tests yourself to make sure the implementation is all right and then want to learn its coverage. But sometimes the test take very long time and you don't want to wait to run it again when doing code coverage. In this case, you can use these arguments to accerate your development (make sure you build pytorch with the coverage option!):
+When you are developing a new feature, you may first run the tests yourself to make sure the implementation is all right and then want to learn its coverage. But sometimes the test take very long time and you don't want to wait to run it again when doing code coverage. In this case, you can use these arguments to accelerate your development (make sure you build pytorch with the coverage option!):
```
-# run tests when you are devloping a new feature, assume the the test is `test_nn.py`
+# run tests when you are developing a new feature, assume the test is `test_nn.py`
python oss_coverage.py --run-only=test_nn.py
# or you can run it yourself
cd test/ && python test_nn.py