Fix typo under docs directory and RELEASE.md (#85896)

This PR fixes typos in rst files under the docs directory and `RELEASE.md`.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/85896
Approved by: https://github.com/kit1980
This commit is contained in:
Kazuaki Ishizaki
2022-09-29 21:41:59 +00:00
committed by PyTorch MergeBot
parent 11224f34b8
commit bc57306bdd
7 changed files with 13 additions and 13 deletions

View File

@@ -14,7 +14,7 @@
- [Release Candidate health validation](#release-candidate-health-validation)
- [Cherry Picking Fixes](#cherry-picking-fixes)
- [Promoting RCs to Stable](#promoting-rcs-to-stable)
-- [Additonal Steps to prepare for release day](#additonal-steps-to-prepare-for-release-day)
+- [Additional Steps to prepare for release day](#additional-steps-to-prepare-for-release-day)
- [Modify release matrix](#modify-release-matrix)
- [Open Google Colab issue](#open-google-colab-issue)
- [Patch Releases](#patch-releases)
@@ -186,7 +186,7 @@ Promotion should occur in two steps:
**NOTE**: The promotion of wheels to PyPI can only be done once so take caution when attempting to promote wheels to PyPI, (see https://github.com/pypa/warehouse/issues/726 for a discussion on potential draft releases within PyPI)
-## Additonal Steps to prepare for release day
+## Additional Steps to prepare for release day
The following should be prepared for the release day
@@ -264,7 +264,7 @@ For versions of Python that we support we follow the [NEP 29 policy](https://num
## Accelerator Software
-For acclerator software like CUDA and ROCm we will typically use the following criteria:
+For accelerator software like CUDA and ROCm we will typically use the following criteria:
* Support latest 2 minor versions
### Special support cases

View File

@@ -61,7 +61,7 @@ Pytorch's C++ API provides the following ways to set CUDA stream:
.. attention::
-This function may have nosthing to do with the current device. It only changes the current stream on the stream's device.
+This function may have nothing to do with the current device. It only changes the current stream on the stream's device.
We recommend using ``CUDAStreamGuard``, instead, since it switches to the stream's device and makes it the current stream on that device.
``CUDAStreamGuard`` will also restore the current device and stream when it's destroyed
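The attention block above describes the C++ API; as an illustration of the same guard semantics, here is a minimal Python-side sketch using ``torch.cuda.stream`` (the rough Python analogue of ``CUDAStreamGuard``). The helper name is hypothetical, and the code falls back to a plain eager call on CPU-only machines so it stays runnable anywhere:

```python
import torch

def double_on_side_stream(x):
    # Hypothetical helper: runs a computation on a freshly created stream.
    if not torch.cuda.is_available():
        return x * 2  # CPU fallback so the sketch runs without a GPU
    side = torch.cuda.Stream()     # a new stream on the current device
    with torch.cuda.stream(side):  # makes `side` the current stream
        y = x.cuda() * 2
    # Leaving the `with` block restores the previous current stream,
    # mirroring what CUDAStreamGuard's destructor does in C++.
    torch.cuda.current_stream().wait_stream(side)
    return y.cpu()

out = double_on_side_stream(torch.ones(3))
```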

View File

@@ -203,7 +203,7 @@ grad mode in the next forward pass.
The implementations in :ref:`nn-init-doc` also
rely on no-grad mode when initializing the parameters as to avoid
-autograd tracking when updating the intialized parameters in-place.
+autograd tracking when updating the initialized parameters in-place.
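As a quick illustration of the no-grad behavior this hunk describes, a minimal sketch (the layer shape is an arbitrary choice for the example):

```python
import torch
import torch.nn as nn

layer = nn.Linear(4, 2)

# The in-place init runs under no-grad mode, so the update is not recorded
# by autograd, even though the parameter itself still requires grad.
with torch.no_grad():
    nn.init.constant_(layer.weight, 0.5)
```

After this, `layer.weight.requires_grad` is still `True`, but `layer.weight.grad_fn` is `None` because the in-place fill was not tracked.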
Inference Mode
^^^^^^^^^^^^^^

View File

@@ -543,7 +543,7 @@ as follows:
where :math:`\text{clamp}(.)` is the same as :func:`~torch.clamp` while the
scale :math:`s` and zero point :math:`z` are then computed
-as decribed in :class:`~torch.ao.quantization.observer.MinMaxObserver`, specifically:
+as described in :class:`~torch.ao.quantization.observer.MinMaxObserver`, specifically:
.. math::

View File

@@ -80,7 +80,7 @@ The following table compares the differences between Eager Mode Quantization and
| |Static, Dynamic, |Static, Dynamic, |
| |Weight Only |Weight Only |
| | | |
-| |Quantiztion Aware |Quantiztion Aware |
+| |Quantization Aware |Quantization Aware |
| |Training: |Training: |
| |Static |Static |
+-----------------+-------------------+-------------------+
@@ -632,7 +632,7 @@ Quantization Mode Support
| |Quantization |Dataset | Works Best For | Accuracy | Notes |
| |Mode |Requirement | | | |
+-----------------------------+---------------------------------+--------------------+----------------+----------------+------------+-----------------+
-|Post Training Quantization |Dyanmic/Weight Only Quantization |activation |None |LSTM, MLP, |good |Easy to use, |
+|Post Training Quantization |Dynamic/Weight Only Quantization |activation |None |LSTM, MLP, |good |Easy to use, |
| | |dynamically | |Embedding, | |close to static |
| | |quantized (fp16, | |Transformer | |quantization when|
| | |int8) or not | | | |performance is |
@@ -640,7 +640,7 @@ Quantization Mode Support
| | |statically quantized| | | |bound due to |
| | |(fp16, int8, in4) | | | |weights |
| +---------------------------------+--------------------+----------------+----------------+------------+-----------------+
-| |Static Quantization |acivation and |calibration |CNN |good |Provides best |
+| |Static Quantization |activation and |calibration |CNN |good |Provides best |
| | |weights statically |dataset | | |perf, may have |
| | |quantized (int8) | | | |big impact on |
| | | | | | |accuracy, good |
@@ -652,7 +652,7 @@ Quantization Mode Support
| | |weight are fake |dataset | | |for now |
| | |quantized | | | | |
| +---------------------------------+--------------------+----------------+----------------+------------+-----------------+
-| |Static Quantization |activatio nand |fine-tuning |CNN, MLP, |best |Typically used |
+| |Static Quantization |activation and |fine-tuning |CNN, MLP, |best |Typically used |
| | |weight are fake |dataset |Embedding | |when static |
| | |quantized | | | |quantization |
| | | | | | |leads to bad |
@@ -736,7 +736,7 @@ Backend/Hardware Support
+-----------------+---------------+------------+------------+------------+
|server GPU |TensorRT (early|Not support |Supported |Static |
| |prototype) |this it | |Quantization|
-| | |requries a | | |
+| | |requires a | | |
| | |graph | | |
+-----------------+---------------+------------+------------+------------+
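As a small illustration of the "Dynamic/Weight Only Quantization" row in the table above, a minimal sketch (the model architecture here is an arbitrary assumption, and running it requires a CPU quantized backend such as fbgemm or qnnpack):

```python
import torch
import torch.nn as nn

# Toy float model; dynamic quantization converts nn.Linear weights to int8
# ahead of time, while activations are quantized on the fly at inference.
model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 4))
qmodel = torch.ao.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

x = torch.randn(2, 8)
out = qmodel(x)  # outputs remain float tensors
```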

View File

@@ -16,7 +16,7 @@ machines.
CUDA support was introduced in PyTorch 1.9 and is still a **beta** feature.
Not all features of the RPC package are yet compatible with CUDA support and
thus their use is discouraged. These unsupported features include: RRefs,
-JIT compatibility, dist autograd and dist optimizier, and profiling. These
+JIT compatibility, dist autograd and dist optimizer, and profiling. These
shortcomings will be addressed in future releases.
.. note ::

View File

@@ -470,7 +470,7 @@ ncols, *densesize)`` where ``len(batchsize) == B`` and
The batches of sparse CSR tensors are dependent: the number of
specified elements in all batches must be the same. This somewhat
-artifical constraint allows efficient storage of the indices of
+artificial constraint allows efficient storage of the indices of
different CSR batches.
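A minimal sketch of the constraint described above: two CSR batches, each carrying the same number (two) of specified elements. The particular values are arbitrary:

```python
import torch

# Batch 0: one element in each row; batch 1: both elements in the second row.
# Both batches specify exactly 2 elements, as the batched-CSR layout requires.
crow = torch.tensor([[0, 1, 2], [0, 0, 2]])
col = torch.tensor([[0, 1], [0, 1]])
val = torch.tensor([[1.0, 2.0], [3.0, 4.0]])
t = torch.sparse_csr_tensor(crow, col, val, size=(2, 2, 2))
```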
.. note::