From bc57306bdd6a041e64d77e8bc8fdb470e6ff0815 Mon Sep 17 00:00:00 2001
From: Kazuaki Ishizaki
Date: Thu, 29 Sep 2022 21:41:59 +0000
Subject: [PATCH] Fix typo under docs directory and RELEASE.md (#85896)

This PR fixes typo in rst files under docs directory and `RELEASE.md`.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/85896
Approved by: https://github.com/kit1980
---
 RELEASE.md                                   |  6 +++---
 docs/cpp/source/notes/tensor_cuda_stream.rst |  2 +-
 docs/source/notes/autograd.rst               |  2 +-
 docs/source/quantization-support.rst         |  2 +-
 docs/source/quantization.rst                 | 10 +++++-----
 docs/source/rpc.rst                          |  2 +-
 docs/source/sparse.rst                       |  2 +-
 7 files changed, 13 insertions(+), 13 deletions(-)

diff --git a/RELEASE.md b/RELEASE.md
index 32f71e124141..e2b69b5bf82e 100644
--- a/RELEASE.md
+++ b/RELEASE.md
@@ -14,7 +14,7 @@
   - [Release Candidate health validation](#release-candidate-health-validation)
   - [Cherry Picking Fixes](#cherry-picking-fixes)
   - [Promoting RCs to Stable](#promoting-rcs-to-stable)
-  - [Additonal Steps to prepare for release day](#additonal-steps-to-prepare-for-release-day)
+  - [Additional Steps to prepare for release day](#additional-steps-to-prepare-for-release-day)
     - [Modify release matrix](#modify-release-matrix)
     - [Open Google Colab issue](#open-google-colab-issue)
 - [Patch Releases](#patch-releases)
@@ -186,7 +186,7 @@ Promotion should occur in two steps:
 
 **NOTE**: The promotion of wheels to PyPI can only be done once so take caution when attempting to promote wheels to PyPI, (see https://github.com/pypa/warehouse/issues/726 for a discussion on potential draft releases within PyPI)
 
-## Additonal Steps to prepare for release day
+## Additional Steps to prepare for release day
 
 The following should be prepared for the release day
 
@@ -264,7 +264,7 @@ For versions of Python that we support we follow the [NEP 29 policy](https://num
 
 ## Accelerator Software
 
-For acclerator software like CUDA and ROCm we will typically use the following criteria:
+For accelerator software like CUDA and ROCm we will typically use the following criteria:
 * Support latest 2 minor versions
 
 ### Special support cases
diff --git a/docs/cpp/source/notes/tensor_cuda_stream.rst b/docs/cpp/source/notes/tensor_cuda_stream.rst
index 9de4bcc4e270..b80615e8f7f1 100644
--- a/docs/cpp/source/notes/tensor_cuda_stream.rst
+++ b/docs/cpp/source/notes/tensor_cuda_stream.rst
@@ -61,7 +61,7 @@ Pytorch's C++ API provides the following ways to set CUDA stream:
 
 .. attention::
 
-  This function may have nosthing to do with the current device. It only changes the current stream on the stream's device.
+  This function may have nothing to do with the current device. It only changes the current stream on the stream's device.
   We recommend using ``CUDAStreamGuard``, instead, since it switches to the stream's device and makes it the current stream on that device.
   ``CUDAStreamGuard`` will also restore the current device and stream when it's destroyed
 
diff --git a/docs/source/notes/autograd.rst b/docs/source/notes/autograd.rst
index 6d0f52c66923..6eec13a7de55 100644
--- a/docs/source/notes/autograd.rst
+++ b/docs/source/notes/autograd.rst
@@ -203,7 +203,7 @@ grad mode in the next forward pass.
 
 The implementations in :ref:`nn-init-doc` also
 rely on no-grad mode when initializing the parameters as to avoid
-autograd tracking when updating the intialized parameters in-place.
+autograd tracking when updating the initialized parameters in-place.
 
 Inference Mode
 ^^^^^^^^^^^^^^
diff --git a/docs/source/quantization-support.rst b/docs/source/quantization-support.rst
index e4b446839659..a681e494d55e 100644
--- a/docs/source/quantization-support.rst
+++ b/docs/source/quantization-support.rst
@@ -543,7 +543,7 @@ as follows:
 
 where :math:`\text{clamp}(.)` is the same as :func:`~torch.clamp` while the
 scale :math:`s` and zero point :math:`z` are then computed
-as decribed in :class:`~torch.ao.quantization.observer.MinMaxObserver`, specifically:
+as described in :class:`~torch.ao.quantization.observer.MinMaxObserver`, specifically:
 
 .. math::
 
diff --git a/docs/source/quantization.rst b/docs/source/quantization.rst
index 4955b58cbfe2..6171c1920d93 100644
--- a/docs/source/quantization.rst
+++ b/docs/source/quantization.rst
@@ -80,7 +80,7 @@ The following table compares the differences between Eager Mode Quantization and
 |                 |Static, Dynamic,   |Static, Dynamic,   |
 |                 |Weight Only        |Weight Only        |
 |                 |                   |                   |
-|                 |Quantiztion Aware  |Quantiztion Aware  |
+|                 |Quantization Aware |Quantization Aware |
 |                 |Training:          |Training:          |
 |                 |Static             |Static             |
 +-----------------+-------------------+-------------------+
@@ -632,7 +632,7 @@ Quantization Mode Support
 |                             |Quantization                                          |Dataset         | Works Best For | Accuracy   | Notes           |
 |                             |Mode                                                  |Requirement     |                |            |                 |
 +-----------------------------+---------------------------------+--------------------+----------------+----------------+------------+-----------------+
-|Post Training Quantization   |Dyanmic/Weight Only Quantization |activation          |None            |LSTM, MLP,      |good        |Easy to use,     |
+|Post Training Quantization   |Dynamic/Weight Only Quantization |activation          |None            |LSTM, MLP,      |good        |Easy to use,     |
 |                             |                                 |dynamically         |                |Embedding,      |            |close to static  |
 |                             |                                 |quantized (fp16,    |                |Transformer     |            |quantization when|
 |                             |                                 |int8) or not        |                |                |            |performance is   |
@@ -640,7 +640,7 @@ Quantization Mode Support
 |                             |                                 |statically quantized|                |                |            |bound due to     |
 |                             |                                 |(fp16, int8, in4)   |                |                |            |weights          |
 |                             +---------------------------------+--------------------+----------------+----------------+------------+-----------------+
-|                             |Static Quantization              |acivation and       |calibration     |CNN             |good        |Provides best    |
+|                             |Static Quantization              |activation and      |calibration     |CNN             |good        |Provides best    |
 |                             |                                 |weights statically  |dataset         |                |            |perf, may have   |
 |                             |                                 |quantized (int8)    |                |                |            |big impact on    |
 |                             |                                 |                    |                |                |            |accuracy, good   |
@@ -652,7 +652,7 @@ Quantization Mode Support
 |                             |                                 |weight are fake     |dataset         |                |            |for now          |
 |                             |                                 |quantized           |                |                |            |                 |
 |                             +---------------------------------+--------------------+----------------+----------------+------------+-----------------+
-|                             |Static Quantization              |activatio nand      |fine-tuning     |CNN, MLP,       |best        |Typically used   |
+|                             |Static Quantization              |activation and      |fine-tuning     |CNN, MLP,       |best        |Typically used   |
 |                             |                                 |weight are fake     |dataset         |Embedding       |            |when static      |
 |                             |                                 |quantized           |                |                |            |quantization     |
 |                             |                                 |                    |                |                |            |leads to bad     |
@@ -736,7 +736,7 @@ Backend/Hardware Support
 +-----------------+---------------+------------+------------+------------+
 |server GPU       |TensorRT (early|Not support |Supported   |Static      |
 |                 |prototype)     |this it     |            |Quantization|
-|                 |               |requries a  |            |            |
+|                 |               |requires a  |            |            |
 |                 |               |graph       |            |            |
 +-----------------+---------------+------------+------------+------------+
diff --git a/docs/source/rpc.rst b/docs/source/rpc.rst
index 89f146bfd68e..2c95f6f0765f 100644
--- a/docs/source/rpc.rst
+++ b/docs/source/rpc.rst
@@ -16,7 +16,7 @@ machines.
   CUDA support was introduced in PyTorch 1.9 and is still a **beta** feature.
   Not all features of the RPC package are yet compatible with CUDA support and
   thus their use is discouraged. These unsupported features include: RRefs,
-  JIT compatibility, dist autograd and dist optimizier, and profiling. These
+  JIT compatibility, dist autograd and dist optimizer, and profiling. These
   shortcomings will be addressed in future releases.
 
 .. note ::
diff --git a/docs/source/sparse.rst b/docs/source/sparse.rst
index a5449b432e00..125b0c8619a5 100644
--- a/docs/source/sparse.rst
+++ b/docs/source/sparse.rst
@@ -470,7 +470,7 @@ ncols, *densesize)`` where ``len(batchsize) == B`` and
 
    The batches of sparse CSR tensors are dependent: the number of
    specified elements in all batches must be the same. This somewhat
-   artifical constraint allows efficient storage of the indices of
+   artificial constraint allows efficient storage of the indices of
    different CSR batches.
 
 .. note::