Fix typo under docs directory (#97202)

This PR fixes typos in `.rst` files under the docs directory.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/97202
Approved by: https://github.com/kit1980
Author: Kazuaki Ishizaki
Committed by: PyTorch MergeBot
Date: 2023-03-21 01:24:10 +00:00
parent 793cb3f424
commit 50ed38a7eb
6 changed files with 7 additions and 7 deletions

View File

@@ -44,7 +44,7 @@ nodes it organizes its gradients and parameters into buckets which
 reduces communication times and allows a node to broadcast a fraction of
 its gradients to other waiting nodes.
-Graph breaks in distributed code means you can expect dynamo and its
+Graph breaks in distributed code mean you can expect dynamo and its
 backends to optimize the compute overhead of a distributed program but
 not its communication overhead. Graph-breaks may interfere with
 compilation speedups, if the reduced graph-size robs the compiler of
@@ -378,7 +378,7 @@ Why am I getting OOMs?
 Dynamo is still an alpha product so there's a few sources of OOMs and if
 you're seeing an OOM try disabling the following configurations in this
-order and then open an issue on Github so we can solve the root problem
+order and then open an issue on GitHub so we can solve the root problem
 1. If you're using dynamic shapes try disabling them, we've disabled
 them by default: ``env TORCHDYNAMO_DYNAMIC_SHAPES=0 python model.py`` 2.
 CUDA graphs with Triton are enabled by default in inductor but removing
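
For context on the troubleshooting advice in this hunk: a rough in-process equivalent of the ``env TORCHDYNAMO_DYNAMIC_SHAPES=0 python model.py`` command quoted above is sketched below, assuming the variable is read when dynamo's configuration is first imported (the shell form in the text sidesteps that ordering concern):

.. code-block:: python

    import os

    # Mirror `env TORCHDYNAMO_DYNAMIC_SHAPES=0 python model.py` from the FAQ text above.
    # Dynamo's config typically reads this variable when it is first imported, so it
    # should be set before any torch import; the shell form in the text avoids the issue.
    os.environ["TORCHDYNAMO_DYNAMIC_SHAPES"] = "0"

    import torch  # imported after the environment tweak on purpose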

View File

@@ -140,7 +140,7 @@ Some of the most commonly used backends include:
 * ``torch.compile(m, backend="onnxrt")`` - Uses ONNXRT for inference on CPU/GPU. `Read more <https://onnxruntime.ai/>`__
 * ``torch.compile(m, backend="tensorrt")`` - Uses ONNXRT to run TensorRT for inference optimizations. `Read more <https://github.com/onnx/onnx-tensorrt>`__
 * ``torch.compile(m, backend="ipex")`` - Uses IPEX for inference on CPU. `Read more <https://github.com/intel/intel-extension-for-pytorch>`__
-* ``torch.compile(m, backend="tvm")`` - Uses Apach TVM for inference optimizations. `Read more <https://tvm.apache.org/>`__
+* ``torch.compile(m, backend="tvm")`` - Uses Apache TVM for inference optimizations. `Read more <https://tvm.apache.org/>`__
 Why do you need another way of optimizing PyTorch code?
 -------------------------------------------------------
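
To make the backend list in this hunk concrete, a minimal sketch of passing a backend string to ``torch.compile`` follows; the toy module and input shapes are made up, and ``inductor`` (the default backend) is used only because it needs no third-party package, unlike the backends listed above:

.. code-block:: python

    import torch

    # A toy module; any nn.Module can be wrapped the same way.
    m = torch.nn.Linear(8, 4)

    # "inductor" is the default backend and needs no extra packages; swapping in one
    # of the strings listed above (e.g. backend="tvm") assumes that backend's package
    # (Apache TVM in that case) is installed.
    compiled = torch.compile(m, backend="inductor")

    x = torch.randn(2, 8)
    out = compiled(x)  # the first call triggers compilation for this input shape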

View File

@@ -312,7 +312,7 @@ Here is what this code does:
 2. The function ``popn`` pops the items, in this case, the signature is
 ``def popn(self, n: int) -> List[TensorVariable]:`` this hints at an
 underlying contract - we are returning ``TensorVariables``. If we
-take a closer look at ``sybmolic_convert.py`` and
+take a closer look at ``symbolic_convert.py`` and
 ``InstructionTranslatorBase``/``InstructionTranslator``\ we see that
 the only thing pushed onto and popped from our stack are
 ``VariableTracker``\ s.
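
As an illustration of the stack contract this hunk describes (not the actual code in ``symbolic_convert.py``), a ``popn`` over a plain Python list could look like the sketch below; apart from the quoted signature shape, every name here is made up:

.. code-block:: python

    from typing import List

    class TinyStack:
        """Toy stand-in for the value stack kept by the instruction translator."""

        def __init__(self) -> None:
            # In dynamo the stack holds VariableTrackers; plain objects suffice here.
            self.stack: List[object] = []

        def popn(self, n: int) -> List[object]:
            # Pop the top n items, returning them in bottom-to-top order.
            if n == 0:
                return []
            items = self.stack[-n:]
            del self.stack[-n:]
            return items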

View File

@@ -5,7 +5,7 @@ What's happening?
 -----------------
 Batch Norm requires in-place updates to running_mean and running_var of the same size as the input.
 Functorch does not support inplace update to a regular tensor that takes in a batched tensor (i.e.
-``regular.add_(batched)`` is not allowed). So when vmaping over a batch of inputs to a single module,
+``regular.add_(batched)`` is not allowed). So when vmapping over a batch of inputs to a single module,
 we end up with this error
 How to fix
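
The hunk above ends before the actual fix, so the sketch below only illustrates the failing pattern it describes: vmapping a module whose Batch Norm updates its running statistics in place. Shapes and names are made up, and the failing call is left commented out:

.. code-block:: python

    import torch
    from functorch import vmap  # torch.func.vmap in newer releases

    bn = torch.nn.BatchNorm1d(3)   # keeps running_mean / running_var buffers
    xs = torch.randn(5, 7, 3)      # a batch of 5 inputs, each of shape (7, 3)

    # In training mode each call updates the running stats in place with a
    # batched tensor, which is exactly the `regular.add_(batched)` pattern the
    # text says is not supported:
    # vmap(bn)(xs)  # expected to raise while bn is in training mode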

View File

@@ -1127,7 +1127,7 @@ user-defined types if they implement the ``__contains__`` method.
 Identity Comparisons
 """"""""""""""""""""
 For all types except ``int``, ``double``, ``bool``, and ``torch.device``, operators ``is`` and ``is not`` test for the object's identity;
-``x is y`` is ``True`` if and and only if ``x`` and ``y`` are the same object. For all other types, ``is`` is equivalent to
+``x is y`` is ``True`` if and only if ``x`` and ``y`` are the same object. For all other types, ``is`` is equivalent to
 comparing them using ``==``. ``x is not y`` yields the inverse of ``x is y``.
 Boolean Operations
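
As a small, hedged illustration of the identity-comparison rule in this hunk, the scripted function below relies on ``is`` behaving like ``==`` for ``int``; the function itself is made up for this example:

.. code-block:: python

    import torch

    @torch.jit.script
    def same_value(x: int, y: int) -> bool:
        # Per the rule above, for int the `is` operator compares values like `==`.
        return x is y

    print(same_value(300, 300))  # True under the documented TorchScript semantics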

View File

@@ -913,7 +913,7 @@ For example:
 t.register_hook(fn)
 t.backward()
-Furthemore, it can be helpful to know that under the hood,
+Furthermore, it can be helpful to know that under the hood,
 when hooks are registered to a Tensor, they actually become permanently bound to the grad_fn
 of that Tensor, so if that Tensor is then modified in-place,
 even though the Tensor now has a new grad_fn, hooks registered before it was
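
As a hedged sketch of the behavior this hunk describes, the snippet below registers a hook on a non-leaf tensor and then modifies that tensor in place; the tensor, hook, and printed message are made up for illustration:

.. code-block:: python

    import torch

    t = torch.ones(3, requires_grad=True).clone()  # non-leaf tensor with a grad_fn

    def fn(grad):
        print("hook fired, grad sum =", grad.sum().item())

    t.register_hook(fn)   # per the note above, fn binds to t's current grad_fn

    t.mul_(2)             # in-place op: t now has a new grad_fn
    t.sum().backward()    # the earlier hook stays bound to the grad_fn it was registered on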