mirror of https://github.com/pytorch/pytorch.git, synced 2025-10-20 21:14:14 +08:00
Fix typo under docs directory (#97202)

This PR fixes typos in `.rst` files under the docs directory.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/97202
Approved by: https://github.com/kit1980
committed by: PyTorch MergeBot
parent: 793cb3f424
commit: 50ed38a7eb
@@ -44,7 +44,7 @@ nodes it organizes its gradients and parameters into buckets which
 reduces communication times and allows a node to broadcast a fraction of
 its gradients to other waiting nodes.
 
-Graph breaks in distributed code means you can expect dynamo and its
+Graph breaks in distributed code mean you can expect dynamo and its
 backends to optimize the compute overhead of a distributed program but
 not its communication overhead. Graph-breaks may interfere with
 compilation speedups, if the reduced graph-size robs the compiler of
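The doc passage in this hunk is about compiling a DDP-wrapped model, where dynamo optimizes the compute in each graph while DDP's bucketed all-reduce handles communication. A minimal runnable sketch of that setup (the single-process ``gloo`` group and the toy model are chosen here purely for illustration):

    import os
    import torch
    import torch.distributed as dist
    import torch.nn as nn
    from torch.nn.parallel import DistributedDataParallel as DDP

    # Single-process "gloo" group, only so DDP can be constructed in this sketch.
    os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
    os.environ.setdefault("MASTER_PORT", "29500")
    dist.init_process_group("gloo", rank=0, world_size=1)

    model = nn.Sequential(nn.Linear(8, 8), nn.ReLU(), nn.Linear(8, 1))
    ddp_model = DDP(model)               # DDP buckets gradients for all-reduce
    compiled = torch.compile(ddp_model)  # dynamo/backend optimize the compute side

    out = compiled(torch.randn(4, 8))
    out.sum().backward()                 # communication is still handled by DDP's hooks
    dist.destroy_process_group()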
@@ -378,7 +378,7 @@ Why am I getting OOMs?
 
 Dynamo is still an alpha product so there’s a few sources of OOMs and if
 you’re seeing an OOM try disabling the following configurations in this
-order and then open an issue on Github so we can solve the root problem
+order and then open an issue on GitHub so we can solve the root problem
 1. If you’re using dynamic shapes try disabling them, we’ve disabled
 them by default: ``env TORCHDYNAMO_DYNAMIC_SHAPES=0 python model.py`` 2.
 CUDA graphs with Triton are enabled by default in inductor but removing
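For context, the two knobs this FAQ entry refers to can be toggled roughly as below; the flag names follow the PyTorch 2.0-era docs and may differ in later releases, so treat this as a sketch rather than a stable API:

    import os
    import torch
    import torch._inductor.config as inductor_config

    # 1. Disable dynamic shapes; the docs also show the command-line form
    #    ``env TORCHDYNAMO_DYNAMIC_SHAPES=0 python model.py``.
    os.environ["TORCHDYNAMO_DYNAMIC_SHAPES"] = "0"

    # 2. Turn off CUDA graphs in inductor's Triton backend (2.0-era flag name).
    inductor_config.triton.cudagraphs = False

    model = torch.nn.Linear(16, 16)
    compiled = torch.compile(model)  # retry compilation with both knobs off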
@@ -140,7 +140,7 @@ Some of the most commonly used backends include:
 * ``torch.compile(m, backend="onnxrt")`` - Uses ONNXRT for inference on CPU/GPU. `Read more <https://onnxruntime.ai/>`__
 * ``torch.compile(m, backend="tensorrt")`` - Uses ONNXRT to run TensorRT for inference optimizations. `Read more <https://github.com/onnx/onnx-tensorrt>`__
 * ``torch.compile(m, backend="ipex")`` - Uses IPEX for inference on CPU. `Read more <https://github.com/intel/intel-extension-for-pytorch>`__
-* ``torch.compile(m, backend="tvm")`` - Uses Apach TVM for inference optimizations. `Read more <https://tvm.apache.org/>`__
+* ``torch.compile(m, backend="tvm")`` - Uses Apache TVM for inference optimizations. `Read more <https://tvm.apache.org/>`__
 
 Why do you need another way of optimizing PyTorch code?
 -------------------------------------------------------
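A short usage sketch for the ``backend`` argument listed in this hunk; ``inductor`` is the default, and the commented-out choices need their respective packages installed:

    import torch
    import torch._dynamo

    m = torch.nn.Sequential(torch.nn.Linear(8, 8), torch.nn.ReLU())

    compiled_default = torch.compile(m)                    # default backend: inductor
    # compiled_onnxrt = torch.compile(m, backend="onnxrt") # requires onnxruntime
    # compiled_tvm    = torch.compile(m, backend="tvm")    # requires Apache TVM

    print(torch._dynamo.list_backends())                   # backends registered in this install
    print(compiled_default(torch.randn(2, 8)).shape)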
@@ -312,7 +312,7 @@ Here is what this code does:
 2. The function ``popn`` the items, in this case, the signature is
 ``def popn(self, n: int) -> List[TensorVariable]:`` this hints at an
 underlying contract - we are returning ``TensorVariables``. If we
-take a closer look at ``sybmolic_convert.py`` and
+take a closer look at ``symbolic_convert.py`` and
 ``InstructionTranslatorBase``/``InstructionTranslator``\ we see that
 the only thing pushed onto and popped from our stack are
 ``VariableTracker``\ s.
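A toy illustration (not dynamo's actual implementation) of the ``popn`` contract this hunk describes: the symbolic stack only ever holds ``VariableTracker``-like objects, and ``popn(n)`` returns the top ``n`` of them in order:

    from typing import List

    class ToyVariableTracker:
        """Stand-in for dynamo's VariableTracker, for illustration only."""
        def __init__(self, name: str) -> None:
            self.name = name

    class ToyTranslator:
        def __init__(self) -> None:
            self.stack: List[ToyVariableTracker] = []

        def push(self, value: ToyVariableTracker) -> None:
            self.stack.append(value)

        def popn(self, n: int) -> List[ToyVariableTracker]:
            # Remove and return the top n entries, keeping bottom-to-top order.
            assert len(self.stack) >= n
            top = self.stack[len(self.stack) - n:]
            del self.stack[len(self.stack) - n:]
            return top

    tr = ToyTranslator()
    for name in ("a", "b", "c"):
        tr.push(ToyVariableTracker(name))
    print([v.name for v in tr.popn(2)])  # ['b', 'c']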
@@ -5,7 +5,7 @@ What's happening?
 -----------------
 Batch Norm requires in-place updates to running_mean and running_var of the same size as the input.
 Functorch does not support inplace update to a regular tensor that takes in a batched tensor (i.e.
-``regular.add_(batched)`` is not allowed). So when vmaping over a batch of inputs to a single module,
+``regular.add_(batched)`` is not allowed). So when vmapping over a batch of inputs to a single module,
 we end up with this error
 
 How to fix
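A small sketch of the failure mode described in this hunk, plus one common workaround (constructing the norm layer with ``track_running_stats=False`` so there is no in-place buffer update); the module and shapes are illustrative:

    import torch
    from torch.func import vmap

    bn_with_stats = torch.nn.BatchNorm1d(4)                           # keeps running_mean/var buffers
    bn_no_stats = torch.nn.BatchNorm1d(4, track_running_stats=False)  # no in-place buffer updates

    batch_of_inputs = torch.randn(3, 5, 4)  # 3 per-example inputs, each of shape (5, 4)

    try:
        vmap(bn_with_stats)(batch_of_inputs)  # in-place update of a regular tensor with a batched one
    except RuntimeError as err:
        print("vmap over BatchNorm with running stats failed:", err)

    out = vmap(bn_no_stats)(batch_of_inputs)  # works: stats are computed per call, not stored
    print(out.shape)                          # torch.Size([3, 5, 4])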
@@ -1127,7 +1127,7 @@ user-defined types if they implement the ``__contains__`` method.
 Identity Comparisons
 """"""""""""""""""""
 For all types except ``int``, ``double``, ``bool``, and ``torch.device``, operators ``is`` and ``is not`` test for the object’s identity;
-``x is y`` is ``True`` if and and only if ``x`` and ``y`` are the same object. For all other types, ``is`` is equivalent to
+``x is y`` is ``True`` if and only if ``x`` and ``y`` are the same object. For all other types, ``is`` is equivalent to
 comparing them using ``==``. ``x is not y`` yields the inverse of ``x is y``.
 
 Boolean Operations
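The most common identity comparison in scripted code is ``is None`` on an ``Optional`` value; a minimal TorchScript example of the rule this hunk documents (the function name and shapes are illustrative):

    from typing import Optional

    import torch

    @torch.jit.script
    def add_bias(x: torch.Tensor, bias: Optional[torch.Tensor] = None) -> torch.Tensor:
        if bias is None:   # identity test against the None singleton
            return x
        return x + bias

    print(add_bias(torch.ones(2)))
    print(add_bias(torch.ones(2), torch.full((2,), 3.0)))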
@@ -913,7 +913,7 @@ For example:
 t.register_hook(fn)
 t.backward()
 
-Furthemore, it can be helpful to know that under the hood,
+Furthermore, it can be helpful to know that under the hood,
 when hooks are registered to a Tensor, they actually become permanently bound to the grad_fn
 of that Tensor, so if that Tensor is then modified in-place,
 even though the Tensor now has a new grad_fn, hooks registered before it was
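A minimal sketch of the registration-time binding this passage describes: the hook registered on ``b`` is attached to the grad_fn ``b`` has at that moment (the clone node), and the in-place ``mul_`` afterwards gives ``b`` a new grad_fn:

    import torch

    a = torch.tensor([1.0], requires_grad=True)
    b = a.clone()
    b.register_hook(lambda grad: print("hook fired with grad:", grad))
    print("grad_fn at registration time:", b.grad_fn)  # CloneBackward0

    b.mul_(2)                                           # in-place update
    print("grad_fn after in-place op:  ", b.grad_fn)    # MulBackward0

    b.sum().backward()  # the hook still fires at the node it was bound to
    print("a.grad:", a.grad)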