[Doc] fix some typos (found by codespell and typos) (#132544)

Applying doc fixes from PR https://github.com/pytorch/pytorch/pull/127267 - with CLA
Pull Request resolved: https://github.com/pytorch/pytorch/pull/132544
Approved by: https://github.com/kit1980
Authored by Wouter Devriendt on 2024-08-05 17:21:56 +00:00; committed by PyTorch MergeBot
parent 3d87dfc088
commit e8645fa2b9
18 changed files with 24 additions and 24 deletions


@@ -471,7 +471,7 @@ Allocator* getCPUAllocator() {
 }
 // override_allow_tf32_flag = true
-// means the allow_tf32 flags are overrided and tf32 is force disabled
+// means the allow_tf32 flags are overridden and tf32 is force disabled
 // override_allow_tf32_flag = false
 // means the original allow_tf32 flags are followed
 thread_local bool override_allow_tf32_flag = false;


@@ -152,7 +152,7 @@ OperatorEntry::AnnotatedKernelContainerIterator OperatorEntry::registerKernel(
 // Suppress the warning for Meta key as we are overriding C++ meta functions with python meta functions
 // for some ops
 if (dispatch_key != DispatchKey::Meta) {
-  TORCH_WARN_ONCE("Warning only once for all operators, other operators may also be overrided.\n",
+  TORCH_WARN_ONCE("Warning only once for all operators, other operators may also be overridden.\n",
     " Overriding a previously registered kernel for the same operator and the same dispatch key\n",
     " operator: ", (schema_.has_value() ? toString(schema_->schema) : toString(name_)), "\n",
     " ", (this->schema_.has_value() ? this->schema_->debug : "no debug info"), "\n",


@@ -1,7 +1,7 @@
 Tensor Basics
 =============
-The ATen tensor library backing PyTorch is a simple tensor library thats exposes
+The ATen tensor library backing PyTorch is a simple tensor library that exposes
 the Tensor operations in Torch directly in C++14. ATen's API is auto-generated
 from the same declarations PyTorch uses so the two APIs will track each other
 over time.


@@ -21,7 +21,7 @@ and can logically be seen as implemented as follows.
 Its unique power lies in its ability of expressing **data-dependent control flow**: it lowers to a conditional
 operator (`torch.ops.higher_order.cond`), which preserves predicate, true function and false functions.
-This unlocks great flexibilty in writing and deploying models that change model architecture based on
+This unlocks great flexibility in writing and deploying models that change model architecture based on
 the **value** or **shape** of inputs or intermediate outputs of tensor operations.
 .. warning::
@@ -109,7 +109,7 @@ This gives us an exported program as shown below:
 Notice that `torch.cond` is lowered to `torch.ops.higher_order.cond`, its predicate becomes a Symbolic expression over the shape of input,
 and branch functions becomes two sub-graph attributes of the top level graph module.
-Here is another exmaple that showcases how to express a data-dependet control flow:
+Here is another example that showcases how to express a data-dependent control flow:
 .. code-block:: python

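The hunks above touch the ``torch.cond`` docs. As a rough sketch of the API being documented (not taken from this commit, and assuming a recent PyTorch build where ``torch.cond`` is available; the ``true_fn``/``false_fn`` names are illustrative), a data-dependent branch can look like:

    import torch

    def true_fn(x):
        return x.sin()

    def false_fn(x):
        return x.cos()

    def f(x):
        # When traced by torch.export / torch.compile, both branches are kept
        # as sub-graphs instead of specializing on the predicate value.
        return torch.cond(x.sum() > 0, true_fn, false_fn, (x,))

    print(f(torch.randn(4)))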

@@ -97,7 +97,7 @@ Due to legacy design decisions, the state dictionaries of `FSDP` and `DDP` may h
 To tackle these challenges, we offer a collection of APIs for users to easily manage state_dicts. `get_model_state_dict` returns a model state dictionary with keys consistent with those returned by the unparallelized model state dictionary. Similarly, `get_optimizer_state_dict` provides the optimizer state dictionary with keys uniform across all parallelisms applied. To achieve this consistency, `get_optimizer_state_dict` converts parameter IDs to fully qualified names identical to those found in the unparallelized model state dictionary.
-Note that results returned by hese APIs can be used directly with the `torch.distributed.checkpoint.save()` and `torch.distributed.checkpoint.load()` methods without requiring any additional conversions.
+Note that results returned by these APIs can be used directly with the `torch.distributed.checkpoint.save()` and `torch.distributed.checkpoint.load()` methods without requiring any additional conversions.
 Note that this feature is experimental, and API signatures might change in the future.

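For context on the `get_model_state_dict` / `get_optimizer_state_dict` APIs mentioned above, here is a minimal sketch, assuming a recent PyTorch where `torch.distributed.checkpoint.save` accepts a `checkpoint_id` and falls back to non-distributed saving in a single-process run; the tiny `Linear` model stands in for an FSDP/DDP-wrapped one:

    import torch
    import torch.distributed.checkpoint as dcp
    from torch.distributed.checkpoint.state_dict import (
        get_model_state_dict,
        get_optimizer_state_dict,
    )

    model = torch.nn.Linear(4, 4)               # stand-in for a parallelized model
    optim = torch.optim.Adam(model.parameters())
    model(torch.randn(2, 4)).sum().backward()
    optim.step()                                 # populate optimizer state

    # Keys come back as fully qualified names, matching the unparallelized model.
    model_sd = get_model_state_dict(model)
    optim_sd = get_optimizer_state_dict(model, optim)

    # Both dicts can be handed to the distributed checkpoint APIs as-is.
    dcp.save({"model": model_sd, "optim": optim_sd}, checkpoint_id="./checkpoint")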

@@ -52,7 +52,7 @@ Overall, the ``pipelining`` package provides the following features:
 * Splitting of model code based on simple specification.
 * Rich support for pipeline schedules, including GPipe, 1F1B,
-  Interleaved 1F1B and Looped BFS, and providing the infrastruture for writing
+  Interleaved 1F1B and Looped BFS, and providing the infrastructure for writing
   customized schedules.
 * First-class support for cross-host pipeline parallelism, as this is where PP
   is typically used (over slower interconnects).
@@ -149,7 +149,7 @@ model.
 self.tok_embeddings = nn.Embedding(...)
-# Using a ModuleDict lets us delete layers witout affecting names,
+# Using a ModuleDict lets us delete layers without affecting names,
 # ensuring checkpoints will correctly save and load.
 self.layers = torch.nn.ModuleDict()
 for layer_id in range(model_args.n_layers):

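The second hunk above quotes model code that keeps transformer layers in a ``ModuleDict``. A small illustration of why that matters for pipeline splitting (the ``TinyModel`` class below is a made-up stand-in, not from the PyTorch docs):

    import torch.nn as nn

    class TinyModel(nn.Module):
        def __init__(self, n_layers: int = 4):
            super().__init__()
            # String keys, as in the docs' example, keep parameter names stable.
            self.layers = nn.ModuleDict(
                {str(i): nn.Linear(8, 8) for i in range(n_layers)}
            )

    model = TinyModel()
    # Deleting layers for one pipeline stage does not renumber the survivors,
    # so "layers.2.weight" etc. still line up with the full model's checkpoint.
    del model.layers["0"]
    del model.layers["1"]
    print(list(model.state_dict().keys()))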

@@ -505,7 +505,7 @@ Input Tensor Shapes
 By default, ``torch.export`` will trace the program specializing on the input
 tensors' shapes, unless a dimension is specified as dynamic via the
-``dynamic_shapes`` argumen to ``torch.export``. This means that if there exists
+``dynamic_shapes`` argument to ``torch.export``. This means that if there exists
 shape-dependent control flow, ``torch.export`` will specialize on the branch
 that is being taken with the given sample inputs. For example:

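As a quick sketch of the ``dynamic_shapes`` argument discussed above (the module and the ``batch`` dimension name are illustrative):

    import torch
    from torch.export import Dim, export

    class M(torch.nn.Module):
        def forward(self, x):
            return x * 2

    example = (torch.randn(3, 4),)

    # Without dynamic_shapes, export would specialize on the exact (3, 4) shape;
    # marking dim 0 as dynamic keeps the batch size symbolic in the graph.
    batch = Dim("batch")
    ep = export(M(), example, dynamic_shapes={"x": {0: batch}})
    print(ep)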

@@ -355,7 +355,7 @@ properties on the nodes as we see them at runtime. That might look like:
 attr_itr = self.mod
 for i, atom in enumerate(target_atoms):
     if not hasattr(attr_itr, atom):
-        raise RuntimeError(f"Node referenced nonexistant target {'.'.join(target_atoms[:i])}")
+        raise RuntimeError(f"Node referenced nonexistent target {'.'.join(target_atoms[:i])}")
     attr_itr = getattr(attr_itr, atom)
 return attr_itr


@@ -376,7 +376,7 @@ Python enums can be used in TorchScript without any extra annotation or code:
 After an enum is defined, it can be used in both TorchScript and Python interchangeably
 like any other TorchScript type. The type of the values of an enum must be ``int``,
-``float``, or ``str``. All values must be of the same type; heterogenous types for enum
+``float``, or ``str``. All values must be of the same type; heterogeneous types for enum
 values are not supported.

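For reference, a minimal sketch of the enum support described above (mirroring, not copying, the jit docs' example):

    import torch
    from enum import Enum

    class Color(Enum):
        # All values share one type (int), as the docs require.
        RED = 1
        GREEN = 2

    @torch.jit.script
    def enum_fn(x: Color, y: Color) -> bool:
        if x == Color.RED:
            return True
        return x == y

    print(enum_fn(Color.RED, Color.GREEN))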

@@ -830,7 +830,7 @@ TorchScript Type System Definition
 TSMetaType ::= "Any"
 TSPrimitiveType ::= "int" | "float" | "double" | "complex" | "bool" | "str" | "None"
-TSStructualType ::= TSTuple | TSNamedTuple | TSList | TSDict | TSOptional |
+TSStructuralType ::= TSTuple | TSNamedTuple | TSList | TSDict | TSOptional |
 TSUnion | TSFuture | TSRRef | TSAwait
 TSTuple ::= "Tuple" "[" (TSType ",")* TSType "]"
 TSNamedTuple ::= "namedtuple" "(" (TSType ",")* TSType ")"


@@ -638,10 +638,10 @@ keyword arguments like :func:`torch.add` does::
 For speed and flexibility the ``__torch_function__`` dispatch mechanism does not
 check that the signature of an override function matches the signature of the
-function being overrided in the :mod:`torch` API. For some applications ignoring
+function being overridden in the :mod:`torch` API. For some applications ignoring
 optional arguments would be fine but to ensure full compatibility with
 :class:`Tensor`, user implementations of torch API functions should take care to
-exactly emulate the API of the function that is being overrided.
+exactly emulate the API of the function that is being overridden.
 Functions in the :mod:`torch` API that do not have explicit overrides will
 return ``NotImplemented`` from ``__torch_function__``. If all operands with
@@ -860,7 +860,7 @@ signature of the original ``PyTorch`` function::
 <Signature (input, other, out=None)>
 Finally, ``torch.overrides.get_ignored_functions`` returns a tuple of functions
-that explicitly cannot be overrided by ``__torch_function__``. This list can be
+that explicitly cannot be overridden by ``__torch_function__``. This list can be
 useful to confirm that a function that isn't present in the dictionary returned
 by ``get_overridable_functions`` cannot be overridden.

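A small sketch of the point the first hunk makes, i.e. a ``__torch_function__`` override whose signature mirrors the overridden ``torch.add``; the ``ScalarTensor`` class and ``scalar_add`` helper are illustrative, not part of the docs being edited:

    import torch

    HANDLED_FUNCTIONS = {}

    class ScalarTensor:
        """Toy tensor-like that participates in __torch_function__ dispatch."""
        def __init__(self, value):
            self.value = value

        @classmethod
        def __torch_function__(cls, func, types, args=(), kwargs=None):
            kwargs = kwargs or {}
            if func not in HANDLED_FUNCTIONS:
                return NotImplemented
            return HANDLED_FUNCTIONS[func](*args, **kwargs)

    # Mirror torch.add's signature (input, other, *, alpha=1, out=None) so the
    # override stays drop-in compatible with the function it replaces.
    def scalar_add(input, other, *, alpha=1, out=None):
        return ScalarTensor(input.value + alpha * other.value)

    HANDLED_FUNCTIONS[torch.add] = scalar_add

    print(torch.add(ScalarTensor(1.0), ScalarTensor(2.0)).value)  # 3.0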

@@ -4,7 +4,7 @@ Numerical accuracy
 ==================
 In modern computers, floating point numbers are represented using IEEE 754 standard.
-For more details on floating point arithmetics and IEEE 754 standard, please see
+For more details on floating point arithmetic and IEEE 754 standard, please see
 `Floating point arithmetic <https://en.wikipedia.org/wiki/Floating-point_arithmetic>`_
 In particular, note that floating point provides limited accuracy (about 7 decimal digits
 for single precision floating point numbers, about 16 decimal digits for double precision

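A quick way to see the roughly 7 versus 16 decimal digit difference mentioned above:

    import torch

    # 0.1 is not exactly representable in binary; single precision keeps about
    # 7 significant decimal digits, double precision about 16.
    print(f"{torch.tensor(0.1, dtype=torch.float32).item():.20f}")
    print(f"{torch.tensor(0.1, dtype=torch.float64).item():.20f}")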

@@ -132,7 +132,7 @@ to Y, and Y forks to Z:
 OwnerRRef -> A -> Y -> Z
 If all of Z's messages, including the delete message, are processed by the
-owner before Y's messages. the owner will learn of Z's deletion befores
+owner before Y's messages. the owner will learn of Z's deletion before
 knowing Y exists. Nevertheless, this does not cause any problem. Because, at least
 one of Y's ancestors will be alive (A) and it will
 prevent the owner from deleting the ``OwnerRRef``. More specifically, if the


@@ -6,4 +6,4 @@ everything that is supported by exportdb, but it covers the
 most common and confusing use cases that users will run into.
 If you have a feature that you think needs a stronger guarantee from us to
-support in export please create an issue in the pytorch/pytorch repo wih a module:export tag.
+support in export please create an issue in the pytorch/pytorch repo with a module:export tag.


@@ -9,7 +9,7 @@ TorchDynamo APIs for fine-grained tracing
 ``torch.compile`` performs TorchDynamo tracing on the whole user model.
 However, it is possible that a small part of the model code cannot be
-handeled by ``torch.compiler``. In this case, you might want to disable
+handled by ``torch.compiler``. In this case, you might want to disable
 the compiler on that particular portion, while running compilation on
 the rest of the model. This section describe the existing APIs that
 use to define parts of your code in which you want to skip compilation
@@ -22,7 +22,7 @@ disable compilation are listed in the following table:
 :header: "API", "Description", "When to use?"
 :widths: auto
-"``torch.compiler.disable``", "Disables Dynamo on the decorated function as well as recursively invoked functions.", "Excellent for unblocking a user, if a small portion of the model cannot be handeled with ``torch.compile``."
+"``torch.compiler.disable``", "Disables Dynamo on the decorated function as well as recursively invoked functions.", "Excellent for unblocking a user, if a small portion of the model cannot be handled with ``torch.compile``."
 "``torch._dynamo.disallow_in_graph``", "Disallows the marked op in the TorchDynamo graph. TorchDynamo causes graph break, and runs the op in the eager (no compile) mode.\n\nThis is suitable for the ops, while ``torch.compiler.disable`` is suitable for decorating functions.", "This API is excellent for both debugging and unblocking if a custom op like ``torch.ops.fbgemm.*`` is causing issues with the ``torch.compile`` function."
 "``torch.compile.allow_in_graph``", "The annotated callable goes as is in the TorchDynamo graph. For example, a black-box for TorchDynamo Dynamo.\n\nNote that AOT Autograd will trace through it, so the ``allow_in_graph`` is only a Dynamo-level concept.", "This API is useful for portions of the model which have known TorchDynamo hard-to-support features, like hooks or ``autograd.Function``. However, each usage of ``allow_in_graph`` **must be carefully screened** (no graph breaks, no closures)."
 "``torch._dynamo.graph_break``", "Adds a graph break. The code before and after the graph break goes through TorchDynamo.", "**Rarely useful for deployment** - If you think you need this, most probably you need either ``disable`` or ``disallow_in_graph``."

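As a rough illustration of the ``torch.compiler.disable`` row from the table above (the helper function name and body are hypothetical):

    import torch

    @torch.compiler.disable
    def tricky_helper(x):
        # Pretend this is code Dynamo cannot handle; it runs eagerly, and
        # anything it calls is skipped as well.
        return x + 1

    @torch.compile
    def fn(x):
        y = x * 2                 # compiled by TorchDynamo
        return tricky_helper(y)   # graph break here; helper runs uncompiled

    print(fn(torch.randn(3)))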

@@ -394,7 +394,7 @@ class TestTorchFunctionOverride(TestCase):
     cls._stack.close()
 def test_mean_semantics(self):
-    """Test that a function with one argument can be overrided"""
+    """Test that a function with one argument can be overridden"""
     t1 = DiagonalTensor(5, 2)
     t2 = SubTensor([[1, 2], [1, 2]])
     t3 = SubDiagonalTensor(5, 2)
@@ -410,7 +410,7 @@ class TestTorchFunctionOverride(TestCase):
     has_torch_function(object())
 def test_mm_semantics(self):
-    """Test that a function with multiple arguments can be overrided"""
+    """Test that a function with multiple arguments can be overridden"""
     t1 = DiagonalTensor(5, 2)
     t2 = torch.eye(5) * 2
     t3 = SubTensor([[1, 2], [1, 2]])


@@ -235,7 +235,7 @@ class TestPythonRegistration(TestCase):
 self.assertFalse(torch.mul(x, y)._is_zerotensor())
 # Assert that a user can't override the behavior of a (ns, op, dispatch_key)
-# combination if someone overrided the behavior for the same before them
+# combination if someone overridden the behavior for the same before them
 with self.assertRaisesRegex(
     RuntimeError, "already a kernel registered from python"
 ):


@@ -55,7 +55,7 @@ class FakeClassRegistry:
 def register(self, full_qualname: str, fake_class=None) -> None:
     if self.has_impl(full_qualname):
         log.warning(
-            "%s is already registered. Previous fake class is overrided with %s.",
+            "%s is already registered. Previous fake class is overridden with %s.",
            full_qualname,
            fake_class,
        )