various documentation formatting (#9359)

Summary:
This is a grab-bag of documentation formatting fixes.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/9359

Differential Revision: D8831400

Pulled By: soumith

fbshipit-source-id: 8dac02303168b2ea365e23938ee528d8e8c9f9b7
Thomas Viehmann
2018-07-13 02:08:12 -07:00
committed by Facebook Github Bot
parent bb9ff58c6d
commit 3799b10c44
7 changed files with 18 additions and 17 deletions


@@ -48,8 +48,8 @@ build tasks. It can be used by typing only a few lines of code.
 One key install script
 ^^^^^^^^^^^^^^^^^^^^^^
-You can take a look at the script `here
-<https://github.com/peterjc123/pytorch-scripts>`_.
+You can take a look at `this set of scripts
+<https://github.com/peterjc123/pytorch-scripts>`_.
 It will lead the way for you.
 Extension
@@ -176,8 +176,8 @@ You can resolve this by typing the following command.
 As for the wheels package, since we didn't pack some libaries and VS2017
 redistributable files in, please make sure you install them manually.
-The VS 2017 redistributable installer can be downloaded `here
-<https://aka.ms/vs/15/release/VC_redist.x64.exe>`_.
+The `VS 2017 redistributable installer
+<https://aka.ms/vs/15/release/VC_redist.x64.exe>`_ can be downloaded.
 And you should also pay attention to your installation of Numpy. Make sure it
 uses MKL instead of OpenBLAS. You may type in the following command.


@@ -3782,10 +3782,11 @@ Args:
 end (Number): the ending value for the set of points
 step (Number): the gap between each pair of adjacent points. Default: ``1``.
 {out}
-{dtype} If `dtype` is not given, infer the data type from the other input arguments.
-If any of `start`, `end`, or `stop` are floating-point,
-the `dtype` is inferred to be the default dtype, see :meth:`~torch.get_default_dtype`.
-Otherwise, the `dtype` is inferred to be `torch.int64`.
+{dtype}
+If `dtype` is not given, infer the data type from the other input arguments.
+If any of `start`, `end`, or `stop` are floating-point,
+the `dtype` is inferred to be the default dtype, see :meth:`~torch.get_default_dtype`.
+Otherwise, the `dtype` is inferred to be `torch.int64`.
 {layout}
 {device}
 {requires_grad}
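The dtype-inference rule restated in this hunk can be checked directly; a minimal sketch, assuming a recent PyTorch build:

```python
import torch

# With no explicit dtype, torch.arange infers one from its arguments:
# all-integer arguments yield torch.int64, while any floating-point
# argument yields the default dtype (see torch.get_default_dtype()).
ints = torch.arange(0, 5, 1)
floats = torch.arange(0.0, 5.0, 0.5)

print(ints.dtype)    # torch.int64
print(floats.dtype)  # matches torch.get_default_dtype()
```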


@@ -22,7 +22,7 @@ class Categorical(Distribution):
 vectors.
 .. note:: :attr:`probs` must be non-negative, finite and have a non-zero sum,
-and it will be normalized to sum to 1.
+          and it will be normalized to sum to 1.
 See also: :func:`torch.multinomial`
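The normalization behavior the note describes can be seen in a short sketch (assuming a recent PyTorch; the probabilities here are arbitrary):

```python
import torch
from torch.distributions import Categorical

# probs need not sum to 1; Categorical normalizes them internally.
d = Categorical(probs=torch.tensor([1.0, 1.0, 2.0]))
print(d.probs)      # tensor([0.2500, 0.2500, 0.5000])
print(d.sample())   # an index in {0, 1, 2}, drawn with those probabilities
```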


@@ -16,7 +16,7 @@ class Multinomial(Distribution):
 called (see example below)
 .. note:: :attr:`probs` must be non-negative, finite and have a non-zero sum,
-and it will be normalized to sum to 1.
+          and it will be normalized to sum to 1.
 - :meth:`sample` requires a single shared `total_count` for all
 parameters and samples.
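The shared `total_count` constraint mentioned here shows up directly in sampling; a sketch with made-up probabilities:

```python
import torch
from torch.distributions import Multinomial

# A single total_count is shared by all parameters; probs are normalized.
m = Multinomial(total_count=10, probs=torch.tensor([1.0, 1.0, 2.0]))
counts = m.sample()
print(counts.sum().item())  # always 10.0, the shared total_count
```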


@@ -12,7 +12,7 @@ class OneHotCategorical(Distribution):
 Samples are one-hot coded vectors of size ``probs.size(-1)``.
 .. note:: :attr:`probs` must be non-negative, finite and have a non-zero sum,
-and it will be normalized to sum to 1.
+          and it will be normalized to sum to 1.
 See also: :func:`torch.distributions.Categorical` for specifications of
 :attr:`probs` and :attr:`logits`.
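The one-hot sample shape described above, as a sketch (arbitrary probabilities, recent PyTorch assumed):

```python
import torch
from torch.distributions import OneHotCategorical

# Samples are one-hot vectors of size probs.size(-1); probs are normalized.
d = OneHotCategorical(probs=torch.tensor([1.0, 2.0, 1.0]))
s = d.sample()
print(s)  # a one-hot vector of size 3, with exactly one entry equal to 1
```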


@@ -87,7 +87,7 @@ def checkpoint(function, *args):
 args: tuple containing inputs to the :attr:`function`
 Returns:
-Output of running :attr:`function` on *:attr:`args`
+Output of running :attr:`function` on :attr:`*args`
 """
 return CheckpointFunction.apply(function, *args)
@@ -120,7 +120,7 @@ def checkpoint_sequential(functions, segments, *inputs):
 inputs: tuple of Tensors that are inputs to :attr:`functions`
 Returns:
-Output of running :attr:`functions` sequentially on *:attr:`inputs`
+Output of running :attr:`functions` sequentially on :attr:`*inputs`
 Example:
 >>> model = nn.Sequential(...)
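The docstring being touched up here describes memory-saving gradient checkpointing; a hedged usage sketch (the model and sizes are made up for illustration):

```python
import torch
import torch.nn as nn
from torch.utils.checkpoint import checkpoint_sequential

# Toy model; checkpoint_sequential saves memory by not storing
# intermediate activations and recomputing them during backward.
model = nn.Sequential(nn.Linear(8, 8), nn.ReLU(), nn.Linear(8, 4))
x = torch.randn(2, 8, requires_grad=True)  # input must require grad

out = checkpoint_sequential(model, 2, x)  # run in 2 checkpointed segments
out.sum().backward()
print(x.grad.shape)  # torch.Size([2, 8])
```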


@@ -7,8 +7,8 @@ torch._C._add_docstr(from_dlpack, r"""from_dlpack(dlpack) -> Tensor
 Decodes a DLPack to a tensor.
-Arguments::
-    dlpack - a PyCapsule object with the dltensor
+Args:
+    dlpack: a PyCapsule object with the dltensor
 The tensor will share the memory with the object represented
 in the dlpack.
@@ -19,8 +19,8 @@ torch._C._add_docstr(to_dlpack, r"""to_dlpack(tensor) -> PyCapsule
 Returns a DLPack representing the tensor.
-Arguments::
-    tensor - a tensor to be exported
+Args:
+    tensor: a tensor to be exported
 The dlpack shares the tensors memory.
 Note that each dlpack can only be consumed once.
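The shared-memory, single-use behavior both docstrings describe, as a sketch:

```python
import torch
from torch.utils.dlpack import from_dlpack, to_dlpack

t = torch.arange(4.0)
capsule = to_dlpack(t)     # PyCapsule wrapping the tensor's memory
t2 = from_dlpack(capsule)  # shares memory; the capsule is now consumed

t2[0] = 42.0               # writes through to the original tensor
print(t[0].item())  # 42.0
```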