Mirror of https://github.com/pytorch/pytorch.git, synced 2025-10-20 21:14:14 +08:00
Grammar patch 1 (.md) (#41599)
Summary: A minor spell check! I have gone through a dozen .md files to fix the typos. zou3519 take a look!

Pull Request resolved: https://github.com/pytorch/pytorch/pull/41599
Reviewed By: ezyang
Differential Revision: D22601629
Pulled By: zou3519
fbshipit-source-id: 68d8f77ad18edc1e77874f778b7dadee04b393ef
Committed by: Facebook GitHub Bot
Parent: 6769b850b2
Commit: ce443def01
@@ -446,7 +446,7 @@ export DESIRED_CUDA=cpu
 To build a CUDA binary you need to use `nvidia-docker run` instead of just `docker run` (or you can manually pass `--runtime=nvidia`). This adds some needed libraries and things to build CUDA stuff.
 
-You can build CUDA binaries on CPU only machines, but you can only run CUDA binaries on CUDA machines. This means that you can build a CUDA binary on a docker on your laptop if you so choose (though it’s gonna take a loong time).
+You can build CUDA binaries on CPU only machines, but you can only run CUDA binaries on CUDA machines. This means that you can build a CUDA binary on a docker on your laptop if you so choose (though it’s gonna take a long time).
 
 For Facebook employees, ask about beefy machines that have docker support and use those instead of your laptop; it will be 5x as fast.
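The `nvidia-docker run` note in the hunk above amounts to a pair of equivalent invocations. A hedged sketch; the image name and build script here are placeholders, not the actual binary-build images:

```shell
# Placeholder image/script names; substitute the real builder image and command.
nvidia-docker run -it pytorch/builder-image ./build_cuda.sh

# Equivalent without the nvidia-docker wrapper, passing the runtime manually:
docker run --runtime=nvidia -it pytorch/builder-image ./build_cuda.sh
```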
@@ -238,7 +238,7 @@ To build the documentation:
 1. Build and install PyTorch
 
-2. Install the prequesities
+2. Install the prerequisites
 
 ```bash
 cd docs
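The documentation steps in the hunk above typically continue along these lines (a hedged sketch; the exact requirements file and make target may differ between versions):

```shell
cd docs
pip install -r requirements.txt   # assumed location of the docs requirements file
make html                         # build the HTML docs with Sphinx
```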
@@ -6,7 +6,7 @@ Caffe2 is a lightweight, modular, and scalable deep learning framework. Building
 ## Questions and Feedback
 
-Please use Github issues (https://github.com/pytorch/pytorch/issues) to ask questions, report bugs, and request new features.
+Please use GitHub issues (https://github.com/pytorch/pytorch/issues) to ask questions, report bugs, and request new features.
 
 ### Further Resources on [Caffe2.ai](http://caffe2.ai)
@@ -197,7 +197,7 @@ Note that the bang (!) is added after the opening comment """! - this seems to d
 ### Other Notes
 
-Useful for xcode, currently off
+Useful for Xcode, currently off
 GENERATE_DOCSET = NO
 
 Look at search engine integration, xml output, etc
@@ -61,7 +61,7 @@ CHECK` pragmas and essentially it sees the input string like this:
 ```
 
 It then checks that the optimized IR satisfies the specified annotations. It
-first finds string `%x : Tensor = aten::mul(%a, %b)` matching the annotion (1),
+first finds string `%x : Tensor = aten::mul(%a, %b)` matching the annotation (1),
 then it finds string `return (%x, %x)` matching the annotation (3), and since
 there were no lines matching `aten::mul` after the match (1) and before the
 match (3), the annotation (2) is also satisfied.
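The checking procedure described in this hunk — find the string for annotation (1), then the string for annotation (3), and verify nothing matching annotation (2) appears in between — can be sketched with a minimal stand-in matcher. This is an illustration of the idea only, not PyTorch's actual FileCheck implementation:

```python
def run_checks(ir, checks):
    """checks: list of (kind, pattern); kind is 'CHECK' or 'CHECK-NOT'.

    Each CHECK must match at or after the previous CHECK's match; every
    pending CHECK-NOT pattern must be absent from the region between the
    two surrounding CHECK matches.
    """
    pos = 0
    pending_nots = []
    for kind, pattern in checks:
        if kind == "CHECK-NOT":
            pending_nots.append(pattern)
            continue
        idx = ir.find(pattern, pos)
        if idx == -1:
            return False  # required string never appears
        # Region between the previous match and this one must be clean.
        between = ir[pos:idx]
        if any(p in between for p in pending_nots):
            return False
        pending_nots = []
        pos = idx + len(pattern)
    # Trailing CHECK-NOTs apply to the remainder of the input.
    return not any(p in ir[pos:] for p in pending_nots)

ir = """graph(%a, %b):
  %x : Tensor = aten::mul(%a, %b)
  return (%x, %x)"""

ok = run_checks(ir, [
    ("CHECK", "aten::mul(%a, %b)"),  # annotation (1)
    ("CHECK-NOT", "aten::mul"),      # annotation (2)
    ("CHECK", "return (%x, %x)"),    # annotation (3)
])
```

Here `ok` is true: the `aten::mul` match is found first, then `return (%x, %x)`, and no `aten::mul` occurs between the two matches.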
third_party/miniz-2.0.8/ChangeLog.md (vendored)
@@ -10,7 +10,7 @@
 ### 2.0.7
 
 - Removed need in C++ compiler in cmake build
-- Fixed loads of uninitilized value errors found with Valgrind by memsetting m_dict to 0 in tdefl_init.
+- Fixed loads of uninitialized value errors found with Valgrind by memsetting m_dict to 0 in tdefl_init.
 - Fix resource leak in mz_zip_reader_init_file_v2
 - Fix assert with mz_zip_writer_add_mem* w/MZ_DEFAULT_COMPRESSION
 - cmake build: install library and headers
@@ -66,7 +66,7 @@ The inflator now has a new failure status TINFL_STATUS_FAILED_CANNOT_MAKE_PROGRE
 - The inflator coroutine func. is subtle and complex so I'm being cautious about this release. I would greatly appreciate any help with testing or any feedback.
 I feel good about these changes, and they've been through several hours of automated testing, but they will probably not fix anything for the majority of prev. users so I'm
 going to mark this release as beta for a few weeks and continue testing it at work/home on various things.
-- The inflator in raw (non-zlib) mode is now usable on gzip or similiar data streams that have a bunch of bytes following the raw deflate data (problem discovered by rustyzip author williamw520).
+- The inflator in raw (non-zlib) mode is now usable on gzip or similar data streams that have a bunch of bytes following the raw deflate data (problem discovered by rustyzip author williamw520).
 This version should *never* read beyond the last byte of the raw deflate data independent of how many bytes you pass into the input buffer. This issue was caused by the various Huffman bitbuffer lookahead optimizations, and
 would not be an issue if the caller knew and enforced the precise size of the raw compressed data *or* if the compressed data was in zlib format (i.e. always followed by the byte aligned zlib adler32).
 So in other words, you can now call the inflator on deflate streams that are followed by arbitrary amounts of data and it's guaranteed that decompression will stop exactly on the last byte.
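The guarantee described in the changelog entry above — raw-mode inflation stopping exactly at the end of the deflate data even when arbitrary bytes follow it — is the same property Python's zlib module exposes for raw streams, so it can be demonstrated without miniz itself (an illustrative sketch, not miniz's API):

```python
import zlib

payload = b"hello, miniz" * 100

# Produce a raw deflate stream: negative wbits means no zlib header/trailer.
comp = zlib.compressobj(9, zlib.DEFLATED, -15)
raw = comp.compress(payload) + comp.flush()

# Append arbitrary trailing bytes after the raw deflate data.
stream = raw + b"TRAILING-GARBAGE"

# A raw-mode decompressor consumes exactly the deflate data and stops;
# everything after the last deflate byte is reported in .unused_data.
decomp = zlib.decompressobj(-15)
out = decomp.decompress(stream)
```

After the call, `out` equals the original payload and `decomp.unused_data` holds precisely the trailing bytes, showing decompression stopped on the last byte of the deflate stream.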
@@ -104,7 +104,7 @@ Interim bugfix release while I work on the next major release with zip64 and str
 - Retested this build under Windows (VS 2010, including static analysis), tcc 0.9.26, gcc v4.6 and clang v3.3.
 - Added example6.c, which dumps an image of the mandelbrot set to a PNG file.
 - Modified example2 to help test the MZ_ZIP_FLAG_DO_NOT_SORT_CENTRAL_DIRECTORY flag more.
-- In r3: Bugfix to mz_zip_writer_add_file() found during merge: Fix possible src file fclose() leak if alignment bytes+local header file write faiiled
+- In r3: Bugfix to mz_zip_writer_add_file() found during merge: Fix possible src file fclose() leak if alignment bytes+local header file write failed
 - In r4: Minor bugfix to mz_zip_writer_add_from_zip_reader(): Was pushing the wrong central dir header offset, appears harmless in this release, but it became a problem in the zip64 branch
 
 ### v1.14 - May 20, 2012
@@ -111,5 +111,5 @@ The code for constructing such an expression could look like this:
 # Memory model
 TBD
 
-# Integartion with PyTorch JIT
+# Integration with PyTorch JIT
 TBD