Mirror of https://github.com/pytorch/pytorch.git (synced 2025-10-20 21:14:14 +08:00)
fix spelling of word - when (#160185)

Fixes a typo in the word `when`, found while reading the codebase during work on another PR. Affected files:
```
native/cpu/PaddingKernel.cpp
native/cpu/batch_norm_kernel.cpp
```
@eqy

Pull Request resolved: https://github.com/pytorch/pytorch/pull/160185
Approved by: https://github.com/yewentao256, https://github.com/ezyang
This commit is contained in:
committed by PyTorch MergeBot
parent 91f0bcf43f
commit 0d421ace32
```diff
@@ -156,7 +156,7 @@ void cpu_padding(
   int64_t offset_h = ndim >= 2 ? p.offsets[ndim - 2] : 0;
   int64_t offset_w = p.offsets[ndim - 1];
 
-  // do vectorized copy whe output is overlapped with input on W,
+  // do vectorized copy when output is overlapped with input on W,
   // only applies to positive padding
   auto loop = [=](scalar_t* out, const scalar_t* in, bool positive_padding) {
     if (positive_padding) {
```
```diff
@@ -318,7 +318,7 @@ batch_norm_cpu_collect_stats_channels_last_impl(
   //
   // The optimal THRESHOLD to tile was found empirically.
   // When C > THRESHOLD, C is large enough that the benefit from tiling and vectorization outweigh the synchronization overhead.
-  // Wehn C <= TILE_SIZE, the problem size is small enough (C <= TILE_SIZE && NHW <= max_threads) that it's better to launch single thread with vectorization than C threads without vectorization.
+  // When C <= TILE_SIZE, the problem size is small enough (C <= TILE_SIZE && NHW <= max_threads) that it's better to launch single thread with vectorization than C threads without vectorization.
   //
   // When num_threads == 1, always use Method 2 as there is no synchronization overhead.
   //
```
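The comment touched by this hunk describes a dispatch heuristic for the channels-last batch-norm stats kernel. As a rough illustration of that decision logic only, here is a hedged sketch; `choose_method`, `kThreshold`, and `kTileSize` are hypothetical names with illustrative values, not the actual constants or control flow in `batch_norm_kernel.cpp`:

```cpp
#include <cassert>
#include <cstdint>

// Illustrative values; the real THRESHOLD and TILE_SIZE were found
// empirically and may differ.
constexpr int64_t kThreshold = 256;  // assumed tiling threshold (C)
constexpr int64_t kTileSize = 16;    // assumed tile width (C)

enum class Method {
  PerChannelThreads,       // C threads, no vectorization
  SingleThreadVectorized,  // one thread, vectorized ("Method 2" in the comment)
};

// Sketch of the heuristic the comment describes.
Method choose_method(int64_t C, int64_t NHW, int num_threads, int max_threads) {
  // With one thread there is no synchronization overhead: always Method 2.
  if (num_threads == 1) return Method::SingleThreadVectorized;
  // Large C: the benefit from tiling and vectorization outweighs the
  // synchronization overhead.
  if (C > kThreshold) return Method::SingleThreadVectorized;
  // Small problem (C <= TILE_SIZE && NHW <= max_threads): a single
  // vectorized thread beats C threads without vectorization.
  if (C <= kTileSize && NHW <= max_threads) return Method::SingleThreadVectorized;
  // Otherwise fall back to threading across channels.
  return Method::PerChannelThreads;
}
```

The sketch only encodes the three conditions the comment states; the real kernel also chooses how to partition the NHW reduction, which is not modeled here.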