[EZ] Replace pytorch-labs with meta-pytorch (#160459)

This PR replaces all instances of 'pytorch-labs' with 'meta-pytorch' in this repository, now that the 'pytorch-labs' org has been renamed to 'meta-pytorch'.

## Changes Made
- Replaced all occurrences of 'pytorch-labs' with 'meta-pytorch'
- Only modified files with extensions: .py, .md, .sh, .rst, .cpp, .h, .txt, .yml
- Skipped binary files and files larger than 1 MB, due to GitHub API payload limits in the script used to cover all repos in this org; a more manual second pass will cover any larger files later (a sketch of this kind of script is shown below)
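
For reference, a minimal sketch of what such a replacement script could look like when run against a local checkout. This is an illustration only, not the actual automated script (which is not included in this PR); the extension list, size cutoff, and binary handling simply mirror the description above:

```python
#!/usr/bin/env python3
# Hypothetical sketch of the bulk-rename step described above.
import os

OLD, NEW = "pytorch-labs", "meta-pytorch"
EXTENSIONS = {".py", ".md", ".sh", ".rst", ".cpp", ".h", ".txt", ".yml"}
MAX_BYTES = 1024 * 1024  # skip files larger than 1 MB

def rewrite_repo(root: str) -> None:
    for dirpath, _dirnames, filenames in os.walk(root):
        if ".git" in dirpath.split(os.sep):
            continue  # never touch git internals
        for name in filenames:
            path = os.path.join(dirpath, name)
            if os.path.splitext(name)[1] not in EXTENSIONS:
                continue
            if os.path.getsize(path) > MAX_BYTES:
                continue  # larger files get a manual second pass
            try:
                with open(path, encoding="utf-8") as f:
                    text = f.read()
            except UnicodeDecodeError:
                continue  # treat undecodable files as binary and skip them
            if OLD in text:
                with open(path, "w", encoding="utf-8") as f:
                    f.write(text.replace(OLD, NEW))

if __name__ == "__main__":
    rewrite_repo(".")
```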

## Files Modified
This PR updates files that contained the target text.

Generated by an automated script on 2025-08-12T20:41:29.888681+00:00
Pull Request resolved: https://github.com/pytorch/pytorch/pull/160459
Approved by: https://github.com/huydhn, https://github.com/clee2000, https://github.com/atalman, https://github.com/malfet
Author: Zain Rizvi
Date: 2025-08-12 22:44:22 +00:00
Committed by: PyTorch MergeBot
Parent: 5737372862
Commit: 0d71ca2c46
3 changed files with 3 additions and 3 deletions

@@ -2,7 +2,7 @@
 ## Demo applications and tutorials
-Please refer to [pytorch-labs/executorch-examples](https://github.com/pytorch-labs/executorch-examples/tree/main/dl3/android/DeepLabV3Demo) for the Android demo app based on [ExecuTorch](https://github.com/pytorch/executorch).
+Please refer to [meta-pytorch/executorch-examples](https://github.com/meta-pytorch/executorch-examples/tree/main/dl3/android/DeepLabV3Demo) for the Android demo app based on [ExecuTorch](https://github.com/pytorch/executorch).
 Please join our [Discord](https://discord.com/channels/1334270993966825602/1349854760299270284) for any questions.

@@ -1304,7 +1304,7 @@ at::Tensor _convert_weight_to_int4pack_cuda(
   constexpr int32_t kKTileSize = 16;
   // GPT-FAST assumes nTileSize of 8 for quantized weight tensor.
-  // See https://github.com/pytorch-labs/gpt-fast/blob/091515ab5b06f91c0d6a3b92f9c27463f738cc9b/quantize.py#L510
+  // See https://github.com/meta-pytorch/gpt-fast/blob/091515ab5b06f91c0d6a3b92f9c27463f738cc9b/quantize.py#L510
   // Torch dynamo also requires the torch ops has the same output shape for each device.
   // See https://github.com/pytorch/pytorch/blob/ec284d3a74ec1863685febd53687d491fd99a161/torch/_meta_registrations.py#L3263
   constexpr int32_t kNTileSizeTensor = 8;

@@ -611,7 +611,7 @@ def _group_quantize_tensor_symmetric(w, n_bit=4, groupsize=32):
 def _dynamically_quantize_per_channel(x, quant_min, quant_max, target_dtype):
-    # source: https://github.com/pytorch-labs/gpt-fast/blob/main/quantize.py
+    # source: https://github.com/meta-pytorch/gpt-fast/blob/main/quantize.py
     # default setup for affine quantization of activations
     x_dtype = x.dtype
     x = x.float()
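
For context, the helper touched in this hunk follows the gpt-fast reference and performs dynamic per-channel affine quantization. Below is a minimal sketch of that general technique; the function name, clamping details, and range handling here are illustrative assumptions, not the repository's implementation:

```python
import torch

def dynamic_quantize_per_channel(x, quant_min=-128, quant_max=127, target_dtype=torch.int8):
    # Hypothetical sketch: quantize each row (channel) of a 2D tensor with its
    # own scale and zero point, computed on the fly from the observed range.
    min_val = x.amin(dim=1, keepdim=True)
    max_val = x.amax(dim=1, keepdim=True)
    # Extend the range to include zero so that zero is exactly representable.
    min_val = torch.minimum(min_val, torch.zeros_like(min_val))
    max_val = torch.maximum(max_val, torch.zeros_like(max_val))
    scale = (max_val - min_val) / float(quant_max - quant_min)
    scale = scale.clamp(min=1e-8)  # avoid division by zero for constant rows
    zero_point = (quant_min - torch.round(min_val / scale)).clamp(quant_min, quant_max)
    xq = torch.clamp(torch.round(x / scale) + zero_point, quant_min, quant_max)
    return xq.to(target_dtype), scale, zero_point
```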