pytorch/.github/label_to_label.yml
fduwjj b50075343a [distributed] Enable H100 test for all distributed related changes (#156721)
We want to run H100 CI for distributed-related changes. We already have a labeler rule that applies `oncall: distributed` when distributed-related code is touched: 4491326fb0/.github/labeler.yml (L94). So we want to leverage that.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/156721
Approved by: https://github.com/huydhn
2025-06-26 01:51:41 +00:00


# Use this to auto apply labels based on other labels. Applies to both PRs and
# issues. Currently only supports any and all
- any:
  - "module: opcheck"
  then:
  - "module: custom-operators"
- any:
  - "module: custom-operators"
  - "module: functionalization"
  - "module: aotdispatch"
  - "module: higher order operators"
  - "module: fakeTensor"
  - "module: ProxyTensor"
  - "module: library"
  - "module: reinplacing"
  then:
  - "module: pt2-dispatcher"
- any:
  - "module: vmap"
  then:
  - "module: functorch"
- any:
  - "module: reinplacing"
  then:
  - "module: inductor"
- any:
  - "module: pt2 optimizer"
  then:
  - "module: dynamo"
- any:
  - "module: flex attention"
  then:
  - "module: higher order operators"
- any:
  - "module: aotinductor"
  then:
  - "oncall: export"
- any:
  - "module: dynamo"
  - "module: pt2-dispatcher"
  - "module: inductor"
  - "module: aotinductor"
  - "module: cudagraphs"
  - "oncall: export"
  - "module: compile-time"
  - "module: compiled autograd"
  - "module: flex attention"
  - "module: dynamic shapes"
  then:
  - "oncall: pt2"
- any:
  - "release notes: distributed (c10d)"
  - "release notes: distributed (symm_mem)"
  - "release notes: distributed (pipeline)"
  - "release notes: distributed (fsdp)"
  - "release notes: distributed (dtensor)"
  - "oncall: distributed"
  then:
  - "ciflow/h100-distributed"