Profile guided optimization for automatic_dynamic (#139001)

Previously: https://github.com/pytorch/pytorch/pull/138052, but the implementation here is done from scratch, so I'm opening a new PR.

This implements the ability to save and load profiles of automatic_dynamic decisions, so that on subsequent runs we can make the relevant sizes dynamic right away instead of waiting for a recompile. Unlike the previous implementation, this cache is never enabled by default; instead, you have to specify a "job id" that says it's OK to share results. We will be able to populate this id automatically for internal MAST jobs, but generic OSS users will have to opt in explicitly.
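A minimal opt-in sketch, assuming the job id is exposed as a `torch.compiler.config.job_id` setting; treat that attribute path (and the example value) as assumptions rather than something this commit message guarantees:

```python
import torch
import torch.compiler.config  # assumed location of the job_id knob

# Assumed knob: a stable, user-chosen job id that opts this run into
# saving and loading automatic_dynamic profiles across invocations.
torch.compiler.config.job_id = "my-training-job"  # hypothetical value

@torch.compile
def step(x):
    return x * 2

# The first run records which input sizes ended up dynamic; later runs with
# the same job id can mark them dynamic immediately instead of recompiling.
print(step(torch.randn(8)))
```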

Signed-off-by: Edward Z. Yang <ezyang@meta.com>

Differential Revision: [D65065497](https://our.internmc.facebook.com/intern/diff/D65065497)
Pull Request resolved: https://github.com/pytorch/pytorch/pull/139001
Approved by: https://github.com/oulgen
Edward Z. Yang
2024-11-01 21:06:34 -07:00
committed by PyTorch MergeBot
parent 55038aa661
commit f6be44c74e
14 changed files with 662 additions and 28 deletions


@@ -4,7 +4,7 @@ import logging
 import os
 import sys
 import tempfile
-from typing import Any, Callable, Dict, List, Optional, TypeVar
+from typing import Any, Callable, Dict, List, Optional, Tuple, TypeVar
 from typing_extensions import ParamSpec
 
 import torch
@@ -344,6 +344,10 @@ def max_clock_rate():
         return 1100
 
 
+def get_mast_job_name_version() -> Optional[Tuple[str, int]]:
+    return None
+
+
 TEST_MASTER_ADDR = "127.0.0.1"
 TEST_MASTER_PORT = 29500
 # USE_GLOBAL_DEPS controls whether __init__.py tries to load
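
For illustration only, here is a sketch of how a caller might derive a shareable job id from the MAST hook added above; `default_job_id` is a hypothetical helper, not code from this PR, and the real wiring lives in the rest of the diff:

```python
from typing import Optional

from torch._utils_internal import get_mast_job_name_version


def default_job_id() -> Optional[str]:
    # In OSS builds the hook returns None, so no job id is inferred and the
    # profile cache stays off unless the user supplies one explicitly.
    name_version = get_mast_job_name_version()
    if name_version is None:
        return None
    name, version = name_version
    return f"mast:{name}:{version}"
```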