[coreml-backend-tool] fix PyTorch backend issue on new coremltools (#155543)

Summary:
The new coremltools exports an mlpackage instead of an mlmodel by default. When we use the new coremltools 8.0 to convert a model for the backend, the error is:

```
Exception: MLModel of type mlProgram cannot be loaded just from the model spec object. It also needs the path to the weights file. Please provide that as well, using the 'weights_dir' argument.
```
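
The error comes from the coremltools 8.0 default: `ct.convert` now produces an `mlprogram` (an `.mlpackage` whose weights live in a separate directory beside the spec), and such a spec cannot be rehydrated into an `MLModel` without its `weights_dir`. Passing `convert_to` explicitly restores the old single-file `.mlmodel` behavior. A minimal sketch of the coremltools side (the linear model and shapes are placeholders, not from this PR):

```python
import coremltools as ct
import torch

# Trace a trivial model; the real backend converts a ScriptModule instead.
model = torch.jit.trace(torch.nn.Linear(4, 2).eval(), torch.randn(1, 4))

# coremltools 8.0 defaults to "mlprogram"; "neuralnetwork" keeps the legacy
# single-file .mlmodel, whose spec loads without a weights_dir.
mlmodel = ct.convert(
    model,
    inputs=[ct.TensorType(shape=(1, 4))],
    convert_to="neuralnetwork",
)
```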

Test Plan:
Tested with internal workflow.

Rollback Plan:

Differential Revision: D76325462

Pull Request resolved: https://github.com/pytorch/pytorch/pull/155543
Approved by: https://github.com/shoumikhin
Zhihan Fang
2025-06-11 20:52:22 +00:00
committed by PyTorch MergeBot
parent cec264c8c6
commit db5970c1a6


```diff
@@ -55,6 +55,7 @@ def CompileSpec(
     allow_low_precision=True,
     quantization_mode=CoreMLQuantizationMode.NONE,
     mlmodel_export_path=None,
+    convert_to=None,
 ):
     return (
         inputs,
@@ -63,6 +64,7 @@ def CompileSpec(
         allow_low_precision,
         quantization_mode,
         mlmodel_export_path,
+        convert_to,
     )
@@ -91,6 +93,7 @@ def preprocess(script_module: torch._C.ScriptObject, compile_spec: dict[str, tup
         allow_low_precision,
         quantization_mode,
         mlmodel_export_path,
+        convert_to,
     ) = spec
     mil_inputs = []
     inputs = []
@@ -101,7 +104,7 @@ def preprocess(script_module: torch._C.ScriptObject, compile_spec: dict[str, tup
         ml_type = _convert_to_mil_type(shape, dtype, name)
         mil_inputs.append(ml_type)
     model = torch.jit.RecursiveScriptModule._construct(script_module, lambda x: None)
-    mlmodel = ct.convert(model, inputs=mil_inputs)
+    mlmodel = ct.convert(model, inputs=mil_inputs, convert_to=convert_to)
     if quantization_mode != CoreMLQuantizationMode.NONE:
         quant_model_spec = quantization_utils.quantize_weights(
```
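
With the new `convert_to` field threaded through the compile spec into `ct.convert`, a caller can pin the conversion target when lowering to the Core ML backend. A sketch of the intended usage, assuming the `torch.backends._coreml.preprocess` module path, its `TensorSpec` helper, and the private `torch._C._jit_to_backend` entry point (none of which are shown in this diff):

```python
import torch
from torch.backends._coreml.preprocess import CompileSpec, TensorSpec

scripted = torch.jit.script(torch.nn.Linear(4, 2).eval())

# Pin convert_to to "neuralnetwork" to keep the single-file .mlmodel format;
# leaving it as None falls through to the coremltools default ("mlprogram" in 8.0).
compile_spec = {
    "forward": CompileSpec(
        inputs=(TensorSpec(shape=[1, 4]),),
        outputs=(TensorSpec(shape=[1, 2]),),
        convert_to="neuralnetwork",
    )
}
coreml_model = torch._C._jit_to_backend("coreml", scripted, compile_spec)
```

Defaulting `convert_to` to `None` keeps existing callers on the coremltools default, so the new argument is backward compatible.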