update aotinductor doc for XPU support (#149299)

As the title says. Since AOTInductor works on Intel GPU starting from PyTorch 2.7, add the related content to its documentation.
Pull Request resolved: https://github.com/pytorch/pytorch/pull/149299
Approved by: https://github.com/guangyey, https://github.com/desertfire
This commit is contained in:
Jing Xu
2025-03-21 04:40:28 +00:00
committed by PyTorch MergeBot
parent ccd5d811e8
commit 4ea580568a


@@ -38,7 +38,8 @@ package.
 the following code will compile the model into a shared library for CUDA execution.
 Otherwise, the compiled artifact will run on CPU. For better performance during CPU inference,
 it is suggested to enable freezing by setting ``export TORCHINDUCTOR_FREEZING=1``
-before running the Python script below.
+before running the Python script below. The same behavior works in an environment with Intel®
+GPU as well.
.. code-block:: python
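The body of the code block is truncated in this diff. As a hedged sketch of the workflow the surrounding text describes (the `TinyModel` class, its shapes, and the device-selection helper are illustrative, not part of the commit), the same export-and-compile path can target CUDA, Intel GPU (XPU), or CPU depending on where the model and inputs live:

```python
# Sketch of AOTInductor ahead-of-time compilation; TinyModel and the
# pick_device helper are illustrative assumptions, not PyTorch APIs.
import torch


def pick_device() -> str:
    # Prefer CUDA, then Intel GPU (XPU); fall back to CPU.
    if torch.cuda.is_available():
        return "cuda"
    if hasattr(torch, "xpu") and torch.xpu.is_available():
        return "xpu"
    return "cpu"


class TinyModel(torch.nn.Module):
    def __init__(self) -> None:
        super().__init__()
        self.fc = torch.nn.Linear(10, 16)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.nn.functional.relu(self.fc(x))


if __name__ == "__main__":
    device = pick_device()
    model = TinyModel().to(device).eval()
    example_inputs = (torch.randn(8, 10, device=device),)
    with torch.no_grad():
        # torch.export captures the model as an ExportedProgram;
        # aoti_compile_and_package then compiles it ahead of time into
        # an artifact that can be loaded without the Python model class.
        ep = torch.export.export(model, example_inputs)
        artifact_path = torch._inductor.aoti_compile_and_package(ep)
        print(artifact_path)
```

On a CPU-only machine, setting ``TORCHINDUCTOR_FREEZING=1`` in the environment before running such a script enables the freezing optimization mentioned above.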