mirror of
https://github.com/pytorch/pytorch.git
synced 2025-10-20 21:14:14 +08:00
update aotinductor doc for XPU support (#149299)
As the title says: since AOTInductor works on Intel GPU starting from PyTorch 2.7, add the related content to its documentation. Pull Request resolved: https://github.com/pytorch/pytorch/pull/149299 Approved by: https://github.com/guangyey, https://github.com/desertfire
Committed by: PyTorch MergeBot
Parent: ccd5d811e8
Commit: 4ea580568a
@@ -38,7 +38,8 @@ package.
 the following code will compile the model into a shared library for CUDA execution.
 Otherwise, the compiled artifact will run on CPU. For better performance during CPU inference,
 it is suggested to enable freezing by setting ``export TORCHINDUCTOR_FREEZING=1``
-before running the Python script below.
+before running the Python script below. The same behavior works in an environment with Intel®
+GPU as well.

 .. code-block:: python