2.4.0 release (#3195)
jingxu10 authored Aug 14, 2024
1 parent a9f2e73 commit 4067e44
Showing 146 changed files with 30,048 additions and 0 deletions.
4 changes: 4 additions & 0 deletions cpu/2.4.0+cpu/.buildinfo
@@ -0,0 +1,4 @@
# Sphinx build info version 1
# This file hashes the configuration used when building these files. When it is not found, a full rebuild will be done.
config: ce5e33ee2857ff353429c5c71f5ead41
tags: 645f666f9bcd5a90fca523b33c5a78b7
Binary file added cpu/2.4.0+cpu/_images/1ins_cus.gif
Binary file added cpu/2.4.0+cpu/_images/1ins_log.gif
Binary file added cpu/2.4.0+cpu/_images/1ins_phy.gif
Binary file added cpu/2.4.0+cpu/_images/1ins_soc.gif
Binary file added cpu/2.4.0+cpu/_images/GenAI-bf16.gif
Binary file added cpu/2.4.0+cpu/_images/GenAI-int8.gif
Binary file added cpu/2.4.0+cpu/_images/autotp_bf16_llama.gif
Binary file added cpu/2.4.0+cpu/_images/autotp_woq_int8_llama.gif
Binary file added cpu/2.4.0+cpu/_images/bf16_llama.gif
Binary file added cpu/2.4.0+cpu/_images/figure1_memory_layout.png
Binary file added cpu/2.4.0+cpu/_images/figure2_dispatch.png
Binary file added cpu/2.4.0+cpu/_images/figure3_strided_layout.png
Binary file added cpu/2.4.0+cpu/_images/hypertune.png
Binary file added cpu/2.4.0+cpu/_images/int8_pattern.png
Binary file added cpu/2.4.0+cpu/_images/kmp_affinity.jpg
Binary file added cpu/2.4.0+cpu/_images/llm_iakv_1.png
Binary file added cpu/2.4.0+cpu/_images/llm_iakv_2.png
Binary file added cpu/2.4.0+cpu/_images/m7i_m6i_comp_gptj6b.png
Binary file added cpu/2.4.0+cpu/_images/m7i_m6i_comp_llama13b.png
Binary file added cpu/2.4.0+cpu/_images/m7i_m6i_comp_llama7b.png
Binary file added cpu/2.4.0+cpu/_images/nins_cus.gif
Binary file added cpu/2.4.0+cpu/_images/nins_lat.gif
Binary file added cpu/2.4.0+cpu/_images/nins_thr.gif
Binary file added cpu/2.4.0+cpu/_images/smoothquant_int8_llama.gif
Binary file added cpu/2.4.0+cpu/_images/split_sgd.png
Binary file added cpu/2.4.0+cpu/_images/two_socket_config.png
Binary file added cpu/2.4.0+cpu/_images/woq_int4_gptj.gif
Binary file added cpu/2.4.0+cpu/_images/woq_int8_llama.gif
3 changes: 3 additions & 0 deletions cpu/2.4.0+cpu/_sources/design_doc/cpu/isa_dyndisp.md.txt
@@ -0,0 +1,3 @@
# Intel® Extension for PyTorch\* CPU ISA Dynamic Dispatch Design Doc

The design document has been merged with [the ISA Dynamic Dispatch feature introduction](../../tutorials/features/isa_dynamic_dispatch.md).
100 changes: 100 additions & 0 deletions cpu/2.4.0+cpu/_sources/index.rst.txt
@@ -0,0 +1,100 @@
.. meta::
:description: This website introduces Intel® Extension for PyTorch*
:keywords: Intel optimization, PyTorch, Intel® Extension for PyTorch*, GPU, discrete GPU, Intel discrete GPU

Intel® Extension for PyTorch*
#############################

Intel® Extension for PyTorch* extends PyTorch* with the latest performance optimizations for Intel hardware.
Optimizations take advantage of Intel® Advanced Vector Extensions 512 (Intel® AVX-512) Vector Neural Network Instructions (VNNI) and Intel® Advanced Matrix Extensions (Intel® AMX) on Intel CPUs as well as Intel X\ :sup:`e`\ Matrix Extensions (XMX) AI engines on Intel discrete GPUs.
Moreover, Intel® Extension for PyTorch* provides easy GPU acceleration for Intel discrete GPUs through the PyTorch* ``xpu`` device.

In the current technological landscape, Generative AI (GenAI) workloads and models have gained widespread attention and popularity, with Large Language Models (LLMs) emerging as the dominant models driving these GenAI applications. Starting with release 2.1.0, Intel® Extension for PyTorch* introduces specific optimizations for certain
LLMs. For more information on LLM optimizations, refer to the `Large Language Models (LLM) <tutorials/llm.html>`_ section.

The extension can be loaded as a Python module for Python programs or linked as a C++ library for C++ programs. In Python scripts, users can enable it dynamically by importing ``intel_extension_for_pytorch``.
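
A minimal sketch of enabling the extension dynamically (the torchvision ResNet-50 model here is only an illustrative placeholder):

.. code-block:: python

    import torch
    import torchvision.models as models
    import intel_extension_for_pytorch as ipex

    model = models.resnet50(weights="DEFAULT")
    model.eval()
    # Apply the extension's CPU optimizations for inference
    model = ipex.optimize(model)

    with torch.no_grad():
        model(torch.randn(1, 3, 224, 224))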

.. note::

- GPU features are not included in CPU-only packages.
- The CPU-only package may have a newer code base than the GPU package due to different development schedules.

Intel® Extension for PyTorch* has been released as an open-source project on `GitHub <https://github.com/intel/intel-extension-for-pytorch>`_. You can find the source code and instructions on how to get started at:

- **CPU**: `CPU main branch <https://github.com/intel/intel-extension-for-pytorch/tree/main>`_ | `Quick Start <https://intel.github.io/intel-extension-for-pytorch/cpu/latest/tutorials/getting_started>`_
- **XPU**: `XPU main branch <https://github.com/intel/intel-extension-for-pytorch/tree/xpu-main>`_ | `Quick Start <https://intel.github.io/intel-extension-for-pytorch/xpu/latest/tutorials/getting_started>`_

You can find more information about the product at:

- `Features <https://intel.github.io/intel-extension-for-pytorch/cpu/latest/tutorials/features>`_
- `Performance <https://intel.github.io/intel-extension-for-pytorch/cpu/latest/tutorials/performance>`_

Architecture
------------

Intel® Extension for PyTorch* is structured as shown in the following figure:

.. figure:: ../images/intel_extension_for_pytorch_structure.png
:width: 800
:align: center
:alt: Architecture of Intel® Extension for PyTorch*

Architecture of Intel® Extension for PyTorch*

- **Eager Mode**: In the eager mode, the PyTorch frontend is extended with custom Python modules (such as fusion modules), optimized optimizers, and INT8 quantization APIs. Further performance improvement is achieved by converting eager-mode models into graph mode using extended graph fusion passes.
- **Graph Mode**: In the graph mode, fusions reduce operator/kernel invocation overhead, resulting in improved performance. Compared to the eager mode, the graph mode in PyTorch* normally yields better performance through optimization techniques like operator fusion, and Intel® Extension for PyTorch* amplifies them with more comprehensive graph optimizations. Both PyTorch ``Torchscript`` and ``TorchDynamo`` graph modes are supported. With ``Torchscript``, we recommend ``torch.jit.trace()`` as the preferred entry point, as it generally supports a wider range of workloads than ``torch.jit.script()``. With ``TorchDynamo``, the ``ipex`` backend is available to provide good performance (see the sketch after this list).
- **CPU Optimization**: On CPU, Intel® Extension for PyTorch* automatically dispatches operators to underlying kernels based on detected instruction set architecture (ISA). The extension leverages vectorization and matrix acceleration units available on Intel hardware. The runtime extension offers finer-grained thread runtime control and weight sharing for increased efficiency.
- **GPU Optimization**: On GPU, optimized operators and kernels are implemented and registered through the PyTorch dispatching mechanism. These operators and kernels are accelerated by the native vectorization and matrix calculation features of Intel GPU hardware. Intel® Extension for PyTorch* for GPU utilizes the `DPC++ <https://github.com/intel/llvm#oneapi-dpc-compiler>`_ compiler that supports the latest `SYCL* <https://registry.khronos.org/SYCL/specs/sycl-2020/html/sycl-2020.html>`_ standard and also a number of extensions to the SYCL* standard, which can be found in the `sycl/doc/extensions <https://github.com/intel/llvm/tree/sycl/sycl/doc/extensions>`_ directory.
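
The two graph-mode paths described above can be sketched as follows; this is a minimal illustration, with ``MyModel`` and the input shape as hypothetical placeholders:

.. code-block:: python

    import torch
    import intel_extension_for_pytorch as ipex

    model = MyModel().eval()            # hypothetical model
    data = torch.randn(1, 3, 224, 224)  # placeholder input
    model = ipex.optimize(model)

    # TorchScript path: trace, then freeze so graph fusion passes can apply
    with torch.no_grad():
        traced = torch.jit.trace(model, data)
        traced = torch.jit.freeze(traced)
        traced(data)

    # TorchDynamo path: compile with the ipex backend
    compiled = torch.compile(model, backend="ipex")
    with torch.no_grad():
        compiled(data)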


Support
-------
The team tracks bugs and enhancement requests using `GitHub issues <https://github.com/intel/intel-extension-for-pytorch/issues/>`_. Before submitting a suggestion or bug report, search the existing GitHub issues to see if your issue has already been reported.

.. toctree::
:caption: ABOUT
:maxdepth: 3
:hidden:

tutorials/introduction
tutorials/features
Large Language Models (LLM)<tutorials/llm>
tutorials/performance
tutorials/releases
tutorials/known_issues
tutorials/blogs_publications
tutorials/license

.. toctree::
:maxdepth: 3
:caption: GET STARTED
:hidden:

tutorials/installation
tutorials/getting_started
tutorials/examples
tutorials/cheat_sheet

.. toctree::
:maxdepth: 3
:caption: DEVELOPER REFERENCE
:hidden:

tutorials/api_doc

.. toctree::
:maxdepth: 3
:caption: PERFORMANCE TUNING
:hidden:

tutorials/performance_tuning/tuning_guide
tutorials/performance_tuning/launch_script
tutorials/performance_tuning/torchserve

.. toctree::
:maxdepth: 3
:caption: CONTRIBUTING GUIDE
:hidden:

tutorials/contribution

124 changes: 124 additions & 0 deletions cpu/2.4.0+cpu/_sources/tutorials/api_doc.rst.txt
@@ -0,0 +1,124 @@
API Documentation
#################

General
*******

`ipex.optimize` is generally used for generic PyTorch models.

.. automodule:: intel_extension_for_pytorch
.. autofunction:: optimize
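
A typical call, shown as a hedged sketch (``MyModel`` is a placeholder; ``dtype`` is optional and defaults to FP32):

.. code-block:: python

    import torch
    import intel_extension_for_pytorch as ipex

    model = MyModel().eval()   # hypothetical model
    # Pass dtype=torch.bfloat16 to enable low-precision inference
    model = ipex.optimize(model, dtype=torch.bfloat16)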


`ipex.llm.optimize` is used for Large Language Models (LLM).

.. automodule:: intel_extension_for_pytorch.llm
.. autofunction:: optimize
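
A hedged sketch of the LLM path (the Hugging Face model name is illustrative only):

.. code-block:: python

    import torch
    import intel_extension_for_pytorch as ipex
    from transformers import AutoModelForCausalLM

    model = AutoModelForCausalLM.from_pretrained("gpt2")  # illustrative model
    model.eval()
    model = ipex.llm.optimize(model, dtype=torch.bfloat16)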

.. currentmodule:: intel_extension_for_pytorch
.. autoclass:: verbose

LLM Module Level Optimizations (Prototype)
******************************************

Module-level optimization APIs are provided for optimizing customized LLMs.
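
As one hedged example of these building blocks (assuming, as the class name suggests, that ``LinearSilu`` fuses a linear layer with a SiLU activation):

.. code-block:: python

    import torch
    import intel_extension_for_pytorch as ipex

    linear = torch.nn.Linear(4096, 4096)
    # Fused equivalent of torch.nn.functional.silu(linear(x))
    fused = ipex.llm.modules.LinearSilu(linear)
    out = fused(torch.randn(1, 4096))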

.. automodule:: intel_extension_for_pytorch.llm.modules
.. autoclass:: LinearSilu

.. currentmodule:: intel_extension_for_pytorch.llm.modules
.. autoclass:: LinearSiluMul

.. currentmodule:: intel_extension_for_pytorch.llm.modules
.. autoclass:: Linear2SiluMul

.. currentmodule:: intel_extension_for_pytorch.llm.modules
.. autoclass:: LinearRelu

.. currentmodule:: intel_extension_for_pytorch.llm.modules
.. autoclass:: LinearNewGelu

.. currentmodule:: intel_extension_for_pytorch.llm.modules
.. autoclass:: LinearGelu

.. currentmodule:: intel_extension_for_pytorch.llm.modules
.. autoclass:: LinearMul

.. currentmodule:: intel_extension_for_pytorch.llm.modules
.. autoclass:: LinearAdd

.. currentmodule:: intel_extension_for_pytorch.llm.modules
.. autoclass:: LinearAddAdd

.. currentmodule:: intel_extension_for_pytorch.llm.modules
.. autoclass:: RotaryEmbedding

.. currentmodule:: intel_extension_for_pytorch.llm.modules
.. autoclass:: RMSNorm

.. currentmodule:: intel_extension_for_pytorch.llm.modules
.. autoclass:: FastLayerNorm

.. currentmodule:: intel_extension_for_pytorch.llm.modules
.. autoclass:: IndirectAccessKVCacheAttention

.. currentmodule:: intel_extension_for_pytorch.llm.modules
.. autoclass:: PagedAttention

.. currentmodule:: intel_extension_for_pytorch.llm.modules
.. autoclass:: VarlenAttention

.. automodule:: intel_extension_for_pytorch.llm.functional
.. autofunction:: rotary_embedding

.. currentmodule:: intel_extension_for_pytorch.llm.functional
.. autofunction:: rms_norm

.. currentmodule:: intel_extension_for_pytorch.llm.functional
.. autofunction:: fast_layer_norm

.. currentmodule:: intel_extension_for_pytorch.llm.functional
.. autofunction:: indirect_access_kv_cache_attention

.. currentmodule:: intel_extension_for_pytorch.llm.functional
.. autofunction:: varlen_attention

Fast Bert (Prototype)
************************

.. currentmodule:: intel_extension_for_pytorch
.. autofunction:: fast_bert
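
A minimal sketch, mirroring the cheat-sheet usage:

.. code-block:: python

    import torch
    import intel_extension_for_pytorch as ipex
    from transformers import BertModel

    model = BertModel.from_pretrained("bert-base-uncased")
    model.eval()
    model = ipex.fast_bert(model, dtype=torch.bfloat16)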

Graph Optimization
******************

.. currentmodule:: intel_extension_for_pytorch
.. autofunction:: enable_onednn_fusion
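
This is a simple on/off switch; a minimal sketch:

.. code-block:: python

    import intel_extension_for_pytorch as ipex

    # Disable oneDNN fusion, e.g. when debugging fused kernels
    ipex.enable_onednn_fusion(False)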

Quantization
************

.. automodule:: intel_extension_for_pytorch.quantization
.. autofunction:: get_smooth_quant_qconfig_mapping
.. autofunction:: get_weight_only_quant_qconfig_mapping
.. autofunction:: prepare
.. autofunction:: convert

Prototype API; an introduction is available at the `feature page <./features/int8_recipe_tuning_api.md>`_.

.. autofunction:: autotune
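
A hedged static-quantization sketch following the cheat sheet (``MyModel`` and ``calibration_data_loader`` are placeholders):

.. code-block:: python

    import torch
    import intel_extension_for_pytorch as ipex

    model = MyModel().eval()            # hypothetical model
    data = torch.randn(1, 3, 224, 224)  # placeholder example input

    qconfig = ipex.quantization.default_static_qconfig
    prepared_model = ipex.quantization.prepare(
        model, qconfig, example_inputs=data, inplace=False)
    for d in calibration_data_loader():  # hypothetical calibration loader
        prepared_model(d)
    converted_model = ipex.quantization.convert(prepared_model)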

CPU Runtime
***********

.. automodule:: intel_extension_for_pytorch.cpu.runtime
.. autofunction:: is_runtime_ext_enabled
.. autoclass:: CPUPool
.. autoclass:: pin
.. autoclass:: MultiStreamModuleHint
.. autoclass:: MultiStreamModule
.. autoclass:: Task
.. autofunction:: get_core_list_of_node_id
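
A hedged sketch of pinning computation to a core pool (the core IDs and model are illustrative):

.. code-block:: python

    import torch
    import intel_extension_for_pytorch as ipex

    model = MyModel().eval()            # hypothetical model
    data = torch.randn(1, 3, 224, 224)  # placeholder input

    cpu_pool = ipex.cpu.runtime.CPUPool(core_ids=[0, 1, 2, 3])
    with ipex.cpu.runtime.pin(cpu_pool):
        model(data)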

.. .. automodule:: intel_extension_for_pytorch.quantization
.. :members:
39 changes: 39 additions & 0 deletions cpu/2.4.0+cpu/_sources/tutorials/blogs_publications.md.txt
@@ -0,0 +1,39 @@
Blogs & Publications
====================

* [Accelerate Llama 2 with Intel AI Hardware and Software Optimizations, Jul 2023](https://www.intel.com/content/www/us/en/developer/articles/news/llama2.html)
* [Accelerate PyTorch\* Training and Inference Performance using Intel® AMX, Jul 2023](https://www.intel.com/content/www/us/en/developer/articles/technical/accelerate-pytorch-training-inference-on-amx.html)
* [Intel® Deep Learning Boost (Intel® DL Boost) - Improve Inference Performance of Hugging Face BERT Base Model in Google Cloud Platform (GCP) Technology Guide, Apr 2023](https://networkbuilders.intel.com/solutionslibrary/intel-deep-learning-boost-intel-dl-boost-improve-inference-performance-of-hugging-face-bert-base-model-in-google-cloud-platform-gcp-technology-guide)
* [Get Started with Intel® Extension for PyTorch\* on GPU | Intel Software, Mar 2023](https://www.youtube.com/watch?v=Id-rE2Q7xZ0&t=1s)
* [Accelerate PyTorch\* INT8 Inference with New “X86” Quantization Backend on X86 CPUs, Mar 2023](https://www.intel.com/content/www/us/en/developer/articles/technical/accelerate-pytorch-int8-inf-with-new-x86-backend.html)
* [Accelerating PyTorch Transformers with Intel Sapphire Rapids, Part 1, Jan 2023](https://huggingface.co/blog/intel-sapphire-rapids)
* [Intel® Deep Learning Boost - Improve Inference Performance of BERT Base Model from Hugging Face for Network Security Technology Guide, Jan 2023](https://networkbuilders.intel.com/solutionslibrary/intel-deep-learning-boost-improve-inference-performance-of-bert-base-model-from-hugging-face-for-network-security-technology-guide)
* [Scaling inference on CPUs with TorchServe, PyTorch Conference, Dec 2022](https://www.youtube.com/watch?v=066_Jd6cwZg)
* [What is New in Intel Extension for PyTorch, PyTorch Conference, Dec 2022](https://www.youtube.com/watch?v=SE56wFXdvP4&t=1s)
* [Accelerating PyG on Intel CPUs, Dec 2022](https://www.pyg.org/ns-newsarticle-accelerating-pyg-on-intel-cpus)
* [Accelerating PyTorch Deep Learning Models on Intel XPUs, Dec, 2022](https://www.oneapi.io/event-sessions/accelerating-pytorch-deep-learning-models-on-intel-xpus-2-ai-hpc-2022/)
* [Introducing the Intel® Extension for PyTorch\* for GPUs, Dec 2022](https://www.intel.com/content/www/us/en/developer/articles/technical/introducing-intel-extension-for-pytorch-for-gpus.html)
* [PyTorch Stable Diffusion Using Hugging Face and Intel Arc, Nov 2022](https://towardsdatascience.com/pytorch-stable-diffusion-using-hugging-face-and-intel-arc-77010e9eead6)
* [PyTorch 1.13: New Potential for AI Developers to Enhance Model Performance and Accuracy, Nov 2022](https://www.intel.com/content/www/us/en/developer/articles/technical/pytorch-1-13-new-potential-for-ai-developers.html)
* [Easy Quantization in PyTorch Using Fine-Grained FX, Sep 2022](https://medium.com/intel-analytics-software/easy-quantization-in-pytorch-using-fine-grained-fx-80be2c4bc2d6)
* [Empowering PyTorch on Intel® Xeon® Scalable processors with Bfloat16, Aug 2022](https://pytorch.org/blog/empowering-pytorch-on-intel-xeon-scalable-processors-with-bfloat16/)
* [Accelerating PyTorch Vision Models with Channels Last on CPU, Aug 2022](https://pytorch.org/blog/accelerating-pytorch-vision-models-with-channels-last-on-cpu/)
* [One-Click Enabling of Intel Neural Compressor Features in PyTorch Scripts, Aug 2022](https://medium.com/intel-analytics-software/one-click-enable-intel-neural-compressor-features-in-pytorch-scripts-5d4e31f5a22b)
* [Increase PyTorch Inference Throughput by 4x, Jul 2022](https://www.intel.com/content/www/us/en/developer/articles/technical/increase-pytorch-inference-throughput-by-4x.html)
* [PyTorch Inference Acceleration with Intel® Neural Compressor, Jun 2022](https://medium.com/pytorch/pytorch-inference-acceleration-with-intel-neural-compressor-842ef4210d7d)
* [Accelerating PyTorch with Intel® Extension for PyTorch, May 2022](https://medium.com/pytorch/accelerating-pytorch-with-intel-extension-for-pytorch-3aef51ea3722)
* [Grokking PyTorch Intel CPU performance from first principles (parts 1), Apr 2022](https://pytorch.org/tutorials/intermediate/torchserve_with_ipex.html)
* [Grokking PyTorch Intel CPU performance from first principles (parts 2), Apr 2022](https://pytorch.org/tutorials/intermediate/torchserve_with_ipex_2.html)
* [Grokking PyTorch Intel CPU performance from first principles, Apr 2022](https://medium.com/pytorch/grokking-pytorch-intel-cpu-performance-from-first-principles-7e39694412db)
* [KT Optimizes Performance for Personalized Text-to-Speech, Nov 2021](https://community.intel.com/t5/Blogs/Tech-Innovation/Artificial-Intelligence-AI/KT-Optimizes-Performance-for-Personalized-Text-to-Speech/post/1337757)
* [Accelerating PyTorch distributed fine-tuning with Intel technologies, Nov 2021](https://huggingface.co/blog/accelerating-pytorch)
* [Scaling up BERT-like model Inference on modern CPU - parts 1, Apr 2021](https://huggingface.co/blog/bert-cpu-scaling-part-1)
* [Scaling up BERT-like model Inference on modern CPU - parts 2, Nov 2021](https://huggingface.co/blog/bert-cpu-scaling-part-2)
* [NAVER: Low-Latency Machine-Learning Inference](https://www.intel.com/content/www/us/en/customer-spotlight/stories/naver-ocr-customer-story.html)
* [Intel® Extensions for PyTorch, Feb 2021](https://pytorch.org/tutorials/recipes/recipes/intel_extension_for_pytorch.html)
* [Optimizing DLRM by using PyTorch with oneCCL Backend, Feb 2021](https://pytorch.medium.com/optimizing-dlrm-by-using-pytorch-with-oneccl-backend-9f85b8ef6929)
* [Accelerate PyTorch with IPEX and oneDNN using Intel BF16 Technology, Feb 2021](https://medium.com/pytorch/accelerate-pytorch-with-ipex-and-onednn-using-intel-bf16-technology-dca5b8e6b58f)
*Note*: APIs mentioned in this article are deprecated.
* [Intel and Facebook Accelerate PyTorch Performance with 3rd Gen Intel® Xeon® Processors and Intel® Deep Learning Boost’s new BFloat16 capability, Jun 2020](https://community.intel.com/t5/Blogs/Tech-Innovation/Artificial-Intelligence-AI/Intel-and-Facebook-Accelerate-PyTorch-Performance-with-3rd-Gen/post/1335659)
* [Intel and Facebook\* collaborate to boost PyTorch\* CPU performance, Apr 2019](https://www.intel.com/content/www/us/en/developer/articles/case-study/intel-and-facebook-collaborate-to-boost-pytorch-cpu-performance.html)
* [Intel and Facebook\* Collaborate to Boost Caffe\*2 Performance on Intel CPU’s, Apr 2017](https://www.intel.com/content/www/us/en/developer/articles/technical/intel-and-facebook-collaborate-to-boost-caffe2-performance-on-intel-cpu-s.html)
21 changes: 21 additions & 0 deletions cpu/2.4.0+cpu/_sources/tutorials/cheat_sheet.md.txt
@@ -0,0 +1,21 @@
Cheat Sheet
===========

Get started with Intel® Extension for PyTorch\* using the following commands:

|Description | Command |
| -------- | ------- |
| Basic CPU Installation | `python -m pip install intel_extension_for_pytorch` |
| Import Intel® Extension for PyTorch\* | `import intel_extension_for_pytorch as ipex`|
| Capture a Verbose Log (Command Prompt) | `export ONEDNN_VERBOSE=1` |
| Optimization During Training | `model = ...`<br>`optimizer = ...`<br>`model.train()`<br>`model, optimizer = ipex.optimize(model, optimizer=optimizer)`|
| Optimization During Inference | `model = ...`<br>`model.eval()`<br>`model = ipex.optimize(model)` |
| Optimization Using the Low-Precision Data Type bfloat16 <br>During Training (Default FP32) | `model = ...`<br>`optimizer = ...`<br>`model.train()`<br/><br/>`model, optimizer = ipex.optimize(model, optimizer=optimizer, dtype=torch.bfloat16)`<br/><br/>`with torch.no_grad():`<br>` with torch.cpu.amp.autocast():`<br>` model(data)` |
| Optimization Using the Low-Precision Data Type bfloat16 <br>During Inference (Default FP32) | `model = ...`<br>`model.eval()`<br/><br/>`model = ipex.optimize(model, dtype=torch.bfloat16)`<br/><br/>`with torch.cpu.amp.autocast():`<br>` model(data)` |
| [Prototype] Fast BERT Optimization | `from transformers import BertModel`<br>`model = BertModel.from_pretrained("bert-base-uncased")`<br>`model.eval()`<br/><br/>`model = ipex.fast_bert(model, dtype=torch.bfloat16)`|
| Run CPU Launch Script (Command Prompt): <br>Automate Configuration Settings for Performance | `ipexrun [knobs] <your_pytorch_script> [args]`|
| [Prototype] Run HyperTune to perform hyperparameter/execution configuration search | `python -m intel_extension_for_pytorch.cpu.hypertune --conf-file <your_conf_file> <your_python_script> [args]`|
| [Prototype] Enable Graph capture | `model = …`<br>`model.eval()`<br>`model = ipex.optimize(model, graph_mode=True)`|
| Post-Training INT8 Quantization (Static) | `model = …`<br>`model.eval()`<br>`data = …`<br/><br/>`qconfig = ipex.quantization.default_static_qconfig`<br/><br/>`prepared_model = ipex.quantization.prepare(model, qconfig, example_inputs=data, inplace=False)`<br/><br/>`for d in calibration_data_loader():`<br>` prepared_model(d)`<br/><br/>`converted_model = ipex.quantization.convert(prepared_model)`|
| Post-Training INT8 Quantization (Dynamic) | `model = …`<br>`model.eval()`<br>`data = …`<br/><br/>`qconfig = ipex.quantization.default_dynamic_qconfig`<br/><br/>`prepared_model = ipex.quantization.prepare(model, qconfig, example_inputs=data)`<br/><br/>`converted_model = ipex.quantization.convert(prepared_model)` |
| [Prototype] Post-Training INT8 Quantization (Tuning Recipe): | `model = …`<br>`model.eval()`<br>`data = …`<br/><br/>`qconfig = ipex.quantization.default_static_qconfig`<br/><br/>`prepared_model = ipex.quantization.prepare(model, qconfig, example_inputs=data, inplace=False)`<br/><br/>`tuned_model = ipex.quantization.autotune(prepared_model, calibration_data_loader, eval_function, sampling_sizes=[100],`<br>` accuracy_criterion={'relative': .01}, tuning_time=0)`<br/><br/>`convert_model = ipex.quantization.convert(tuned_model)`|
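
Putting several of the rows above together, a minimal bfloat16 inference script might look like this (a hedged sketch; `MyModel` and `data` are placeholders):

```python
import torch
import intel_extension_for_pytorch as ipex

model = MyModel().eval()   # hypothetical model
model = ipex.optimize(model, dtype=torch.bfloat16)

with torch.no_grad(), torch.cpu.amp.autocast():
    model(data)            # placeholder input tensor
```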