Intel® Extension for PyTorch* (IPEX) extends PyTorch* with optimizations for an extra performance boost on Intel® hardware. While most of the optimizations will be upstreamed in future PyTorch* releases, the extension delivers up-to-date features and optimizations for PyTorch workloads on Intel® hardware. The optimization approaches generally include operator optimization, graph optimization, and runtime optimization.
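
The snippet below is a minimal sketch of what these optimization approaches look like in practice, assuming PyTorch, TorchVision, and the extension are installed; the ResNet50 model and input shape are placeholders for illustration, not part of any specific sample.

```python
import torch
import torchvision.models as models
import intel_extension_for_pytorch as ipex

# Load a pretrained ResNet50 and switch it to inference mode
model = models.resnet50(weights="DEFAULT")
model.eval()

# ipex.optimize() applies the operator, graph, and runtime
# optimizations for CPU inference described above
model = ipex.optimize(model)

# Run a forward pass on dummy data
data = torch.rand(1, 3, 224, 224)
with torch.no_grad():
    output = model(data)
```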

Before selecting a sample, please make sure to (1) check the [Prerequisites](#prerequisites), (2) complete the [Environment Setup](#environment-setup), and (3) review the instructions to [Run the Sample](#run-the-sample).

## Jupyter Notebooks Overview

| Sample name | Description | Time to Complete | Category | Validated for AI Tools Selector |
|---|---|---|---|---|
[Getting Started with Intel® Extension for PyTorch* (IPEX)](https://github.com/intel/intel-extension-for-pytorch/tree/main/examples/cpu/inference/python/jupyter-notebooks/IPEX_Getting_Started.ipynb) | This code sample demonstrates how to begin using the Intel® Extension for PyTorch* (IPEX). It guides users through running a PyTorch inference workload on the CPU using the oneAPI AI Analytics Toolkit and analyzing CPU usage via oneDNN verbose logs. | 15 minutes | Getting Started | Y |
[PyTorch Inference Optimizations with Intel® Advanced Matrix Extensions (Intel® AMX) Bfloat16 Integer8](https://github.com/intel/intel-extension-for-pytorch/blob/main/examples/cpu/inference/python/jupyter-notebooks/IntelPyTorch_InferenceOptimizations_AMX_BF16_INT8.ipynb) | This code sample demonstrates how to perform inference with the ResNet50 and BERT models using the Intel® Extension for PyTorch* (IPEX). IPEX allows you to speed up inference on Intel® Xeon Scalable processors with lower-precision data formats and specialized CPU instructions (see the BF16 sketch after this table). The bfloat16 (BF16) data format uses half the bit width of 32-bit floating point (FP32), reducing the memory needed and the execution time to process. Likewise, the integer8 (INT8) data format uses half the bit width of BF16. | 5 minutes | Code Optimization | Y |
[Interactive Chat Based on DialoGPT Model Using Intel® Extension for PyTorch* Quantization](https://github.com/intel/intel-extension-for-pytorch/blob/main/examples/cpu/inference/python/jupyter-notebooks/IntelPytorch_Interactive_Chat_Quantization.ipynb) | This code sample demonstrates how to create an interactive chat based on the pre-trained DialoGPT model from Hugging Face and how to add INT8 dynamic quantization to it with the Intel® Extension for PyTorch* (IPEX). IPEX gives users the ability to speed up operations on processors with the INT8 data format and specialized CPU instructions. | 10 minutes | Concepts and Functionality | Y |
[Optimize PyTorch Models using Intel® Extension for PyTorch* (IPEX) Quantization](https://github.com/intel/intel-extension-for-pytorch/blob/main/examples/cpu/inference/python/jupyter-notebooks/IntelPytorch_Quantization.ipynb) | This code sample demonstrates how to quantize a ResNet50 model calibrated with the CIFAR10 dataset using the Intel® Extension for PyTorch* (IPEX). IPEX gives users the ability to speed up inference on Intel® Xeon Scalable processors with the INT8 data format and specialized CPU instructions. The INT8 data format uses a quarter of the bit width of 32-bit floating point (FP32), reducing the memory needed and the execution time to process. | 5 minutes | Concepts and Functionality | Y |
[Optimize PyTorch Models using Intel® Extension for PyTorch* (IPEX)](https://github.com/intel/intel-extension-for-pytorch/blob/main/examples/cpu/inference/python/jupyter-notebooks/optimize_pytorch_models_with_ipex.ipynb) | This sample notebook shows how to get started with Intel® Extension for PyTorch* (IPEX) for sample Computer Vision and NLP workloads. The sample starts by loading two models from PyTorch Hub: Faster R-CNN and DistilBERT. After loading the models, the sample applies sequential optimizations from IPEX and examines the performance gains for each incremental change. | 30 minutes | Code Optimization | Y |
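
Several of the samples above center on lower-precision (BF16/INT8) inference. The snippet below is a rough sketch of the BF16 path only, assuming a recent IPEX release and TorchVision are installed; the model and input shape are placeholders, and each notebook's exact recipe may differ.

```python
import torch
import torchvision.models as models
import intel_extension_for_pytorch as ipex

model = models.resnet50(weights="DEFAULT")
model.eval()

# Prepare the model for bfloat16 inference with IPEX
model = ipex.optimize(model, dtype=torch.bfloat16)

data = torch.rand(1, 3, 224, 224)
with torch.no_grad(), torch.cpu.amp.autocast(dtype=torch.bfloat16):
    # Trace and freeze for graph-mode execution, then run inference
    traced = torch.jit.trace(model, data)
    traced = torch.jit.freeze(traced)
    output = traced(data)
```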

>**Note**: For key implementation details, please refer to the .ipynb file of each sample.