Merge remote-tracking branch 'gitlab/master-pt1.8'
EikanWang committed Jun 16, 2021
2 parents cc3203a + 49f9eb2 commit 82e3086
Showing 63 changed files with 2,401 additions and 972 deletions.
93 changes: 67 additions & 26 deletions README.md
@@ -3,65 +3,73 @@
Intel Extension for PyTorch is a Python package that extends the official PyTorch. It is designed to improve the out-of-the-box user experience of PyTorch on CPU while achieving good performance. The extension will also serve as the PR (pull request) buffer for the Intel PyTorch framework dev team. The PR buffer will contain not only functions but also optimizations (for example, taking advantage of Intel's new hardware features).

- [Installation](#installation)
- [Install PyTorch from Source](#install-pytorch-from-source)
- [Install PyTorch](#install-pytorch)
- [Install Intel Extension for PyTorch from Source](#install-intel-extension-for-pytorch-from-source)
- [Getting Started](#getting-started)
- [Automatically Mix Precision](#automatically-mix-precision)
- [BFloat16](#bfloat16)
- [INT8](#int8-quantization)
- [Contribution](#contribution)
- [Supported Customized Operators](#supported-customized-operators)
- [Supported Fusion Patterns](#supported-fusion-patterns)
- [Tutorials](#tutorials)
- [Joint blogs](#joint-blogs)
- [License](#license)

## Installation

### Install PyTorch from Source
### Install PyTorch
|IPEX Version|PyTorch Version|
|--|--|
|[v1.8.0](https://github.com/intel/intel-extension-for-pytorch/tree/v1.8.0)|[v1.8.0](https://github.com/pytorch/pytorch/tree/v1.8.0 "v1.8.0")|
|[v1.2.0](https://github.com/intel/intel-extension-for-pytorch/tree/v1.2.0)|[v1.7.0](https://github.com/pytorch/pytorch/tree/v1.7.0 "v1.7.0")|
|[v1.1.0](https://github.com/intel/intel-extension-for-pytorch/tree/v1.1.0)|[v1.5.0-rc3](https://github.com/pytorch/pytorch/tree/v1.5.0-rc3 "v1.5.0-rc3")|
|[v1.0.2](https://github.com/intel/intel-extension-for-pytorch/tree/v1.0.2)|[v1.5.0-rc3](https://github.com/pytorch/pytorch/tree/v1.5.0-rc3 "v1.5.0-rc3")|
|[v1.0.1](https://github.com/intel/intel-extension-for-pytorch/tree/v1.0.1)|[v1.5.0-rc3](https://github.com/pytorch/pytorch/tree/v1.5.0-rc3 "v1.5.0-rc3")|
|[v1.0.0](https://github.com/intel/intel-extension-for-pytorch/tree/v1.0.0)|[v1.5.0-rc3](https://github.com/pytorch/pytorch/tree/v1.5.0-rc3 "v1.5.0-rc3")|

Take Intel Extension for PyTorch v1.2.0 as an example.
Take Intel Extension for PyTorch v1.8.0 as an example.

1. Get the PyTorch v1.7.0 source (refer to the [PyTorch guide](https://github.com/pytorch/pytorch#get-the-pytorch-source) for more details)
1. Install PyTorch from binaries
```bash
conda install pytorch torchvision torchaudio cpuonly -c pytorch
```
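
A quick sanity check (a minimal sketch; assumes the binary install above succeeded) confirms the installed version matches the compatibility table:
```python
import torch
print(torch.__version__)  # expect 1.8.0 when pairing with IPEX v1.8.0
```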

2. Install PyTorch from source

Get the PyTorch v1.8.0 source (refer to the [PyTorch guide](https://github.com/pytorch/pytorch#get-the-pytorch-source) for more details)
```bash
git clone --recursive https://github.com/pytorch/pytorch
```

Check out the specified PyTorch version
```bash
cd pytorch

# checkout source code to the specified version
git checkout v1.7.0

# update submodules for the specified PyTorch version
git submodule sync
git submodule update --init --recursive
git checkout v1.8.0
```

2. Get the source code of Intel Extension for PyTorch
Update submodules
```bash
git clone --recursive https://github.com/intel/intel-extension-for-pytorch
cd intel-extension-for-pytorch
# if you are updating an existing checkout
git submodule sync
git submodule update --init --recursive
```

3. Add a new backend for Intel Extension for PyTorch
Build and install PyTorch (Refer to [PyTorch guide](https://github.com/pytorch/pytorch#install-pytorch) for more details)
```bash
# Apply git patch to pytorch code
cd ${pytorch_directory}
git apply ${intel_extension_for_pytorch_directory}/torch_patches/xpu-1.7.patch
```

4. Build and install PyTorch (Refer to [PyTorch guide](https://github.com/pytorch/pytorch#install-pytorch) for more details)
```bash
cd ${pytorch_directory}
python setup.py install
```

### Install Intel Extension for PyTorch from Source

Get the source code of Intel Extension for PyTorch
```bash
git clone --recursive https://github.com/intel/intel-extension-for-pytorch
cd intel-extension-for-pytorch
# if you are updating an existing checkout
git submodule sync
git submodule update --init --recursive
```

Install dependencies
```bash
pip install lark-parser hypothesis
```

@@ -249,6 +257,39 @@ Supported Quantization Operators:
- ```convolution + BatchNorm```
### Supported Customized Operators
* ROIAlign
* NMS
* BatchScoreNMS
* MLP
* Interaction
* FrozenBatchNorm2d
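
These operators are registered under the `torch.ops.torch_ipex` namespace (the `layer_norm` patch in this commit dispatches the same way). A hedged sketch of invoking one directly; the op name and argument list below are illustrative assumptions, not a confirmed signature:
```python
import torch
# intel_pytorch_extension must be imported first so the custom ops are registered.

boxes = torch.randn(100, 4)   # (x1, y1, x2, y2) candidate boxes
scores = torch.rand(100)      # one confidence score per box
# Assumed op name and signature, for illustration only:
keep = torch.ops.torch_ipex.nms(boxes, scores, 0.5)
```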
### Supported Fusion Patterns
* Conv2D + ReLU
* Conv2D + SUM
* Conv2D + SUM + ReLU
* Conv2D + Sigmoid
* Conv2D + Sigmoid + MUL
* Conv2D + HardTanh
* Conv2D + ELU
* Conv3D + ReLU
* Conv3D + SUM
* Conv3D + SUM + ReLU
* Linear + ReLU
* Linear + GELU
* View + Transpose + Contiguous + View
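
Fusion is applied when a model runs through the TorchScript JIT on the extension's device. A minimal sketch exercising the first pattern, Conv2D + ReLU (assuming the `"xpu"` device string used by the `layer_norm` patch in this commit, and that tracing triggers the fusion passes):
```python
import torch
import torch.nn as nn
# intel_pytorch_extension must be imported so the "xpu" device is available.

model = nn.Sequential(nn.Conv2d(3, 16, 3), nn.ReLU()).eval()
x = torch.randn(1, 3, 224, 224)
with torch.no_grad():
    traced = torch.jit.trace(model.to("xpu"), x.to("xpu"))
    y = traced(x.to("xpu"))  # Conv2D + ReLU may be fused into a single kernel
```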
## Tutorials
* [Performance Tuning](tutorials/Performance_Tuning.md)
## Joint Blogs
* [Intel and Facebook Accelerate PyTorch Performance with 3rd Gen Intel® Xeon® Processors and Intel® Deep Learning Boost’s new BFloat16 capability](https://www.intel.com/content/www/us/en/artificial-intelligence/posts/intel-facebook-boost-bfloat16.html)
* [Accelerate PyTorch with IPEX and oneDNN using Intel BF16 Technology](https://medium.com/pytorch/accelerate-pytorch-with-ipex-and-onednn-using-intel-bf16-technology-dca5b8e6b58f)
* [Scaling up BERT-like model Inference on modern CPU - Part 1 by IPEX launcher](https://huggingface.co/blog/bert-cpu-scaling-part-1)
## Contribution
Please submit a PR or an issue to communicate with us or to contribute code.
3 changes: 2 additions & 1 deletion intel_pytorch_extension_py/ops/__init__.py
@@ -2,7 +2,7 @@
from .embeddingbag import embeddingbag
from .linear import *
from .pooling import *
from .mlp import *
from .jit import *
from .save import *
from .to import *
@@ -12,4 +12,5 @@
from .lstm import *
from .rnn import *
from .gru import *
from .layer_norm import *
from .frozen_batch_norm import *
13 changes: 13 additions & 0 deletions intel_pytorch_extension_py/ops/layer_norm.py
@@ -0,0 +1,13 @@
import torch
import _torch_ipex as core
from typing import Optional

torch_layer_norm = torch.layer_norm

def _layer_norm(input, normalized_shape, weight, bias, eps, cudnn_enabled):
    if input.device.type != "xpu":
        return torch_layer_norm(input, normalized_shape, weight, bias, eps, cudnn_enabled)
    else:
        return torch.ops.torch_ipex.layer_norm(input, normalized_shape, weight, bias, eps)

torch.layer_norm = _layer_norm
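
# Usage sketch (illustrative): once the patch above is applied on import,
# torch.layer_norm dispatches by device type --
#   x = torch.randn(2, 8)                               # CPU tensor
#   torch.layer_norm(x, (8,), None, None, 1e-5, False)  # falls back to the stock kernel
# while tensors on the "xpu" device route to torch.ops.torch_ipex.layer_norm.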
4 changes: 3 additions & 1 deletion requirements.txt
@@ -1,2 +1,4 @@
lark-parser
hypothesis
cmake>=3.13.0
wheel>=0.36
4 changes: 4 additions & 0 deletions scripts/cpu/common/aten_sig_parser.py
@@ -33,10 +33,12 @@
| SIGNED_NUMBER
| ESCAPED_STRING
| CNAME
| STR_LITERAL
array_list: array_val
| array_val "," array_list
array_val: NUMBER*
| SIGNED_NUMBER*
| CNAME
return_type: "("* ret_param_list ")"*
| "()"
@@ -72,6 +74,7 @@
ATEN_NS: "aten::"
VEC: "[" (NUMBER)* "]"
STR_LITERAL: "'" (CNAME)* "'"
TENSOR_TYPE: "Tensor"
W_SYM: "!"
OPTIONAL_SYM: "?"
@@ -165,6 +168,7 @@ def get_all_return_params(self):

if __name__ == '__main__':
    sigs = [
        "aten::fft_fft2(Tensor self, int[1]? s=None, int[1] dim=[-2,-1], str? norm=None) -> Tensor",
        "aten::abs.out(Tensor self, *, Tensor(a!) out) -> Tensor(a!)",
        "aten::abs_(Tensor(a!) self) -> Tensor(a!)",
        "aten::angle(Tensor self) -> Tensor",
64 changes: 64 additions & 0 deletions scripts/cpu/common/native_functions.py
@@ -0,0 +1,64 @@
#!/usr/bin/python

import collections
import os
import re
import string
import sys

from .cpp_sig_parser import CPPSig
from .utils import *

class NativeFunctions(object):
    def __init__(self, func_file_path):
        self._func_file_path = func_file_path
        self._native_sigs_str = []
        self._func_data = ''
        self._err_info = []

        with open(self._func_file_path, 'r') as ff:
            self._func_data = ff.read()

        for line in open(self._func_file_path, 'r'):
            m = re.match(r'TORCH_API *(.*); *', line)
            if not m:
                continue
            native_cpp_sig_str = m.group(1).replace('at::', '').replace('c10::', '').replace('Reduction::', '')
            # Remove ={xxx},
            native_cpp_sig_str = re.sub(r"\=\{.*?\}\,", ",", native_cpp_sig_str)
            # Remove =xxx,
            native_cpp_sig_str = re.sub(r"\=.*?\,", ",", native_cpp_sig_str)
            # Remove =xxx)
            native_cpp_sig_str = re.sub(r"\=.*?\)", ")", native_cpp_sig_str)
            if not is_tensor_api(native_cpp_sig_str):
                continue
            self._native_sigs_str.append(native_cpp_sig_str)

    def is_tensor_member_function(self, func_name):
        if self._func_data.find(' {}('.format(func_name)) >= 0:
            return False
        else:
            return True

    def query(self, cpp_sig):
        cnt = 0
        cur_native_cpp_sig_str = ''
        ret_native_cpp_sig = None
        try:
            for native_sig_str_item in self._native_sigs_str:
                target_str = ' {}('.format(cpp_sig.def_name)
                if native_sig_str_item.find(target_str) >= 0:
                    cur_native_cpp_sig_str = native_sig_str_item
                    native_cpp_sig = CPPSig(cur_native_cpp_sig_str)
                    params1 = [param.ipex_name if param.ipex_name != '' else param.name for param in native_cpp_sig.input_params]
                    params2 = [param.ipex_name if param.ipex_name != '' else param.name for param in cpp_sig.input_params]
                    if compare_params(params1, params2):
                        cnt = cnt + 1
                        ret_native_cpp_sig = native_cpp_sig
        except Exception as e:
            self._err_info.append((cur_native_cpp_sig_str, str(e)))
            print('[NativeFunctions] Error parsing "{}": {}'.format(cur_native_cpp_sig_str, e), file=sys.stderr)

        if cnt == 0:
            raise Exception("Cannot find the function: {} in Functions.h".format(cpp_sig.def_name))
        return ret_native_cpp_sig
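
# Usage sketch (illustrative; the file path and signature string are assumptions):
#   funcs = NativeFunctions('Functions.h')
#   sig = CPPSig('Tensor add(const Tensor & self, const Tensor & other)')
#   native_sig = funcs.query(sig)   # raises if no declaration matches by name and params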
25 changes: 25 additions & 0 deletions scripts/cpu/common/utils.py
@@ -8,6 +8,8 @@
'TensorList': 'at::TensorList',
'TensorOptions': 'c10::TensorOptions',
'IntList': 'at::IntList',
'List': 'c10::List',
'Stream': 'c10::Stream',
'IntArrayRef': 'at::IntArrayRef',
'ArrayRef': 'c10::ArrayRef',
'Layout': 'c10::Layout',
@@ -22,6 +24,18 @@
'DimnameList': 'at::DimnameList', # Cover DimnameList and Dimname
}

def is_tensor_api(func_name):
    m = re.search(r'\bTensor\b', func_name)
    return m is not None

def compare_params(params1, params2):
    if len(params1) != len(params2):
        return False

    for param_item in params1:
        if param_item not in params2:
            return False
    return True
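
# Example (illustrative): compare_params is order-insensitive --
#   compare_params(['self', 'other'], ['other', 'self'])  -> True
#   compare_params(['self'], ['self', 'other'])           -> False (length differs)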

def add_ns(pt_string):
    splited_str = re.split(r'([^a-zA-Z0-9_])', pt_string)
@@ -53,6 +67,17 @@ def query_tensor_options(input_params):
start_idx = -1
return start_idx, end_idx

def is_out_func(fname):
    return fname.endswith("_out") or fname.endswith("_outf")

def reorder_params_idx(to_be_reordered_params, ref_params):
    new_idxs = {}
    assert len(to_be_reordered_params) == len(ref_params)
    for param in to_be_reordered_params:
        assert param in ref_params
        new_idxs[ref_params.index(param)] = to_be_reordered_params.index(param)
    return new_idxs
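
# Example (illustrative): reorder_params_idx maps each reference position to
# the matching position in the reordered list --
#   reorder_params_idx(['b', 'a'], ['a', 'b'])  -> {1: 0, 0: 1}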

if __name__ == '__main__':
    sigs = [
        "aten::abs.out(Tensor self, *, Tensor(a!) out) -> Tensor(a!)",