Tflite backend #93

Open. Wants to merge 125 commits into base: develop; the file changes shown below are from 25 of those commits.

Commits (125):
723965c
WIP
neil-tan Apr 23, 2019
7659273
- verified dependencies are importable
neil-tan Apr 24, 2019
236ec41
initial commit
neil-tan Oct 4, 2019
4bb2de5
op registration
neil-tan Oct 5, 2019
3f19de7
tied everything together; lacks op factory/converter
neil-tan Oct 7, 2019
9db8ba4
uses Affine Quantization
neil-tan Oct 9, 2019
f46ff9f
private variable and member functions
neil-tan Oct 9, 2019
8b60cc5
float32 for intermediate tensors
neil-tan Oct 16, 2019
295efea
- bugfixes in tflite_exporter.py
neil-tan Oct 17, 2019
b7f3adb
problem with tensor.isVariable()
neil-tan Oct 17, 2019
4d37abf
problem reading back tensor buffers
neil-tan Oct 17, 2019
fdd18a2
tensor buffer bug fix
neil-tan Oct 18, 2019
280fdc8
-fixed problem with builtin_opcode
neil-tan Oct 22, 2019
4f72a8b
iterables require reversing prior to flatbuffer prepending
neil-tan Nov 22, 2019
58d533f
refactoring: fb-vector-builder in progress
neil-tan Nov 23, 2019
5a56092
fb-vector-builder refactoring completed
neil-tan Nov 23, 2019
1a184ed
- using flatbuffers 136d75fa6580ef87d1b7cbc243e617f21149852e
neil-tan Nov 25, 2019
99c925a
daily commit - ops require options; see TODO in tflite_exporter.py
neil-tan Nov 27, 2019
ab7d08e
OperatorAddBuiltinOptions breaks input tensors? try regenerate the fl…
neil-tan Nov 28, 2019
2fd92f6
regenerated the fb python bindings
neil-tan Nov 29, 2019
56bbd26
input1 tensor failed to print after invoke
neil-tan Nov 29, 2019
acef9d0
- a more comprehensive test graph
neil-tan Nov 29, 2019
0e3c5f1
binding data to the first input tensor
neil-tan Nov 29, 2019
b16236b
run test with "pytest -s tests/tflm"
neil-tan Nov 29, 2019
3c60d92
clean up flatbuffers import
neil-tan Nov 29, 2019
5c1bf60
added flatbuffers to setup.py
neil-tan Dec 6, 2019
f4b87e5
updated this branch to use the new ugraph-constructor
neil-tan Dec 12, 2019
ab6940a
- onnx frontend build ugraph done (missing shape and dtype)
dboyliao Feb 22, 2020
6317eda
update tests and fix transformers
dboyliao Feb 24, 2020
f697120
making some transformer generic (not specific to tensorflow)
dboyliao Feb 25, 2020
fb5413e
update pipfile, locks
dboyliao Feb 27, 2020
ae052c9
lock file update
dboyliao Feb 28, 2020
8e8ef35
ortools mem allocation optimizer
dboyliao Feb 29, 2020
4a23369
tensor mem alloc: fix bug and add logging
dboyliao Feb 29, 2020
3e282a2
tensor mem alloc visualization
dboyliao Mar 1, 2020
ba5ad38
minor fix
dboyliao Mar 1, 2020
c87784e
python3.5 compatible (no f-string)
dboyliao Mar 1, 2020
8b09892
remove data manager from snippet creation and update tests
dboyliao Mar 2, 2020
5f3f2d2
cleanup DataManager, TensorLifeProbe transformer and update tests
dboyliao Mar 2, 2020
2b3481b
Fix kwargs parsing bug, add include_inputs flag for tensor alloc tran…
dboyliao Mar 2, 2020
4458319
better visualization
dboyliao Mar 3, 2020
a5a3043
update README
dboyliao Mar 3, 2020
9fb54f7
use backendpart to refactor the offline memory planner
Knight-X Feb 19, 2020
416f21a
fix bugs and update tests (exclude slow test)
dboyliao Mar 4, 2020
b0ecd72
make graph_lower a submodule
dboyliao Mar 5, 2020
facaa5d
Decouple graph transformation and code generator
dboyliao Mar 5, 2020
444ddbe
update tests
dboyliao Mar 7, 2020
11bc08b
Add test for cli speed
dboyliao Mar 7, 2020
fd749f4
circle-ci: utf8 local
dboyliao Mar 7, 2020
1ccc721
making tensor alloc lowering optional
dboyliao Mar 8, 2020
d8a17cf
update tests and add comments/docstrings
dboyliao Mar 8, 2020
af52ec8
fix bug and update tests
dboyliao Mar 8, 2020
a3f0368
make Configuration a nearly immutable object
dboyliao Mar 8, 2020
f07f5dd
Add notes to generated config file
dboyliao Mar 8, 2020
aa7a2fe
Fix transformer bug and update tests
dboyliao Mar 8, 2020
e11cc0d
update plugin example
dboyliao Mar 8, 2020
2ad07b9
unify alloc plan format
dboyliao Mar 9, 2020
1154aec
Fix init signature and add comments
dboyliao Mar 9, 2020
f8a81d6
Fix indentation
dboyliao Mar 10, 2020
78e9678
making backend part apply and transform abstract and putting checks on them
dboyliao Mar 10, 2020
ef50c05
update templates
dboyliao Mar 10, 2020
7670a0a
add tests
dboyliao Mar 11, 2020
d17e873
Extensible OpEqualityDelegate
dboyliao Mar 11, 2020
6feccbe
modify for brute-force version of memory planner
Knight-X Mar 11, 2020
4b5f8fd
remove the unnecessary lib
Knight-X Mar 11, 2020
3dc3f03
change the default setting of memory planner
Knight-X Mar 11, 2020
26e38f6
remove the brute-force memory planner
Knight-X Mar 11, 2020
d5f50c7
Merge pull request #105 from Knight-X/rearch_pr
dboyliao Mar 11, 2020
7ba874f
Merge pull request #104 from uTensor/rearch
dboyliao Mar 11, 2020
4179356
Merge pull request #103 from uTensor/ortools
dboyliao Mar 11, 2020
ac08440
resolve review issues
dboyliao Mar 13, 2020
ef4f1cb
Add data alignment attribute for SpaceAllocation
dboyliao Mar 13, 2020
2c2c9e6
better logging messages
dboyliao Mar 13, 2020
7d9d6ea
add comments
dboyliao Mar 14, 2020
89b0b73
merge develop and fixed a conflict
neil-tan Mar 20, 2020
436a3f8
Merge remote-tracking branch 'origin/develop' into f/tflite-parser
neil-tan Mar 20, 2020
6796df5
resolve code review issues, including renaming and minor code
dboyliao Mar 18, 2020
319f635
add new cnn pb file
dboyliao Mar 20, 2020
fde1fc8
update README.rst
dboyliao Mar 20, 2020
982bffd
resolve review issue: subroutines for rearch code generator and tests
dboyliao Mar 21, 2020
f31f012
minor update for tests
dboyliao Mar 21, 2020
97efefd
Add attributes to _Operator (rearch)
dboyliao Mar 23, 2020
1166e91
modify backend apply procedure
dboyliao Apr 1, 2020
c4bd90f
nightly
neil-tan Apr 1, 2020
95a1c50
memory planner: make output tensors optionally included in the planning
dboyliao Apr 7, 2020
e23a17b
before testing
neil-tan Apr 7, 2020
4c09391
merge onnx branch
dboyliao Apr 8, 2020
373bbc5
seems to work at first glance, expecting devils in the details
neil-tan Apr 9, 2020
142d752
checkpoint before reworking the quantization info
neil-tan Apr 12, 2020
598f824
quantization param in generic type
neil-tan Apr 12, 2020
a4f21ec
minor cleanup
neil-tan Apr 12, 2020
d7fe924
Add constructor signature for op factory
dboyliao Apr 15, 2020
154d8ab
Resolve review issues
dboyliao Apr 18, 2020
df94383
Formatting files
dboyliao Apr 20, 2020
4577bf8
update lockfile
dboyliao Apr 20, 2020
1cc82c5
reformat file and add configuration to pylint
dboyliao Apr 21, 2020
5606693
Fix "b"-prefix in the tensor names (with decoding to utf8)
dboyliao Apr 23, 2020
21da98c
addressing issues mentioned in the PR107 comments
neil-tan Apr 24, 2020
bda37bb
fix node_name
dboyliao Apr 24, 2020
3903dfe
Fix linting issues
dboyliao Apr 24, 2020
a8c904e
fix minor issues and add test for tflite parser
dboyliao Apr 24, 2020
1da0d5b
Merge pull request #107 from uTensor/f/tflite-parser
dboyliao Apr 24, 2020
13644c3
re-arch code generator refactoring
dboyliao Apr 27, 2020
9ed32f4
clean up code
dboyliao Apr 28, 2020
131e978
adding ops for code generator
dboyliao Apr 28, 2020
cff4940
minor fix: utils.prune_graph
dboyliao Apr 29, 2020
5dda2b9
rearch code generator beta
dboyliao Apr 29, 2020
090cd31
fix templates and type str
dboyliao Apr 30, 2020
b4469a7
namespaces support (same optype, different namespaces)
dboyliao May 1, 2020
5851a49
Fix bug and minor refactor (removing misleading op_info attributes fo…
dboyliao May 1, 2020
5a09094
Fix templates and snippets bug
dboyliao May 1, 2020
fa6c7f4
method renaming to prevent misunderstanding
dboyliao May 3, 2020
249a11c
Fix TF2.0 imports and bugs (v1 and v2 behavior inconsistent)
dboyliao May 4, 2020
79491c3
update tests and fix onnx parser
dboyliao May 4, 2020
1d6e3c7
add tests
dboyliao May 4, 2020
d6352a3
TF 2.0 support
dboyliao May 4, 2020
139a890
update requirements.txt
dboyliao May 4, 2020
2f84e09
speed up loading time
dboyliao May 4, 2020
8418cc2
prettify printing
dboyliao May 4, 2020
32f78cd
cli refactoring and exporting high-level api
dboyliao May 5, 2020
357a212
Add isort:skip_file
dboyliao May 5, 2020
18a396a
Add target kwargs to api
dboyliao May 6, 2020
18f0460
sync with release branch
dboyliao May 6, 2020
cae7806
remove frontend.tflite_flatbuffer module and update imports
dboyliao May 6, 2020
31b9f50
update tests
dboyliao May 6, 2020
3 changes: 2 additions & 1 deletion setup.py
@@ -37,7 +37,8 @@
         'torch',
         'torchvision',
         'onnx-tf==1.2.1',
-        'graphviz'
+        'graphviz',
+        'flatbuffers'
     ],
     extras_require={
         'dev': ['pytest']
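To confirm locally that the newly added flatbuffers dependency resolves after installation (a minimal sketch; constructing a Builder is just an import smoke test and is not part of this diff):

import flatbuffers

# Constructing a Builder is enough to verify the package imports and is usable.
builder = flatbuffers.Builder(0)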
Empty file.
139 changes: 139 additions & 0 deletions tests/tflm/tflite_export/conftest.py
@@ -0,0 +1,139 @@
import numpy as np
from pytest import fixture

import tensorflow as tf
from utensor_cgen.ir import TensorInfo, OperationInfo, uTensorGraph
from utensor_cgen.ir.converter import (AttrValueConverter, DataTypeConverter,
                                       GenericTensorConverterMixin)
from utensor_cgen.utils import prune_graph, topologic_order_graph
from utensor_cgen.backend.operators import OperatorFactory, _Operator
from utensor_cgen.matcher import OpEqualityDelegate, _morphism


@OperatorFactory.register
@OpEqualityDelegate.is_associative(
    permutations=((0, 1), (1, 0))
)
class _TFLM_AddOperator(_Operator):

    op_type = "TFLM_ADD"  # tf op type

    def __init__(self, op_info, **kwargs):
        _Operator.__init__(self)
        inputs = [tensor_info.name for tensor_info in op_info.input_tensors]
        output = op_info.output_tensors[0].name
        tf_dtype = op_info.input_tensors[0].dtype

    @classmethod
    def build_op_info(cls, ugraph, name, tensor_x, tensor_y, **kwargs):
        # broadcast the shape and promote types
        dummy_x = np.empty(tensor_x.shape)
        dummy_y = np.empty(tensor_y.shape)
        output_shape = np.broadcast(dummy_x, dummy_y).shape
        output_dtype = np.promote_types(tensor_x.dtype, tensor_y.dtype)
        return OperationInfo(
            name=name,
            input_tensors=[tensor_x, tensor_y],
            output_tensors=[
                TensorInfo(
                    name='{}:0'.format(name),
                    op_name=name,
                    dtype=output_dtype,
                    shape=list(output_shape),
                    ugraph=ugraph
                )
            ],
            op_type=cls.op_type,
            op_attr={
                'T': AttrValueConverter.__utensor_generic_type__(
                    value_name='type',
                    value=DataTypeConverter.get_tf_value(output_dtype)
                )
            },
            ugraph=ugraph,
            backend=kwargs.get('backend', 'TFLM')
        )


@OperatorFactory.register
class _TFLM_FULLY_CONNECTED_Operator(_Operator):

    op_type = "TFLM_FULLY_CONNECTED"

    def __init__(self, op_info, **kwargs):
        _Operator.__init__(self)
        inputs = [tensor_info.name for tensor_info in op_info.input_tensors]
        output = op_info.output_tensors[0].name
        out_dtype = op_info.output_tensors[0].dtype
        in_dtypes = [tensor_info.dtype for tensor_info in op_info.input_tensors]
        # assert (op_info.input_tensors[0].shape[1] == None or op_info.input_tensors[0].shape[1] == 1)

    @classmethod
    def build_op_info(cls, ugraph, name, tensor_x, tensor_w, tensor_b, **kwargs):
        output_shape = [tensor_w.shape[0], tensor_x.shape[1]]
        # output_dtype = np.promote_types(tensor_x.dtype, tensor_y.dtype)
        output_dtype = tensor_x.dtype
        return OperationInfo(
            name=name,
            input_tensors=[tensor_x, tensor_w, tensor_b],
            output_tensors=[
                TensorInfo(
                    name='{}:0'.format(name),
                    op_name=name,
                    dtype=output_dtype,
                    shape=list(output_shape),
                    ugraph=ugraph
                )
            ],
            op_type=cls.op_type,
            op_attr={
                'T': AttrValueConverter.__utensor_generic_type__(
                    value_name='type',
                    value=DataTypeConverter.get_tf_value(output_dtype)
                )
            },
            ugraph=ugraph,
            backend=kwargs.get('backend', 'TFLM')
        )


@fixture(name='hybrid_quant_output')
def simple_tflm_graph():
    ugraph = uTensorGraph()

    with ugraph.begin_construction():
        tensor_x0, = ugraph.add_op(
            op_type='Const',
            name='x0',
            value=np.array([1, 1, 1, 1], dtype=np.float32)[:, np.newaxis]
        )
        tensor_x1, = ugraph.add_op(
            op_type='Const',
            name='x1',
            value=np.array([2, 4, 6, 8], dtype=np.float32)[:, np.newaxis]
        )
        tensor_w, = ugraph.add_op(
            op_type='Const',
            name='w',
            value=np.array([10, 20, 30, 40], dtype=np.float32)[np.newaxis, :]
        )
        tensor_b, = ugraph.add_op(
            op_type='Const',
            name='b',
            value=np.array([7], dtype=np.float32)
        )

        tensor_addout, = ugraph.add_op(
            tensor_x0, tensor_x1,
            op_type='TFLM_ADD',
            name='TFLM_ADD0'
        )

        tensor_out, = ugraph.add_op(
            tensor_addout, tensor_w, tensor_b,
            op_type='TFLM_FULLY_CONNECTED',
            name='TFLM_FULLY_CONNECTED00',
            is_output=True
        )

    return [ugraph, ["x0:0", "x1:0"], ["w:0", "b:0", tensor_out.name]]
114 changes: 114 additions & 0 deletions tests/tflm/tflite_export/test_write.py
@@ -0,0 +1,114 @@
import numpy as np

import tensorflow as tf
from utensor_cgen.frontend.tensorflow import GraphDefParser
from utensor_cgen.matcher import uTensorGraphMatcher
from utensor_cgen.utils import prune_graph, topologic_order_graph
from utensor_cgen.transformer import TFLiteExporter
import utensor_cgen.third_party.flatbuffers as flatbuffers
import utensor_cgen.third_party.tflite as tflite
from utensor_cgen.third_party.tflite.BuiltinOperator import BuiltinOperator
from utensor_cgen.third_party.tflite.Model import Model
from utensor_cgen.third_party.tflite.BuiltinOptions import BuiltinOptions
from utensor_cgen.third_party.tflite.TensorType import TensorType

builtin_ops = {v: k for k, v in BuiltinOperator.__dict__.items()}
op_options = {v: k for k, v in BuiltinOptions.__dict__.items()}

# TFLite TensorType enum value -> numpy dtype
tensor_np_type = dict()
tensor_np_type[0] = np.float32
tensor_np_type[1] = np.float16
tensor_np_type[2] = np.int32
tensor_np_type[3] = np.uint8
tensor_np_type[4] = np.uint64
tensor_np_type[5] = np.ubyte  # FIXME: supposed to be string
tensor_np_type[6] = np.bool_  # np.bool_ avoids the deprecated np.bool alias
tensor_np_type[7] = np.int16
tensor_np_type[8] = np.cdouble
tensor_np_type[9] = np.int8


def print_tflite_graph(byte_buff):
    model = Model.GetRootAsModel(byte_buff, 0)
    subgraphs_len = model.SubgraphsLength()
    subgraph = model.Subgraphs(0)
    n_ops = subgraph.OperatorsLength()
    print("version: ", model.Version())
    print("subgraph len: ", subgraphs_len)
    print("number of operators: ", n_ops)
    print("number of t buff: ", model.BuffersLength())
    print("flat buffer length: ", len(byte_buff), " bytes")
    op_codes = []
    for i in range(0, model.OperatorCodesLength()):
        op_code = model.OperatorCodes(i)
        op_codes.append(op_code)
    print("op code length: ", len(op_codes))

    for i in range(0, subgraph.OperatorsLength()):
        op = subgraph.Operators(i)
        print("op code index: ", op.OpcodeIndex())
        opIndex = op.OpcodeIndex()
        op_code = op_codes[opIndex]
        builtin_code = op_code.BuiltinCode()
        op_type = builtin_ops[builtin_code]
        print(op_type)

        input_tensors = [subgraph.Tensors(input_idx) for input_idx in op.InputsAsNumpy()]
        for tensor in input_tensors:
            print()
            print(tensor.Name(), ", ", tensor.ShapeAsNumpy())
            print("variable: ", tensor.IsVariable())
            if tensor.Type() == np.uint8 or tensor.Type() == np.int8:
                q = tensor.Quantization()
                assert q is not None
                print("quantization info: ")
                print(" Detail Type: ", q.DetailsType())
                print(" Scales: ", q.ScaleAsNumpy())
                print(" Zeros: ", q.ZeroPointAsNumpy())
                print(" Scale: ", q.ScaleAsNumpy())
                print(" Zero Point: ", q.ZeroPointAsNumpy())
                print(" Dimension: ", q.QuantizedDimension())

            print(tensor.IsVariable())
            if not tensor.IsVariable():
                buffer_index = tensor.Buffer()
                assert buffer_index >= 0
                assert model.Buffers(buffer_index).DataLength() > 0
                buffer_content = model.Buffers(buffer_index).DataAsNumpy()
                print("Tensor values: ", buffer_content.astype(tensor_np_type[tensor.Type()]))
            else:
                print("None")


def test_tflite_fb_write(hybrid_quant_output):
    [sample_ugraph, input_tensors, output_tensors] = hybrid_quant_output
    exporter = TFLiteExporter(input_tensors=input_tensors, output_tensors=output_tensors)
    ugraph = exporter.transform(sample_ugraph)
    model_content = exporter.output()

    print_tflite_graph(model_content)

    # referece_model_content = open('/Users/neitan01/Documents/tflm/sinExample/sine_model.tflite', "rb").read()
    # print_tflite_graph(referece_model_content)

    open("tflm_test_model.tflite", "wb").write(model_content)
    test_model = tf.lite.Interpreter('tflm_test_model.tflite')
    test_model.allocate_tensors()
    input_data = np.array(np.ones([4, 1]), dtype=np.float32)
    test_model.set_tensor(test_model.get_input_details()[0]['index'], input_data)
    test_model.invoke()

    print("0 :", test_model.get_tensor(0))
    print("1 :", test_model.get_tensor(1))
    print("2 :", test_model.get_tensor(2))
    print("3 :", test_model.get_tensor(3))

    print(test_model.get_tensor_details())
    print("out0 :", test_model.get_tensor(test_model.get_output_details()[0]["index"]))
    print("out1 :", test_model.get_tensor(test_model.get_output_details()[1]["index"]))
    print("out2 :", test_model.get_tensor(test_model.get_output_details()[2]["index"]))

    output = test_model.get_tensor(test_model.get_output_details()[2]["index"])

    assert np.abs(output - 707) <= 0.0001, 'error is greater than 0.0001'
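The 707 in the final assertion follows from the fixture constants in conftest.py: the two input vectors sum to [3, 5, 7, 9], the fully connected op dots that with [10, 20, 30, 40] and adds the bias 7, so 30 + 100 + 210 + 360 + 7 = 707. A standalone numpy check of that arithmetic (a sketch, independent of the exporter and the TFLite interpreter):

import numpy as np

x0 = np.array([1, 1, 1, 1], dtype=np.float32)[:, np.newaxis]     # (4, 1)
x1 = np.array([2, 4, 6, 8], dtype=np.float32)[:, np.newaxis]     # (4, 1)
w = np.array([10, 20, 30, 40], dtype=np.float32)[np.newaxis, :]  # (1, 4)
b = np.array([7], dtype=np.float32)

# TFLM_ADD followed by TFLM_FULLY_CONNECTED, as wired in the fixture graph
expected = w @ (x0 + x1) + b
assert expected.item() == 707.0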
17 changes: 17 additions & 0 deletions utensor_cgen/third_party/flatbuffers/__init__.py
@@ -0,0 +1,17 @@
# Copyright 2014 Google Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from .builder import Builder
from .table import Table
from .compat import range_func as compat_range
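As background for the vector-builder commits above (4f72a8b, 58d533f, 5a56092): FlatBuffers writes buffers back to front, so a Python iterable must be reversed before its elements are prepended. A minimal sketch of that pattern with the vendored Builder (the helper name, element size, and the length argument to EndVector are illustrative and assume the older flatbuffers Python API this branch pins):

import utensor_cgen.third_party.flatbuffers as flatbuffers

def build_int32_vector(builder, values):
    # Vectors are built back to front, hence reversed() before prepending.
    builder.StartVector(4, len(values), 4)  # 4-byte elements, 4-byte alignment
    for v in reversed(values):
        builder.PrependInt32(v)
    return builder.EndVector(len(values))   # the older API takes the element count

builder = flatbuffers.Builder(1024)
vec_offset = build_int32_vector(builder, [1, 2, 3])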