Exp matcher #65
Conversation
- permutation aware
- compatible ops aware
- MetaOpInfo
fails at quantization: missing attribute
TODO: copy attributes to the fused node
matcher replace in progress
@mbartling I know. Feel free to ask me anything.
Overall it looks excellent! Great work!
I am a fan of the verbosity and configurability of the matcher architecture. Personally, I'd like to build a convenience layer on top of this (mostly around the subgraph declaration) to simplify general development. And if the simplification is not enough, we can always fall back to the full-control mode you built.
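To make the convenience-layer idea concrete, here is a toy sketch of what a simplified pattern declaration could look like. None of these names (`chain_pattern`, `match_chain`) exist in utensor_cgen, and the dict-based graph below stands in for a real uTensorGraph; it only illustrates the idea of declaring a linear-chain pattern instead of building a full pattern graph by hand.

```python
# Hypothetical convenience layer for declaring linear-chain patterns.
# The graph here is a toy adjacency map: node name -> (op_type, successors).

def chain_pattern(*op_types):
    """Declare a linear chain of op types, e.g. chain_pattern('Conv2D', 'Relu')."""
    return list(op_types)

def match_chain(graph, pattern):
    """Return the first node path whose op types match the pattern chain, or None."""
    for start in graph:
        path, name = [], start
        for want in pattern:
            if name is None or graph[name][0] != want:
                path = None
                break
            path.append(name)
            succs = graph[name][1]
            name = succs[0] if succs else None
        if path is not None:
            return path
    return None

toy_graph = {
    'conv': ('Conv2D', ['act']),
    'act':  ('Relu',   ['pool']),
    'pool': ('MaxPool', []),
}
print(match_chain(toy_graph, chain_pattern('Conv2D', 'Relu')))  # -> ['conv', 'act']
```

If the simple declaration falls short, the developer drops down to the full pattern-graph API, as discussed above.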
README.md (Outdated)

```markdown
## Subgraph Isomorphic Matcher and Nodes Fusion

With `uTensorGraphMatcher`, performing a subgraph isomorphic match, replace or manipulate the matched subgraph is just a few line of code!
```
With `uTensorGraphMatcher`, performing common subgraph tasks such as isomorphic matching, along with replacing or manipulating the matched subgraph(s), takes just a few lines of code!
Better.
We should make a note here that future transforms will be based on our own graph constructor/builder.
README.md (Outdated)

```markdown
With `uTensorGraphMatcher`, performing a subgraph isomorphic match, replace or manipulate the matched subgraph is just a few line of code!

### Node Fusion
```
For these few examples, you might want one or two sentences explaining what the user is looking at.
Right. I'll elaborate on it more.
```diff
@@ -81,8 +81,6 @@ def formatter(name, kwargs):
         )
         quant_ugraph = self._transform_graph(ugraph,
                                              self.trans_methods)
-        from utensor_cgen.ir.misc.graph_viz import viz_graph
```
Thank you for removing this :)
No problem, bro.
```diff
@@ -18,7 +21,7 @@ class OperatorFactory():
   def createOperatorSnippet(self, op_info, **kwargs):
     op_type = op_info.op_type
     if op_type not in self._operators:
-      err_msg = "unsupported op type in uTensor: {}".format(op_type)
+      err_msg = "unsupported op type in uTensor: {op.name}, {op.op_type}".format(op=op_info)
```
This is just a thought, but we might consider creating a default "Unsupported Op" that takes on any unknown op. Rather than exiting out, we could have an `--ignore-unsupported` option which will do its best to continue and attempt to generate the code with a `ctx.push(new {{op.name}}(....));` bit.
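A rough sketch of that fallback idea, to make the suggestion concrete. All names here (`UnsupportedOpSnippet`, `create_snippet`) are hypothetical; the real `OperatorFactory` interface may differ, and the emitted C++ is best-effort only.

```python
# Hypothetical sketch of an --ignore-unsupported fallback: unknown op types
# get a placeholder snippet instead of aborting code generation.

class UnsupportedOpSnippet:
    def __init__(self, op_name, op_type):
        self.op_name = op_name
        self.op_type = op_type

    def render(self):
        # Best-effort emission; the generated code may not compile until the
        # user supplies an implementation for this op type.
        return 'ctx.push(new {op.op_type}Op(/* {op.op_name} */));'.format(op=self)

def create_snippet(op_name, op_type, known_ops, ignore_unsupported=False):
    if op_type in known_ops:
        return known_ops[op_type](op_name, op_type)
    if ignore_unsupported:
        return UnsupportedOpSnippet(op_name, op_type)
    raise ValueError('unsupported op type in uTensor: {}, {}'.format(op_name, op_type))

snip = create_snippet('my_add', 'FancyAdd', known_ops={}, ignore_unsupported=True)
print(snip.render())  # -> ctx.push(new FancyAddOp(/* my_add */));
```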
I prefer to fail fast in Python, because that's the whole point of developing in Python:
fail fast, iterate faster.
```python
@OperatorFactory.register
@OpEqualityDelegate.is_compatible_with("Const", _morphism.Inline2ConstMorphism)
```
I like this format
Not sure if `morphism` is a good idea.
It's not very useful now; maybe it'll do some cool stuff later (?)
Assuming the morphisms can chain, this could be super useful. Doesn't seem too dangerous.
Might be useful for stuff like architecture search.
However, we should also define the transform functions morphing a graph into its equivalences. In addition to `op_type`, we should also define how the tensor values need to be modified.
Seems doable with the current setup.
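The chaining idea can be sketched as plain function composition: each morphism maps a node description to an equivalent one, and a chain is just the composition. Everything below is illustrative; the names do not match utensor_cgen's real `_morphism` module.

```python
# Toy sketch of chainable morphisms over a dict-based node description.

def inline_to_const(node):
    # Rewrite an Inline op into an equivalent Const op.
    if node['op_type'] == 'Inline':
        node = dict(node, op_type='Const')
    return node

def const_to_quantized_const(node):
    # A second (hypothetical) morphism, to show chaining.
    if node['op_type'] == 'Const':
        node = dict(node, op_type='QuantizedConst')
    return node

def chain(*morphisms):
    """Compose morphisms left-to-right into a single transform."""
    def composed(node):
        for m in morphisms:
            node = m(node)
        return node
    return composed

morph = chain(inline_to_const, const_to_quantized_const)
print(morph({'name': 'w', 'op_type': 'Inline'})['op_type'])  # -> QuantizedConst
```

Per the discussion above, a full version would also carry a tensor-value transform alongside the `op_type` rewrite.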
Yes.
But I still have a gut feeling that the current setup may not be good enough.
Need a few iterations of refactoring. We'll see. : )
```python
    super(ConvPoolTransformer, self).__init__(prune_graph=False)

  @property
  def pattern_ugraph(self):
```
Eventually it would be nice to figure out a better way of declaring the subgraphs, but the actual matching bit is so much nicer than it used to be.
Sure.
But it'd be like doing our own TensorFlow Python API again.
Not a first priority for now.
Completely agree, let's just add this as a TODO feature in the issue tracker
Right. Will do.
```python
      ugraph=repl_ugraph
    )
    topologic_order_graph(repl_ugraph)
    input_map = {
```
These lines can be done procedurally, right?
I think it's non-trivial to map the inputs/outputs of the pattern graph onto the matched subgraph,
so I have to leave it to the developers.
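To illustrate why the mapping is left explicit: the pattern graph and the replacement graph name their tensors independently, so the developer spells out which replacement tensor stands in for each matched boundary tensor. The tensor names and the `rewire` helper below are made up for illustration; they are not utensor_cgen's real schema.

```python
# Explicit boundary mapping between a matched pattern and its replacement.
# All tensor names here are hypothetical.

input_map = {
    'conv2d/input:0':  'fused_conv_pool/input:0',
    'conv2d/kernel:0': 'fused_conv_pool/kernel:0',
}
output_map = {
    'max_pool/output:0': 'fused_conv_pool/output:0',
}

def rewire(consumers, tensor_map):
    """Redirect each consumer from a pattern tensor to its replacement tensor."""
    return {node: tensor_map.get(tensor, tensor) for node, tensor in consumers.items()}

print(rewire({'next_op': 'conv2d/input:0'}, input_map))
```

There is no general rule for deriving these dicts from the two graphs, which is why the transformer author writes them by hand.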
```python
  @property
  def pattern_ugraph(self):
    graph = tf.Graph()
```
Probably an insanely stupid question, but what does this subgraph have to do with LinearReorder?
You mean the naming?
I just followed @neil-tan's naming.
@neil-tan The hell are you doing here? XD
I know this.
It just rearranges the layers to save memory:
`relu -> maxpool` is equivalent to `maxpool -> relu`,
and @neil-tan calls this linear_reorder.
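The equivalence is easy to sanity-check: ReLU is monotonic, so clamping before or after taking a window maximum gives the same result. A toy 1-D check (flat lists, window size 2, purely illustrative):

```python
# relu and max-pool commute because relu is monotonic:
# max(relu(window)) == relu(max(window)).  Toy 1-D demonstration.

def relu(xs):
    return [max(0.0, x) for x in xs]

def maxpool(xs, k=2):
    # Non-overlapping 1-D max pooling with window size k.
    return [max(xs[i:i + k]) for i in range(0, len(xs), k)]

xs = [-3.0, 1.5, 2.0, -0.5, -4.0, -1.0]
print(relu(maxpool(xs)) == maxpool(relu(xs)))  # -> True
```

Running maxpool first means the relu (and any stored activation) touches the down-sampled tensor instead of the full one, which is the memory win mentioned below.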
@mbartling this is to reduce the memory needed to store the activation by taking advantage of down-sampling
This is good.
Many of the points have already been mentioned in the discussion. On the graph builder-constructor, we should aim to eliminate framework dependencies in our IR-space (after parsers and before backends). For this PR, we should make this clear to developers in the Readme and source code (transforms).
Please let us know how we can help with the graph-builder.
```diff
@@ -46,6 +55,9 @@ def snippet(self):

+@OperatorFactory.register
+@OpEqualityDelegate.is_associative(
+  permutations=((0, 1), (1, 0))
+)
```
Does this support groupings? For example:
`q_add(a, a_min, a_max, b, b_min, b_max) == q_add(b, b_min, b_max, a, a_min, a_max)`
I don't get it.
The latter can be configured as `(3, 4, 5, 0, 1, 2)`.
Well, a tuple is kind of the formal math notation for a permutation.
All you have to do is to list all possible permutations in this decorator.
Ok. As long as this supports multiple permutations, it shouldn't be a problem, since the number of operands is usually small.
Note: This PR passes integration tests with the following PB file
- README.md -> README.rst