
F/no more cstrings #79

Open · wants to merge 18 commits into base: ubit
Conversation

@mbartling (Member)
Replaces all cstrings (except for those used by sdtensor) with integers.

Massive savings, much wow.

@dboyliao can you suggest a cleaner way to add this functionality to all ops? It's pretty messy at the moment, but I think we can inject a simple tensor_name transformer that decorates each op with a "generate sref func" that gets called on snippet render.

@mbartling mbartling requested review from dboyliao and neil-tan June 25, 2019 21:34
@mbartling (Member Author)

@neil-tan @dboyliao @Knight-X
This PR uses static const uint32_t in place of #defines. The former has the implicit benefit of debuggability, since a debug build will emit named symbols for the constants, while taking up the same size and number of instructions as the latter.

@dboyliao (Member) commented Aug 26, 2019

I think your approach is a good start, except that I'd prefer it be implemented as an optimization pass.
The reasons are:

  1. It makes the ugraph easier to debug, since tensor names stay the same as in the original model file until the opt pass runs.
  2. It's a good case for developing a code generator planner, or a better way to store metadata (such as this tensor name -> int mapping).

However, I'll work on auto-testing the code generator first, before I dig into this.

BTW, ubit has been merged into develop, AFAIK.
Can you change the merge base to develop?

@dboyliao (Member) left a review comment

As I commented above, I'd prefer this implemented as an opt pass.

@neil-tan (Member)

@mbartling which runtime is this meant to go with? I'm getting a runtime error with this repo:

Simple MNIST end-to-end uTensor cli example (device)
[Error] ./uTensor/src/uTensor/core/context.cpp:70 @push Tensor "-122657359" not found

I checked out this PR and generated the code.

@neil-tan (Member)

@mbartling
As discussed, using the setup above:

  • Add `#include "models/deep_mlp_weight.hpp"` to main.cpp or the model file
  • In main.cpp, `ctx.get("y_pred:0")` becomes `ctx.get(sref_y_pred_0)`
  • In the model file, `ctx.add(input_0, "x:0", 2)` becomes `ctx.add(input_0, sref_x_0, 2)`

With these changes, it works with the current runtime dev branch.

@mbartling (Member Author)

@dboyliao Do we already have these changes in the current dev branch? If so we can close this PR

@dboyliao (Member)

AFAIK, not yet.
But I think you can still merge it, since it targets the ubit branch, if that makes your work easier somehow; I can approve this PR.
As for develop, since it went through a major refactoring, I think this PR needs extra work to be merged into develop.
Besides, it uses the legacy API.
It's not high priority, considering we are moving toward the rearch.
What do you think?

@mbartling (Member Author)

I still think this idea should be implemented for the rearch, even if it means we close this PR. It generates the absolute smallest models, and it's really a way to differentiate between debug and release builds.

Thoughts?

@dboyliao (Member)

Sure, I agree with you.
But the new codegen code base has changed a lot compared to this PR.
I need some time to think about how to integrate it into the next release code base.
I'll probably do it after the next release.

@mbartling (Member Author)

Actually, on second thought, it really isn't necessary for the rearch, since param names are bound to the operators in the input/output name enums. So this is more of a debug convenience. I think we can put it off for a while.

@mbartling (Member Author)

Feel free to close :)
