
How to re-use a WrappedRamTensor and provide new input data #59

RajeshSiraskar opened this issue Mar 2, 2019 · 8 comments

RajeshSiraskar commented Mar 2, 2019

Hi,

I am a beginner with uTensor and embedded C/C++. I have a little experience with Python and wanted to study developing intelligence at the edge by building models in Python and deploying them on Cortex boards. @neil-tan helped me understand the basics, and I used his tutorial to get started.

Passing the input data wrapped in a WrappedRamTensor works great the first time. When I try to provide a new instance of input data and do a second pass, it gives me an error. What could I be doing wrong? Does the input data tensor have to be thread-safe?

Output with the error

[1] First instance of prediction: For input 10.000
 Input: 10.000 | Expected: 72.999 | Predicted: 71.871

 [2] Second instance of prediction: For input 40.000
[Error] lib\uTensor\core\context.cpp:96 @push Tensor "Placeholder:0" not found

Source code

  // A single value is being used so Tensor shape is {1, 1} 
  float input_data[1] = {10.0}; 
  Tensor* input_x = new WrappedRamTensor<float>({1, 1}, (float*) &input_data);

  // Value predicted by LR model
  S_TENSOR pred_tensor;         
  float pred_value;             
  
  // Compute model value for comparison
  float W = 6.968;
  float B = 3.319;
  float y;

  // First pass: Constant value 10.0 and evaluate first time:
  printf("\n [1] First instance of prediction: For input %4.3f", input_data[0]);
  get_LR_model_ctx(ctx, input_x);                   // Pass the 'input' data tensor to the context
  pred_tensor = ctx.get("y_pred:0");                // Get a reference to the 'output' tensor
  ctx.eval();                                       // Trigger the inference engine
  pred_value = *(pred_tensor->read<float>(0, 0));   // Get the result back

  y = W * input_data[0] + B;                        // Expected output

  printf("\n Input: %04.3f | Expected: %04.3f | Predicted: %04.3f", input_data[0], y, pred_value);
  
  // Second pass: Change input data and re-evaluate:
  input_data[0] = 40.0;
  printf("\n\n [2] Second instance of prediction: For input %4.3f\n", input_data[0]);
  get_LR_model_ctx(ctx, input_x);                   // Pass the 'input' data tensor to the context
  pred_tensor = ctx.get("y_pred:0");                // Get a reference to the 'output' tensor
  ctx.eval();                                       // Trigger the inference engine
  pred_value = *(pred_tensor->read<float>(0, 0));   // Get the result back

  y = W * input_data[0] + B;                        // Expected output

  printf("\n Input: %04.3f | Expected: %04.3f | Predicted: %04.3f", input_data[0], y, pred_value);
  
  printf("\n -------------------------------------------------------------------\n");
  return 0;
}

dboyliao commented Mar 3, 2019

Can you show me what's inside get_LR_model_ctx?
Also, I think we haven't fixed some issues in the Context class, so reusing it over time will crash the program.
Try creating a new Context object before passing it to the get_LR_model_ctx function.
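
A minimal sketch of that suggestion, using only the calls already shown in this thread (Context, WrappedRamTensor, get_LR_model_ctx, ctx.get, ctx.eval). This is just an illustration of the fresh-Context-per-inference pattern, not a standalone program: it assumes the uTensor headers and the generated model code for get_LR_model_ctx are available.

```cpp
// Sketch: build a fresh Context (and a fresh input tensor) for every
// inference, instead of reusing one Context across eval() calls.
// Assumes the generated get_LR_model_ctx(Context&, Tensor*) from this project.
float predict(float x) {
    Context ctx;                      // new Context for each inference
    float input_data[1] = { x };
    Tensor* input_x = new WrappedRamTensor<float>({1, 1}, (float*) &input_data);

    get_LR_model_ctx(ctx, input_x);   // register the graph with the fresh context
    S_TENSOR pred_tensor = ctx.get("y_pred:0");
    ctx.eval();
    return *(pred_tensor->read<float>(0, 0));
}
```

Each call then pairs one input tensor with one Context, so nothing is re-registered against a context that has already been evaluated.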

@RajeshSiraskar

Hi @dboyliao

I did try creating new instances, but it still gives the same errors:
I declared Context ctx, ctx_2; and used ctx_2 for the second ctx_2.eval().

I have attached a zip file with the generated C++ files. Also, @dboyliao, I tried to understand the public data member of WrappedRamTensor. Should I be using that to assign new data for evaluation? If yes, how do I use it?

Thanks for helping.

LR_model.zip


dboyliao commented Mar 4, 2019

Ah, I think I know what's going wrong.
Your input_x is declared as a raw pointer.
So after the first ctx.eval, it may point to an invalid address.
Try adding input_x = new WrappedRamTensor<float>({1, 1}, (float*) &input_data); after input_data[0] = 40.0;


Knight-X commented Mar 4, 2019

@RajeshSiraskar
I suppose your code is as follows:

  Context ctx, ctx2;
  // A single value is being used so Tensor shape is {1, 1} 
  float input_data[1] = {10.0}; 
  Tensor* input_x = new WrappedRamTensor<float>({1, 1}, (float*) &input_data);

  // Value predicted by LR model
  S_TENSOR pred_tensor;         
  float pred_value;             
  
  // Compute model value for comparison
  float W = 6.968;
  float B = 3.319;
  float y;

  // First pass: Constant value 10.0 and evaluate first time:
  printf("\n [1] First instance of prediction: For input %4.3f", input_data[0]);
  get_LR_model_ctx(ctx, input_x);                   // Pass the 'input' data tensor to the context
  pred_tensor = ctx.get("y_pred:0");                // Get a reference to the 'output' tensor
  ctx.eval();                                       // Trigger the inference engine
  pred_value = *(pred_tensor->read<float>(0, 0));   // Get the result back

  y = W * input_data[0] + B;                        // Expected output

  printf("\n Input: %04.3f | Expected: %04.3f | Predicted: %04.3f", input_data[0], y, pred_value);
  
  // Second pass: Change input data and re-evaluate:
  input_data[0] = 40.0;
  printf("\n\n [2] Second instance of prediction: For input %4.3f\n", input_data[0]);
  get_LR_model_ctx(ctx2, input_x);                   // Pass the 'input' data tensor to the context
  pred_tensor = ctx2.get("y_pred:0");                // Get a reference to the 'output' tensor
  ctx2.eval();                                       // Trigger the inference engine
  pred_value = *(pred_tensor->read<float>(0, 0));   // Get the result back

  y = W * input_data[0] + B;                        // Expected output

  printf("\n Input: %04.3f | Expected: %04.3f | Predicted: %04.3f", input_data[0], y, pred_value);
  
  printf("\n -------------------------------------------------------------------\n");
  return 0;
}

and you get the same error at the second eval?
[2] Second instance of prediction: For input 40.000
[Error] lib\uTensor\core\context.cpp:96 @push Tensor "Placeholder:0" not found

Is my understanding correct?


dboyliao commented Mar 4, 2019

@Knight-X
the second call to get_LR_model_ctx, should it be ctx2 not ctx?


Knight-X commented Mar 4, 2019

@dboyliao
Yeah, you are right. That was a typo.


RajeshSiraskar commented Mar 4, 2019

Hi,

I tried all three experiments:

[Change 1: Adding input_x = new WrappedRamTensor]
@dboyliao:
I was fairly sure I had tried that earlier, but I did reconfirm it. This is the runtime error I get:

[Error] lib\uTensor\core\context.cpp:32 @add tensor with name "" address already exist in rTable

When I add this:

// Second pass: Change input data and re-evaluate:
input_data[0] = 40.0;
input_x = new WrappedRamTensor<float>({1, 1}, (float*) &input_data);

Here there is only ONE Context ctx; I use it for both predictions.

[Change 2: Adding Context ctx, ctx2;]
@Knight-X

I did retry Context ctx, ctx2; and yes, it gives the error mentioned in your post. I reconfirmed there was no typo, as below:

[Error] lib\uTensor\core\context.cpp:96 @push Tensor "Placeholder:0" not found

get_LR_model_ctx(ctx2, input_x);                   // Pass the 'input' data tensor to the context
pred_tensor = ctx2.get("y_pred:0");                // Get a reference to the 'output' tensor
ctx2.eval();                                       // Trigger the inference engine
pred_value = *(pred_tensor->read<float>(0, 0));

[Change 3: Added BOTH changes together]

Here I combined both of the above changes; here's the output. It does NOT output the correct value, but it executes without error:

[1] First instance of prediction: For input 10.000
 Input: 10.000 | Expected: 72.999 | Predicted: 71.871

 [2] Second instance of prediction: For input 40.000
 Input: 40.000 | Expected: 282.039 | Predicted: 71.871

@RajeshSiraskar

Hi - Just in case the board matters:

Board: https://os.mbed.com/platforms/ST-Discovery-L476VG/
IDE: PlatformIO

Thanks for helping
