interpreter->AllocateTensors() on Spresense with TFLite fails

Asked 1 year ago, Updated 1 year ago, 280 views

I'm trying to implement the encoder side of a convolutional autoencoder on the Spresense using TensorFlow Lite for Microcontrollers.
A bad status is returned when allocating tensors with interpreter->AllocateTensors().

constexpr int kTensorArenaSize = 900000;
uint8_t tensor_area[kTensorArenaSize];

void setup() {
  Serial.begin(115200);
  tflite::InitializeTarget();
  memset(tensor_area, 0, kTensorArenaSize * sizeof(uint8_t));

  // Set up logging.
  static tflite::MicroErrorReporter micro_error_reporter;
  error_reporter = &micro_error_reporter;

  // This pulls in all the operation implementations needed.
  static tflite::AllOpsResolver resolver;

  // Build an interpreter to run the model with.
  static tflite::MicroInterpreter static_interpreter(
      model, resolver, tensor_area, kTensorArenaSize, error_reporter);
  interpreter = &static_interpreter;

  // Allocate memory from the tensor_area for the model's tensors.
  TfLiteStatus allocate_status = interpreter->AllocateTensors();
  if (allocate_status != kTfLiteOk) {
    Serial.println("AllocateTensors() failed");
    return;
  } else {
    Serial.println("AllocateTensors() succeeded");
  }
}

The encoder model convolves 20x20 RGB images and was converted with TFLite to a .h file; the .h file is 25KB. Also, when the encoder is built from fully-connected layers only, it works, and inference on convolutional NNs larger than this encoder also works.
Encoder Model

Also, the Spresense side has allocated 1536KB to the main memory.
This model shouldn't consume that much memory, so I don't think the error is caused by insufficient memory, but I can't figure out any other reason.

Do you know the cause of this error and how to fix it?

spresense tensorflow

2022-12-16 12:39

1 Answer

I haven't actually run it, so I might be mistaken, but a TensorArenaSize of 900KB seems quite large for a network of this size.

Why not lower the value of kTensorArenaSize and try again? If AllocateTensors() succeeds, you can see how much of the tensor arena the model is actually consuming with interpreter->arena_used_bytes(), and then set a more appropriate value.


2022-12-16 17:28



© 2024 OneMinuteCode. All rights reserved.