
Is there any implementation of yolov8 in cpp using tensorflow lite? #12770

Open
1 task done
Thunderzen opened this issue May 17, 2024 · 3 comments
Labels
question Further information is requested

Comments

@Thunderzen

Search before asking

Question

I have the .tflite model and the relevant C++ libraries, but I'm unsure how to perform inference.

Additional

No response

@glenn-jocher
Member

@Thunderzen hello! Currently, we don't provide a direct C++ implementation for YOLOv8 using TensorFlow Lite. However, you can perform inference with your .tflite model in C++ by using the TensorFlow Lite C++ API. Here's a basic outline of the steps you'd typically follow:

  1. Load the TensorFlow Lite model:

    std::unique_ptr<tflite::FlatBufferModel> model = tflite::FlatBufferModel::BuildFromFile("your_model.tflite");
  2. Build the interpreter:

    tflite::ops::builtin::BuiltinOpResolver resolver;
    std::unique_ptr<tflite::Interpreter> interpreter;
    tflite::InterpreterBuilder(*model, resolver)(&interpreter);
  3. Allocate tensors and perform inference:

    interpreter->AllocateTensors();
    // Set input data
    float* input = interpreter->typed_input_tensor<float>(0);
    // Fill 'input' with your input data
    interpreter->Invoke();
    // Get output data
    float* output = interpreter->typed_output_tensor<float>(0);

For a complete example and more details, you might want to check out the TensorFlow Lite C++ API documentation. Putting the three steps together, a minimal end-to-end program might look like the sketch below.
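This is only a rough sketch, not an official example: the model path, the assumed 1x640x640x3 float32 input, and the zero-filled dummy input are placeholders you'd replace with your own export's input shape and a real preprocessed image:

    #include <algorithm>
    #include <cstdio>
    #include <memory>

    #include "tensorflow/lite/interpreter.h"
    #include "tensorflow/lite/kernels/register.h"
    #include "tensorflow/lite/model.h"

    int main() {
      // 1. Load the model from disk.
      auto model = tflite::FlatBufferModel::BuildFromFile("your_model.tflite");
      if (!model) {
        std::fprintf(stderr, "Failed to load model\n");
        return 1;
      }

      // 2. Build the interpreter with the built-in op resolver.
      tflite::ops::builtin::BuiltinOpResolver resolver;
      std::unique_ptr<tflite::Interpreter> interpreter;
      if (tflite::InterpreterBuilder(*model, resolver)(&interpreter) != kTfLiteOk) {
        std::fprintf(stderr, "Failed to build interpreter\n");
        return 1;
      }

      // 3. Allocate tensors, fill the input, and run inference.
      if (interpreter->AllocateTensors() != kTfLiteOk) {
        std::fprintf(stderr, "Failed to allocate tensors\n");
        return 1;
      }

      // Assumed input: 1x640x640x3 float32 in [0, 1]. A real application
      // would copy a resized, normalized image here instead of zeros.
      float* input = interpreter->typed_input_tensor<float>(0);
      std::fill(input, input + 640 * 640 * 3, 0.0f);

      if (interpreter->Invoke() != kTfLiteOk) {
        std::fprintf(stderr, "Inference failed\n");
        return 1;
      }

      // Raw output; its shape and meaning depend on the export.
      float* output = interpreter->typed_output_tensor<float>(0);
      std::printf("First output value: %f\n", output[0]);
      return 0;
    }

Checking the TfLiteStatus return codes as shown above makes failures (a wrong model path, unsupported ops) much easier to diagnose. Hope this helps!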

@Thunderzen
Author

@glenn-jocher,

Hi, is there documentation on the post-processing after calling interpreter->Invoke() in C++? For example, code to extract bounding boxes, confidence scores, and class IDs?

@glenn-jocher
Member

Hello! Currently, we don't have specific documentation for post-processing YOLOv8 outputs in C++ after calling TensorFlow Lite's interpreter->Invoke(). Typically, you'll need to access the output tensor, which contains the detection results, and then apply logic to extract bounding boxes, confidence scores, and class IDs.

Here's a brief example of how you might start accessing the output tensor:

float* output = interpreter->typed_output_tensor<float>(0);
// Output processing code here

The exact details depend on the output format of your model. You might need to transpose or reshape the tensor and apply non-maximum suppression (NMS) to filter overlapping boxes; a sketch of that decoding step follows below. For a more detailed guide, you might find TensorFlow's C++ API documentation helpful, or consider exploring community forums for specific examples related to YOLOv8.
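Here is a hedged sketch of that decoding step. It assumes the common YOLOv8 TFLite export layout [1, 4 + num_classes, num_anchors] (e.g. [1, 84, 8400] for 80 COCO classes), stored channel-major, where rows 0-3 are the box center and size and the remaining rows are per-class scores. Verify this against your own model, since layouts differ between exports; the greedy NMS here is a simple reference implementation, not code shipped by Ultralytics or TensorFlow:

    #include <algorithm>
    #include <vector>

    struct Detection {
      float x1, y1, x2, y2;  // corner coordinates
      float score;
      int class_id;
    };

    // Intersection-over-union of two boxes, used by the NMS pass below.
    static float IoU(const Detection& a, const Detection& b) {
      const float ix1 = std::max(a.x1, b.x1), iy1 = std::max(a.y1, b.y1);
      const float ix2 = std::min(a.x2, b.x2), iy2 = std::min(a.y2, b.y2);
      const float inter =
          std::max(0.0f, ix2 - ix1) * std::max(0.0f, iy2 - iy1);
      const float area_a = (a.x2 - a.x1) * (a.y2 - a.y1);
      const float area_b = (b.x2 - b.x1) * (b.y2 - b.y1);
      return inter / (area_a + area_b - inter + 1e-6f);
    }

    // `output` is the raw pointer from interpreter->typed_output_tensor<float>(0),
    // assumed to hold [1, 4 + num_classes, num_anchors] in channel-major order.
    std::vector<Detection> DecodeYolo(const float* output, int num_classes,
                                      int num_anchors, float conf_thresh,
                                      float iou_thresh) {
      std::vector<Detection> dets;
      for (int i = 0; i < num_anchors; ++i) {
        // Find the best class score for this anchor.
        int best_class = 0;
        float best_score = 0.0f;
        for (int c = 0; c < num_classes; ++c) {
          const float s = output[(4 + c) * num_anchors + i];
          if (s > best_score) { best_score = s; best_class = c; }
        }
        if (best_score < conf_thresh) continue;
        // Rows 0-3 are cx, cy, w, h; convert to corner coordinates.
        const float cx = output[0 * num_anchors + i];
        const float cy = output[1 * num_anchors + i];
        const float w = output[2 * num_anchors + i];
        const float h = output[3 * num_anchors + i];
        dets.push_back({cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2,
                        best_score, best_class});
      }

      // Greedy per-class NMS: keep the highest-scoring box, drop overlaps.
      std::sort(dets.begin(), dets.end(),
                [](const Detection& a, const Detection& b) {
                  return a.score > b.score;
                });
      std::vector<Detection> kept;
      for (const auto& d : dets) {
        bool suppressed = false;
        for (const auto& k : kept) {
          if (k.class_id == d.class_id && IoU(k, d) > iou_thresh) {
            suppressed = true;
            break;
          }
        }
        if (!suppressed) kept.push_back(d);
      }
      return kept;
    }

One more thing to keep in mind: the boxes returned here are in model input coordinates (e.g. 640x640), so you'd scale them back to your original image size before drawing. Hope this helps!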
