Hello, I’m working on a project where I need to do object detection from a drone. I have the object detection code set up and running. I did the following to achieve that:
- Compiled OpenCV from source with GStreamer and Qt support and implemented some image processing. I was able to integrate this into QGroundControl by building the code as a shared library (DLL).
Now I am running into the following issue:
I want the output frames to be shown inside the QGC window itself. Currently the processed frames (the detection output video stream) are displayed with OpenCV's imshow(), which opens a separate window.
Is there an approach that lets me use OpenCV to process frames (e.g. from a webcam or an RTSP stream) and display them inside the QGC window itself via the VideoManager?
Very interesting project. I think the most straightforward way of doing it (although probably not the most efficient, nor the most elegant one) is to output the stream from your OpenCV code through a GStreamer pipeline to a UDP port on your machine's loopback interface, and then open that stream from QGC.
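As a rough sketch of that loopback idea, the pipeline below uses videotestsrc as a stand-in for the OpenCV frames and sends H.264 over RTP to 127.0.0.1. The port 5600 here is an assumption based on QGC's default UDP video port; adjust it to match your QGC video settings:

```shell
# Sender: synthetic test pattern, H.264-encoded, RTP-payloaded,
# sent to the loopback interface on UDP port 5600 (QGC's default).
gst-launch-1.0 videotestsrc \
  ! video/x-raw,width=640,height=480,framerate=30/1 \
  ! videoconvert \
  ! x264enc tune=zerolatency speed-preset=superfast \
  ! rtph264pay config-interval=1 pt=96 \
  ! udpsink host=127.0.0.1 port=5600
```

In QGC you would then set the video source to a UDP H.264 stream on the same port under Application Settings → Video.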
I think I would first test a videotestsrc-based pipeline from the terminal and get QGC to open it. From there you can try to do the same from within your OpenCV code, and at that point it will be straightforward to push your own frames instead of the videotestsrc output.
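Pushing frames from OpenCV code might look roughly like the sketch below. It assumes OpenCV was built with GStreamer support (as described above) and that QGC is listening on its default UDP port 5600; the pipeline string and parameters are illustrative, not a tested drop-in:

```cpp
#include <opencv2/opencv.hpp>
#include <string>

int main() {
    // Capture source: webcam 0 here; an RTSP URL works the same way.
    cv::VideoCapture cap(0);
    if (!cap.isOpened()) return 1;

    int w = static_cast<int>(cap.get(cv::CAP_PROP_FRAME_WIDTH));
    int h = static_cast<int>(cap.get(cv::CAP_PROP_FRAME_HEIGHT));
    double fps = cap.get(cv::CAP_PROP_FPS);
    if (fps <= 0) fps = 30.0;  // some backends report 0

    // Writer pipeline: appsrc receives the frames written below,
    // encodes them as H.264, payloads them as RTP, and sends them
    // to the loopback interface on QGC's default UDP video port.
    std::string pipeline =
        "appsrc ! videoconvert "
        "! x264enc tune=zerolatency speed-preset=superfast bitrate=800 "
        "! rtph264pay config-interval=1 pt=96 "
        "! udpsink host=127.0.0.1 port=5600";
    cv::VideoWriter writer(pipeline, cv::CAP_GSTREAMER,
                           0 /* fourcc ignored for GStreamer */,
                           fps, cv::Size(w, h), true);
    if (!writer.isOpened()) return 1;

    cv::Mat frame;
    while (cap.read(frame)) {
        // ... run your detection here and draw the results on `frame` ...
        writer.write(frame);  // pushed into the pipeline instead of imshow()
    }
    return 0;
}
```

The key change from the original setup is replacing the imshow() call with writer.write(), so the annotated frames leave the process as a network stream that QGC can consume.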
For the elegant solution you could dig into VideoReceiver to see how the stream is forwarded to the frontend, etc., but that approach will be much more time-consuming, and if this is just a proof of concept it may not be worth the effort of going the long route.