I have created a simple OpenCV sample that captures video from an external webcam. It works fine as a standalone OpenCV program, but I want to integrate it with the PX4 stack, or at least have PX4 run this customized OpenCV code, just to exercise it. Could you please suggest how we can use our customized OpenCV code with PX4 for different use cases?
Your suggestions are appreciated. Thanks.
Is the reply I gave here not helpful to you? If it’s not clear enough, let me know what isn’t clear and I can try to be more specific. Opening another topic with the same question probably isn’t going to solve your problem.
Recently, PX4 has added support for FastRTPS, which may be of interest to you as well:
Ok, thank you. I will check the link you sent and will get back to you if I need any clarification.
Actually, I have ordered the Snapdragon Flight, but until it arrives I just want to exercise the PX4 and OpenCV code. I know how to create my own application from the "creating applications" tutorial, but where and how do I write my OpenCV-related code and execute it within https://github.com/PX4/Firmware? I have tried to create my own video-capture application in https://github.com/PX4/Firmware/tree/master/src/modules, but I am facing some library issues, so could you please advise?
Even though it might theoretically be feasible to do computer vision tasks within PX4 on the Snapdragon platform, I would choose a different, more separated approach.
You have the PX4 application running (mostly on the DSP of the Snapdragon, with another process running on the Linux side). Then you have your computer vision application, also running on the Linux side. The output of this application (e.g. pose estimates, if you’re developing a visual positioning system) is then communicated to PX4 via some interprocess communication mechanism such as MAVLink or RTPS. On the PX4 side, you write a module that receives your data and takes the appropriate actions (in the case of vision position, this is already taken care of in the state estimators LPE and EKF2).
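As a rough Python sketch of this split: the CV process pushes its estimates to a local UDP port that the flight-stack side reads. The port number and the raw three-float payload below are made up for illustration, not real MAVLink framing — in practice you would use a MAVLink library (e.g. pymavlink) to encode proper messages.

```python
import socket
import struct

POSE_PORT = 14590  # hypothetical local port, not a standard PX4/MAVLink port
POSE_FMT = "<fff"  # x, y, z position estimate as little-endian floats

def send_pose(sock, x, y, z, addr=("127.0.0.1", POSE_PORT)):
    # CV process side: pack the pose estimate and ship it over UDP.
    sock.sendto(struct.pack(POSE_FMT, x, y, z), addr)

def recv_pose(sock):
    # Flight-stack side: unpack the estimate and feed it onward.
    data, _ = sock.recvfrom(struct.calcsize(POSE_FMT))
    return struct.unpack(POSE_FMT, data)

if __name__ == "__main__":
    rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    rx.bind(("127.0.0.1", POSE_PORT))
    tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    send_pose(tx, 1.0, 2.0, -0.5)
    print(recv_pose(rx))  # (1.0, 2.0, -0.5)
```

The point of the sketch is the process boundary: the CV application and the flight stack only share a message stream, so a crash or slowdown in the vision code cannot stall the flight-control loop.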
With this approach, the flight control application remains lean and is not bogged down by large library inclusions and heavy computation tasks.
I can try the approach you suggested once I receive the hardware, but until then I have to exercise the code. So could you please suggest how, at least theoretically, to do computer vision tasks within PX4 on the Snapdragon platform?
The approach for SITL development is the same. Write your application so that it communicates via MAVLink over UDP with the SITL PX4 process. You can find examples of how to talk to the PX4 process in the snap_cam repo that I linked in your other post, or on the MAVLink website.
So you’ll have:
- Your application, capturing camera images and performing some CV computations.
- The output of these computations is sent as MAVLink messages via UDP to PX4 (started e.g. by `make posix_sitl_default jmavsim`).
- You modify the PX4 code to read your MAVLink messages and act accordingly.
Depending on what you are computing in your CV app, you will want to create custom MAVLink and/or uORB messages. You can find help with this here: https://dev.px4.io/en/middleware/mavlink.html
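For example, a custom uORB message for an obstacle-detection result might look like the sketch below. This follows the `.msg` definition format used in the Firmware `msg/` directory, but the message name and all field names here are hypothetical:

```
# obstacle_detection.msg (hypothetical) -- would live in the msg/ directory
uint64 timestamp      # time since system start (microseconds)
bool obstacle_found   # true if the CV app detected an obstacle
float32 distance      # estimated distance to the obstacle (meters)
float32 bearing       # bearing to the obstacle relative to heading (radians)
```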
If you share some more details about what you are trying to get PX4 to do as a response to what the camera sees, it would be easier to give some more concrete advice.
Thanks for the above suggestion; a detailed explanation is below.
The thing is, the camera should detect objects while the drone travels a predetermined path; if it detects an object, the drone should stop and divert to a different path to reach its destination. I have coded the detection in OpenCV, but I need to communicate the output of the OpenCV code to the PX4 modules so it can control the drone.
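To make this concrete, here is a minimal Python sketch of the decision a PX4-side module might make from the CV output. The function name and the distance thresholds are made up for illustration; in a real setup the detection flag and distance would arrive via MAVLink/uORB as described above:

```python
# Hypothetical thresholds for illustration only.
STOP_DISTANCE_M = 5.0     # stop if an obstacle is closer than this
REPLAN_DISTANCE_M = 10.0  # start planning a diversion below this

def avoidance_action(obstacle_found, distance_m):
    """Map a CV detection result to a high-level flight action."""
    if not obstacle_found:
        return "continue"
    if distance_m < STOP_DISTANCE_M:
        return "stop_and_divert"
    if distance_m < REPLAN_DISTANCE_M:
        return "plan_diversion"
    return "continue"

print(avoidance_action(True, 3.0))   # stop_and_divert
print(avoidance_action(True, 8.0))   # plan_diversion
print(avoidance_action(False, 1.0))  # continue
```

The interesting engineering work is in the last step — translating `stop_and_divert` into setpoints or a replanned mission — which is exactly the part that needs a PX4-side module or an obstacle-avoidance interface.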
Ok, obstacle avoidance is an interesting topic which is not yet supported in PX4, so some work is needed on the PX4 side to get it to do what you need. There is a project for obstacle avoidance, but I’m not sure how much development is going on there. It would be great if you could work with the other PX4 developers working on obstacle avoidance to push such an interface forward and prevent duplicate work. If I’m not mistaken, @vilhjalmur89 might be a good person to talk to about this.
Ok, thank you. One more thing: is it necessary to install the OpenCV libraries on the Snapdragon Flight, or does it already ship with the .so files alongside PX4?
You’ll probably have to install OpenCV on the Snapdragon.