I am working on integrating a Vicon motion-capture system with my GCS computer to provide position and velocity estimates to my Pixhawk running the PX4 flight stack, and I need help connecting the mocap system to my UAS.
My aim is to use the motion-capture data to tune my quadcopter's position controller. Once I am confident in its performance, I will transition to optical flow for position control.
In my system I have a mavros node connected to QGroundControl via UDP, and a vicon_bridge node collecting frames from the mocap system. My current thinking is that I need to write a ROS node that generates position and velocity estimates from the bridge node's output and forwards them to my UAS via the vision_position plugin in mavros_extras.
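To make that relay node concrete, here is a rough sketch of the conversion I believe it would perform: vicon_bridge publishes geometry_msgs/TransformStamped, while the mavros vision plugin expects geometry_msgs/PoseStamped, so the node mostly copies translation into position and rotation into orientation. Messages are modeled as plain dicts so the sketch stands alone; the frame name and field layout mirror the ROS message types, but the actual rospy subscriber/publisher wiring and topic names are assumptions about a typical setup.

```python
# Sketch: convert a vicon_bridge TransformStamped into the PoseStamped
# shape the mavros vision pose plugin expects. Messages are plain dicts
# here; in a real node these would be geometry_msgs types and this
# function would run inside a rospy subscriber callback.

def transform_to_pose(tf_msg, frame_id="map"):
    """Copy translation -> position and rotation -> orientation,
    preserving the original Vicon capture timestamp."""
    return {
        "header": {
            "stamp": tf_msg["header"]["stamp"],  # keep capture time
            "frame_id": frame_id,                # assumed local frame name
        },
        "pose": {
            "position": dict(tf_msg["transform"]["translation"]),
            "orientation": dict(tf_msg["transform"]["rotation"]),
        },
    }

# Example frame from vicon_bridge (values illustrative only):
vicon_frame = {
    "header": {"stamp": 1700000000.25},
    "transform": {
        "translation": {"x": 1.0, "y": 2.0, "z": 0.5},
        "rotation": {"x": 0.0, "y": 0.0, "z": 0.0, "w": 1.0},
    },
}

pose = transform_to_pose(vicon_frame)
print(pose["pose"]["position"])  # -> {'x': 1.0, 'y': 2.0, 'z': 0.5}
```

Velocity could be estimated in the same callback by differencing consecutive poses, though I assume the on-board estimator can also derive it from position alone.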
I am unsure whether I need to transform the data published by the bridge node into a different coordinate frame before it can be fused by the UAS's position estimator (in my case, EKF2). Using the MAVLink Inspector in QGroundControl, I have verified that VISION_POSITION_ESTIMATE messages are being received.
I don't want to reinvent the wheel here, but I am unsure whether I have all the pieces needed to integrate the mocap system with my UAS.
Does vicon_bridge integrate seamlessly with mavros, or is there more work I need to do?