We have a drone flying on optical flow, with a motion capture system attached. The motion capture system can provide x, y, and z data about where the drone is within the motion capture frame.
We know that feeding this in as vehicle visual odometry should work, but we’re considering an alternative that uses offboard commands. Ultimately, we want the drone to fly to a few setpoints within the motion capture coordinate frame, so the idea is to send offboard set_velocity_body commands until it reaches those points.
For example, say the drone starts at (0, 0, 1, yaw = 0) in its own local frame but sits at (1, 3, 1, yaw = 0) in the mocap frame, and we want it to fly to the mocap position (1, 4, 1, yaw = 0). To get it there, we would calculate the difference between where it is and where it should be in the mocap frame, which is +1 m in the y direction. We would then send a velocity command telling it to fly in the y direction while continuously comparing its mocap position to the target. Once the two are about equal, we stop sending velocity commands.
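To make the plan concrete, here is roughly what that loop would look like with MAVSDK-Python. This is only a minimal sketch: `get_mocap_position` is a hypothetical stand-in for however the mocap pose reaches the companion computer, offboard mode is assumed to already be started and streaming, and it assumes the body axes stay aligned with the mocap axes (which is exactly the assumption questioned below).

```python
import asyncio
import math

from mavsdk import System
from mavsdk.offboard import VelocityBodyYawspeed


async def fly_to_mocap_point(drone: System, target, get_mocap_position,
                             speed=0.5, tol=0.10):
    """Stream body-velocity setpoints until the mocap position reaches `target`.

    Assumes offboard mode is already running and that the body axes are
    aligned with the mocap axes (no yaw offset). `get_mocap_position` is a
    hypothetical async getter returning (x, y, z) in the mocap frame.
    """
    while True:
        x, y, z = await get_mocap_position()
        ex, ey, ez = target[0] - x, target[1] - y, target[2] - z
        dist = math.sqrt(ex * ex + ey * ey + ez * ez)
        if dist < tol:
            break
        # Point a constant-speed velocity vector at the target.
        vx, vy, vz = (speed * e / dist for e in (ex, ey, ez))
        # VelocityBodyYawspeed is (forward, right, down, yawspeed); negate z
        # because the mocap z axis points up while "down" is positive here.
        await drone.offboard.set_velocity_body(
            VelocityBodyYawspeed(vx, vy, -vz, 0.0))
        await asyncio.sleep(0.05)  # keep setpoints streaming at ~20 Hz
    # Stop once within tolerance of the target.
    await drone.offboard.set_velocity_body(
        VelocityBodyYawspeed(0.0, 0.0, 0.0, 0.0))
```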
The main issue I see with this approach is that the yaw estimate on the optical flow side will drift over time, making the velocity_body commands either much more complicated or invalid. Taking the example above, where the difference between the two points is +1 m in the y direction, we can tell the drone to move in the +y direction relative to its body only if we assume the body +y axis is the same as the mocap +y axis. But if the yaw has drifted even by a few degrees, we would need to send body commands in both the x and y directions to achieve +1 m along the mocap y axis.
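To put a rough number on this: with a yaw offset between the body frame and the mocap frame, the body velocity we need is the mocap-frame error rotated back through that offset. A quick sketch (numpy, the helper name is mine) shows that even 5 degrees of drift already puts about 9% of the commanded speed on the wrong axis:

```python
import numpy as np


def mocap_error_to_body_velocity(err_mocap_xy, yaw_offset_rad, speed=0.5):
    """Rotate a mocap-frame position error into the body frame and scale it
    to a constant-speed velocity command.

    `yaw_offset_rad` is the yaw of the body frame w.r.t. the mocap frame.
    """
    c, s = np.cos(yaw_offset_rad), np.sin(yaw_offset_rad)
    # v_body = R(-yaw) @ v_mocap
    rot = np.array([[c, s], [-s, c]])
    err_body = rot @ np.asarray(err_mocap_xy, dtype=float)
    norm = np.linalg.norm(err_body)
    return speed * err_body / norm if norm > 1e-6 else np.zeros(2)


# 1 m of +y error in the mocap frame, 5 degrees of unmodelled yaw drift:
print(mocap_error_to_body_velocity([0.0, 1.0], np.radians(5.0), speed=1.0))
# -> roughly [0.087, 0.996]: ~9 cm/s of cross-axis velocity is already needed.
```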
My question is: do you think this is a real concern, and will dealing with it be harder than just feeding the data in as vehicle visual odometry?
Hey @Teddy_Zaremba, so in the end you want to move the position controller outside PX4.
That’s totally fine, but as you pointed out, in order to compute the right body velocity you need to know how the drone body frame is oriented w.r.t. the mocap one. Can’t the mocap tell you that? Because if you have that info from the mocap, then you just need to first compute your desired velocity in the mocap frame, then rotate it into the body frame, and then send it. Zero worries about OF degradation.
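For instance, that rotation step could look something like this, assuming the mocap publishes its orientation as a standard (w, x, y, z) quaternion (the helper names are mine):

```python
import math


def yaw_from_quaternion(qw, qx, qy, qz):
    """Yaw of the body frame w.r.t. the mocap frame, assuming the mocap
    reports orientation as a standard (w, x, y, z) quaternion."""
    return math.atan2(2.0 * (qw * qz + qx * qy),
                      1.0 - 2.0 * (qy * qy + qz * qz))


def mocap_velocity_to_body(vx_mocap, vy_mocap, yaw):
    """Rotate a desired horizontal velocity from the mocap frame into the
    body frame (v_body = R(-yaw) * v_mocap)."""
    c, s = math.cos(yaw), math.sin(yaw)
    return c * vx_mocap + s * vy_mocap, -s * vx_mocap + c * vy_mocap
```

The two returned components then go straight into the forward/right fields of the body-velocity setpoint.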
Just assuming instead that “the +y direction is the same in both the mocap system as in the optical flow” is definitely not going to work in the long run.
However, if you can’t get the mocap to tell you the drone’s orientation, then you can design a yaw estimator that compares mocap velocities with OF velocities: the right yaw is the one that aligns the two.
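A crude version of such an estimator might look like this (just a sketch of the alignment idea, not an existing PX4 component; it needs the drone to actually be moving to give a usable estimate):

```python
import math


class YawAligner:
    """Estimate the yaw offset between the mocap frame and the body frame by
    aligning mocap-frame velocities with optical-flow (body-frame) velocities.

    Accumulates 2D cross and dot products over many samples, which amounts to
    a rough least-squares fit of a single rotation angle; no outlier rejection.
    """

    def __init__(self):
        self._sum_sin = 0.0
        self._sum_cos = 0.0

    def update(self, v_mocap_xy, v_body_xy, min_speed=0.2):
        vmx, vmy = v_mocap_xy
        vbx, vby = v_body_xy
        # Skip samples where the drone is barely moving: the angle is undefined.
        if math.hypot(vmx, vmy) < min_speed or math.hypot(vbx, vby) < min_speed:
            return
        # The yaw offset is the angle that rotates the body-frame velocity
        # onto the mocap-frame velocity.
        self._sum_sin += vbx * vmy - vby * vmx
        self._sum_cos += vbx * vmx + vby * vmy

    @property
    def yaw(self):
        return math.atan2(self._sum_sin, self._sum_cos)
```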
It won’t be harder if you have mocap giving you orientation.
But is there a specific reason for not feeding the mocap data to PX4 in your application?
The other problem I have is that I don’t think the yaw estimate coming off this MoCap system is that reliable. It’s not a traditional MoCap system like Vicon that has a constant reference for its orientation; instead, it uses its own IMU for the quaternion. I feel like if we do it this way we are binding ourselves to the MoCap system’s IMU for orientation, whereas if we feed it in via visual odometry we could use EKF2 to fuse it with the other sensors’ quaternions, such as estimating it from the position/velocity data and the optical flow sensor.
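For comparison, feeding the mocap pose in as external vision isn’t much code either. A minimal sketch assuming MAVROS is in the stack (`/mavros/vision_pose/pose` is the standard vision-pose input topic; `get_mocap_pose` is a hypothetical stand-in for the mocap driver, and the pose has to be expressed in ENU as MAVROS expects):

```python
#!/usr/bin/env python3
import rospy
from geometry_msgs.msg import PoseStamped


def get_mocap_pose():
    """Hypothetical stand-in for the mocap driver: returns
    ((x, y, z), (qx, qy, qz, qw)) expressed in an ENU mocap frame."""
    raise NotImplementedError


def main():
    rospy.init_node("mocap_to_vision_pose")
    pub = rospy.Publisher("/mavros/vision_pose/pose", PoseStamped, queue_size=1)
    rate = rospy.Rate(30)  # a steady 30 Hz external pose is plenty for EKF2
    while not rospy.is_shutdown():
        (x, y, z), (qx, qy, qz, qw) = get_mocap_pose()
        msg = PoseStamped()
        msg.header.stamp = rospy.Time.now()
        msg.header.frame_id = "map"
        msg.pose.position.x, msg.pose.position.y, msg.pose.position.z = x, y, z
        msg.pose.orientation.x = qx
        msg.pose.orientation.y = qy
        msg.pose.orientation.z = qz
        msg.pose.orientation.w = qw
        pub.publish(msg)
        rate.sleep()


if __name__ == "__main__":
    main()
```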
The only reason to feed in via offboard commands rather than visual odometry would be to save time.