PX4Flow and IMU sensor fusion details

Hello,

If I understand correctly, the PX4 flight controller, when used with a PX4Flow camera, estimates the craft's velocity by doing some kind of fusion of IMU acceleration and PX4Flow velocity measurements.

Can anyone help me with the specifics of this method so I can implement a similar one on my own flight controller? I have been told that I need to integrate the acceleration to estimate the velocity and use the velocity measurement from the PX4Flow to augment this estimate, or vice versa, but beyond that I do not know where to begin. I tried looking through the PX4 firmware for this but was unable to find where this procedure is carried out.

Also, is there a detailed description of the PX4Flow data anywhere? I am not sure precisely what is meant by “x velocity*1000 [meters/sec]” for flow_comp_m_x. Is this the velocity along the body-frame x-axis, or the velocity along the x-axis of a frame which rotates (yaw) with the craft but whose x-y plane is always parallel to the ground?

I would greatly appreciate anyone's help or advice on these things.

Thanks!

Thomas

IMU measurements are involved in a couple of ways. First, they are used in the prediction step of the EKF, which estimates the vehicle's position. This is effectively an integration of the linear acceleration and rotational velocity measurements.
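To make that concrete, here is a rough sketch of such a prediction step (simplified C++ with made-up names and Eigen types, not the actual PX4 implementation):

#include <Eigen/Dense>

struct State {
    Eigen::Vector3d pos_n;  // position in the navigation frame [m]
    Eigen::Vector3d vel_n;  // velocity in the navigation frame [m/s]
    Eigen::Matrix3d R_nb;   // attitude: rotates body-frame vectors into the navigation frame
};

// Propagate the state estimate by one IMU sample of duration dt.
// accel_b: accelerometer reading [m/s^2], body frame
// gyro_b:  gyroscope reading [rad/s], body frame
void predict(State &x, const Eigen::Vector3d &accel_b,
             const Eigen::Vector3d &gyro_b, double dt)
{
    const Eigen::Vector3d g_n(0.0, 0.0, 9.81);  // gravity in a NED navigation frame

    // The accelerometer measures specific force, so rotate it into the
    // navigation frame and add gravity to recover linear acceleration.
    const Eigen::Vector3d accel_n = x.R_nb * accel_b + g_n;

    // Integrate acceleration -> velocity -> position.
    x.pos_n += x.vel_n * dt + 0.5 * accel_n * dt * dt;
    x.vel_n += accel_n * dt;

    // Integrate the rotational velocity to propagate the attitude
    // (first-order update using the skew-symmetric matrix of gyro_b;
    // a real implementation would re-orthonormalize R_nb afterwards).
    Eigen::Matrix3d omega;
    omega <<        0.0, -gyro_b(2),  gyro_b(1),
            gyro_b(2),         0.0, -gyro_b(0),
           -gyro_b(1),  gyro_b(0),         0.0;
    x.R_nb = x.R_nb + x.R_nb * omega * dt;
}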

Second, IMU measurements are needed in the update step of the EKF when fusing optical flow measurements. The motion observed by the optical flow camera can stem either from translation of the camera (this is what you are actually trying to measure) or from rotation of the camera. The latter is also measured by the IMU. Therefore, you subtract the measured rotational velocity from the flow measurement; what you are left with is the flow due to translation. If you know the distance from the camera to the ground (with the help of a range sensor), you can compute the velocity that corresponds to the flow you measured.
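In code, that compensation boils down to something like this (a simplified sketch with made-up names; sign and axis conventions depend on how the camera is mounted and are glossed over here):

// Convert one optical flow sample into a body-frame velocity estimate.
// flow_x_rad, flow_y_rad: integrated optical flow over the interval [rad]
// gyro_x_rad, gyro_y_rad: integrated rotation over the same interval [rad]
// distance:               height above ground from a range sensor [m]
// dt:                     length of the integration interval [s]
void flowToVelocity(float flow_x_rad, float flow_y_rad,
                    float gyro_x_rad, float gyro_y_rad,
                    float distance, float dt,
                    float &vx_b, float &vy_b)
{
    // Subtract the flow induced by rotation (measured by the IMU);
    // what remains is the flow caused by translation of the camera.
    const float trans_flow_x = flow_x_rad - gyro_x_rad;
    const float trans_flow_y = flow_y_rad - gyro_y_rad;

    // Small-angle approximation: translational flow [rad] times the
    // distance to the ground gives the distance travelled, and dividing
    // by the interval length gives velocity.
    vx_b = distance * trans_flow_x / dt;
    vy_b = distance * trans_flow_y / dt;
}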

You can see how the measurements from the flow camera are compensated for rotation and then fused into the state estimator in the local position estimator's flow handling code, in particular in flowMeasure() and the corresponding correction function.

You can also see that the measurements are transformed from the body frame (x is forward) to the navigation frame (x is north).
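Schematically, the velocity measurement handed to the filter is built roughly like this (a sketch, not a quote of the firmware):

#include <Eigen/Dense>

// delta_b: translation observed by the flow camera over one interval,
//          expressed in the body frame [m]
// R_att:   attitude direction cosine matrix, body -> navigation frame
// Returns the velocity measurement used in the EKF update, navigation frame.
Eigen::Vector3d flowVelocityMeasurement(const Eigen::Matrix3d &R_att,
                                        const Eigen::Vector3d &delta_b,
                                        double dt_flow)
{
    const Eigen::Vector3d delta_n = R_att * delta_b;  // x now points north
    return delta_n / dt_flow;  // displacement over dt_flow -> velocity
}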

I hope this helps!

Thank you Nicolas!

This is exactly what I was looking for. So, from what I understand, the measurements

y(Y_flow_vx) = delta_n(0) / dt_flow;
y(Y_flow_vy) = delta_n(1) / dt_flow;

which flowMeasure() computes are not directly available on the PX4Flow, but rather I need to compute them using PX4Flow data and attitude information. This brings me to the question: what is the value of _flow_scale, which is used to compute flow_x_rad and flow_y_rad in lines 73 and 74? Also, I am using a 3x3 rotation matrix R, defined by the consecutive rotations yaw, pitch, and roll, which rotates vectors from the navigation frame to the body frame and has the property R^-1 = R^T. Would it make sense to use R^T in place of the direction cosine matrix _R_att to transform delta_b to delta_n?

Thanks again for the help!

_flow_scale is a parameter of the estimator, defined here: https://github.com/PX4/Firmware/blob/068ef591ab15a4d92d00ac2a8d741f887bd54fdf/src/modules/local_position_estimator/params.c#L33
@jgoppert can probably explain the meaning of that parameter best.
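For what it is worth, judging from how it is used in flowMeasure(), it appears to be a multiplicative calibration factor applied to the raw integrated flow before the conversion to a displacement, roughly like this (my reading of it, with an illustrative field name, not a quote of the code):

// Apply the scale parameter to the raw integrated flow from the sensor.
// raw_flow_x:  integrated flow reported by the camera over one interval
// flow_scale:  calibration factor; 1.0 means the sensor output is trusted as-is
float scaledFlowX(float raw_flow_x, float flow_scale)
{
    return raw_flow_x * flow_scale;  // yields flow_x_rad
}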

If you have a rotation matrix that transforms from the navigation frame to the body frame, then you can certainly use its transpose to perform the reverse transformation.
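In other words (sketch):

#include <Eigen/Dense>

// R rotates navigation-frame vectors into the body frame. Since rotation
// matrices are orthonormal, the transpose is the inverse, so R^T performs
// the reverse rotation and can stand in for the body -> navigation DCM.
Eigen::Vector3d navToBody(const Eigen::Matrix3d &R, const Eigen::Vector3d &v_n)
{
    return R * v_n;  // navigation -> body
}

Eigen::Vector3d bodyToNav(const Eigen::Matrix3d &R, const Eigen::Vector3d &v_b)
{
    return R.transpose() * v_b;  // body -> navigation, since R^T == R^-1
}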