Global Visual Navigation

Hello there,
I’m working on a visual navigation solution for indoor on a companion computer. I want to track certain features on the ground, which are geo-referenced - I’m bascially faking a GPS sensor. Since I’m indoor (I’m underneath a metal roof) I don’t have any GPS signal. Unfortunately, I can only detect optical features when I’m a few meters above the ground and hence generate a valid global position signal only then.
Ultimately, I want to use Mission mode for routine inspection flights, and this is where the trouble is. Since I don't have a valid global position estimate on the ground, I cannot arm the drone. Is there any way to bypass this failsafe and rely on inertial navigation for the first few seconds until the drone is at height?
Thanks

This would be a good feature; there's no mechanism for this at the moment, AFAIK. You could try to work around it by publishing a fake "valid" VIO solution for a brief period until the real solution is ready. It might be worth bringing up on the dev call. I have interest in this down the road but don't have time to look into it in the near future.
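A minimal sketch of that workaround, in plain Python. Everything here is hypothetical glue logic, not a PX4 or MAVLink API: the idea is to hold the takeoff origin as a "valid" external position just long enough to arm, then hand over to the real optical fix once it appears, and stop publishing (so the normal failsafe can react) if the fix never shows up.

```python
import time

class FakeFixBridge:
    """Decide which external position to publish to PX4 each cycle.

    Hypothetical sketch: whatever actually sends the result to PX4
    (e.g. a MAVLink vision message or a DDS topic) is out of scope here.
    """

    def __init__(self, takeoff_origin, max_fake_s=10.0, now=time.monotonic):
        self.origin = takeoff_origin   # held position of the takeoff spot
        self.max_fake_s = max_fake_s   # give up dead reckoning after this
        self.now = now                 # injectable clock, eases testing
        self.start = now()
        self.have_real_fix = False

    def sample(self, optical_fix=None):
        """Return (position_to_publish, is_fake); position is None to stop."""
        if optical_fix is not None:
            # Real geo-referenced detection available: always prefer it.
            self.have_real_fix = True
            return optical_fix, False
        if not self.have_real_fix and self.now() - self.start < self.max_fake_s:
            # Hold the takeoff origin so the estimator reports a "valid"
            # position long enough to arm; inertial prediction covers
            # the climb until the optical system picks up features.
            return self.origin, True
        return None, True  # stop publishing; let the failsafe react
```

The injected clock is just there so the switchover timing can be exercised deterministically; in flight code you would use the real monotonic clock.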


Yes, this is also what I thought about, e.g. just taking the local NED position and returning it as a fake global solution. It just doesn't feel very elegant. Thanks for confirming.

You can use the MAVLink messages that update absolute position estimates inside PX4:

Or send messages from a companion computer over DDS to PX4 e.g. AuxiliaryGlobalPosition.msg, see this PR.


I wasn’t aware of this message. I think this is what I need. Thank you very much!

Another question just popped up: do I use this message whenever I have an external estimate? Isn't visual odometry the message to go with? Or does it really reset the EKF position (which VIO doesn't do)?

No, it doesn't reset the EKF position; both are used to fuse position measurements.
AuxiliaryGlobalPosition.msg also contains global fields, while VehicleOdometry (resp. VehicleVisualOdometry.msg) only has local fields. And it can also feed in pose measurements.

I see, thanks. So implementation-wise, I'd subscribe to local_position, turn it into a global position estimate, and feed it back in to get dead-reckoning-like behaviour when the optical system doesn't pick anything up. When it does, I'd send the detected global position, correct? Especially the first part feels clunky and not right, as the EKF should deliver this in its prediction step, but hey, if it works…
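The "turn local position into a global estimate" step boils down to re-projecting a NED offset onto a known geodetic origin of the local frame. A minimal flat-earth sketch, assuming the origin lat/lon of the local frame is known (which a geo-referenced feature map would provide); this approximation is fine over the few hundred metres of an indoor site, but a proper geodetic library would be needed for anything larger:

```python
import math

EARTH_RADIUS_M = 6371000.0  # mean Earth radius; adequate at indoor scales

def ned_to_global(origin_lat_deg, origin_lon_deg, north_m, east_m):
    """Convert a local NED offset (metres) to lat/lon (degrees).

    Flat-earth approximation around the local-frame origin: metres north
    map linearly to latitude, metres east to longitude scaled by
    cos(latitude).
    """
    lat = origin_lat_deg + math.degrees(north_m / EARTH_RADIUS_M)
    lon = origin_lon_deg + math.degrees(
        east_m / (EARTH_RADIUS_M * math.cos(math.radians(origin_lat_deg))))
    return lat, lon
```

The same function covers both branches described above: feed it the EKF's local position for the dead-reckoning case, or skip it entirely and send the detected global position directly when the optical system has a fix.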

Alternatively, is there any chance of removing the failsafe that requires a global position estimate? Could that be a potential alternative?