ORB-SLAM for GPS-Denied Navigation

Hi all,

I’m currently developing a drone capable of GPS-denied navigation. My physical setup is a CUAV V5 flight controller with a Jetson Nano connected over UART. The Jetson is running JetPack 4.4 (which is essentially Ubuntu 18.04) with ROS Melodic, the orb_slam2_ros node, and a monocular camera.
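
For reference, I’m bringing the FCU link up with the stock MAVROS launch file. The device path and baud rate below are just what I’d expect for the Nano’s UART header wired to TELEM2, so adjust them for your setup:

```
roslaunch mavros px4.launch fcu_url:=/dev/ttyTHS1:921600
```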

I’ve been testing the performance in SITL and want to make sure my setup is correct before I move on to optimising my fork of ORB-SLAM2. At the moment I have a downward-facing ROS camera on the Iris model, and orb_slam2_ros subscribes to this feed to do monocular SLAM. It then publishes directly to /mavros/vision_pose/pose. In effect this is really just visual odometry, since I’m not using the map/point cloud yet. The SLAM side works well in that it tracks quite consistently as the drone moves around the simulation.
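
For anyone wanting to replicate this, the bridge between the SLAM node and MAVROS is just a thin relay. A rough sketch of the idea (not my exact node; the SLAM topic name is an assumption based on orb_slam2_ros’s defaults):

```python
#!/usr/bin/env python
# Rough sketch of the relay between orb_slam2_ros and MAVROS. The input
# topic is an assumption -- orb_slam2_ros publishes a
# geometry_msgs/PoseStamped on something like /orb_slam2_mono/pose.
import rospy
from geometry_msgs.msg import PoseStamped

def on_slam_pose(msg):
    # MAVROS expects the pose in its ENU "map" frame; any rotation between
    # the SLAM world frame and ENU has to be applied here (or handled via
    # EKF2's "rotate external vision" option).
    msg.header.frame_id = "map"
    vision_pub.publish(msg)

rospy.init_node("slam_to_mavros_relay")
vision_pub = rospy.Publisher("/mavros/vision_pose/pose", PoseStamped,
                             queue_size=1)
rospy.Subscriber("/orb_slam2_mono/pose", PoseStamped, on_slam_pose)
rospy.spin()
```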

When I switch off GPS, PX4 switches over to vision mode. I’m using EKF2_AID_MASK = 321 as suggested here, but the quality of the navigation quickly degrades until PX4 is forced into a failsafe land.
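
For context, this is my reading of what 321 enables. The bit assignments below are from the parameter reference for this firmware generation, and they have changed between PX4 releases, so double-check against yours:

```python
# My reading of EKF2_AID_MASK = 321 on this firmware generation; verify
# the bit assignments against the parameter reference for your release.
AID_MASK_BITS = [
    (1,   "use GPS"),
    (2,   "use optical flow"),
    (4,   "inhibit IMU delta-velocity bias estimation"),
    (8,   "vision position fusion"),
    (16,  "vision yaw fusion"),
    (32,  "multi-rotor drag fusion"),
    (64,  "rotate external vision"),
    (128, "GPS yaw fusion"),
    (256, "vision velocity fusion"),
]
value = 321
print([name for bit, name in AID_MASK_BITS if value & bit])
# -> ['use GPS', 'rotate external vision', 'vision velocity fusion']
```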

I know that my monocular SLAM is not handling scale (that’s the problem I’m about to tackle), but would you expect the local, unscaled pose updates from a SLAM algorithm like this to benefit the EKF and PX4? Do you have any suggestions for places to start checking that coordinate frames, timing offsets, etc. are correct before I move on? Any help is much appreciated!

The EKF doesn’t do any scale correction, so you need to estimate scale from other sensors before the pose gets to the EKF: from acceleration via the accelerometers, probably improved further with things like the baro, or even GPS velocity if it’s available.
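
To make that concrete, the usual trick is to compare displacements from the unscaled SLAM track against the same displacements from a metric source and fit a single factor. A minimal sketch with illustrative names:

```python
import numpy as np

def fit_scale(slam_disp, metric_disp):
    """Least-squares scale s minimising ||s * slam_disp - metric_disp||^2.

    slam_disp:   (N, 3) displacements between pose pairs from monocular
                 SLAM (arbitrary scale).
    metric_disp: (N, 3) the same displacements from a metric source, e.g.
                 velocity integrated from the accelerometers, or baro
                 deltas for the vertical axis.
    """
    slam_disp = np.asarray(slam_disp, dtype=float)
    metric_disp = np.asarray(metric_disp, dtype=float)
    # Closed form: s = <slam, metric> / <slam, slam>
    return np.sum(slam_disp * metric_disp) / np.sum(slam_disp * slam_disp)

# Apply s to the SLAM position before publishing it to MAVROS,
# e.g. pose.position.x *= s, and likewise for y and z.
```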

Note that the vision velocity should be in the body frame, which makes things much easier in terms of frames.
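
Roughly: differentiate consecutive SLAM poses to get a world-frame velocity, then rotate it into the body frame with the current attitude before sending it on (if I remember correctly, the MAVROS vision speed plugin listens on /mavros/vision_speed/speed_twist). A sketch:

```python
import numpy as np

def world_to_body(q, v_world):
    """Rotate a world-frame velocity into the body frame.

    q:       unit quaternion (x, y, z, w) giving body orientation in world
    v_world: velocity vector in the world frame
    """
    x, y, z, w = q
    # Rotation matrix for q (maps body -> world); its transpose maps
    # world -> body.
    R = np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - z*w),     2*(x*z + y*w)],
        [2*(x*y + z*w),     1 - 2*(x*x + z*z), 2*(y*z - x*w)],
        [2*(x*z - y*w),     2*(y*z + x*w),     1 - 2*(x*x + y*y)],
    ])
    return R.T.dot(np.asarray(v_world, dtype=float))

# Usage idea: finite-difference two SLAM positions for a world-frame
# velocity, then rotate it with the pose's orientation quaternion:
#   v_world = (p_now - p_prev) / dt
#   v_body  = world_to_body(q_now, v_world)
```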