Dronecode SDK Position Hold with Motion Capture

It might seem simple, but it took us about a year to get there :slight_smile:. I am really happy with the result.

This is an example of position control with motion capture: the quad goes to (0, 0, 0.75), (0.2, 0, 0.75), back to (0, 0, 0.75), and lands. See the code here

This example runs in position control mode. You need to change a few lines in Dronecode SDK to allow flight without GPS.
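For readers who want the gist without opening the link, here is a rough sketch of what that setpoint sequence looks like with the MAVSDK/Dronecode SDK C++ offboard API. This is not the linked code: the 5-second waits are assumptions, the exact type names vary between SDK versions, and it assumes a connected "system" obtained as in the connection snippet later in this thread:

#include <mavsdk/mavsdk.h>
#include <mavsdk/plugins/action/action.h>
#include <mavsdk/plugins/offboard/offboard.h>
#include <chrono>
#include <thread>

using namespace mavsdk;

Action action{system};
Offboard offboard{system};

action.arm();
// NED frame: "down" is negative altitude, so 0.75 m above the origin is -0.75.
offboard.set_position_ned(Offboard::PositionNedYaw{0.0f, 0.0f, -0.75f, 0.0f});
offboard.start();
std::this_thread::sleep_for(std::chrono::seconds(5));
// Step 0.2 m horizontally, then return to the first setpoint.
offboard.set_position_ned(Offboard::PositionNedYaw{0.2f, 0.0f, -0.75f, 0.0f});
std::this_thread::sleep_for(std::chrono::seconds(5));
offboard.set_position_ned(Offboard::PositionNedYaw{0.0f, 0.0f, -0.75f, 0.0f});
std::this_thread::sleep_for(std::chrono::seconds(5));
offboard.stop();
action.land();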

Well, it was more than that: we also built the tracking system hardware and software, the Otus quadcopter, and all the software on the Raspberry Pi. I have visited multiple universities and research labs, and this timeframe is quite typical. That is actually why we designed this quadcopter. You can find all the setup information on GitLab.

Edit:
The next step is to investigate control with a neural network. We got it working in simulation.
The code for training the quad’s neural network is here.

@charles-blouin, thank you for graciously sharing those links. As a next step, I suggest you take a look at ORB-SLAM2 or SVO, especially if you are planning to fly where no external localization system (such as a VICON positioning system) is available.

Regarding neural networks, unless you are running them in the cloud, you are going to need something more powerful than a Raspberry Pi (e.g. a Jetson Nano or Xavier). I also suggest you look into Reinforcement Learning at the Machine Learning level before you dive deeper into neural networks. PPO is a good algorithm for UAVs to start with, and you can use OpenAI Gym to run simulations in conjunction with Gazebo. With Reinforcement Learning, it takes longer to teach the UAS since you don't have a data model to run supervised training. However, you already spent one (1) year getting your vehicle working with a motion capture system, so time is on your side :wink:

That’s still something that we should make easier, noted! :smiley:

@Al_B Thanks for the info. Have you considered using Google ARCore for your research? When flying indoors, if you are using pose data as input, your neural network does not need to be that big. In the original PPO paper, the network controlling the MuJoCo robots is two layers deep with 64 neurons in each layer, which is not that much even for a Raspberry Pi. In the reinforcement learning training I coded, the neural network has two layers of 32 neurons, and it works well. Trajectory generation can be much more natural and aggressive with a NN than with a linear controller.
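To illustrate how small such a policy is, here is a hypothetical sketch of a forward pass for a two-hidden-layer, 32-neuron network in plain C++. The 12 inputs, 4 outputs, tanh activation, and all names here are assumptions for illustration, not the trained network from the repo:

#include <array>
#include <cmath>

// Hypothetical sizes: 12 pose/velocity inputs -> two hidden layers of 32 -> 4 motor outputs.
constexpr int kIn = 12, kHidden = 32, kOut = 4;

// One dense layer with tanh activation.
template <int NIn, int NOut>
std::array<float, NOut> dense_tanh(const std::array<float, NIn>& x,
                                   const float (&w)[NOut][NIn],
                                   const float (&b)[NOut]) {
    std::array<float, NOut> y{};
    for (int i = 0; i < NOut; ++i) {
        float acc = b[i];
        for (int j = 0; j < NIn; ++j) acc += w[i][j] * x[j];
        y[i] = std::tanh(acc);
    }
    return y;
}

// Weights would come from the trained policy; zero-filled placeholders here.
float w1[kHidden][kIn]{}, w2[kHidden][kHidden]{}, w3[kOut][kHidden]{};
float b1[kHidden]{}, b2[kHidden]{}, b3[kOut]{};

// Full forward pass: roughly 12*32 + 32*32 + 32*4 = ~1500 multiply-adds.
std::array<float, kOut> policy(const std::array<float, kIn>& obs) {
    auto h1 = dense_tanh<kIn, kHidden>(obs, w1, b1);
    auto h2 = dense_tanh<kHidden, kHidden>(h1, w2, b2);
    return dense_tanh<kHidden, kOut>(h2, w3, b3);
}

At a few thousand floating-point operations per control step, even a control loop running at a few hundred hertz is well within a Raspberry Pi's budget.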

Is there an OpenAI Gym interface for Gazebo and the PX4 SITL software? Right now, I have to take the network trained with PyBullet and run it in the Dronecode SDK script, either with the simulator or with the real drone. It might be possible to launch the simulator and write an OpenAI Gym wrapper around Dronecode SDK, but I don't think it would be possible to reset the simulation with an API call from Dronecode SDK.

@JulianOes Dronecode SDK works quite well for indoor flights, although that is not its main purpose. One area where the firmware could be significantly improved for flying with vision or motion capture is the handling of tracking-data loss. Right now, when pose data from the external vision system is lost, the EKF2 estimator relies only on the IMU to estimate position. It takes a few seconds before the filter resets, and the position drifts by 10-15 meters in that time frame, which obviously causes a crash. The PX4 firmware (including the EKF2) could quickly recognize the lack of vision data and switch to a mode that does not use external vision. Unfortunately, when the use of an external vision system is activated, vision fusion is always used in EKF2, which means that simply switching to stabilized mode is not enough to save the drone in the event that vision data is interrupted.

I think the system would need to run two EKF2 filters at the same time: one that relies on vision and one that does not. A separate service could constantly check the quality of the vision data and decide when to switch to the vision-free mode.
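As a rough companion-side sketch of that watchdog idea (this is not something PX4 or the SDK provides today; the 500 ms timeout, the land response, and the function names are all assumptions):

#include <mavsdk/plugins/action/action.h>
#include <atomic>
#include <chrono>
#include <cstdint>
#include <thread>

using namespace mavsdk;
using namespace std::chrono;

// Updated by the code that forwards each mocap/vision sample to the autopilot.
std::atomic<int64_t> last_sample_ms{0};

void on_vision_sample() {
    last_sample_ms = duration_cast<milliseconds>(
        steady_clock::now().time_since_epoch()).count();
}

// Watchdog loop: if no vision sample arrives for 500 ms, command a landing
// before the EKF2 position estimate has time to drift away.
void watchdog(Action& action) {
    while (true) {
        const int64_t now_ms = duration_cast<milliseconds>(
            steady_clock::now().time_since_epoch()).count();
        if (now_ms - last_sample_ms > 500) {
            action.land();
            break;
        }
        std::this_thread::sleep_for(milliseconds(50));
    }
}

Landing blind on the IMU is only a stopgap, of course; the proper fix is the in-firmware fallback described above.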

A small issue I noticed is that there is no easy way to trim the vehicle when flying with a joystick or in offboard mode. This means that the drone does not take off perfectly straight when it is in position control mode.

The EKF2 sanity checks don't always reset. For example, if you start the drone while position data is not available, the drone can initialize in the wrong orientation. Even after starting the vision system and waiting a while, the 'High accel bias' error does not always clear, so you have to reset the PX4.

I don't know much about Dronecode SDK position hold with motion capture. I have just started to learn machine learning @CETPA, so I only have a little knowledge. This connection example may be useful:
Create a connection to a system. For example (basic code without error checking):
#include <mavsdk/mavsdk.h>
#include <chrono>
#include <thread>

using namespace mavsdk;
using std::chrono::seconds;
using std::this_thread::sleep_for;

Mavsdk mavsdk;
ConnectionResult conn_result = mavsdk.add_udp_connection();
// Wait for a system to connect via heartbeat (note: systems(), plural).
while (mavsdk.systems().size() == 0) {
    sleep_for(seconds(1));
}
// System got discovered; systems() returns shared pointers.
auto system = mavsdk.systems()[0];
Create an instance of FollowMe with the system:
#include <mavsdk/plugins/follow_me/follow_me.h>
auto follow_me = FollowMe{system};
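For completeness, a minimal hedged continuation (start() and stop() exist in the FollowMe plugin; the surrounding flow assumes the vehicle is already armed and airborne, with error handling omitted):

// Start follow-me mode, fly, then stop it when done.
FollowMe::Result result = follow_me.start();
// ... feed target locations while following ...
follow_me.stop();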