I’ve been trying to implement the pre-trained neural network presented here in Gazebo simulation.
I built it using the make px4_sitl_neural gz_x500 command, and then switched to “neural control” mode after taking off. I then tried to control it using the virtual joysticks in QGC.
The drone barely reacted when I sent commands; the pitch, roll, and altitude controls were very slow, and the yaw didn’t react at all. After looking at the logs, it seems like there are high-frequency vibrations on the motor outputs when sending commands.
Here are the logs from a simulation test flight: link
This is expected behaviour. Manual control was added last minute just for playing around in SITL: it only updates the waypoint, so it is very high-level control, and the network was also trained to be somewhat sluggish for safety. For actual flights/tests it's recommended to send waypoints from a separate module or similar. I also implemented a mc_nn_testing module that you can use for this:
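For illustration, the "separate module sending waypoints" idea could look roughly like the sketch below. This is not the actual mc_nn_testing module, just a hedged, self-contained example of the stepping logic such a module might implement: stream the current position setpoint each cycle and advance to the next waypoint once the vehicle is inside an acceptance radius. All class and parameter names here are hypothetical.

```python
import math
from dataclasses import dataclass


@dataclass
class Waypoint:
    # NED-style position in metres (z negative = above ground)
    x: float
    y: float
    z: float


class WaypointSender:
    """Hypothetical external waypoint module: it returns the position
    setpoint to publish this cycle, and advances to the next waypoint
    once the vehicle is within the acceptance radius."""

    def __init__(self, waypoints, acceptance_radius=0.5):
        self.waypoints = list(waypoints)
        self.acceptance_radius = acceptance_radius
        self.index = 0

    def current_setpoint(self):
        # The final waypoint is held indefinitely once reached.
        return self.waypoints[min(self.index, len(self.waypoints) - 1)]

    def update(self, pos):
        """Call at a fixed rate with the current vehicle position;
        returns the setpoint that should be sent this cycle."""
        wp = self.current_setpoint()
        dist = math.dist((pos.x, pos.y, pos.z), (wp.x, wp.y, wp.z))
        if dist < self.acceptance_radius and self.index < len(self.waypoints) - 1:
            self.index += 1
        return self.current_setpoint()


mission = WaypointSender([Waypoint(0, 0, -2), Waypoint(5, 0, -2)])
sp = mission.update(Waypoint(0, 0, -2))  # at the first waypoint -> advances
print(sp.x)  # 5
```

In a real PX4 setup the returned setpoint would be published as a trajectory/position setpoint at a fixed rate rather than printed; the point of the sketch is only the advance-on-arrival logic.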
It is also correct that the controller does not respond to yaw commands: the network is not trained to follow yaw setpoints, only position setpoints. The high-frequency vibrations happen sometimes and can be mitigated by tuning the thrust coefficient.
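To see why the thrust coefficient matters here, a common quadratic motor model relates rotor speed to thrust as T = c_t * w^2, which a controller inverts as w = sqrt(T / c_t). If the coefficient the controller assumes does not match the actual motor, every command comes out scaled and the feedback loop fights the error, which can show up as oscillation on the outputs. A minimal sketch of that scaling (the actual model used in the SITL setup may differ):

```python
import math


def motor_command(thrust_n, c_t):
    """Invert the quadratic thrust model T = c_t * w^2 to get the
    rotor speed needed for a desired thrust. c_t is the thrust
    coefficient being tuned."""
    return math.sqrt(thrust_n / c_t)


# If the assumed coefficient is off by a factor k, every command is
# scaled by 1/sqrt(k) relative to what the motor actually needs.
true_ct = 5.0e-6
wrong_ct = 2.5e-6  # coefficient underestimated by 2x (illustrative values)
w_true = motor_command(10.0, true_ct)
w_wrong = motor_command(10.0, wrong_ct)
print(w_wrong / w_true)  # sqrt(2), ~1.414: 41% overspeed on every command
```

This is only meant to build intuition for why nudging the coefficient toward the true value calms the outputs; the numbers are made up, not taken from the pre-trained network's configuration.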
The module is primarily meant to simplify the process of testing your own neural networks; the pre-trained network is more of a demo than a usable flight mode.
If you want more information you can find it here, and the full master's thesis will be public soon if you are interested in that.