An interesting group from Beijing, China has initiated an open-source autonomous aviation project for drones and passenger-carrying aerial vehicles called the Generalized Autonomy Aviation System (GAAS).
The team strives to provide a common infrastructure for drone autonomy and to accelerate the use of computer vision on drones. This is how they describe what they have accomplished and are working on:
"The project is continuously adding algorithms to increase the level of autonomy and to expand the capabilities of drones. Some of the existing features are static obstacle avoidance with stereo cameras, route navigation, 3D reconstruction, image detection, object tracking, precision landing on moving objects, image segmentation, and more. Though GAAS provides algorithms for a wide range of features, its goal is to act as an easy-to-use communication architecture between PX4 and algorithms. GAAS utilizes stereo cameras and Simultaneous Localization and Mapping (SLAM) technology for collision avoidance and navigation. Stereo cameras have become more and more popular in the drone community because of their affordable price and versatility. While the implementation of LiDAR on UAVs is typically limited to 16-line sensors, stereo cameras are capable of providing depth maps with much higher resolution for collision avoidance. Along with stereo cameras, SLAM technology gives drones the ability to be aware of their location in the map, thus allowing them to navigate to a specific landing location by themselves."
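The depth-from-stereo idea the description relies on can be sketched in a few lines: for a rectified stereo pair, depth is Z = f·B/d, where f is the focal length in pixels, B the baseline, and d the per-pixel disparity. A minimal NumPy sketch (the function name and the numbers are illustrative, not from GAAS):

```python
import numpy as np

def disparity_to_depth(disparity_px, focal_px, baseline_m):
    """Convert a disparity map (pixels) to a depth map (metres).

    For a rectified stereo pair: Z = f * B / d. Pixels with zero or
    negative disparity (no stereo match) are mapped to infinity.
    """
    disparity = np.asarray(disparity_px, dtype=np.float64)
    depth = np.full_like(disparity, np.inf)
    valid = disparity > 0
    depth[valid] = focal_px * baseline_m / disparity[valid]
    return depth

# Illustrative numbers: 700 px focal length, 12 cm baseline.
depth = disparity_to_depth([[35.0, 70.0], [0.0, 7.0]],
                           focal_px=700.0, baseline_m=0.12)
# 35 px of disparity -> 700 * 0.12 / 35 = 2.4 m from the camera.
```

This is also why stereo depth resolution degrades with range: depth is inversely proportional to disparity, so distant objects (small d) get coarse depth steps.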
A set of Quick Start tutorials is also available at https://github.com/generalized-intelligence/GAAS/tree/master/demo
Would love some general feedback from the community:
* Would you use it?
* Does this overlap with PX4/avoidance?
Very interesting package, thank you!
The most interesting part is the third tutorial, on vision navigation.
It is reported to be unstable and to need tuning of the drone's vision positioning (VP).
For VP tuning, the clever package might be used (https://github.com/CopterExpress/clever) - it lets you set up drone vision positioning on an ArUco map, which gives a good vision-based position estimate.
Once you have a real drone flying well on ArUco VP, it is easier to test and tune your SLAM navigation on real hardware.
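For context on how an ArUco map yields a position: the detector reports each marker's pose in the camera frame (a rotation and a translation), and inverting that rigid transform, combined with the marker's known place on the map, gives the camera (and hence drone) position. A minimal NumPy sketch of just the inversion step, with made-up numbers and assuming the marker frame is axis-aligned with the map frame (the actual detection would come from an ArUco library, not shown here):

```python
import numpy as np

def camera_position_on_map(R_cm, t_cm, marker_pos_map):
    """Recover the camera position on the map from one detected marker.

    A marker detector reports the marker pose in the camera frame:
        X_cam = R_cm @ X_marker + t_cm
    Inverting that rigid transform gives the camera position expressed
    in the marker frame; assuming the marker frame is axis-aligned with
    the map frame, adding the marker's known map position yields the
    camera position on the map.
    """
    cam_in_marker = -R_cm.T @ t_cm  # invert the rigid transform
    return np.asarray(marker_pos_map, dtype=float) + cam_in_marker

# Made-up example: marker seen 2 m straight ahead with no relative
# rotation, and known to sit at (1, 1, 0) on the map.
pos = camera_position_on_map(np.eye(3),
                             np.array([0.0, 0.0, 2.0]),
                             [1.0, 1.0, 0.0])
```

With several markers visible, averaging (or least-squares fusing) the per-marker estimates is what makes a full ArUco map more robust than a single tag.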
I checked it, and it is still in a very early state. Currently it does not overlap with PX4/avoidance; instead, it could use PX4/avoidance as a planner.
The project looks similar to what I wanted to do with Flyingros two years ago but never had time to improve: make good tutorials for many technologies (installation, usage, performance comparison) and use cases, and ultimately become a reference design that others could use as a base.
I have the following remarks:
- They are making ROS interfaces for the forked algorithms and seem to have done some work on them
- They are making something like a tutorial for multiple use cases (mapping, …); in the long run, it could become a good reference
- They seem inexperienced with ROS (strange frame_ids, absolute topics, strange file structure)
- The project forks algorithms and puts them all in a single repo => it will be hard to follow up; I would prefer one algorithm per repo (major issue)
- The commit messages are not very helpful; we don’t know what actually changed between the original algorithms and this version
- The control nodes are very simplistic - maybe not modular enough?
- Tutorials are not plug-and-play
- Tutorials are not using common resources but instead copies of the resources (major issue)
All the cons listed can be fixed, and I encourage the project to continue and show us what they can do.
Thanks for giving us such detailed and thoughtful feedback.
For the major issue you mentioned about common resources, could you please elaborate a bit more? Maybe give us an example of what resources you mean? We would love to improve our next tutorial.
For each tutorial, I would expect to have a launch file and, at best, one Python script. Everything else should come from the “software” folder.
That way, using the same resources, commander.py for example would grow, and you would have only one place to fix bugs.
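To make the "one place to fix bugs" point concrete, here is a minimal sketch of what a shared commander module could look like. The class and method names are invented for illustration; a real GAAS commander would publish setpoints over MAVROS rather than just record the command sequence:

```python
class Commander:
    """Minimal sketch of a shared flight-command helper.

    Each tutorial imports this one module instead of keeping its own
    copy, so a bug fix here propagates to every tutorial. This sketch
    only records the command sequence for inspection.
    """

    def __init__(self):
        self.log = []

    def takeoff(self, altitude_m):
        self.log.append(("takeoff", altitude_m))

    def goto(self, x, y, z):
        self.log.append(("goto", (x, y, z)))

    def land(self):
        self.log.append(("land",))

# A tutorial script then reduces to a few calls:
cmd = Commander()
cmd.takeoff(2.0)
cmd.goto(5.0, 0.0, 2.0)
cmd.land()
```

With this layout, each tutorial folder holds only a launch file and a short script like the one above, and all shared behaviour lives in the common module.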