I actually designed a computer vision platform (for use at ETH) before I designed the original Pixhawk, and it finally seems it’s time to bring that expertise to the open source space. Here is what the platform from 2010 looked like. Note the original “pxIMU”, which predates the Pixhawk design.
I’m looking for community feedback both in terms of requirements as well as in terms of mechanical frames that you think would be a good starting point. My goal is to design something that serves as a good development platform. However, it should also be very close to a market-ready commercial reference design. Pixhawk did hit both marks.
As with any design problem, there are competing constraints, so imho it’s worth discussing requirements in a bit more detail before getting caught up in the convenience of “solutioneering”.
I’ll put forward a couple of considerations:
The desired payload will remain approximately constant over the life of the type: electronics will shrink, but you’ll keep adding more. This means the SWaP (size, weight and power) required now can be locked in as a design requirement.
The customer/user base operates in two key environments: laboratory flying arenas and open parkland. The first dictates a maximum practical size, the second a minimum.
Cost vs quality. Commercial R&D endeavours generally have deeper pockets than universities, who in turn have deeper pockets than DIYers and students. Where is this pitched?
Modularity vs simplicity. A reference build ideally has a single, defined configuration. A test/dev platform needs to be easy to change. These requirements conflict in many ways. I think you’re aiming for the latter?
Agility vs efficiency. With careful motor selection this can be managed via voltage and prop choices, but usually requires a compromise. Testing different things often requires different performance.
Just food for thought.
Those are all great considerations. In the short term I see a need for a flexible platform with 500 to 700 mm motor-to-motor distance. Long term, I think we need both a small foldable vehicle and a larger, flexible vehicle.
Hi Everyone @LorenzMeier
Did you have in mind strategies to adopt when an obstacle is detected?
That could be a simple stop, which is good enough for low-altitude flight and safety.
Or, more advanced, “going around” the object and continuing the mission in auto mode? That second option is more difficult, and it may also require more sensors.
A big ++ to the initiative, and hope to be able to give a hand !
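The two strategies raised above (simple stop vs. going around) can be contrasted in a minimal sketch. This is purely illustrative: the sensor reading, thresholds, and function names are hypothetical, not part of any PX4 or avoidance API.

```python
# Illustrative sketch of the two avoidance strategies discussed above.
# All names and thresholds here are made up for illustration.

STOP_DISTANCE_M = 2.0  # brake when an obstacle is closer than this

def simple_stop(obstacle_distance_m, cruise_velocity_mps):
    """Strategy 1: stop and hold when an obstacle is too close.

    Needs only one forward distance measurement, which is why it is
    good enough for low-altitude flight and as a safety baseline.
    """
    if obstacle_distance_m < STOP_DISTANCE_M:
        return 0.0  # hold position until the path clears
    return cruise_velocity_mps  # path is clear, continue the mission


def go_around(distances_by_heading_deg, cruise_velocity_mps):
    """Strategy 2: pick the clearest nearby heading and keep moving.

    Even this toy version needs distances in several directions, which
    hints at why 'going around' requires more sensors than stopping.
    Returns a (heading_offset_deg, velocity) command.
    """
    best_heading = max(distances_by_heading_deg,
                       key=distances_by_heading_deg.get)
    if distances_by_heading_deg[best_heading] < STOP_DISTANCE_M:
        return (0, 0.0)  # boxed in everywhere: fall back to stopping
    return (best_heading, cruise_velocity_mps)
```

The fallback in `go_around` matters: a rerouting strategy still needs the simple stop underneath it for the case where no direction is clear.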
Heads up: I’ve spent quite a bit of time looking into suitable platforms, and we have come up with a 650-700 mm prop diameter which can support an NVIDIA TX2 or an Intel NUC. We are currently testing these with different sensors (stereo, others) and should be able to provide incremental updates here. Meanwhile, the Aero is working reasonably well for our avoidance tests.
My project is putting a ReadyToSky OmnibusF4 Pro on a DonkeyCar (RC truck) with a Raspberry Pi 3 (running ROS/Mavlink and Keras/TensorFlow) and using a wide-angle camera.
We have had good success with the Python code on the rPi3 driving a PCA9685 PWM driver board, but battery monitoring, a BEC to power the rPi, and an IMU are all external add-ons in that setup; the OmnibusF4 Pro has them built in, plus a gyro.
So I’m building ROS/MavLink with the OmnibusF4 Pro, but I’m having problems getting the firmware set up for Rover operation, and QGC won’t load the firmware, so I have to build from source (make omnibus_f4sd upload), but the latest pull is fighting me.
I’ll keep at it, but I had hoped I’d be spending more time on the ROS/MavLink side instead of on bootloader/firmware and ground control (QGC).
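For anyone curious about the PCA9685 setup mentioned above: the chip takes 12-bit on/off tick counts per channel, so the Python side mostly boils down to converting an RC pulse width into ticks. This is a hedged sketch of just that conversion; the actual I2C write would go through a driver library (e.g. Adafruit’s PCA9685 Python library), which is not shown here.

```python
# Sketch of the pulse-width math for driving steering/throttle through a
# PCA9685 from Python, as in the DonkeyCar setup above. Only the conversion
# from an RC servo pulse width to the chip's 12-bit tick count is shown;
# the I2C write itself belongs to a driver library.

PWM_FREQ_HZ = 60          # typical update rate for RC servos/ESCs
TICKS_PER_CYCLE = 4096    # PCA9685 resolution is 12 bits

def pulse_us_to_ticks(pulse_us, freq_hz=PWM_FREQ_HZ):
    """Convert a pulse width in microseconds to PCA9685 'off' ticks."""
    period_us = 1_000_000 / freq_hz          # ~16667 us at 60 Hz
    return round(pulse_us / period_us * TICKS_PER_CYCLE)

# 1500 us is the conventional RC neutral; 1000/2000 us are the extremes.
# With a driver object pwm, the write would look roughly like:
#   pwm.set_pwm(channel, 0, pulse_us_to_ticks(1500))
```

One advantage of a flight controller like the OmnibusF4 Pro over this external-PWM-board approach is exactly what the post describes: the PWM outputs, battery monitoring, and IMU come integrated instead of bolted on.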
Hi @soldierofhell , we are finalizing the vehicle and making a list of interested people who could give feedback for beta testing. If you’d be interested, and would potentially be buying more after beta testing (for academic / corporate research lab, for example), please get in touch with @JingerZ.