Computer Vision Reference Platforms

Hi All,

I actually designed a computer vision platform (for use at ETH) before I designed the original Pixhawk, and it finally seems it’s time to bring that expertise to the open source space. Here is what the platform from 2010 looked like. Note the original “pxIMU”, which predates the Pixhawk design.

I’m looking for community feedback, both on requirements and on mechanical frames that you think would be a good starting point. My goal is to design something that serves as a good development platform. However, it should also be very close to a market-ready commercial reference design. Pixhawk hit both marks.

Hardware-wise I’m currently looking at different options:

Frames:
https://www.foxtechfpv.com/foxtech-hover-1-quadcopter.html

High-end frames:
http://aeronavics.com/fleet/navi-3/

Any opinions?

As with any design problem, there are competing constraints, so imho it’s worth discussing requirements in a bit more detail before getting caught up in the convenience of “solutioneering”.
I’ll put forward a couple of considerations:

  1. The desired payload will remain approximately constant over the life of the type: electronics will shrink, but you’ll keep adding more. This means the SWaP (size, weight, and power) required now can be locked in as a design requirement.
  2. The customer/user base operates in two key environments: laboratory flying arenas and open parkland. The first dictates a maximum practical size, the second a minimum.
  3. Cost vs quality. Commercial R&D endeavours generally have deeper pockets than universities, which in turn have deeper pockets than DIYers and students. Where is this pitched?
  4. Modularity vs simplicity. A reference build ideally has a single, defined configuration. A test/dev platform needs to be easy to change. These requirements conflict in many ways. I think you’re aiming for the latter?
  5. Agility vs efficiency. With careful motor selection this can be managed via voltage and prop choices, but usually requires a compromise. Testing different things often requires different performance.

Just food for thought.

I’m very interested in the camera configurations you have in mind.

Also, I personally want to run Go code using GoCV/OpenCV on the vision processor.

Those are all great considerations. Short term, I see a need for a flexible platform with 500 to 700 mm motor-to-motor distance. Long term, I think we need a small foldable vehicle and a larger, flexible vehicle.

@LorenzMeier,
What is your idea about onboard computing and payload? Is anything defined yet, or is it also an open question?

Just collecting feedback and ideas right now. I was looking at or tinkering with the NVIDIA TX2, Intel UP², Snapdragon 820, and NXP i.MX.

Hi Everyone
@LorenzMeier
Did you have any strategies in mind for when an obstacle is detected?
That could be a simple stop, which is good enough for low-altitude flight and safety.
Or, more advanced, “going around” the object and continuing the mission in auto mode? The second is more difficult, and it may also require more sensors. (A rough sketch of the simple-stop case follows below.)
A big ++ to the initiative, and I hope to be able to give a hand!
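
To make the simple-stop option concrete, here is a minimal, untested sketch using pymavlink. The connection string and the 3 m threshold are placeholder assumptions, and the mode switch uses PX4’s Loiter (its Hold behaviour); adjust for your autopilot:

```python
from pymavlink import mavutil

# Connection string is a placeholder -- adjust for your telemetry link.
master = mavutil.mavlink_connection('udp:0.0.0.0:14550')
master.wait_heartbeat()

STOP_DISTANCE_CM = 300  # hypothetical threshold: brake if an obstacle is within 3 m

while True:
    # DISTANCE_SENSOR reports range in centimetres.
    msg = master.recv_match(type='DISTANCE_SENSOR', blocking=True)
    if msg.current_distance < STOP_DISTANCE_CM:
        # Simplest strategy: stop and hold position. pymavlink resolves the
        # mode name against the autopilot's mode mapping.
        master.set_mode('LOITER')
        break
```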

Cheers

NVIDIA also offers the TX2 in an industrial version called the TX2i. The differences between the two modules are attached as a PDF: Jetson_TX2_TX2i_Interface_Comparison_and_Migration.pdf (465.9 KB)

WDL sells a good carrier board for interfacing and wiring to the Jetson TX2(i); they are about $500 in quantities of one.

I have not evaluated StereoLabs’ Zed stereo camera, but it’s worth noting in this thread for measuring depth.
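
Whatever stereo head ends up on the platform, the depth-from-disparity step is cheap to prototype with OpenCV on a rectified pair. A rough sketch; the focal length and baseline below are made-up placeholders, not Zed calibration values:

```python
import cv2
import numpy as np

# Assumes an already-rectified grayscale stereo pair on disk.
left = cv2.imread('left.png', cv2.IMREAD_GRAYSCALE)
right = cv2.imread('right.png', cv2.IMREAD_GRAYSCALE)

# Semi-global matching; numDisparities must be a multiple of 16.
matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=5)
disparity = matcher.compute(left, right).astype(np.float32) / 16.0  # fixed point -> px

# depth [m] = focal length [px] * baseline [m] / disparity [px]
fx, baseline_m = 700.0, 0.12  # placeholder intrinsics
depth_m = np.where(disparity > 0, fx * baseline_m / disparity, np.inf)
```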

VIO on Snapdragon:

Obstacle avoidance on Aero:

Heads up: I’ve spent quite a bit of time looking into suitable platforms, and we have come up with a 650-700 mm prop-diameter design which can support an NVIDIA TX2 or an Intel NUC. We are currently testing these with different sensors (stereo and others) and should be able to provide incremental updates here. Meanwhile, the Aero is working reasonably well for our avoidance tests.

My project is putting a ReadyToSky OmnibusF4 Pro on a DonkeyCar (RC truck) with a Raspberry Pi 3 (running ROS/MAVLink and Keras/TensorFlow) and using a wide-angle camera.

We have had good success with the Python code on the rPi3 driving a PCA9685 PWM board, but battery monitoring, a BEC to power the rPi, and an IMU are all external add-ons; the OmnibusF4 Pro has those built in, plus a gyro.
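
For anyone replicating that, the PWM side is only a few lines with Adafruit’s CircuitPython PCA9685 driver. A stripped-down sketch; the channel numbers and pulse widths are my assumptions, not DonkeyCar defaults:

```python
import board
import busio
from adafruit_pca9685 import PCA9685

# PCA9685 on the Pi's default I2C bus; 50 Hz is the usual RC servo/ESC rate.
i2c = busio.I2C(board.SCL, board.SDA)
pca = PCA9685(i2c)
pca.frequency = 50

def set_pulse_us(channel, pulse_us):
    # duty_cycle is 16-bit; one 50 Hz period is 20000 microseconds.
    pca.channels[channel].duty_cycle = int(pulse_us * 0xFFFF / 20000)

set_pulse_us(0, 1500)  # steering servo centred (channel 0 is an assumption)
set_pulse_us(1, 1000)  # ESC at idle (channel 1 is an assumption)
```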

So I’m building against ROS/MAVLink and the OmnibusF4 Pro, but I’m having problems getting the firmware set up for Rover operation. QGC won’t load the firmware, so I have to build from source (make omnibus_f4sd upload), but the latest pull is fighting me.

I’ll keep at it, but I had hoped I’d be spending more time on the ROS/MAVLink side instead of on the bootloader/firmware and ground control (QGC).

Are there any updates on good computer vision platforms?

I guess the best are the NVIDIA TX2 and the Intel NUC?

Anyone looking forward to the new NVIDIA Jetson Nano?

@jkflying It would be good to provide high-level updates here. We are working on a dev kit; a high-level summary is here:

@LorenzMeier, @jkflying, any updates in this topic?

Hi @soldierofhell, we are finalizing the vehicle and putting together a list of interested people who could give feedback during beta testing. If you’d be interested, and would potentially buy more units after beta testing (for an academic or corporate research lab, for example), please get in touch with @JingerZ.

p200.pdf (2.7 MB)

AMOV Lab from China sells this as an RTF development platform (it’s TX2-based).

All their documentation is in Chinese right now, but the diagrams are in English: Onboard (mission) computer for unmanned systems — p200-wiki 0.0.1 documentation

Thanks,
Do you know if/when an English translation will be available? Is the SDK from the English diagram available somewhere?

BTW, why the TX2? Are there any NN or other computations done on the GPU?