Using the Raspberry Pi as a host for the Intel NCS2 (Myriad X) is becoming increasingly popular for running neural inference on the 'edge'. 

However, the data path when using an NCS2 is inefficient because the camera data has to flow through the Pi first to reach the NCS2. This results in a ~5x reduction in performance relative to what the NCS2 (the Myriad X) is capable of. 

So what we're making is a carrier board for the Raspberry Pi Compute Module 3+, which exposes dual-camera connections directly to the Myriad X. The Myriad X then connects to the Raspberry Pi largely in the same manner it does inside the NCS2 (so much code can be reused). 

This allows a few things: 
1. The video data path now skips the Pi, eliminating that CPU load (which is a LOT). 
2. The hardware stereo-depth capability of the Myriad X can now be used. 
3. An estimated ~5x performance improvement on MobileNet-SSD object detection.

Example Object Detection and Absolute XYZ Position

Here's an example of what you can do with the Myriad X carrier board.  In this case, it's object detection plus real-time absolute-position estimation of those objects (x, y, z from the camera, in meters), including dz speed for people.
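
To make the x, y, z part concrete, here's a minimal sketch of the usual approach: take the center pixel of a detection, look up its depth, and project it through a pinhole camera model. The intrinsics (fx, fy, cx, cy) below are placeholder values; the real ones come from the camera's calibration.

```python
# Sketch: convert a detection's pixel location + depth into absolute
# X, Y, Z (meters, camera frame) using a pinhole camera model.
# fx, fy, cx, cy are placeholder intrinsics -- use your camera's calibration.

def pixel_to_xyz(u, v, depth_m, fx=615.0, fy=615.0, cx=320.0, cy=240.0):
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return x, y, depth_m

# Example: center of a detected bounding box, 2.3 m away.
xmin, ymin, xmax, ymax = 300, 180, 420, 400   # detection box in pixels
u, v = (xmin + xmax) / 2, (ymin + ymax) / 2
x, y, z = pixel_to_xyz(u, v, 2.3)
print(f"Object at X={x:.2f} m, Y={y:.2f} m, Z={z:.2f} m")
```

Tracking z over successive frames is what gives the dz speed mentioned above.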

This is actually run using the NCS1.  Why? The data-path issues of using the NCS2 with a Pi mean it doesn't provide a significant advantage over the NCS1, and it results in higher overall latency because those bottlenecks fill up.  So you have to drop the framerate even lower, or deal with the really high latency.

This is actually part of what prompted the AiPi board.  The NCS1 over USB on the Pi isn't hurt that much: it goes from, say, 10 FPS on a desktop to around 6 FPS on the Raspberry Pi.

The NCS2, however, goes from ~60 FPS (and potentially faster) on a desktop to ~12 FPS on a Pi, because of the data-path issues (latency, redundantly having to process video streams, etc.).  (And AiPi will fix that!)

Because the Myriad X hardware-depth engine can't be used until the AiPi board is out (we're working feverishly), these examples use the Intel RealSense D435, which has very similar hardware depth built in.  With the AiPi board, there's no need for this additional hardware.  :-)
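
For reference, here's roughly how the depth used in these interim examples can be read from the D435 with Intel's pyrealsense2 library. The stream settings and the pixel queried are just placeholder choices.

```python
import pyrealsense2 as rs

# Start a depth stream on the D435 (640x480 @ 30 FPS is a placeholder choice).
pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
pipeline.start(config)

try:
    frames = pipeline.wait_for_frames()
    depth_frame = frames.get_depth_frame()
    # Distance in meters at an example pixel (e.g., a detection's center).
    dist_m = depth_frame.get_distance(320, 240)
    print(f"Depth at image center: {dist_m:.2f} m")
finally:
    pipeline.stop()
```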

Closed Source Components and Initial Limitations

Just like the drivers for, say, some graphics cards, some portion of the Myriad X support will be binary-only.

So part of our effort will be writing the support code for loading these binaries, and also producing these binaries based on customer interest.

That said, interacting with the Myriad X should follow the same flow one already uses with the NCS2 through OpenVINO, or a very similar one (open source, modifiable, etc.).  
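
For a sense of what that existing flow looks like, here's a rough sketch of running a MobileNet-SSD IR model on the Myriad (NCS2) with OpenVINO's Inference Engine Python API. The model file names and input image are placeholders, and the exact attribute names vary a bit between OpenVINO releases.

```python
import cv2
import numpy as np
from openvino.inference_engine import IECore

# Load a pre-converted IR model (file names are placeholders).
ie = IECore()
net = ie.read_network(model="mobilenet-ssd.xml", weights="mobilenet-ssd.bin")
exec_net = ie.load_network(network=net, device_name="MYRIAD")

input_blob = next(iter(net.input_info))
n, c, h, w = net.input_info[input_blob].input_data.shape

# Preprocess a frame to the NCHW layout the model expects.
frame = cv2.imread("example.jpg")
blob = cv2.resize(frame, (w, h)).transpose(2, 0, 1).reshape(n, c, h, w)

# Run inference on the Myriad X and get the raw detection output.
results = exec_net.infer(inputs={input_blob: blob})
```

The intent is that this same style of code keeps working when the Myriad X sits on the AiPi board instead of inside an NCS2.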

Specifically (but potentially not limited to this), the hardware implementation and firmware for the stereo-depth calculation are non-public, so the stereo-depth capability will be provided as a binary download.

If/as more functions come up which are not covered by OpenVINO (and hopefully OpenVINO soon simply covers stereo depth as well, which would be great), we can respond by implementing them and releasing binaries.

All that said, OpenVINO is a very promising platform, and is quickly adding great capabilities - so there's a chance that these initial limitations may disappear soon.

Be the First to Know

Sign up for the AiPi email list and you’ll be the first to know when it ships (which also means that you’ll get access to the best price).
