Jun 23, 2022

All About DepthAI

DepthAI
Computer Vision

Let's start from the beginning. What is DepthAI?

DepthAI is a Spatial AI platform that allows robots and computers to perceive the world like a human can: what objects or features are, and where they are in the physical world. It focuses on the combination of five key features: artificial intelligence (AI), computer vision (CV), depth perception, performance (high resolution and high frame rate), and embedded design (small, low-power solutions).

The DepthAI API gives you deep access to the OpenCV AI Kits. It is architected to allow arbitrary pipelines of neural inference (AI), depth processing, and computer vision (CV) functions in a node/graph system, which lets you build complex combinations of AI, depth, and CV, as shown below:

Person being tracked while walking around room
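For a taste of what that node/graph system looks like in code, here is a minimal sketch of a DepthAI API pipeline that just streams the color camera's preview to the host. It assumes the depthai Python package and OpenCV for display; it is an illustration of the pipeline/node idea, not the multi-model example shown above.

```python
import cv2
import depthai as dai

# Build a pipeline: each node does one job (camera, NN, depth, ...),
# and nodes are linked into a graph that runs on the OAK device itself.
pipeline = dai.Pipeline()

cam = pipeline.create(dai.node.ColorCamera)
cam.setPreviewSize(300, 300)
cam.setInterleaved(False)

xout = pipeline.create(dai.node.XLinkOut)
xout.setStreamName("preview")
cam.preview.link(xout.input)  # wire the camera's preview output into the XLink node

with dai.Device(pipeline) as device:  # upload the pipeline to the device and start it
    queue = device.getOutputQueue("preview", maxSize=4, blocking=False)
    while True:
        cv2.imshow("preview", queue.get().getCvFrame())
        if cv2.waitKey(1) == ord("q"):
            break
```

More complex pipelines, like the person/pose/palm cascade described below, are built the same way: more nodes, linked into a larger graph.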

Using the DepthAI API, you can see how it's possible to run multiple neural models and CV functions together to achieve the desired functionality:

  • To be able to read hand signals from far away.

The standard palm-detector model can only detect hands up to around 2 meters away, yet in the example above the user is 5 meters from the camera. So how is he doing it?

  • He used the DepthAI API to construct a sequence of neural models and computer vision functions that first finds the region of the person, then the person's pose, and from there feeds the region of the hand into the palm-detector model. The detected palm is then fed into the hand-pose model, and the key points from the hand-pose model can in turn be fed into sign-language detection or other hand-signal detection models.

This kind of full, deep access is great for a talented programmer who wants to do sophisticated things like this, and the DepthAI API provides exactly that.

But along with that granular control come many steps and a lot of configuration just to get something running, which you might not care about (at least at first). That's where the DepthAI SDK comes in.

What is the DepthAI SDK?

The DepthAI SDK provides abstracted setup and helpful high-level functions that let you accomplish your goals faster. It packages a slew of API calls into convenient functions, making it easier to do common tasks like setting up a camera, running a neural network, or getting depth output.

The DepthAI SDK operates above the DepthAI API, and both are open source. So you can always start with the DepthAI SDK, use it to get going, and then look at which API calls a given SDK manager wraps to eventually build your own customizations.

So with just the code below, you're up and displaying a color video stream from the OAK-D-Lite:
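Something along these lines, using the depthai-sdk OakCamera helper (a sketch; the exact SDK calls have evolved over time, so treat the names as illustrative):

```python
from depthai_sdk import OakCamera

# Open the first OAK device found, stream the color camera, and show it on screen.
with OakCamera() as oak:
    color = oak.create_camera('color', resolution='1080p', fps=30)
    oak.visualize(color)      # built-in visualizer window
    oak.start(blocking=True)  # run until the window is closed
```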

Photo of installed OAK camera

And with nearly as few lines, you're up and running with real-time AI on OAK-D-Lite:
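Again as a sketch with the OakCamera helper, this time adding a pretrained object detector; 'mobilenet-ssd' here refers to the MobileNet-SSD model from the DepthAI model zoo, and as above the exact calls may differ by SDK version:

```python
from depthai_sdk import OakCamera

# Same color stream as before, plus an object detector running on the device.
with OakCamera() as oak:
    color = oak.create_camera('color')
    nn = oak.create_nn('mobilenet-ssd', color)  # model zoo name; runs on the OAK's VPU
    oak.visualize(nn)                           # draws detections on top of the frames
    oak.start(blocking=True)
```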

Photo of the OAK camera realtime object detecting

So this lets you try things out yourself, in code, with as little hassle as possible. Don't want to code? You can use the DepthAI example program here (no coding needed), or the Cortic Technology CEP here (also no coding needed).

How Does DepthAI SDK Relate to Cortic Technology's CEP?

Think of the DepthAI SDK as a low-code approach, whereas Cortic Technology's CEP (below) is a no-code approach.

Gif of code block building

So you can choose the approach that best fits your experience level when building things with the OAK-D-Lite.

In summary, here are three ways to build with the OAK-D-Lite:

  • The DepthAI API: full, granular control over node/graph pipelines of AI, depth, and CV.
  • The DepthAI SDK: a low-code layer on top of the API that gets you going quickly.
  • Cortic Technology's CEP: a no-code, visual approach.

We are also continuously adding to our DepthAI SDK resource library, so be sure to check back often for updates. Also feel free to reach out to us directly with ideas for functionality you’d like to see!


Erik Kokalj, Director of Applications Engineering