Jun 23, 2022

3D Object Manipulation

Machine Learning
Computer Vision
DepthAI
Sometimes there's nothing better than looking back at all the cool ways our customers have adopted our tech. Seriously, you guys are great.

Imagine, if you will, any movie where the actor is using their hands to manipulate some kind of 3D image floating in space. Maybe they're zooming in on a planetary surface, flipping through criminal profiles, or calculating some kind of advanced equation. Those must feel like silly scenes to film, right? There's nothing actually there; the actors are just pretending to see some advanced technology that lets them interact with a computer using only their hands. Something like that is far off in the realm of science fiction, right? Wrong. The building blocks are here now. Our customer, Cortic Technology, made it happen. Bam. The future.
We didn't design our cameras with the above in mind. But there it is. And thanks to Cortic Technology it's now available and easy for you to just run. Check out the repo here: https://github.com/cortictechnology/vision_ui

The fact that nearly all of our tech is open source is what made this possible. Because the DepthAI API behind our cameras is open, our community can run with it (integrating it, modifying it, improving it) in ways we could never have anticipated.
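To give a feel for how approachable that open API is, here's a minimal sketch using the depthai Python package. It builds a pipeline that streams the color camera's preview frames to the host; the preview size is just an arbitrary choice for this example:

```python
import cv2
import depthai as dai

# Build a pipeline: a color camera node whose preview
# frames are streamed to the host via an XLinkOut node.
pipeline = dai.Pipeline()

cam = pipeline.create(dai.node.ColorCamera)
cam.setPreviewSize(640, 360)  # arbitrary size for this sketch
cam.setInterleaved(False)

xout = pipeline.create(dai.node.XLinkOut)
xout.setStreamName("rgb")
cam.preview.link(xout.input)

# Run the pipeline on an attached OAK device and display frames.
with dai.Device(pipeline) as device:
    q = device.getOutputQueue(name="rgb", maxSize=4, blocking=False)
    while True:
        frame = q.get().getCvFrame()  # numpy BGR frame
        cv2.imshow("rgb", frame)
        if cv2.waitKey(1) == ord("q"):
            break
```

Now, let's look at another example: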
Here we're watching a user control a flight simulator with simple head gestures. The possibilities are endless. And since this is also open source, you can download it, run it on your own camera, and riff on it with your own ideas: https://github.com/gespona/depthai-unity-plugin

But we're not done. Continued thanks to Gerard Espona, who wrote this Unity plugin for the DepthAI API as well as the official OAK For Unity plugin, a native plugin for Windows, Linux, and macOS. And best of all? It works with all OAK models.

OAK For Unity main features:
  • High-Level API with predefined/pre-trained, ready-to-use models. Just drag and drop a prefab. Includes: face detector, head pose estimation, body pose (MoveNet), hand recognition, facial emotion, face mesh, eye gaze, face age and gender estimation, background removal, object detector (Tiny YOLO, Mobile SSD), and access to the IMU.
  • Low-Level API. Create your own pipelines to run custom models inside Unity by dragging and dropping nodes (see the sketch after this list).
  • Virtual OAK Unity camera (aka OAK Digital Twin). Run pipelines on OAK using virtual Unity camera images. Great for validating your models in virtual environments and for generating synthetic datasets.
  • Tutorials: everything from training a custom object detector on synthetic datasets with automatic labeling, to converting an ML-Agents RL model to run on OAK.
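The Low-Level API bullet above mirrors the node-graph model of a DepthAI pipeline. As a rough sketch of that idea, here it is in the Python API rather than Unity C# (the .blob path is a placeholder for your own compiled model):

```python
import depthai as dai

pipeline = dai.Pipeline()

# Camera node feeding a custom neural network node.
cam = pipeline.create(dai.node.ColorCamera)
cam.setPreviewSize(300, 300)  # must match the model's input size
cam.setInterleaved(False)

nn = pipeline.create(dai.node.NeuralNetwork)
nn.setBlobPath("path/to/your_model.blob")  # placeholder: your compiled model
cam.preview.link(nn.input)

xout = pipeline.create(dai.node.XLinkOut)
xout.setStreamName("nn")
nn.out.link(xout.input)

with dai.Device(pipeline) as device:
    q = device.getOutputQueue("nn")
    result = q.get()  # raw NNData; parsing depends on your model
    print(result.getAllLayerNames())
```

The Unity plugin exposes this same camera-to-node-to-output wiring, just as drag-and-drop nodes instead of code.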
In addition to gaming interaction and the full power of OAK, the third point here is especially significant. Why? Because it enables synthetic training, which is incredibly useful for all sorts of applications. Just imagine how important training data is for defect detection on a production line. Defects should be rare: one in 1,000 parts, at most. So how do you get the 1,000 defect images you need to train your model? At that rate, you'd have to produce a million parts. Or you can make the images artificially: in a single day you can render 100,000 images of common defects, as opposed to waiting through a year or more of production. It doesn't matter what you're playing; efficiency is truly the name of the game.
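For a rough sense of that arithmetic (using only the illustrative numbers from this paragraph):

```python
# Back-of-envelope: real vs. synthetic defect images.
defect_rate = 1 / 1000   # at most one defective part per 1,000 produced
images_needed = 1_000    # defect images needed to train the model

# Real data: parts you'd have to produce to capture enough defects.
parts_required = images_needed / defect_rate
print(f"parts to produce: {parts_required:,.0f}")  # 1,000,000

# Synthetic data: images you can render in a single day instead.
synthetic_per_day = 100_000
print(f"synthetic images per day: {synthetic_per_day:,}")
```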

Erik Kokalj
Director of Applications Engineering