Jun 3, 2022
Stacking Up with Luxonis
Who we are and what we do
In honor of our first blog post, it seems only fitting to introduce ourselves. We are Luxonis, and it’s our mission to make robotic vision simple and accessible. How do we do it? With a full-stack approach: starting with our revolutionary hardware, integrated with readily available, open-source software, and reaching all the way to the cloud.

But what, you may ask, is robotic vision? It’s a technological landscape ready to explode. It’s the spark right before it hits the powder. The automation of the world’s workforce is a runaway train, and the most significant hurdle left to clear is teaching machines to see and engage with the world the way we do.

And what’s needed to make robotic vision work on such a broad scale? Five things: it must be easily embeddable, it must be high-performing, it must allow robotic systems to understand spatial positioning, it must include artificial intelligence so those systems can learn and improve, and finally it must allow those systems to take action and make decisions. Let’s start unpacking the stack.
Embedded
The most recognizable part of our business is our hardware: our ever-growing line of cameras like the OAK-D Pro, OAK-D-PoE, and OAK-D-IoT-75. In a world full of stacking stones, these cameras are the Legos. Why? Everything needed for robotic vision is embedded into the camera right out of the box, in the form of the Myriad™ X chipset for original models and the Keem Bay chipset for current and future models. Low profile and energy efficient, our camera technology can be used either as-is or integrated into any number of other platforms.
Performant
While accessible hardware is great, nothing would come of it if it weren’t also performant. Devices from Luxonis include anywhere from one to eight cameras of up to 48 megapixels, all providing a combination of high frame rates and low latency. Fast data means fast robots. It sounds great to have robotic systems that can assess the ripeness of fruit on a vine or take a contact-free approach to measuring a sick patient’s vitals, but if those processes take minutes instead of microseconds, they will never be practical to deploy.
Spatial
Next comes the ability for a robotic system to understand its place in the world. No, robots don’t have existential crises (yet, at least); what we’re talking about here is the ability to do things like track objects or perceive spatial depth. Humans do this intuitively: we’ve evolved to be able to run across a field, head turned to one side, eyes tracking a ball in mid-flight, and reach out to catch the ball at just the right moment. Nearly all of what we’re sensing during this series of actions happens automatically, but getting a robot to do anything even close to this is much more complicated. Luckily, Luxonis has already done the heavy lifting: through fully compatible, MIT-licensed open-source software, the OAK line of cameras, when properly trained, excels at object detection and spatial positioning.
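The core geometry behind stereo depth perception is simple enough to sketch. The following is a generic illustration of depth-from-disparity (not Luxonis's DepthAI API), and the focal length, baseline, and disparity values are illustrative assumptions, not actual OAK-D calibration parameters:

```python
# Minimal sketch of stereo depth-from-disparity, the principle stereo
# cameras use to recover distance: the same point lands at slightly
# different pixel columns in the left and right images, and the shift
# (disparity) shrinks as the point moves farther away.

def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth Z = f * B / d for a rectified stereo pair.

    focal_px     -- focal length in pixels
    baseline_m   -- distance between the two cameras in metres
    disparity_px -- horizontal pixel shift of the point between images
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# Hypothetical numbers: 800 px focal length, 7.5 cm baseline, 40 px disparity.
z = depth_from_disparity(800.0, 0.075, 40.0)
print(z)  # 1.5 (metres)
```

Nearby objects produce large disparities and precise depth; distant ones produce tiny disparities, which is why stereo depth accuracy falls off with range.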
Artificial Intelligence (AI)
Spatial location is great, but we can’t stop there. It’s all well and good to identify that our ball is X distance away and moving closer to us, but how do we even know that it’s a ball? How do we tell the difference between a ball, a bird, a cloud, or a pair of shoes slung over a telephone wire? Again, humans do all this automatically; after all, we would have disappeared a long time ago if we couldn’t differentiate between a lion and a log. Robotic systems can do this too, but we need to teach them through a process called image training. Whether taking a simpler approach by placing bounding boxes around complete objects, or a more granular one such as semantic segmentation, we can label images for robotic systems until they can eventually identify novel instances all on their own.
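For the bounding-box approach, training hinges on scoring how well a model's predicted box matches the hand-labeled one. The standard metric is intersection-over-union (IoU); this is a generic sketch of it, not part of Luxonis's tooling, and the box coordinates are made up for illustration:

```python
# Sketch of intersection-over-union (IoU): the area where two bounding
# boxes overlap, divided by the area they jointly cover. 1.0 means a
# perfect match, 0.0 means no overlap. Boxes are (x_min, y_min, x_max, y_max).

def iou(box_a, box_b):
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Corners of the overlap rectangle (if any)
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = ((ax2 - ax1) * (ay2 - ay1)
             + (bx2 - bx1) * (by2 - by1) - inter)
    return inter / union if union else 0.0

# Two 10x10 boxes offset by 5 px overlap in a 5x5 patch:
print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # 25 / 175 ≈ 0.1428...
```

Semantic segmentation uses the same idea, but computed per pixel class rather than per box, which is what makes it the more granular (and more labor-intensive) labeling approach.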
Computer Vision (CV)
After all this we finally arrive at holistic computer vision. Building upon spatial awareness and the more foundational recognition and segmentation, robotic systems can be given even more complex tasks: estimating motion and future position, extracting specific features from a scene, or identifying edges or overlap between multiple objects. The boundaries of machine learning fall only at the edge of human imagination and ingenuity, and at Luxonis we are continuously pushing past that edge.
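The simplest version of "estimating motion and future position" is worth making concrete. Given two timestamped 3D observations of an object (the kind a depth-sensing camera produces), a constant-velocity assumption lets you extrapolate where it will be next. This is a minimal sketch of that idea, with invented numbers; real trackers layer filtering (e.g. Kalman filters) on top:

```python
# Sketch of constant-velocity motion prediction: estimate velocity from
# two timestamped (x, y, z) observations, then extrapolate linearly.

def predict_position(p0, t0, p1, t1, t_future):
    """Linearly extrapolate a 3D position from two observations."""
    dt = t1 - t0
    velocity = tuple((b - a) / dt for a, b in zip(p0, p1))
    # Project forward from the most recent observation
    return tuple(b + v * (t_future - t1) for b, v in zip(p1, velocity))

# Hypothetical ball seen at (0, 0, 5) m at t=0 s and (1, 0, 4) m at t=0.5 s;
# where should a catcher expect it at t=1.0 s?
print(predict_position((0, 0, 5), 0.0, (1, 0, 4), 0.5, 1.0))  # (2.0, 0.0, 3.0)
```

That one-step lookahead is what turns passive perception into action: the system reaches for where the ball will be, not where it was.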
What's Next?
Robotic vision is, and will increasingly be, a core contributor across countless industries and technologies. In agriculture, farmers are learning how to reduce pesticide use by training robots to eliminate pests. In construction, safety managers are protecting workers by training robots to identify when machinery is active. In conservation, environmentalists are training robots to determine if poachers are active in the area. How could robotic vision improve your world? Stay tuned to us here as we venture deeper into our exploration of the technology and its applications. Glad to have you with us.
Stuart Moore
Communications Director