Jun 23, 2022

What Makes a Robot?

Computer Vision
Machine Learning
If we ask any random person walking down the street, “what makes a robot?”, what do you think we’d hear? A variety of responses, to be sure, but some we can definitely anticipate. References to Isaac Asimov’s famous Three Laws of Robotics would likely come up (along with some of the interesting discussions that have sprung from them), as well as other nods to popular media: R2-D2, WALL-E, the Terminator…we could go on.

But what about drilling down to the more practical? We’d likely hear about the physical characteristics of a robot. A robot needs a head, it needs arms, it needs legs, it needs feet. In other words, it needs to look and behave, in some way, like a human (or, in a broader sense, like an animal).

But would those people be right? People have a tendency to think about other beings (or machines, or however we want to refer to the stereotypical robot) within the context of themselves, when the reality is that robots are entirely distinct from us. They can take any form that best suits the work we need them to do. Maybe the robots from the movies could use some reimagining.

The Man in the Machine

Can robots have a head, arms, or legs? Sure. There are plenty of instances where those features are extremely useful: a robotic arm in a manufacturing plant, robotic legs on a cute little MiniPupper, or a head that mimics social expressions for robots used in therapeutic treatments of dementia. In other cases, however, any of these features would be entirely impractical.

So let’s ask again: what makes a robot? Perhaps a better way to answer that question is in terms of function, not form. Imagine a robot in the role of a cashier at a grocery store. Which is more practical: a human-size robot that scans and bags each item, or an overhead scanner that uses visual search and object detection to track the items in the cart from the beginning of the shopping experience? Or consider a traffic cop. Should we have a humanoid robot waving people along during their morning commutes? Or would it make more sense to have a reimagined, portable traffic light that uses segmentation and pedestrian detection to accurately (and safely) assess the conditions around it and trigger a signal that keeps traffic flowing optimally?

At Luxonis, we’re lucky to work with visionary customers who are sprinting past the idea of robotics in its conventional form. They understand that a TrappedRobot will often maximize efficiency and minimize expense. Why bother with an arm when a human partner is already doing the heavy lifting? Why bother with legs when there isn’t anywhere to go? The path forward is clear.
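To make the traffic-light idea concrete, here is a minimal sketch of the decision logic such a stationary robot might run on each camera frame. Everything here is hypothetical for illustration — the `Detection` class, labels, and thresholds are made up and are not Luxonis code or any specific model’s output format:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    """One detection from a hypothetical pedestrian-detection model."""
    label: str         # e.g. "person", "car"
    confidence: float  # model confidence, 0.0 to 1.0

def choose_signal(detections, min_confidence=0.5):
    """Toy policy for a stationary traffic robot: if any pedestrian is
    confidently detected near the crossing, hold vehicle traffic;
    otherwise keep it flowing."""
    pedestrians = [
        d for d in detections
        if d.label == "person" and d.confidence >= min_confidence
    ]
    return "stop_vehicles" if pedestrians else "allow_vehicles"

# One frame's detections: a car and a pedestrian in view.
frame_detections = [
    Detection("car", 0.92),
    Detection("person", 0.81),
]
print(choose_signal(frame_detections))  # stop_vehicles
```

A real deployment would of course smooth decisions over many frames and add fail-safes, but the core point stands: the robot’s whole job reduces to acting on what it sees.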

Looking Inwards

If, as the name suggests, a TrappedRobot is stuck in place, what specific function makes it useful? In almost all cases, there is one inescapable sense that a robot must have to engage with the world the way people do: vision. Luckily, that’s our specialty.

By combining embedded, high-performing spatial AI and computer vision with advanced stereo depth and color sensing, we transform all the complexity of human vision into accessible, ready-to-integrate cameras. It’s our slogan, after all: robotic vision made simple. And we do it in the form that works for you. Whether via standard options like our OAK-D W or OAK-D W PoE, or via customized systems designed in tandem with our world-class engineering team, we offer solutions that bring robotic vision to the masses.

Truly, it is the ability to see the world the way people do that holds the key to robots of all shapes and sizes. What people do without thinking can be an incredibly complex series of tasks for a robot to mirror. But with Luxonis, transforming a simple machine into a seeing, learning robot is within anyone’s reach. So, what makes a robot? Vision does.
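What does “combining detections with stereo depth” actually buy you? A 2D detection tells a robot *what* is in view; a depth reading at that pixel tells it *where*, via the standard pinhole camera model. The sketch below shows that back-projection. The camera intrinsics (`fx`, `fy`, `cx`, `cy`) are illustrative made-up values, not those of any specific OAK camera:

```python
def pixel_to_xyz(u, v, depth_mm, fx, fy, cx, cy):
    """Back-project a pixel (u, v) plus a stereo-depth reading into a
    3D point in camera coordinates (millimetres), using the pinhole
    camera model:
        X = (u - cx) * Z / fx
        Y = (v - cy) * Z / fy
        Z = depth
    fx, fy are focal lengths in pixels; (cx, cy) is the principal point.
    """
    z = depth_mm
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return (x, y, z)

# A detection centred at pixel (400, 300), measured at 1.5 m depth,
# with illustrative intrinsics for a 640x480 image:
x, y, z = pixel_to_xyz(400, 300, 1500, fx=450.0, fy=450.0, cx=320.0, cy=240.0)
```

This is the basic geometry behind spatial AI: pair every detection with a depth measurement and the robot knows not just that a person or object is present, but how far away and in which direction.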
Want to take a deeper dive? Be sure to check out our full range of DepthAI Documentation.

We want to hear from you! Join our Discord!

For support: [email protected]
For sales: [email protected]

Stuart Moore
Communications Director