Feb 19, 2023

DepthAI ROS Driver Release

Simplifying ROS-based software development for OAK
Tutorials
Development
Computer Vision
Guide
DepthAI
Stack of OAK cameras

At Luxonis, we are committed to creating robotic vision solutions that help improve the engineering efficiency of the world. With our stereo depth OAK cameras, robust DepthAI API, and growing cloud-based platform, RobotHub, our goal is to provide a start-to-finish ecosystem that uncomplicates innovation.

And, with that in mind, we’re pleased to announce the release of our newest DepthAI ROS driver for OAK cameras, which is part of our ongoing effort to make the development of ROS-based software even easier.

With the DepthAI ROS driver, nearly everything is parameterized using ROS2 parameters/dynamic reconfigure, thereby providing you with even greater flexibility when it comes to customizing your OAK to your exact use case. Currently, you can find over a hundred different values to modify!

There are tons of ways for this driver to make your life easier, some of which include: 

  • Several different “modes” in which you can run the camera, depending on your use case. You can, for example, use the camera to publish spatial NN detections, publish an RGBD pointcloud, or simply stream data straight from the sensors for host-side processing, calibration, or a modular camera setup.
  • Set parameters, such as exposure and focus, for individual cameras at runtime (see the sketch after this list).
  • Set IR LED power for better depth accuracy and night vision.
  • Experiment with onboard depth filter parameters.
  • Enable encoding to save bandwidth by publishing compressed images.
  • Easily integrate a multi-camera setup, with an example provided.
  • Docker support for easy integration: build an image yourself or use one from the Docker Hub repository.
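
To give a feel for how this works, here is a minimal rclpy sketch that changes one of those parameters on a running driver node, referenced in the list above. It assumes the node is named `oak` and that the IR laser-dot brightness parameter is called `camera.i_laser_dot_brightness`; the actual node and parameter names on your setup can be checked with `ros2 param list`.

```python
# Minimal sketch: set the IR laser-dot brightness on a running camera node.
# The node name ("oak"), the parameter name, and the value 800 are assumptions
# for illustration -- run `ros2 param list /oak` to see the real names.
import rclpy
from rclpy.node import Node
from rclpy.parameter import Parameter
from rcl_interfaces.srv import SetParameters


def main():
    rclpy.init()
    node = Node('oak_param_client')

    # Every ROS 2 node exposes a <node_name>/set_parameters service.
    client = node.create_client(SetParameters, '/oak/set_parameters')
    if not client.wait_for_service(timeout_sec=5.0):
        node.get_logger().error('Camera node not available')
        return

    request = SetParameters.Request()
    request.parameters = [
        Parameter('camera.i_laser_dot_brightness', value=800).to_parameter_msg()
    ]

    future = client.call_async(request)
    rclpy.spin_until_future_complete(node, future)
    node.get_logger().info(f'Result: {future.result().results}')

    node.destroy_node()
    rclpy.shutdown()


if __name__ == '__main__':
    main()
```

The same change can be made from the command line with `ros2 param set /oak camera.i_laser_dot_brightness 800`.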

Having everything available as a ROS parameter also gives you the ability to reconfigure the camera on the fly by using the `stop` and `start` services. You can run low-quality streams and switch to higher quality when you need it, or switch between different neural networks depending on what data your robot needs.
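
As a rough sketch of what that looks like from code (assuming the driver node is named `oak` and that the `stop`/`start` services use the standard `std_srvs/srv/Trigger` type; `ros2 service list -t` will show the actual names and types on your setup):

```python
# Minimal sketch: pause and resume the camera through the driver's stop/start
# services, e.g. to apply new parameters or switch neural networks.
import rclpy
from rclpy.node import Node
from std_srvs.srv import Trigger


def call_trigger(node, service_name):
    """Call a std_srvs/Trigger service and return its response."""
    client = node.create_client(Trigger, service_name)
    if not client.wait_for_service(timeout_sec=5.0):
        raise RuntimeError(f'{service_name} not available')
    future = client.call_async(Trigger.Request())
    rclpy.spin_until_future_complete(node, future)
    return future.result()


def main():
    rclpy.init()
    node = Node('oak_reconfigure_client')

    # Stop the pipeline, reconfigure it as needed...
    print(call_trigger(node, '/oak/stop'))
    # ...then start it again with the new configuration.
    print(call_trigger(node, '/oak/start'))

    node.destroy_node()
    rclpy.shutdown()


if __name__ == '__main__':
    main()
```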

Here is an example of adjusting LED power for better depth quality:

GIF of adjusting LED power

Here is another example demonstrating manual control of RGB camera parameters at runtime:

GIF of adjusting camera settings

Here we see an example of RGBD depth alignment:

GIF showing RGBD depth alignment
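
Once the aligned RGBD pointcloud is enabled, it is published as a standard `sensor_msgs/msg/PointCloud2`, so consuming it from your own node takes only a few lines. A minimal sketch, assuming the topic is `/oak/points` (check `ros2 topic list` for the actual name in your configuration):

```python
# Minimal sketch: subscribe to the depth-aligned point cloud from the driver.
# The topic name "/oak/points" is an assumption for illustration.
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import PointCloud2


class PointCloudLogger(Node):
    def __init__(self):
        super().__init__('pointcloud_logger')
        self.create_subscription(PointCloud2, '/oak/points', self.on_cloud, 10)

    def on_cloud(self, msg: PointCloud2):
        # width * height gives the number of points in an organized cloud.
        self.get_logger().info(f'Received cloud with {msg.width * msg.height} points')


def main():
    rclpy.init()
    rclpy.spin(PointCloudLogger())
    rclpy.shutdown()


if __name__ == '__main__':
    main()
```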

A multi-camera setup with OAK-D PRO, OAK-D W, and OAK-D Lite, with one camera running RGBD and MobileNet spatial detection, one running YOLO 2D detection, and one running semantic segmentation.

Multi camera setup with OAK-D PRO, OAK-D W and OAK-D Lite
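
A multi-camera launch file for a setup like this can be as simple as including the driver's camera launch file once per device. The sketch below is an illustration only: the launch file name (`camera.launch.py`) and the `name`/`mx_id` arguments are assumptions based on the driver's examples, so check the repository's multi-camera example for the exact interface.

```python
# Minimal sketch of a multi-camera launch file: two driver instances, each with
# its own name and device. Launch file and argument names are assumptions.
import os

from ament_index_python.packages import get_package_share_directory
from launch import LaunchDescription
from launch.actions import IncludeLaunchDescription
from launch.launch_description_sources import PythonLaunchDescriptionSource


def generate_launch_description():
    driver_share = get_package_share_directory('depthai_ros_driver')
    camera_launch = os.path.join(driver_share, 'launch', 'camera.launch.py')

    cameras = [
        # (node name, device MX ID) -- replace with the IDs of your devices.
        ('oak_front', 'REPLACE_WITH_MXID_1'),
        ('oak_rear', 'REPLACE_WITH_MXID_2'),
    ]

    return LaunchDescription([
        IncludeLaunchDescription(
            PythonLaunchDescriptionSource(camera_launch),
            launch_arguments={'name': name, 'mx_id': mx_id}.items(),
        )
        for name, mx_id in cameras
    ])
```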

And here we see an example of Real-Time Appearance Based (RTAB) Mapping of an interior room:

Real-Time Appearance Based (RTAB) Mapping

The DepthAI ROS driver is being developed on ROS2 Humble and ROS1 Noetic (with versions for other distros coming soon), allows you to take full advantage of the ROS composition/nodelet mechanisms, and currently supports detection (2D and spatial) and semantic segmentation networks, with lots more on the way as we continue refining and enhancing it.
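
Composition means the driver can run as a component inside the same process as your own nodes, so large image messages are passed by pointer rather than serialized between processes. A minimal launch sketch (the `depthai_ros_driver::Camera` plugin name is an assumption here; `ros2 component types depthai_ros_driver` lists the registered components):

```python
# Minimal sketch: load the driver component into a container alongside your
# own processing components for intra-process communication.
from launch import LaunchDescription
from launch_ros.actions import ComposableNodeContainer
from launch_ros.descriptions import ComposableNode


def generate_launch_description():
    container = ComposableNodeContainer(
        name='oak_container',
        namespace='',
        package='rclcpp_components',
        executable='component_container',
        composable_node_descriptions=[
            ComposableNode(
                package='depthai_ros_driver',
                plugin='depthai_ros_driver::Camera',
                name='oak',
            ),
            # Add your own components here, e.g. an image_proc rectify node.
        ],
        output='screen',
    )
    return LaunchDescription([container])
```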

To discover more about the DepthAI ROS driver, including walkthroughs for how to install and get started, visit our repository. You can also track progress on new features and the roadmap there.

And as always, if you need help or have any questions, you can reach us at [email protected] or on our Discord.


Adam Serafin, ROS Developer