Jul 15, 2022

Blender Data Augmentation

How to make your machine learning smarter, not harder

Perhaps the most central element of any machine learning process is data. The more examples we have to help an AI understand exactly what to look for, the more accurate it's going to be. Want to train a robot to recognize honeybees? Only have an image of the Cheerios guy? Sorry, out of luck. But if we instead have 10,000 images of bees in bright light, low light, mid-flight, in their hive, on a red flower, on a yellow flower? Now we're talking.

But wait, you might ask, isn't it labor-intensive to capture all those images? Why, yes it is. And for certain projects, that upfront commitment to obtaining effective images can't be avoided. But for others it can, or at least it can be made to sting a little less.

Blenders Do More Than Make Smoothies

If anyone wants a delicious green smoothie recipe, I will gladly share an amazing one. Seriously: life-changing breakfast, available on request right here.

Want to know what else is life-changing? Using Blender (with a capital 'B') to supplement your image library, potentially saving countless hours (and dollars) not only in image collection, but also in image labeling and machine learning time.

Our dedicated engineering team is constantly hard at work to help make our customers’ lives easier, and recently put together an example video showing how 3D models can help augment datasets. Take a look at how Blender can accelerate the image collection process.

Merging the Real with the Virtual

Let’s unpack what we’re seeing here.

Taking a real-world image as the baseline, it's first necessary to match perspectives between that image and any synthetic components. In this case we're using an open-source application called fSpy. Next, the synthetic components (in our example, a pothole) are overlaid onto the original. These components can be moved, resized, and reshaped as much as needed, allowing hundreds, thousands, even millions of images to be generated vastly faster than they could be collected normally. Can you imagine trying to take new images of 10,000+ potholes?!
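To make the "move, resize, and reshape" step concrete, here is a minimal sketch of how the randomized variants might be parameterized. Inside Blender you would apply these values to the pothole mesh via the `bpy` Python API before each render; the function name and all the numeric ranges below are illustrative assumptions, not values from the video.

```python
import random

def sample_pothole_params(n, seed=0):
    """Generate n randomized placement parameters for a synthetic
    object (e.g. a pothole mesh). In Blender these would be applied
    to the object via bpy before each render; here we only produce
    the numbers. All ranges are illustrative assumptions."""
    rng = random.Random(seed)  # seeded, so datasets are reproducible
    params = []
    for _ in range(n):
        params.append({
            # jitter position on the road plane (metres)
            "x": rng.uniform(-2.0, 2.0),
            "y": rng.uniform(0.0, 10.0),
            # uniform scale factor for the mesh
            "scale": rng.uniform(0.5, 1.5),
            # rotation about the vertical axis (degrees)
            "rot_z": rng.uniform(0.0, 360.0),
        })
    return params

# one parameter set per rendered image
variants = sample_pothole_params(1000)
```

Because the generator is seeded, the same script regenerates the exact same synthetic dataset, which is handy when you need to re-render after tweaking materials or lighting.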

And it doesn't stop there. Once image augmentation is complete, Blender can also be employed to accelerate the labeling process. Instead of spending time manually going through each image and labeling the key elements to train the model on (or paying someone else to do it), Blender does it for you, since it conveniently already knows exactly what and where the synthetic component is.
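The reason labels "fall out for free" is that the renderer knows the object's 3D geometry and the camera, so a 2D bounding box is just a projection. A minimal sketch of that idea, using a simple pinhole camera model (the intrinsics and cube dimensions below are assumptions for illustration, not taken from the example video):

```python
def project_bbox(corners_3d, fx, fy, cx, cy):
    """Project the 3D corners of a synthetic object (camera-space
    coordinates, +Z pointing away from the camera) through a pinhole
    camera and return the 2D bounding box (xmin, ymin, xmax, ymax)
    in pixels. This is the geometry behind automatic labeling: the
    scene already knows exactly where the object is."""
    xs, ys = [], []
    for X, Y, Z in corners_3d:
        u = fx * X / Z + cx  # horizontal pixel coordinate
        v = fy * Y / Z + cy  # vertical pixel coordinate
        xs.append(u)
        ys.append(v)
    return min(xs), min(ys), max(xs), max(ys)

# a one-metre cube centred five metres in front of the camera
corners = [(x, y, z)
           for x in (-0.5, 0.5)
           for y in (-0.5, 0.5)
           for z in (4.5, 5.5)]
bbox = project_bbox(corners, fx=800, fy=800, cx=640, cy=360)
```

In a real Blender pipeline you would read the object's world-space bounding box and the camera matrices from the scene rather than hard-coding them, then write the projected boxes straight into your training annotations.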

Not to say that there aren’t some limitations. Of course, someone skilled and knowledgeable in Blender is the minimum requirement here. Perspective matching can also be challenging, as the nature of both the baseline image and the synthetic components can make alignment difficult in some cases.

So, while we're not talking about a magic bullet here (dataset development will always take time and resources to do effectively), using Blender as a supplement can be absolutely huge when it comes to maximizing efficiency. And that's what we're all about.

Do you have any shortcuts or tools you use to make image dataset management easier? Tell us about it! Or do you have any ideas for examples or training you’d like to see from us? Let us know!

Join our Discord here.

Bradley Dillon
COO