Why It All Started
Keeping people safe
We've had a lot of folks reach out after building an automated safety system with the OAK line and say: "This thing is perfect for health/safety applications."
That's no coincidence. Trying to keep people safe is what started the whole operation. We were trying to make a safety system that needed to be:
- Embedded (meaning small, low-power, and fast-booting - something you could fit into a bike light)
- Performant (high-resolution (e.g. 13MP), high frame-rate, and multi-camera)
- Spatial Sensing (to get real-time distances and relative trajectories)
- Artificial Intelligence (to know what those things are - to prevent false alarms)
- Computer Vision (to tie it all together, encode 4k video, etc.)
We found that although the component pieces existed, they didn't quite add up to the sleek, trim solution we were looking for. Sure, you could buy a RealSense camera for depth, a GoPro for 4k video, an NCS2 for AI processing, a Texas Instruments DSP (Digital Signal Processor) for the CV processing, and a Pi to tie it all together, but you would end up with a product that looks like this:
It was big, ugly, cost a bunch of money, used my wife's milk crate, consumed my whole collection of zip ties, and in the end, ran at a whopping 3 FPS. But boy, did it work! Surprisingly well, as a matter of fact. Unfortunately, there was no way a scalable product was going to be made from that hot mess.
What we actually needed was something that looked like this, that did all of that:

Tiny/embeddable, depth-sensing at 200 FPS, artificial intelligence at 30+ FPS, 500 million pixels/second of computer vision, and 4k encoding - all on-camera. That's OAK-D-Lite. And it brings us tantalizingly close to being able to make our safety system. We're not there yet, and OAK-D-Lite is not the safety system - but it's what will push us over the finish line (safely).
When I mentioned in the video that we had not yet built what we (and thousands of others) needed, I meant that we too were blocked by cost. Building a smart bike safety system off of a $200 camera is tough. But, building it off of a $79 camera becomes tenable. For us and a host of other important applications - life, safety, and otherwise.
Distracted driving kills people. A lot of people.
And it's actually the whole reason any of this exists. Before several people in our network were run over by distracted drivers, we were planning on bringing video games to real life in a mission to make video games athletic (like here). But in each meeting where we were trying to recruit for this effort, the conversation was derailed by a story of tragedy.
- The Hackerspace founder in our area had been killed by a distracted driver - clipped in the back of the neck by a mirror.
- One of the top computer science guys in our area had nearly the same thing happen. Luckily he lived, but was left with a traumatic brain injury after a protracted fight to stay alive.
- Two others - the luckiest of the bunch - ended up with broken backs, shattered hips and femurs, and were bed-ridden for over nine months before they could even have surgery.
And it's not just our circle. This is so prevalent now that even the famous and well-known are making headlines in terrible ways:

All of these incidents happened to people who were riding bikes. They were struck by distracted drivers. Not by bad people, just by people who texted at the wrong time and veered off course by some tiny fraction of a degree.
What if computer vision could see this happening in advance, and do something about it? Even a small swerve could have prevented these accidents. In the fatal case here in Colorado, the person was killed by three inches of a mirror. Less than a hand’s width of a swerve could have saved a life.
There are two parts to a potential solution to this problem: the computer vision/AI/depth sensing necessary to know when a car is about to hit you, and some sort of mechanism to get the distracted driver to swerve/brake away from you.
It may sound odd - given that our focus is a computer vision device - but if there's no way to get the distracted driver's attention, all the CV/AI/etc. in the world doesn't matter, so the first thing we set out to solve was getting the driver's attention.
Speak Like A Car
Jonathan Lansey had nearly the exact same experience as us - back in 2015 - way before it was possible to have this sort of computer vision power even on a mainframe, let alone on a bike. So he set out to see if the second part of the solution - getting the driver's attention and eliciting a swerve/brake - was feasible. His team proved it was possible, and launched a product that did just that: Loud Bicycle. With Loud Bicycle, you now have the means to elicit the brake and/or swerve that can often stop the accident as it's happening.

But you still have to know when a car is going to hit you. In Jonathan’s examples, those who were hit by distracted drivers saw it coming but just couldn't get the driver's attention. He and his team at Loud Bicycle solved that extremely important part.
However, it turns out that in almost 90% of these accidents, the person riding the bike didn't see the car that was going to hit them. They only knew they were going to be hit after they were in the air. After talking with Jonathan very early on (right after the accidents in our circle), we learned two very important things:
- It is possible to get a distracted driver to swerve/brake away from you.
- It's extremely effective because it's a “highly sensory-compatible input for the condition of driving” as behavioral scientists may say.
Knowing When You're Going to be Hit - Before You Are
Imagine you had someone riding backwards on your bike, looking out for danger, who would notify you in a progressively more aggressive way as the danger increased. Haptics (how fighter pilots are alerted when missiles are fired at them) would convey the risk level to you, while progressively-brighter LEDs try to get the driver's attention. If those didn't work, and the system sensed an accident was imminent, a car-horn honk (just like Loud Bicycle) would sound to get the driver's attention and elicit the swerve/brake.
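Mechanically, that progressive escalation is a time-to-collision problem: estimate how long until impact from successive depth readings, then map that estimate to an alert level. A minimal sketch in plain Python - the function names and thresholds here are illustrative placeholders, not the actual system:

```python
def time_to_collision(d1_m, d2_m, dt_s):
    """Estimate seconds until impact from two successive distance
    readings of an approaching object (e.g. from a stereo depth map)."""
    closing_speed = (d1_m - d2_m) / dt_s   # m/s; positive when approaching
    if closing_speed <= 0:
        return float("inf")                # holding distance or pulling away
    return d2_m / closing_speed

def alert_level(ttc_s):
    """Map time-to-collision to the haptics -> LEDs -> horn escalation.
    Thresholds are made-up placeholders, not tuned values."""
    if ttc_s > 6.0:
        return "none"
    if ttc_s > 3.0:
        return "haptics"       # warn the rider
    if ttc_s > 1.5:
        return "bright_leds"   # try to catch the driver's eye
    return "horn"              # imminent: elicit the swerve/brake

# A car 20 m back closes to 18.5 m over 0.1 s (~54 km/h closing speed):
ttc = time_to_collision(20.0, 18.5, 0.1)   # ~1.23 s
level = alert_level(ttc)                   # "horn"
```

A real implementation would smooth the noisy depth readings, track the car's lateral trajectory (is it actually on a collision course?), and add hysteresis so the alerts don't flicker between levels.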
As a bonus, it could stream a real-time rear-view mirror to the smartphone mounted on your handlebars - all while recording the incident. See what such a system might look like here:

Our North Star
The entire OAK ecosystem - with OAK-D-Lite being the original and most important piece - is guided by the North Star of making this life-saving "Commute Guardian." We even rendered it very early on, thanks to some donated time by our friends at Hatch Duo:


As it turns out, solving this problem is one of the more complex things you can do with OAK. It literally requires every function running in parallel, at low latency, with extreme reliability. Good thing we can do it all:
- Platform motion estimation/trajectory estimation. This leverages many onboard computer vision features such as object tracking, disparity depth, and warp/dewarp (to remove the back-and-forth pedaling motion while producing an accurate trajectory of the bike).
- Real-time AI fused with depth. Both object detection (the approaching car/truck) and semantic segmentation (to pick out the position of the side mirror).
- 4k Video Encoding. So you can have video evidence of what happened.
- High-resolution/framerate. Both are critical to reacting quickly, and to automatically OCRing the plate and/or other salient details.
- Embedded. Small footprint, low power draw, and fast boot to make all this feasible on a bike.
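As a rough illustration of the "real-time AI fused with depth" bullet: given a detector's bounding box for the approaching car and an aligned depth map, the distance to the car can be taken as a robust statistic over the box. A minimal NumPy sketch - the array shapes, box format, and function name are assumptions for illustration, not the actual OAK API:

```python
import numpy as np

def distance_to_detection(depth_m, box):
    """Median depth inside a detection's bounding box.

    depth_m: HxW depth map in meters, aligned to the detector's frame.
    box: (x_min, y_min, x_max, y_max) in pixel coordinates.
    The median rejects depth holes/outliers better than the mean.
    """
    x0, y0, x1, y1 = box
    roi = depth_m[y0:y1, x0:x1]
    valid = roi[roi > 0]          # zero often encodes "no depth data"
    return float(np.median(valid)) if valid.size else None

# Toy example: a 4x4 depth map with a "car" at 12 m in the lower-right.
depth = np.full((4, 4), 30.0)
depth[2:4, 2:4] = 12.0
print(distance_to_detection(depth, (2, 2, 4, 4)))  # -> 12.0
```

Feeding that distance into a time-to-collision estimate, frame after frame, is what turns a detection into an actionable warning.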
We’re proud of how far we’ve come.
