Round the Robin
Semantic Segmentation to Monocular Depth
Over the past year and more, our founder has dedicated his engineering efforts to embedded machine learning, with particular attention to computer vision. The work has covered disparity-depth implementations, object detection, object classification, object tracking, and integrations thereof.
Along the way, we’ve paid keen attention to the chipsets and processor architectures actively emerging in the space, and can quickly and accurately narrow down embedded solutions to meet cost/power/performance trades. See a brief (albeit by now slightly outdated) list of embedded AI/ML/CV chipsets below:
The experience garnered over that time implementing and training neural networks and coupling them with CV algorithms pairs well with our longer-standing experience developing traditional embedded systems. This allows us to adeptly explore the changing landscape of embedded ML/CV parts and design a system which meets your and your customers’ needs.
“No industry can afford to ignore artificial intelligence”
-MIT Technology Review
Our founder saw in 2017 (well, was nudged when his colleague quit his amazing job to go 100% into AI) that we were about to see a wave as big as or bigger than the PC revolution leave a huge trail of creative destruction across many industries. And we’re seeing this now, mainly with companies poised to leverage big data in a cloud-centric way: Google leveraging AI to catch heart disease early (through retina scans, here), Deep Patient predicting schizophrenia disturbingly (and inexplicably) accurately (more here), and advertising moving almost entirely to AI tracking techniques for hyper-personalization (here). And these are just a few examples; the cloud-based applications are mind-numbingly vast.
And the `edge` applications are even more numerous. These are applications where the latency of the ‘send it to the cloud’ mindset is unacceptable, and/or where the device can’t stop working when the internet goes down, the AWS instance crashes, or any number of other issues happen between the device and the AI system running in the cloud.
And only in the past ~6 months has implementing these `edge` applications been feasible in a meaningful way. So we’re seeing the beginning of the biggest wave of machine learning - where it can now be applied anywhere, anytime, and most importantly - in real time.
This capability - and specifically the discovery that implementing neural processors is relatively easy - has led to an arms race to produce the first and best neural inference engines. And this has led to a lot of confusion as to which is the `best` to use. Long story short, we’ve spent a TON of time studying what’s out there, and can help you choose.
But first, a slight tangent to explain why implementing neural processors is easy (setting aside the typical fabrication headaches of making any IC): in the most popular form, neural processors are simply an ASIC implementation of an ‘embarrassingly wide’ (to quote Google regarding their Edge TPU) set of multiply-accumulates (MACs) to implement the convolutions (which are the core of CNNs - convolutional neural networks), plus some other bits (ReLU, etc.) that go between the many layers (hence ‘deep’ in deep learning) of these CNNs.
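To make the ‘wide set of MACs’ concrete, here’s a minimal Python sketch (function names and shapes are ours, purely illustrative): a valid 2D convolution written as explicit multiply-accumulates, plus the ReLU non-linearity that sits between layers. A neural processor simply replicates the innermost multiply-add thousands of times in parallel silicon.

```python
import numpy as np

def conv2d_mac(image, kernel):
    """Valid 2D convolution as explicit multiply-accumulates (MACs).

    Each output pixel is one accumulation of kernel_h * kernel_w
    products -- the operation an ASIC MAC array parallelizes.
    """
    ih, iw = image.shape
    kh, kw = kernel.shape
    oh, ow = ih - kh + 1, iw - kw + 1
    out = np.zeros((oh, ow))
    for y in range(oh):
        for x in range(ow):
            acc = 0.0
            for ky in range(kh):
                for kx in range(kw):
                    acc += image[y + ky, x + kx] * kernel[ky, kx]  # one MAC
            out[y, x] = acc
    return out

def relu(x):
    """The cheap non-linearity applied between CNN layers."""
    return np.maximum(x, 0.0)
```

Software runs those loops serially; the hardware win is that every output pixel’s accumulation is independent, so the whole grid can be computed at once.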
Some others have taken a more novel approach, converting convolution to sparse-matrix multiplications and then converting networks so that inference can be done that way. Gyrfalcon Technology is a good example, and this approach nets them the highest operations/Watt of any available neural processor. But it comes at a cost: it can’t run just any neural network - only specific ones which can be converted to its approach. And yet others are implementing inference using light, like the aptly-named Lightmatter, here.
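For a feel of how convolution becomes matrix multiplication at all, here is the standard dense version of the idea - the classic im2col lowering. This is a sketch of the general technique only (names are ours), not Gyrfalcon’s proprietary sparse formulation:

```python
import numpy as np

def im2col(image, kh, kw):
    """Unroll every kh x kw patch of a 2D image into one row of a matrix."""
    ih, iw = image.shape
    oh, ow = ih - kh + 1, iw - kw + 1
    cols = np.empty((oh * ow, kh * kw))
    for y in range(oh):
        for x in range(ow):
            cols[y * ow + x] = image[y:y + kh, x:x + kw].ravel()
    return cols

def conv2d_as_matmul(image, kernel):
    """Convolution lowered to a single matrix-vector multiply."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    return (im2col(image, kh, kw) @ kernel.ravel()).reshape(oh, ow)
```

Once convolution is a matrix multiply, any hardware trick that accelerates matrix multiplication (including exploiting sparsity in the weights) accelerates the network.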
So back to choosing your hardware for your solution:
This comes down to (1) knowledge of what’s available (this helps), (2) knowledge of what matters for your product (lowest cost above all, highest performance above all, lowest power above all, or some weighted compromise), and (3) the flexibility and upgradability required for the final product.
And we can help weigh those, leveraging our up-to-the-minute knowledge of the options for both hosts and neural processors (and hosts with neural processors built in, like the 3559A or the BM1880). They all have their pros and cons, and those become salient based on your application.