Infineon’s recent acquisition of Imagimob, a Stockholm, Sweden-based supplier of TinyML platforms, raises a fundamental question: where does the chip industry stand in adopting and accelerating this artificial intelligence (AI) technology for automated tasks involving sensory data?
The question is especially pertinent because most TinyML applications deploy AI models on microcontrollers (MCUs). In fact, MCUs are at the heart of a new paradigm at the intersection of AI and the Internet of Things (IoT) called the Artificial Intelligence of Things, or AIoT. Steve Tateosian, VP of IoT Compute and Wireless at Infineon, calls AIoT a natural evolution enabled by TinyML.
But how does TinyML bridge the gap between machine learning (ML) and embedded systems? What role will suppliers of microcontrollers and other embedded processors play in facilitating production-ready deep learning models? Infineon’s Imagimob deal and other tie-ups between embedded processor suppliers and ML software houses provide some clarity.
For a start, more sophisticated TinyML models are required, and that calls for more innovation at the software level for specific use cases. It’s worth noting that Imagimob had been working closely with embedded processor suppliers such as Syntiant before the acquisition; in 2022, it demoed its TinyML platform on Syntiant’s NDP120 neural decision processor.
Figure 1 AI chips powered by TinyML platforms can be used to quickly and easily implement vision, sound-event detection (SED), keyword spotting, and speech processing capabilities in a variety of applications. Source: Syntiant
Likewise, Infineon teamed up with another supplier of TinyML-based AI models, Edge Impulse, to prep its PSoC 6 microcontrollers for edge-based ML applications. Edge Impulse’s platform streamlines the entire process of collecting and structuring datasets, designing ML algorithms with ready-made building blocks, validating the models with real-time data, and deploying the fully optimized production-ready result to a microcontroller like PSoC 6.
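A key part of that final optimization step is shrinking a trained model’s floating-point parameters to 8-bit integers so it fits a microcontroller’s memory and compute budget. The sketch below is a minimal, hypothetical illustration of symmetric per-tensor int8 weight quantization; it is not Edge Impulse’s actual implementation, and the sample weights are made up.

```python
# Hypothetical sketch of symmetric per-tensor int8 weight quantization,
# the kind of optimization a TinyML toolchain applies before deploying
# a model to an MCU. Not Edge Impulse's actual implementation.

def quantize_int8(weights):
    """Map float weights to int8 values plus a single scale factor."""
    max_abs = max(abs(w) for w in weights)
    scale = max_abs / 127.0 if max_abs else 1.0
    q = [max(-128, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights on-device."""
    return [v * scale for v in q]

# Made-up example weights: after a round trip, each recovered weight
# lies within half a quantization step of the original.
weights = [0.12, -0.5, 0.33, 0.02]
q, scale = quantize_int8(weights)
approx = dequantize(q, scale)
```

Storing weights as int8 plus one scale factor cuts model size roughly fourfold versus 32-bit floats, which is why quantization is near-universal in MCU deployment flows.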
By collaborating with software houses specializing in TinyML-based AI models, Infineon aims to lower the barriers to running TinyML models on its MCUs. The TinyML platforms offered by companies like Imagimob and Edge Impulse allow developers to go from data collection to deployment on an edge device in minutes.
Such tie-ups aim to accelerate the adoption of ML applications such as sound event detection, keyword spotting, fall detection, anomaly detection, and gesture detection. In doing so, MCU suppliers are pushing TinyML into the microwatt era of smart, flexible battery-powered devices.
Figure 2 Embedded system developers use Imagimob AI to build production-ready models for a range of use cases such as audio, gesture recognition, human motion, predictive maintenance, and material detection. Source: Imagimob
According to David Lobina, Artificial Intelligence & Machine Learning research analyst at ABI Research, an ML model can be applied to any sensory data collected from an environment. “However, ambient sensing and audio processing remain the most common applications in TinyML.”
Take the case of the Imagimob AI platform, which includes a built-in fall-detection starter project. It comprises an annotated dataset with video metadata and a pre-trained ML model (in .h5 format) that uses inertial measurement unit (IMU) data from a belt-mounted device to detect when a person falls. A developer can start from this fall-detection model and improve it by collecting more data.
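Imagimob’s starter project classifies falls with a trained neural network; purely for illustration, the crude heuristic below shows the kind of accelerometer signature such a model learns to recognize: during free fall, the measured acceleration magnitude drops well below 1 g. The threshold and window length here are arbitrary, assumed values, not anything from Imagimob’s model.

```python
import math

# Illustrative only: Imagimob's starter project uses a trained neural
# network on labeled IMU data. This crude free-fall heuristic merely
# shows the accelerometer signature such a model classifies.
# FREE_FALL_G and MIN_SAMPLES are arbitrary, assumed values.

FREE_FALL_G = 0.35   # magnitude (in g) below which we suspect free fall
MIN_SAMPLES = 5      # consecutive low-g samples required to flag a fall

def detect_fall(samples):
    """samples: list of (ax, ay, az) accelerometer readings in g.
    Returns True if a sustained free-fall signature is seen."""
    run = 0
    for ax, ay, az in samples:
        magnitude = math.sqrt(ax * ax + ay * ay + az * az)
        run = run + 1 if magnitude < FREE_FALL_G else 0
        if run >= MIN_SAMPLES:
            return True
    return False

# Synthetic data: a device at rest reads ~1 g; a fall shows a low-g gap.
still = [(0.0, 0.0, 1.0)] * 20
falling = still[:5] + [(0.02, 0.01, 0.05)] * 8 + still[:5]
```

A real model replaces the hand-tuned threshold with features learned from the annotated dataset, which is why collecting more labeled data, as the starter project encourages, directly improves accuracy.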
Figure 3 Imagimob AI is an end-to-end development platform for machine learning on edge devices. Source: Imagimob
Founded in 2013, Imagimob offers a quick-start development system for on-device TinyML as well as “automatic machine learning” or AutoML solutions. Its acquisition by Infineon underscores the need for collaboration between embedded processor suppliers and TinyML platform providers in order to bring the advantages of AI/ML to embedded systems.