Ayyeka News & Insights

What are the challenges to embed tinyML in edge devices?

Written by Jacqueline L. Mulhern | 4/19/22 8:27 AM

Most machine-learning (ML) work focuses on high-powered solutions in the cloud or medium-powered solutions at the edge. However, there is another way: running machine learning on small, battery-powered devices with severe power and memory constraints. TinyML is an approach to developing low-power machine-learning models. It has wide-ranging applications in industries including security, healthcare, and smart-city technology. Its potential uses for critical infrastructure are in the research and development stage.

What are the benefits of tinyML?

Fast inference with low latency: TinyML enables on-device analytics, so edge devices can process data and return results without first sending the data to a server. Machine-learning (ML) inference is the process of feeding live data points into a trained ML model to calculate an output, i.e. to draw conclusions about the data. Latency, the time between receiving a data point and producing that output, is a key measure of a model's performance in a specific application. Because tinyML keeps both the data and the computation on the device, it can be used anywhere it's difficult to supply power.
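To make the idea concrete, here is a minimal sketch of on-device inference and latency measurement. The "model" is a hand-written weighted sum standing in for a real trained tinyML model; all weights and feature values are illustrative, not from any real deployment.

```python
import time

# Hypothetical tiny model: hand-picked weights standing in for a trained
# classifier. On a real device these would be produced by training offline.
WEIGHTS = [0.8, -0.4, 0.3]
BIAS = 0.1

def predict(features):
    """Run one on-device inference: weighted sum plus a threshold."""
    score = BIAS + sum(w * x for w, x in zip(WEIGHTS, features))
    return 1 if score > 0 else 0

# Latency here is simply the time from receiving a live data point
# to producing the output, measured entirely on the device.
sample = [0.5, 1.2, -0.7]
start = time.perf_counter()
label = predict(sample)
latency_ms = (time.perf_counter() - start) * 1000

print(f"prediction={label}, latency={latency_ms:.3f} ms")
```

Nothing is transmitted anywhere: the data point, the model, and the result all stay on the device, which is what keeps latency low.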

Protecting privacy: Take smart-city parking, for example. One way to optimize parking is to put a video camera (similar to surveillance cameras!) at every street corner and monitor who parks where and when. This lets the municipality automatically start the billing process for parking slots, as well as letting people know where the empty spots are located. Traditionally, this would require sending a live video feed to the cloud for processing, which creates a huge privacy issue: the municipality only needs the license number of the car to start billing, but a live video feed contains much more information, like who is riding in the car with whom. This is exactly where tinyML comes into play: it lets you process the live video feed on resource-constrained devices in the field, without having to send it to the cloud. The only thing sent to the cloud is the license plate number, and with that, the privacy issue is gone. The raw video never leaves the camera.
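The parking example above can be sketched as a small edge-side pipeline. Everything here is illustrative: `detect_plate` is a stand-in for a real on-device detection-and-OCR model, and the plate string is a placeholder. The point is the data flow, where the raw frame stays local and only the extracted plate number would ever be transmitted.

```python
def detect_plate(frame_pixels):
    """Hypothetical tinyML inference: map a raw video frame to a plate string.
    A real model would run object detection plus OCR here, on the device."""
    return "ABC-1234"  # placeholder result

def process_frame(frame_pixels):
    plate = detect_plate(frame_pixels)
    payload = {"plate": plate}   # only this small payload leaves the device
    # send_to_cloud(payload)     # hypothetical uplink; frame_pixels stay local
    return payload

raw_frame = [[0] * 640 for _ in range(480)]  # placeholder 640x480 frame
print(process_frame(raw_frame))
```

The design choice is that the cloud-facing payload is built from the inference result only, never from the raw frame, so the privacy-sensitive video cannot leak by construction.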

How can challenges to implementation be overcome?

This nascent technology requires us to address problems that haven't been solved at any scale. TinyML devices consume widely varying amounts of power, which makes it hard to maintain accuracy across the range of devices, and harder still to benchmark them consistently. The practical response is to design models and systems that fit within very little memory and computing power and therefore require less energy.
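One common way to fit a model into such small memory budgets is post-training quantization: storing weights as 8-bit integers instead of 32-bit floats, trading a small accuracy loss for a roughly 4x memory saving. Below is a minimal sketch of symmetric int8 quantization with illustrative weight values; real toolchains apply the same idea per layer or per channel.

```python
# Illustrative float32 weights from a hypothetical trained model.
weights = [0.82, -0.44, 0.31, -0.95, 0.07]

# Map the float range symmetrically onto signed 8-bit integers [-127, 127].
scale = max(abs(w) for w in weights) / 127
quantized = [round(w / scale) for w in weights]   # stored as int8, 1 byte each
dequantized = [q * scale for q in quantized]      # recovered at inference time

# The round-trip error is bounded by half a quantization step.
max_error = max(abs(w - d) for w, d in zip(weights, dequantized))
print(f"int8 weights: {quantized}")
print(f"max round-trip error: {max_error:.4f}")
```

Because the error per weight is at most half a quantization step, accuracy usually degrades only slightly, which is why quantization is a standard first move when targeting constrained devices.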

Pete Warden of Google is on a quest to overcome this limitation. He wants to build machine-learning (ML) applications that can run on a microcontroller for a year using only a hearing-aid battery for power. If a low-power device needs to communicate with the outside world, it can't be "chatty"; it must keep its radio off, turning it on only to transmit short bursts of data. On a phone, for example, we typically gather data and send it to a server for processing. That's just too power-hungry for tinyML; we can't do machine learning by sending data to the cloud. Any significant processing needs to be done locally, on a small, slow processor with limited memory. Fortunately, researchers are making a lot of progress at running ML models on small processors.
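A back-of-envelope duty-cycle calculation shows why keeping the radio off matters so much. All the figures below are illustrative assumptions (roughly coin-cell-class numbers), not measurements of any specific hardware; the point is how heavily average current, and therefore battery life, depends on how briefly the device is awake each hour.

```python
# Illustrative battery and current figures (assumptions, not datasheet values).
battery_mah = 200.0        # roughly a large hearing-aid / coin cell
sleep_ma = 0.005           # deep-sleep current
active_ma = 10.0           # current while computing and transmitting a burst
active_s_per_hour = 2.0    # seconds awake per hour

# Average current over one hour: a short active burst plus long deep sleep.
avg_ma = (active_ma * active_s_per_hour
          + sleep_ma * (3600 - active_s_per_hour)) / 3600
lifetime_hours = battery_mah / avg_ma

print(f"average draw: {avg_ma:.4f} mA")
print(f"estimated lifetime: {lifetime_hours / 24 / 365:.1f} years")
```

With these assumed numbers the device averages around a hundredth of a milliamp and runs for roughly two years, whereas streaming continuously at the active current would drain the same battery in under a day.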

Designers of small, embedded devices have to be aware of the "creep factor": they have to make it clear what data is being used, how it's used, and how it's communicated. One solution might be not to send data at all: to build devices that are never networked, with software that is pre-installed and never updated. A device that isn't on a network can't fall victim to a hostile attack, nor can it violate the user's privacy. No device can be more private than one that is never online, though such devices can never be updated with improved, more accurate models. Another challenge will be creating mechanisms for manufacturing and recycling these devices; it's too easy to imagine millions of tiny devices with batteries leaking mercury or lithium into the environment.

Conclusions

Is TinyML a new hardware platform? A software tool? A completely new AI methodology? It's a bit of all three. In simple terms, it's about getting machine learning to work on resource-constrained systems so small that they may not even run a full operating system (OS). "TinyML is just a name for doing machine learning on constrained devices," said Allen Watson, product marketing manager at Synopsys. To learn about Ayyeka's work with Tadiran Batteries to implement low-power solutions in edge artificial intelligence for infrastructure, look here: https://waterfm.com/modernizing-remote-water-infrastructure-with-edge-ai-and-ultra-long-life-lithium-batteries/