Tiny Machine Learning

TinyML allows machine learning models to run on small, low-power devices like sensors and microcontrollers. It provides benefits like low latency since models run locally, energy savings by reducing the need for server infrastructure, and improved privacy by keeping data on devices. Popular use cases include computer vision, predictive maintenance, and personalization in customer experience applications. Developing TinyML models involves tools like TensorFlow Lite and supported hardware like Arduino boards.


What is TinyML?

An Introduction to Tiny
Machine Learning
Learn about TinyML, its applications and benefits, and how you can get started with this
emerging field of machine learning.

Machine learning models play a prominent role in our daily lives, whether we know it or not. Over the course of a typical day, the odds are that you will interact with some machine learning model, since these models have permeated almost all of the digital products we use: social media services, virtual personal assistants, search engines, and the spam filtering provided by your email hosting service.

Despite the many instances of machine learning in daily life, there are still several areas the technology has yet to reach. The cause? Many machine learning models, especially state-of-the-art (SOTA) architectures, require significant resources. This demand for high-performance computing power has confined many machine learning applications to the cloud, an on-demand provider of computing resources.

In addition to these models being computationally expensive to train, running inference on them is often quite expensive too. If machine learning is to expand its reach and penetrate additional domains, a solution that allows machine learning models to run inference on smaller, more resource-constrained devices is required. The pursuit of this solution is what has led to the subfield of machine learning called Tiny Machine Learning (TinyML).

In this article, we will:

• Define TinyML and its benefits
• Cover some applications of TinyML
• Discuss the workflow requirements for TinyML applications

What is TinyML?
“Neural networks are also called artificial neural networks (ANNs). The architecture forms
the foundation of deep learning, which is merely a subset of machine learning concerned
with algorithms that take inspiration from the structure and function of the human brain. Put
simply, neural networks form the basis of architectures that mimic how biological neurons
signal to one another.”
Source: PyTorch Tutorial: Building a Simple Neural Network

Machine learning is a subfield of artificial intelligence that provides a set of algorithms. These algorithms allow machines to learn patterns and trends from available historical data, fitting models to examples whose outcomes are already known. The main goal, however, is for the trained models to generalize beyond the training data set, making accurate predictions on new data without being explicitly programmed.

One such family of algorithms is neural networks. Neural networks belong to a subfield of machine learning known as deep learning, whose models are typically more expensive to train than traditional machine learning models. You can learn more about building neural network models in R in a separate tutorial.

Figure 1. A visualization of a three-layer neural network


According to tinyml.org, “Tiny machine learning is broadly defined as a fast-growing field
of machine learning technologies and applications including hardware, algorithms, and
software capable of performing on-device sensor data analytics at extremely low power,
typically in the mW range and below, and hence enabling a variety of always-on use-cases
and targeting battery operated devices.”

TinyML’s growth in recent years has largely been attributed to the development of the hardware and software ecosystems that support it. Since the techniques can be implemented in low-energy systems (i.e., sensors, microcontrollers, etc.), machine learning can be brought to the edge in an extreme way, enabling such applications to respond in real time. In essence, the idea is to enable machine learning practitioners to do more with less.

But why is this so important? Let’s take a look at why TinyML is appealing.

The benefits of TinyML

1. Latency: Because the model runs on an edge device, data does not need to be transferred to a server for inference. Data transfers take time and introduce a slight delay; removing this requirement decreases latency.

2. Energy savings: Microcontrollers need very little power, which enables them to operate for long periods without needing to be charged. On top of that, extensive server infrastructure is not required, since no data transfer occurs: the result is energy, resource, and cost savings.

3. Reduced bandwidth: Little to no internet connectivity is required for inference. On-device sensors capture data and process it locally, so raw sensor data is not constantly being streamed to a server.

4. Data privacy: Because the model runs on the edge, your data is not kept on servers. Keeping information on the device increases the guarantee of data privacy.

Use Cases: How is TinyML being used?

The applications of TinyML span a wide range of sectors, notably those that depend on Internet of Things (IoT) networks and data. The IoT is a network of physical devices embedded with sensors, software, and other technologies that connect to and exchange data with other devices and systems over the internet.

Computer vision, visual wake words, keyword spotting, predictive maintenance, gesture recognition, and industrial machine maintenance are all common TinyML use cases. Let’s take a look at some industries where TinyML has been used to power applications:
Agriculture
Real-time agriculture and livestock data can be monitored and collected using TinyML devices. The Swedish edge AI company Imagimob has created a development platform for machine learning on edge devices. Fifty-five organizations from across the European Union have collaborated with Imagimob to explore how TinyML can support efficient management of crops and livestock.

Industrial predictive maintenance

TinyML can be deployed on low-powered devices to continuously monitor machines for malfunctions and predict issues before they happen; this type of application has the potential to help businesses reduce the costs that often arise from faulty machines.

A prime example of predictive maintenance comes from Ping Services, which developed a device that continuously monitors the acoustic signature of wind turbine blades to detect and flag any change or damage. According to Ping’s website, "[with] continuous monitoring, operators can give a timely response to blade damage, reducing maintenance costs, failure risks, and downtime, as well as improving wind turbine performance and efficiency."

Customer Experience

Personalization is a key marketing tool that customers demand as their expectations rise. The idea is for businesses to understand their customers better and target them with ads and messages that resonate with their behavior. Deploying edge TinyML applications enables businesses to better understand user context, including behavior.

Workflow Requirements
Many tools and architectures deployed in traditional machine learning workflows are used
when building edge-device applications. The main difference is that TinyML allows these
models to perform various functions on smaller devices.

TensorFlow Lite for Microcontrollers (TF Lite Micro) is one of the most popular frameworks for machine learning on edge devices; it was specifically designed for implementing machine learning on embedded systems with only a few kilobytes of memory.

Python is often the preferred language for building and training machine learning models. However, TensorFlow Lite makes it possible to convert those models and deploy them in C, C++, or Java applications that run without connecting to the internet.
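
To make that workflow concrete, below is a minimal sketch of converting a trained Keras model into a TensorFlow Lite flatbuffer with full integer quantization, which is what most microcontroller targets expect. The names `model` and `representative_samples` are hypothetical placeholders for your own trained model and a small batch of training inputs.

```python
# Minimal sketch: convert a trained Keras model into a TensorFlow Lite
# flatbuffer suitable for TF Lite Micro. `model` (a tf.keras.Model) and
# `representative_samples` (an array of training inputs) are assumed to exist.
import numpy as np
import tensorflow as tf

def representative_dataset():
    # Yield a handful of real input samples so the converter can calibrate
    # the int8 quantization ranges.
    for sample in representative_samples[:100]:
        yield [np.expand_dims(sample, axis=0).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
# Force full integer quantization, as most microcontroller targets expect.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

tflite_model = converter.convert()
with open("model.tflite", "wb") as f:
    f.write(tflite_model)
```

The resulting model.tflite file is typically turned into a C array (for example with `xxd -i model.tflite > model_data.cc`) and compiled into the firmware alongside the TF Lite Micro runtime.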

From a hardware perspective, a supported microcontroller board is required to get started with TinyML in TF Lite; the library currently supports the following microcontrollers (a quick desktop sanity check of a converted model is sketched after this list):
• Arduino Nano 33 BLE Sense
• SparkFun Edge
• STM32F746 Discovery kit
• Adafruit EdgeBadge
• Adafruit TensorFlow Lite for Microcontrollers Kit
• Adafruit Circuit Playground Bluefruit
• Espressif ESP32-DevKitC
• Espressif ESP-EYE
• Wio Terminal: ATSAMD51
• Himax WE-I Plus EVB Endpoint AI Development Board
• Synopsys DesignWare ARC EM Software Development Platform
• Sony Spresense
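
Before flashing a converted model to one of these boards, it can be useful to sanity-check the quantized flatbuffer on a desktop machine using the standard TensorFlow Lite interpreter. A minimal sketch, assuming the model.tflite file produced earlier and a hypothetical float NumPy sample `x` shaped like a single model input:

```python
# Minimal sketch: run the quantized model on a desktop before deployment.
# Assumes "model.tflite" exists and `x` is a float NumPy array shaped like
# one input sample (hypothetical).
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

# Quantize the float sample into the int8 range the converted model expects.
scale, zero_point = inp["quantization"]
x_q = np.clip(np.round(x / scale + zero_point), -128, 127).astype(np.int8)

# Add a batch dimension of 1, run inference, and read the raw int8 output.
interpreter.set_tensor(inp["index"], np.expand_dims(x_q, axis=0))
interpreter.invoke()
print(interpreter.get_tensor(out["index"]))
```

If these outputs roughly match what the original Keras model produces on the same samples, the quantized model is ready to be compiled into the firmware.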

With the support of TinyML, it is possible to increase the intelligence of the billions of devices we use every day, like home appliances and IoT gadgets, without spending a fortune on expensive hardware or relying on dependable internet connections, which are frequently constrained by bandwidth and power and introduce significant latency.

Learning Resources
TinyML Foundation
TinyML: Machine Learning with TensorFlow Lite on Arduino and Ultra-Low-Power Microcontrollers (Book)
Embedded Machine Learning on Edge Devices (Podcast)
Understanding Machine Learning
Introduction to Deep Learning in Python
Deep Learning Tutorial

Wrap-up
TinyML has been gaining traction across various industries in recent years thanks to the development of the hardware and software ecosystems that support it. It has made it possible to implement machine learning models in low-energy systems, like microcontrollers, which opens the door to a variety of new opportunities. Low latency, energy savings, data privacy, and independence from network connections are some of the factors that make TinyML so appealing to developers building applications for Internet of Things (IoT) devices.
TinyML And Its ‘Great’ Application in
IoT Technology

Tiny machine learning (TinyML) is an embedded software technology used to build low-power devices that run machine learning models. It is often referred to as the missing link between device intelligence and edge hardware. It makes computing at the edge cheaper and more stable, and it also facilitates improved response times, privacy, and low energy costs.

TinyML is growing massively in popularity with every passing year. As per ABI Research, a global tech market advisory firm, about 230 billion devices will be shipped with a TinyML chipset by 2030.

TinyML can power a range of applications, from micro-satellite imagery and wildfire detection to identifying crop ailments and animal illness. Another area that is drawing great attention is its application in IoT devices.

TinyML and IoT


TinyML brings the ultra-low-power systems and machine learning communities together, paving the way for more exciting on-device machine learning. TinyML sits at the intersection of embedded machine learning applications, algorithms, hardware, and software. Compared with a desktop CPU, which consumes on the order of 100 watts of power, TinyML requires just a few milliwatts of battery power. With such a major advantage, TinyML can provide great longevity to always-on ML applications at the edge/endpoint.

There are currently around 250 billion microcontrollers in the world, and this number is growing by 30 billion annually. TinyML’s pervasiveness has two main drivers: first, it gives small devices the ability to make smart decisions without needing to send data to the cloud; second, TinyML models are small enough to fit into almost any environment. Take the example of an imagery micro-satellite, which must capture high-resolution images but is restricted by the size and number of photos it can transmit back to Earth. With TinyML, the micro-satellite only captures an image when it detects an object of interest, such as a ship or a weather pattern.
TinyML has the potential to transform the way we deal with IoT data, where billions of tiny devices are already used to provide greater efficiency in fields such as medicine, automation, and manufacturing.
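
To make the micro-satellite pattern above concrete, here is an illustrative sketch of the "score locally, transmit only the hits" logic, written in Python for readability; an actual device would run the same loop against the C++ TF Lite Micro runtime. The `camera`, `detector`, and `transmit` interfaces and the threshold value are hypothetical.

```python
# Illustrative sketch of on-device gating: run the model locally and only
# transmit data that contains an object of interest. All interfaces here
# (camera, detector, transmit) are hypothetical stand-ins.
import numpy as np

THRESHOLD = 0.8  # minimum confidence before an image is worth transmitting

def should_transmit(image: np.ndarray, detector) -> bool:
    """Keep the image only if the on-device model thinks it likely contains
    an object of interest (e.g. a ship or a weather pattern)."""
    score = float(detector(image))  # detector returns a confidence in [0, 1]
    return score >= THRESHOLD

def run_loop(camera, detector, transmit):
    # Capture, score locally, and transmit only the hits, so no raw imagery
    # is streamed back to Earth.
    while True:
        frame = camera.capture()
        if should_transmit(frame, detector):
            transmit(frame)
```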

It is very important to make a clear distinction between ‘serving’ machine learning to IoT and ‘developing’ machine learning inside IoT devices. In the former, the machine learning tasks are outsourced to the cloud, and the IoT device simply waits for the results of the intelligent services. In the latter, TinyML-as-a-service is employed, and the IoT device takes part in executing the services itself. TinyML thus represents a connecting point between IoT devices and machine learning.

The hardware requirements for machine learning in larger systems have analogues in TinyML for smaller IoT devices. As the number of IoT devices hitting the market increases, we could see even higher investment in TinyML research, exploring concepts such as deep neural networks, model compression, and deep reinforcement learning.

The Challenges
There are a few challenges to integrating TinyML into IoT devices; some of them are:

• Overcoming the technical challenges within edge computing
• The differences between web-based and embedded technologies in terms of deployment and execution
• The computational resources required to deliver accurate and reliable output

Wrapping Up
To speak more concretely about the applications of TinyML: it can be used in sensors for real-time traffic management and smoother urban mobility. In manufacturing, TinyML can enable real-time decision-making to identify equipment failures, so that workers can be alerted to perform preventive maintenance based on equipment condition. TinyML can also be used in retail to monitor stock availability.

TinyML is gaining ground but is still at a very nascent stage. It is expected to take off, with cross-sector applications appearing very soon.
