MCHP-UK-MEL3272-AI Trends-190889 Final
Artificial Intelligence (AI) is rapidly moving out of the data centre and into the Internet of Things (IoT).
AI today is dominated by massive neural networks trained in the data centre on large processors, graphics
processing units (GPUs), Field Programmable Gate Arrays (FPGAs) or giant chips from new companies,
providing usable insights from large amounts of complex data.
But for industrial automation and the IoT, the needs are very different. Machine learning (ML) monitors
continuous signals from equipment, finding trends to highlight how a piece of equipment is performing.
Identifying faults before they occur can save millions of dollars by avoiding unexpected downtime. This
predictive maintenance is driving demand for AI at the edge of the network and across the IoT.
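The trend-spotting described above can be sketched in a few lines: learn a healthy baseline for a monitored signal, then flag the first sample that drifts well outside it. The vibration values and thresholds below are hypothetical, chosen only for illustration.

```python
# Minimal drift detector for predictive maintenance (illustrative only).
from statistics import mean, stdev

def detect_drift(readings, baseline_window=20, threshold_sigma=3.0):
    """Return the index of the first sample outside the healthy baseline."""
    baseline = readings[:baseline_window]
    mu, sigma = mean(baseline), stdev(baseline)
    for i, value in enumerate(readings[baseline_window:], start=baseline_window):
        if abs(value - mu) > threshold_sigma * sigma:
            return i  # drift detected: schedule maintenance before failure
    return None  # equipment still within its normal operating envelope

# Hypothetical bearing vibration RMS: steady around 1.0, then a slow rise.
signal = [1.0, 1.02, 0.98, 1.01, 0.99] * 4 + [1.1, 1.3, 1.6, 2.0]
alert_at = detect_drift(signal)  # index of the first drifting sample
```

A real deployment would use a rolling baseline and domain-specific features, but the principle, alert on deviation from normal before failure, is the same.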
Neural networks
Convolutional and Deep Neural Networks (CNNs and DNNs) can be trained in the cloud, but that can require
large amounts of data from the factory floor. Recognising patterns, whether in vibration or in temperature,
can mean transferring huge amounts of data from the factory to the cloud.
Transferring large amounts of data may prove to be expensive and there may be latency issues
depending upon the nature of the application. So a new generation of chips is emerging for AI
applications at the edge, particularly for vision. Taking images from cameras and processing the data
locally, in a processor and GPU in an edge server for example, is an increasingly popular architectural
choice for industrial automation. Moore’s Law is providing more performance in a lower thermal
envelope for the processors and GPUs, enabling the fanless edge servers that are needed for higher
reliability in harsh industrial environments.
Visual inspection
This increased performance is allowing software from the cloud to be run on low-cost boards at the
edge. For example, the latest Raspberry Pi single-board computers running Linux® can now also run the
same container software, such as Docker and Kubernetes, that allows applications to be easily moved
around in the data centre. This extends the software from the cloud down to the edge of the network,
dramatically simplifying the rollout and management of edge AI systems.
This boosts the vision-processing applications that enhance the monitoring of production lines, adding
visual inspection of all kinds of products.
But increasingly, machine learning is used to monitor the equipment itself. It can be used to identify
potential problems, allowing factory operators to schedule maintenance before a failure causes a
production line to go down. This predictive maintenance can potentially save the industry billions of dollars
in lost production time – and has been driving significant growth in the rollout of machine learning for
the IoT.
This is driving the use of flexible, low-power devices such as Microchip’s PolarFire® FPGA.
Monitoring equipment
However, edge AI requires data, and one of the key trends is the instrumentation of equipment. Adding
wireless sensor nodes to industrial equipment is providing a deluge of data that must be analysed to
provide insights that can help the operator. Transmitting all that raw data to the edge server for analysis
by the AI engine is fine for a node that has a power supply from the wall or Power over Ethernet, but it
kills a battery-driven sensor node, increasing costs with more regular battery replacement cycles.
This drives AI even further into the IoT with the AI processing at the node.
For example, event-driven AI processors are now hitting the factory floor. They integrate a different
kind of neural network accelerator that responds only to changes, and they are small enough to fit inside
an industrial camera.
Start-ups are developing silicon dedicated to AI for vision applications, optimising the silicon for specific
neural networks to meet the thermal requirements of the IoT.
AI accelerators
Microcontrollers are also now adding neural network accelerators alongside their CPU cores. Some are
custom, others are supplied by companies such as Arm with its Ethos-U55 and Ethos-U65 blocks or the neural network
accelerators from Imagination Technologies.
The RISC-V open-source instruction set is another source of innovation for edge AI. Adding custom
instructions to a core for specific types of machine learning, for vibration monitoring for example,
provides low-power, low-cost analysis at the edge.
But machine learning and AI are not just about neural networks. Digital signal processing blocks
embedded alongside a microcontroller in devices such as Microchip’s dsPIC® Digital Signal Controllers or
SAM microcontrollers can also be used for pattern-recognition applications that provide analysis at the
sensor.
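As a hedged illustration of this kind of at-sensor DSP, the Goertzel algorithm computes signal power at a single target frequency far more cheaply than a full FFT, which is how a known fault frequency in a vibration signal can be checked on a small device. The 1 kHz sample rate and 120 Hz fault tone below are made-up values, and this is generic textbook DSP, not Microchip's library code.

```python
# Goertzel algorithm: signal power at one frequency bin (textbook sketch).
import math

def goertzel_power(samples, sample_rate, target_hz):
    """Return signal power at the DFT bin nearest target_hz."""
    n = len(samples)
    k = round(n * target_hz / sample_rate)          # nearest integer bin
    coeff = 2.0 * math.cos(2.0 * math.pi * k / n)
    s_prev = s_prev2 = 0.0
    for x in samples:
        s_prev2, s_prev = s_prev, x + coeff * s_prev - s_prev2
    return s_prev2**2 + s_prev**2 - coeff * s_prev * s_prev2

rate = 1000                                          # hypothetical sample rate (Hz)
t = [i / rate for i in range(200)]
healthy = [math.sin(2 * math.pi * 50 * x) for x in t]      # 50 Hz hum only
faulty = [h + 0.5 * math.sin(2 * math.pi * 120 * x)        # plus a fault tone
          for h, x in zip(healthy, t)]
```

A node would compare the power at the fault frequency against a calibrated threshold and only raise an alert when the tone appears in the vibration signature.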
All of this reduces the power consumption at the sensor node by using the local processing to handle the
analysis. That means data only needs to be sent when an anomaly is detected, dramatically extending
the battery life and cutting the operating costs.
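That anomaly-gated reporting amounts to a simple band check: the radio stays asleep while readings sit inside the expected range. The temperature band below is an illustrative assumption, and in real firmware the list append would be a call into the node's wireless stack.

```python
# Report-by-exception at a battery-powered node (illustrative sketch).
def anomalous_readings(readings, low=18.0, high=30.0):
    """Return only the readings worth waking the radio for."""
    to_send = []
    for value in readings:
        if not (low <= value <= high):
            to_send.append(value)  # real firmware: hand off to the radio here
    return to_send

# Hypothetical temperature samples (degrees C); two fall outside the band.
samples = [21.0, 22.5, 21.8, 35.2, 22.0, 17.1, 21.4]
sent = anomalous_readings(samples)
```

Seven samples collapse to two transmissions; over days of operation, this gating is what stretches the battery life described above.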
These nodes can be added alongside the existing data gathering networks that provide process control,
minimising the impact of adding the new technology while still flagging any potential problems.
At the cloud level, TensorFlow is a key tool for building models in the cloud on GPUs and on Google's
dedicated ASIC, the Tensor Processing Unit (TPU). Similarly, Graphcore's Poplar tools can be used to train
huge AI models on its data centre chip.
Google has also developed a stripped-down version, TensorFlow Lite, to take the trained models and
apply them to edge processors. TinyML projects use TensorFlow Lite to implement neural network engines on
low-power microcontrollers such as the SAM family at the edge.
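A key step in shrinking a trained model for such microcontrollers is quantisation. The sketch below shows, in plain Python, the per-tensor affine float32-to-int8 mapping that tools like TensorFlow Lite apply; it is a simplified illustration of the idea, not the tool's actual implementation.

```python
# Affine int8 quantisation of a weight tensor (simplified illustration).
def quantize_int8(weights):
    """Map float weights to int8 plus a scale and zero point."""
    lo, hi = min(weights), max(weights)
    scale = (hi - lo) / 255.0 or 1e-8         # guard against a constant tensor
    zero_point = round(-128 - lo / scale)     # the int8 code that represents lo
    q = [max(-128, min(127, round(w / scale) + zero_point)) for w in weights]
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover approximate float weights on the device."""
    return [(v - zero_point) * scale for v in q]

weights = [-0.8, -0.1, 0.0, 0.35, 0.9]        # toy float32 weights
q, scale, zp = quantize_int8(weights)
restored = dequantize(q, scale, zp)           # close to the originals
```

Storing a quarter of the bytes and doing integer arithmetic is what makes inference practical on a small MCU; the cost is the small rounding error visible in the restored values.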
Similarly, tools can take the trained models and adapt them for FPGAs ready to use in the data centre
and in the field.
Other startups have taken this a step further with a no-code, drag-and-drop approach to
implementing specific neural networks for vision applications on their chips.
But there is set to be an explosion in AI capability at the edge of the network with the coming generation
of microcontrollers based around the Arm® Cortex®-M55 processor. This has been designed to work
closely with Arm’s Ethos-U55 microNPU (Neural Processing Unit), which the company says will give
microcontrollers a combined 480x leap in machine-learning performance.
Tools for hybrid silicon with applications processors and microcontroller cores also aim to make the
development process easier for engineers. High-level tools can take C code and neural network libraries
and partition a design across the various cores and accelerator blocks on a chip without the developer
having to be aware of the details.
Arm has recognised the importance of the tools, offering a unified toolchain that gives AI hardware and
software developers more ways to innovate.
Microcontrollers
Microcontrollers such as Microchip's PIC® and AVR® families can then be used to implement the pattern
recognition.
Tools from companies such as Edge Impulse and Motion Gestures provide machine learning libraries
for many types of functions in low-power microcontrollers without developers needing to know about
the underlying AI. These libraries range from vibration monitoring of fans to gesture recognition using
different sensors, including touch, motion and vision.
The key is having these tools tightly integrated into the microcontroller development toolchain so that
the right machine-learning algorithms can be used on the most appropriate hardware, whether that is a
CPU with dedicated extensions, digital signal processing, a dedicated ML accelerator block or a general
purpose one.
All of these can be used to provide the appropriate level of performance and power consumption to
allow detailed analysis at the edge of the network, whether this is at the sensor node or the edge server.
Microchip has a range of hardware and software solutions for edge AI.
Please visit https://www.microchip.com/en-us/solutions/machine-learning to find out more.
Artificial Intelligence and Machine Learning
Solutions for Smart Applications on the Edge or in the Server or
Data Center Environment
Get ready to add Artificial Intelligence (AI) and Machine Learning (ML) to your next design. Whether you’re
new to AI and ML and require a simplified, easy-to-use environment, or you're an experienced developer
looking for advanced performance, you'll find the right tool for the job in our selection of software and
hardware tool kits, reference designs and silicon platforms. We have also partnered with industry-leading
design houses to offer complete AI-based solutions that integrate seamlessly with our products and
platforms. These total solutions can be created with little to no AI development experience. We make it easy
to implement AI and ML algorithms for collecting and organizing data, training neural networks in data
centers or implementing optimized inference on the edge.
Our extensive portfolio of silicon devices includes microcontrollers (MCUs), microprocessors (MPUs) and
Field-Programmable Gate Arrays (FPGAs). Our software toolkits allow the use of popular ML frameworks
including TensorFlow, Keras, Caffe and many others covered by the ONNX umbrella as well as those
found within TinyML and TensorFlow Lite. This combination of hardware and software enables you to
design a variety of applications including high-performance AI acceleration cards for data centers,
self-driving cars, security and surveillance, electronic fences, augmented and virtual reality headsets, drones,
robots, satellite imagery and communication centers.
Find out how our proven reference designs and network of experienced partners can help you reduce
risk, time to market, power consumption and application costs.
Smart Embedded Vision
Implement real-time video or image processing in your application. Our comprehensive solutions include
silicon, Intellectual Property (IP) and software, as well as state-of-the-art Artificial Intelligence (AI) and
Machine Learning (ML) capabilities. These solutions are available in compact form factors, consume very
little power and deliver the highest security and reliability for your embedded vision application. Our
network of partners can assist with additional IP and/or services to guide you through your development.
Smart Predictive Maintenance
State-of-the-art neural networks and embedded sensors are being used to accurately predict potential
maintenance issues with equipment used in a variety of industrial, manufacturing, consumer, automotive
and other applications. You can leverage the Internet of Things (IoT) and our combination of solutions to
monitor and detect wear and tear and operational anomalies that might be affecting the performance of
your product while also implementing smart control to manage load and reduce the amount of wasted
power. Predictive maintenance reduces downtime and repair costs while also extending equipment
life and ensuring output quality. We have partnered with Edge Impulse to offer a complete predictive
maintenance solution that uses multiple sensors and advanced Artificial Intelligence/Machine Learning
(AI/ML) capabilities and features:
• Highly accurate detection of issues with fewer false positive triggers
• Easy-to-use Graphical User Interface (GUI) that eliminates the need for expertise in ML or training
neural networks
If you have in-house skills to implement ML neural networks, Edge Impulse offers a
state-of-the-art software kit and developer friendly environment that make it easy to get started.
Learn More
Intelligent Power
Complex Gesture Detection at the Edge
with Machine Learning
An innovative Human Machine Interface (HMI) is one of the best ways to differentiate your product from
the competition. We now offer a solution that makes it easy to add complex gestures to your 2D touch
interface. This means you can run a touchpad on a single Microchip microcontroller (MCU) while that
same MCU classifies complex gestures in real time, delivering inference at the edge.
We have partnered with Motion Gestures to make advanced gesture recognition possible in a matter of
minutes. If you can draw it with your fingertip, our solution will be able to recognize it. Motion Gestures
provides powerful embedded AI-based gesture recognition software that is compatible with our
32-bit MCUs. It can be used with motion, touch and vision sensors and requires minimal computational
resources to deliver highly accurate gesture recognition. Unlike conventional solutions, the gesture
recognition engine uses advanced ML algorithms and does not require the collection of any training
data. This powerful solution provides a gesture recognition accuracy of nearly 100% while significantly
reducing your gesture software development time and costs.
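To give a feel for what is involved, here is a classic template-matching baseline: normalise a fingertip stroke and pick the nearest stored template. This is a textbook sketch for illustration only; Motion Gestures' engine uses more advanced ML and, as noted above, needs no recorded training data.

```python
# Nearest-template gesture classification (textbook baseline, not the product).
import math

def normalize(points):
    """Centre a stroke on its centroid and scale it to unit size."""
    cx = sum(x for x, _ in points) / len(points)
    cy = sum(y for _, y in points) / len(points)
    pts = [(x - cx, y - cy) for x, y in points]
    size = max(math.hypot(x, y) for x, y in pts) or 1.0
    return [(x / size, y / size) for x, y in pts]

def classify(stroke, templates):
    """Return the name of the template closest to the stroke.

    Assumes strokes already have the template's point count; a real
    engine would resample and rotation-align them first.
    """
    s = normalize(stroke)
    def distance(name):
        return sum(math.hypot(sx - tx, sy - ty)
                   for (sx, sy), (tx, ty) in zip(s, normalize(templates[name])))
    return min(templates, key=distance)

templates = {
    "swipe_right": [(0, 0), (1, 0), (2, 0), (3, 0)],
    "swipe_down":  [(0, 0), (0, 1), (0, 2), (0, 3)],
}
stroke = [(5, 5.0), (6, 5.1), (7, 4.9), (8, 5.0)]  # wobbly horizontal drag
label = classify(stroke, templates)
```

The normalisation step is what lets a small MCU recognise the same shape regardless of where on the touchpad, and at what size, it was drawn.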
Machine Learning Workstations, Servers &
Appliances
The Machine Learning (ML) training and inference market using multiple GPUs is rapidly evolving, driving
advanced technologies such as low-latency, high-throughput PCIe® switches and high-performance
NVMe™ Flash controllers. The increased use of accelerators for deep learning, Artificial Intelligence (AI)
and ML is enabling radical advances in image classification, speech recognition, autonomous driving,
bioinformatics and video analytics. This results in a growing need for a high-bandwidth/low-latency PCIe
interconnect infrastructure utilizing NVMe storage to enable parallel computing.
High-performance fabric connectivity and composability for multi-host GPU and NVMe SSD systems are
critical to ensure dynamic allocation of GPU resources to match workload requirements and maximize
system efficiency. Switchtec™ PAX Advanced Fabric PCIe switches feature dynamic partitioning and
multi-host SR-IOV sharing, enabling real-time “composition” or dynamic allocation of GPU resources to a
specific host or set of hosts using standard host drivers.
Advanced fabric PCIe switch solutions for ML appliances deliver a scalable, low-latency and cost-effective
multi-host interconnect or a network of GPUs, NVMe SSDs and other PCIe endpoints. Another important
consideration is the availability of a fabric Application Programming Interface (API), which can simplify
system management, greatly reducing time to market and development cost for multi-host systems.
Solutions for Machine Learning Appliances
Flashtec® NVMe Controllers
Flashtec NVMe controllers support the standard NVMe host interface in a variety of form factors at a
wide range of capacity points, and are optimized for high-performance random read/write
operations. They perform all Flash management operations on chip while using minimal host processing
and memory resources. Explore Products
Microchip Technology Inc. | 2355 W. Chandler Blvd. | Chandler AZ, 85224-6199 | microchip.com
The Microchip name and logo are registered trademarks of Microchip Technology Incorporated in the U.S.A.
and other countries. All other trademarks mentioned herein are property of their respective companies. © 2021, Microchip Technology
Incorporated. MEL3272A-ENG-06-21