Edge AI Modules
INDEX
1. What is AI?
2. Advantages and disadvantages of Edge AI
3. Edge AI for Image and Voice Recognition
3.1 Image recognition with Edge AI
3.2 Voice recognition with Edge AI
What Is AI?

Various definitions exist for artificial intelligence (AI). Here we focus on systems that use an advanced method called deep learning. Deep learning allows an AI to extract important features from raw data by itself rather than relying on pre-programmed rules. Architecturally, it achieves this by stacking many layers of artificial neurons into a deep neural network.
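The idea of "stacking layers" can be sketched in a few lines of code: each layer transforms the output of the previous one, so the network builds progressively more abstract features from the raw input. This is a minimal, untrained toy (random weights, invented layer sizes), not a real model.

```python
import numpy as np

def relu(x):
    # Common nonlinearity applied between layers
    return np.maximum(0.0, x)

def forward(x, layers):
    # Pass the input through each (weights, bias) layer in turn
    for w, b in layers:
        x = relu(x @ w + b)
    return x

rng = np.random.default_rng(0)
# Three stacked layers: 8 raw inputs -> 16 -> 8 -> 4 learned features
sizes = [8, 16, 8, 4]
layers = [(rng.normal(size=(m, n)) * 0.1, np.zeros(n))
          for m, n in zip(sizes, sizes[1:])]

features = forward(rng.normal(size=8), layers)
print(features.shape)  # (4,)
```

Training would adjust the weights in `layers` so that the final features are useful for the task; here they are random, which is enough to show the stacked structure.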
Currently, most AI applications work by streaming data to powerful servers on the internet. However, sending data to the cloud raises privacy and security concerns about potential leaks of confidential information. It also brings higher costs, network congestion, and increased power consumption, because large volumes of data must be transmitted frequently at high speed.
Edge AI solves these problems by placing AI computing power at the edge of the network, where the data is collected. By processing data locally, it can convert megabytes of audio, video, or sensor data into a few lines of text, greatly reducing network bandwidth and latency.
In this article, we will discuss what Edge AI is all about -- its features, benefits, and downsides -- and provide examples of how it's being used today. We'll also explore Murata's approach to taking Edge AI out of the lab and onto the production line.
In a deep learning application, a model is trained on previously known data to generate results called "inferences". For instance, if an AI system uses a camera on a manufacturing line to identify whether there are scratches on a product, the model would be trained with images of both scratched and unscratched products.
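As a rough illustration of this train-then-infer workflow, here is a minimal sketch: a one-feature logistic classifier trained on synthetic data. The "scratch score" feature and all numbers are invented for illustration; a real system would train a deep network on actual images.

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical training set: one summary feature per image
# (e.g., an edge-intensity score); scratched products score higher.
x_ok = rng.normal(0.0, 1.0, 100)   # label 0: no scratch
x_ng = rng.normal(3.0, 1.0, 100)   # label 1: scratched
x = np.concatenate([x_ok, x_ng])
y = np.concatenate([np.zeros(100), np.ones(100)])

# Training: gradient descent on the logistic loss
w, b = 0.0, 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(w * x + b)))
    w -= 0.1 * np.mean((p - y) * x)
    b -= 0.1 * np.mean(p - y)

# Inference: classify a new measurement
predict = lambda v: 1.0 / (1.0 + np.exp(-(w * v + b))) > 0.5
print(predict(3.0), predict(0.0))
```

Once trained, only the small `predict` step needs to run on live camera data; this is the part that cloud AI and Edge AI place in different locations.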
Once the model is trained, it can be used to process live data from the camera. Here, cloud AI and Edge AI take different approaches (Figure 1).
In Cloud AI, the video data is streamed from the camera over the internet (the cloud) to remote data centers, where it is processed. The inference result -- e.g., whether the product has a defect or not -- is then sent back to the production line.
In Edge AI, the AI inference is computed directly at the camera in real-time. This greatly reduces the required communication infrastructure, which may become expensive for a factory with many cameras.
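The latency difference between the two approaches comes mostly from moving the data. The sketch below compares a cloud round trip against local inference; every figure (frame size, uplink speed, round-trip time, inference times) is an illustrative assumption, not a measurement.

```python
# Rough per-frame latency budget; all figures are illustrative assumptions.
frame_bytes = 2 * 1024 * 1024        # one ~2 MB camera frame
uplink_bps  = 50e6 / 8               # 50 Mbit/s uplink, in bytes per second
rtt_s       = 0.05                   # 50 ms network round trip
cloud_infer = 0.005                  # 5 ms on a data-center GPU
edge_infer  = 0.030                  # 30 ms on an edge accelerator

# Cloud AI pays for upload + round trip before its (fast) inference
cloud_latency = frame_bytes / uplink_bps + rtt_s + cloud_infer
# Edge AI pays only for its (slower) local inference
edge_latency = edge_infer

print(f"cloud: {cloud_latency*1000:.0f} ms, edge: {edge_latency*1000:.0f} ms")
```

Under these assumptions the edge device wins by an order of magnitude even though its processor is slower, because no video ever crosses the network.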
Using this production line as an example, we will look at how each type of AI performs real-time estimation (inference) on products that are constantly flowing past the camera, and compare their characteristics.
*What is an AI model?
An AI model typically refers to a specific neural network architecture together with the values of its trained weights.
Let's consider first the case in which AI is run in the cloud (right side of Figure 1). In Cloud AI, operations are performed on a large server in the cloud. Some drawbacks of this approach include:

- Processing delay: data must make a round trip over the network before a result arrives
- Network load: raw video or sensor data is streamed continuously
- Data confidentiality: confidential raw data leaves the device
- Connectivity: a reliable internet connection is required at all times
These issues can be avoided with an Edge AI approach. In this case, computing resources are built into the edge device (the camera) to process the video data locally.
Edge AI works by running a small AI model on each edge device. The image data is processed on device and only the inference results are transmitted over the network.
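The bandwidth saving from transmitting only inference results can be estimated with simple arithmetic. All figures below (frame rate, compressed frame size, result size) are illustrative assumptions for a single camera.

```python
# Illustrative per-camera daily traffic; assumptions noted inline.
fps = 30
frame_bytes = 200 * 1024      # ~200 KB per compressed video frame
result_bytes = 64             # e.g. a short JSON result: '{"defect": false}'
seconds_per_day = 24 * 3600

# Cloud AI: stream every frame to the server
raw_stream = fps * frame_bytes * seconds_per_day
# Edge AI: send only one small result per frame
edge_stream = fps * result_bytes * seconds_per_day

print(f"cloud: {raw_stream/1e9:.0f} GB/day, edge: {edge_stream/1e6:.0f} MB/day")
```

Under these assumptions the traffic shrinks by the ratio of frame size to result size (3200x here), which is why a factory with many cameras can avoid an expensive network build-out.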
Let's dig a little deeper and consider some advantages and disadvantages to the Edge AI approach.
| No | Item | Advantages/disadvantages |
|---|---|---|
| 1 | Processing delay | Since the AI model is computed on device, there is no network delay and results can be obtained in real time. |
| 2 | CPU size | Depending on the application requirements, AI functions can often run on the existing CPU of the edge device. High-performance applications may need a high-speed CPU or a specialized AI coprocessor chip, which may increase local power consumption. |
| 3 | Network load | Very low (AI results only). |
| 4 | Data confidentiality | Raw data is not exposed outside the device. |
| 5 | Offline operation | Yes. No internet connection is needed, allowing offline operation. |
| 6 | Cost (installation) | If an AI coprocessor or high-speed CPU is needed, the cost of individual devices may be higher. Some employee training may be needed on how to use the new system. |
| | Cost (operational) | Maintenance requirements are minimal. |
| 7 | Power consumption | Some specialized edge AI processing ICs are highly optimized for efficiency and can reduce power by 10x or more. |
In voice and image recognition, edge AI provides high performance and real-time processing capability. Example applications include surveillance cameras, people detection, robotics, and speech-controlled devices.
Cameras can be enabled with built-in Edge AI processing for applications such as a smart lock that uses facial recognition to automatically open the door. Smart cameras can also be used in surveillance applications to detect people entering and leaving an area, or to recognize gestures. In infrastructure applications, Edge AI cameras can detect fallen objects or intruders in controlled areas such as railway tracks or parking lots. There is great promise for AI in smart factories, where it can be used for contamination detection, shape testing, and safety monitoring to detect if workers are too close to heavy machinery.
Voice applications for Edge AI include speech control of lighting or machinery. Hands-free control can be beneficial in many cases where a tablet or touchscreen would be unsuitable because both hands are full or there are high levels of dust or dirt. Small devices such as wearables that lack other means of input are also good candidates, where the user experience can benefit from voice recognition. In addition, Edge AI can be designed to recognize individual speakers to improve security.
As you've seen, Edge AI technology can help solve fundamental limitations posed by Cloud AI. Let's look at how Murata products can help to leverage the potential of Edge AI.
As the leading supplier of miniaturized high-performance communications modules to the smartphone industry, Murata has accumulated a great deal of technical know-how. Due to our reputation for flexibility and customer commitment, we have been approached by the new wave of AI companies to make modules that simplify integration of AI into IoT applications.
An important consideration in designing modules is managing heat dissipation. Modules feature a high density of heat-generating components, and it can be challenging to implement effective heat dissipation to avoid heat-related failures. Murata has applied its proprietary know-how in component layout and mounting to address this challenge.
In addition to heat dissipation, we have also developed shielding measures that suppress electromagnetic noise, preventing noise-related problems and providing higher reliability.
Another platform in which Edge AI is making inroads is single board computers and System-on-Modules (SOMs). Let's compare this approach to a dedicated special purpose AI module.
Because SOMs are built around general-purpose MCUs with many onboard peripherals, they are easy to integrate with external devices and sensors. For instance, by adding a camera module and image recognition software libraries, a simple Edge AI camera prototype can easily be made. On the other hand, running AI software on general-purpose MCUs tends to be inefficient, yielding low performance and high power consumption, which may require a larger power supply and heat sinking.
Alternatively, the efficiency of the system may be greatly improved by interfacing the MCU with an AI accelerator such as Murata's Type 1WV, which is based on Google's Coral chip. Coral is optimized for neural network AI processing, offering 4 TOPS*1 of computing power -- up to a hundred times faster than a typical high-performance MCU -- at a power consumption of less than 2 W. Murata's heat-dissipating module design ensures the chip can work at its maximum speed without the need for a cooling fan.
The extra performance can make the difference between processing one frame every 2 seconds and processing real-time video at 60 fps or higher, enough for a wide variety of smart camera and audio applications.
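That comparison works out to a large multiple, and the 4 TOPS budget translates into a substantial number of operations available per frame. The arithmetic below uses only the figures quoted above; the ops-per-frame line simply divides the stated compute budget by the frame rate.

```python
# The quoted speedup: from 1 frame every 2 seconds to 60 fps.
baseline_fps    = 1 / 2       # MCU alone: 0.5 frames per second
accelerated_fps = 60          # with the AI coprocessor

speedup = accelerated_fps / baseline_fps
print(speedup)  # 120.0

# At 4 TOPS, the compute budget available per frame at 60 fps:
ops_per_frame = 4e12 / accelerated_fps
print(f"{ops_per_frame:.1e} operations per frame")
```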
*1 TOPS: an abbreviation of Tera Operations Per Second, a unit indicating one trillion operations per second.
As the number of IoT devices grows and intelligence moves further to the edge, AI modules will play an important part in delivering smaller products that function at higher speed, consume less power and maximize reliability.
At Murata, we are smoothing the way with modules that squeeze AI performance into tiny footprints at the lowest possible power.
The number of edge devices connected to the internet, including smartphones, tablets, cameras, and sensors, is expected to reach 29 billion worldwide by 2030. In the same period, the global market for edge AI is expected to grow from $15 billion in 2022 to over $107 billion.
As adoption of AI accelerators increases to boost system performance, the edge AI coprocessor market is expected to grow from $2.8 billion in 2022 to approximately $11 billion by 2032, an average annual growth rate of 15.56%.
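As a quick sanity check on those figures, compounding the quoted 15.56% annual rate over the ten-year window lands close to the forecast endpoint (slightly above $11 billion, consistent with "approximately").

```python
# Compound the quoted CAGR over the forecast window (USD billions).
start, rate, years = 2.8, 0.1556, 10   # 2022 market, 15.56% CAGR, to 2032
projected = start * (1 + rate) ** years
print(f"${projected:.1f}B")
```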
The reasons behind this expected expansion of the edge AI market include the following.
Over the past few years, expectations have grown for the advantage edge AI provides in curtailing power consumption.
As mentioned above, cloud computing is the dominant network architecture in today's social infrastructure. However, its power consumption is inefficient: cloud servers operate constantly, and long-distance data transmission incurs heat loss. The same applies to cloud AI.
Therefore, edge computing and edge AI are being promoted as ways to improve power efficiency through distributed processing, in contrast to the centralized data processing of cloud computing. Edge AI completes AI operations on data within the edge device without relying on cloud servers, and because little energy is lost transmitting data, it can contribute to significant energy savings in the infrastructure of a digital society.