The Thinking Factory
Artificial Intelligence (AI) is a powerful tool in industrial systems. In the past, a manufacturing site relied on a seasoned engineer to predict impending machine failure. Now, AI can do the same, acquiring that skill in a matter of days. While AI is the term commonly used, most intelligent technologies are Machine Learning (ML) models built by learning patterns from data. Applications range from detecting imminent bearing failure in drives to assessing the quality of solder joints on Printed Circuit Boards (PCBs).
ML is data-hungry, but thanks to IIoT deployments, industrial systems now provide the required data in abundance. Some ML approaches require labeled data to make sense of the information. However, solutions that work with unlabeled data are also available, relying on self-learning algorithms to find structure on their own.
With some clever mathematics, such as data augmentation or transfer learning, it’s possible to train a model even with a minimal data set.
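As a minimal sketch of how a small data set can be stretched, the snippet below augments a single recorded vibration trace into several synthetic training examples by adding noise and random time shifts. The trace, noise level and shift range are illustrative assumptions, not values from any specific application.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def augment(signal, copies=10, noise_std=0.05, max_shift=20):
    """Expand one trace into several synthetic variants by applying
    random circular time shifts and additive Gaussian noise."""
    out = []
    for _ in range(copies):
        shifted = np.roll(signal, rng.integers(-max_shift, max_shift + 1))
        noisy = shifted + rng.normal(0.0, noise_std, size=signal.shape)
        out.append(noisy)
    return np.stack(out)

# One short recorded trace becomes ten training examples.
trace = np.sin(np.linspace(0, 8 * np.pi, 512))
dataset = augment(trace)
print(dataset.shape)  # (10, 512)
```

Augmentation like this is a common first step; techniques such as transfer learning go further by reusing a model pre-trained on a larger, related data set.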
Once trained, the next step for ML is deployment. AI can be exceptionally processing-intensive, and many applications rely on the power of cloud computing to execute the algorithms. The cloud, however, requires a data pipe large enough to deliver the raw data, which can be a challenge in a manufacturing environment with hundreds of cameras. Instead, engineers are turning to ML deployment at the edge.
ML at the edge means the algorithm works on the data at its source, i.e., where the sensor and local system are located. Because the bulk of the data is processed here, the upstream data connection needs far less bandwidth, and the messages passed on are often limited to a simple pass or fail result. Because such ML tasks are clearly defined, the algorithms can be optimized for the performance offered by embedded systems.
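The bandwidth saving can be sketched as follows: the edge device consumes a full window of raw sensor samples locally and sends only a single byte upstream. The stand-in classifier and its RMS threshold are hypothetical placeholders for a real trained model.

```python
import numpy as np

PASS, FAIL = 0, 1
RMS_LIMIT = 0.8  # hypothetical vibration limit learned offline

def classify_at_edge(raw_window: np.ndarray) -> int:
    """Run the (stand-in) model on the full raw window locally."""
    rms = float(np.sqrt(np.mean(raw_window ** 2)))
    return FAIL if rms > RMS_LIMIT else PASS

def upstream_message(raw_window: np.ndarray) -> bytes:
    """Only one byte leaves the edge device, not the raw samples."""
    return bytes([classify_at_edge(raw_window)])

window = np.random.default_rng(1).normal(0.0, 0.5, 1024)  # 1,024 raw samples
msg = upstream_message(window)
print(window.nbytes, "bytes in ->", len(msg), "byte out")
```

In a real deployment the classifier would be a compiled neural network, but the interface idea is the same: heavy data stays local, only the verdict travels.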
ML for Edge Applications
Field-Programmable Gate Arrays (FPGAs) are one option, optimizing vision system ML models for the lowest-power operation. Microchip’s range of PolarFire® FPGAs operates at five to ten times lower static power and 30 to 50% lower total power than competing technologies, making them well suited to thermal- and power-constrained environments. They are also available as System-on-Chip (SoC) solutions with multicore RISC-V processors to support complete applications.
The VectorBlox™ Accelerator Software Development Kit (SDK) supports algorithm development, integrating with common ML frameworks such as TensorFlow, ONNX and Caffe2. Models can be optimized by removing layers that are only required during the learning phase.
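The idea of stripping training-only layers can be illustrated with a toy model description; the layer list and helper below are purely hypothetical (real conversion tools perform this automatically during export). Dropout, for example, is an identity at inference time, so removing it changes nothing about the model’s output.

```python
# Hypothetical minimal layer description of a trained model.
trained_model = [
    ("conv", {"filters": 16}),
    ("dropout", {"rate": 0.5}),   # only needed while learning
    ("dense", {"units": 10}),
    ("dropout", {"rate": 0.2}),   # only needed while learning
]

def strip_training_layers(layers):
    """Remove layers that only regularize training; inference output
    is unchanged because dropout acts as an identity at inference."""
    return [(name, cfg) for name, cfg in layers if name != "dropout"]

inference_model = strip_training_layers(trained_model)
print([name for name, _ in inference_model])  # ['conv', 'dense']
```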
Thanks to quantization, the number of bits needed can also be reduced, from 32-bit floating point down to 8-bit integers. This delivers almost the same accuracy while using a quarter of the memory.
For less complex applications, today’s microcontrollers (MCUs) offer more than enough performance to execute the neural networks behind ML. Devices such as the PIC32 or SAM families of MCUs, with their rich sets of analog and digital peripherals and communication interfaces, are supported by “tinyML” platforms.
ML models can be built by analyzing time-series data, audio or images. Supported by online tools, training is fast, delivering a model that can be imported into application source code in a matter of hours.
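As a sketch of the time-series workflow, the example below extracts two simple features (RMS and peak amplitude) from vibration windows and trains a nearest-centroid classifier to separate healthy from faulty behavior. The data is synthetic and the feature set deliberately tiny; a tinyML platform would train a small neural network instead, but the pipeline shape is the same.

```python
import numpy as np

rng = np.random.default_rng(3)

def features(window):
    """Simple time-series features often fed to tinyML models."""
    rms = np.sqrt(np.mean(window ** 2))
    peak = np.max(np.abs(window))
    return np.array([rms, peak])

# Synthetic stand-in data: healthy = low vibration, faulty = high.
healthy = rng.normal(0.0, 0.2, (50, 256))
faulty = rng.normal(0.0, 0.7, (50, 256))

centroids = {
    "healthy": np.mean([features(w) for w in healthy], axis=0),
    "faulty": np.mean([features(w) for w in faulty], axis=0),
}

def predict(window):
    """Assign the window to the nearest class centroid."""
    f = features(window)
    return min(centroids, key=lambda k: np.linalg.norm(f - centroids[k]))

print(predict(rng.normal(0.0, 0.2, 256)))
```

Once validated, a model like this is typically exported as a C array or source file and compiled into the MCU firmware.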
Smart Predictive Maintenance