Industrial AI Data Pipelines for High-Volume Sensor Processing and Anomaly Detection
Overview
About the Client
A major automotive parts manufacturer with five production facilities in North America deployed IoT sensors on more than 400 machines to enable automated maintenance and improve operational efficiency. Each plant generated more than 50 million sensor data points per day from vibration sensors, temperature monitors, pressure gauges, production counters, and quality inspection equipment.
However, the sheer volume of unstructured sensor data, combined with outdated maintenance management systems and a patchwork of production planning tools, prevented the company from realizing the anticipated benefits of its IoT investment. Equipment downtime was unpredictable, maintenance remained reactive rather than predictive, and production efficiency insights were buried in data the team could not effectively sort or analyze.
Challenges Faced
The manufacturer faced massive data processing issues that prevented it from extracting actionable insights from its IoT infrastructure. Sensor data arrived in inconsistent formats from different equipment suppliers, with timestamps misaligned across factory floor networks, missing values caused by intermittent sensor connectivity, and no standardized equipment identifiers linking sensors to specific machines.
The maintenance management system operated independently of the IoT data, storing work orders, equipment manuals, and maintenance history in formats incompatible with real-time sensor feeds. Production planning systems did not incorporate real-time equipment performance data, producing schedules that ignored degrading machine performance and imminent breakdowns. Data quality problems plagued analysis efforts: roughly 15% of sensor readings required cleaning due to drift, outliers, or calibration errors.
The engineering team lacked the expertise and resources to build custom data pipelines for every sensor type and use case, leaving a growing backlog of valuable operational data unused. Unplanned downtime averaged 8.8% of production time, costing the company around $14 million per year in lost production and emergency repairs.
Our Solution
Blueflame Labs implemented an industrial-grade AI data processing pipeline purpose-built for high-volume IoT sensor data and manufacturing analytics. Our data cleaning engine processed sensor data in real time: correcting timestamp misalignment across factory networks, detecting and flagging sensor drift that required calibration, interpolating missing values with contextual AI models trained on machine-specific behavior patterns, and removing outliers and noise while preserving genuine anomaly signals that indicate equipment problems.
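A minimal sketch of the kind of window-level cleaning such an engine performs. The schema (a list of floats with `None` marking dropped samples) and the MAD-based threshold are illustrative assumptions, not the production implementation; the key design point from the text is preserved: suspected outliers are flagged for review rather than silently deleted, so genuine anomaly signals survive.

```python
import statistics

def clean_window(readings, k=5.0):
    """Clean one window of sensor readings (hypothetical schema).

    readings -- list of floats; None marks a sample lost to connectivity gaps.
    Returns (cleaned, flags): cleaned values, plus a per-point outlier flag.
    """
    # 1. Fill missing values by averaging the nearest valid neighbours
    #    (a stand-in for the contextual, machine-specific interpolation models).
    cleaned = list(readings)
    for i, v in enumerate(cleaned):
        if v is None:
            prev = next((cleaned[j] for j in range(i - 1, -1, -1)
                         if cleaned[j] is not None), None)
            nxt = next((cleaned[j] for j in range(i + 1, len(cleaned))
                        if cleaned[j] is not None), None)
            if prev is not None and nxt is not None:
                cleaned[i] = (prev + nxt) / 2
            else:
                cleaned[i] = prev if prev is not None else nxt

    # 2. Flag (but keep) points far from the robust centre of the window,
    #    using median absolute deviation so one spike can't mask itself.
    med = statistics.median(cleaned)
    mad = statistics.median(abs(v - med) for v in cleaned)
    flags = [abs(v - med) > k * mad if mad else False for v in cleaned]
    return cleaned, flags
```

Downstream stages can then decide per flag whether a point is noise to drop or an early fault signature to escalate.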
An intelligent data mapping layer unified sensor identifiers with equipment master data from the maintenance management system, mapped production counters to specific products and customer orders, and linked quality inspection failures to machine performance parameters.
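At its simplest, this kind of unification is a cross-reference lookup from vendor-specific sensor IDs to a canonical equipment identifier. The IDs and table below are invented for illustration; a real deployment would source the cross-reference from the maintenance system's master data.

```python
# Hypothetical cross-reference: each vendor names its sensors differently,
# but all three devices below are mounted on the same stamping press.
SENSOR_XREF = {
    "VIB-A-0042": "PRESS-12",  # vendor A vibration sensor
    "tmp_0042":   "PRESS-12",  # vendor B temperature probe
    "PG/42":      "PRESS-12",  # vendor C pressure gauge
}

def tag_reading(reading, xref=SENSOR_XREF):
    """Attach the canonical equipment ID to a raw reading dict.

    Unmapped sensors get equipment_id None so they can be quarantined
    instead of polluting per-machine analytics.
    """
    return {**reading, "equipment_id": xref.get(reading["sensor_id"])}
```

Once every reading carries a canonical `equipment_id`, joins against work orders, production counters, and inspection results become straightforward key lookups.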
Our AI Data Pipeline Automation continuously ingested more than 250 million sensor readings across all facilities, applying machine learning algorithms to identify early warning signs of equipment degradation, enriching the sensor data with contextual information such as manufacturing schedules, environmental factors, and maintenance history, and producing structured outputs designed for predictive maintenance algorithms.
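The degradation models themselves are proprietary, but the underlying idea of an early warning signal can be sketched with a simple trend statistic: compare a recent exponentially weighted mean of a health metric (say, vibration RMS) against its long-run baseline. The function and span below are illustrative assumptions standing in for the pipeline's actual ML models.

```python
def degradation_score(history, span=20):
    """Ratio of a recent exponentially weighted mean to the early baseline.

    history -- chronological list of a health metric (e.g. vibration RMS).
    A score near 1.0 means stable behaviour; a rising score is an early
    warning of wear. Illustrative stand-in for the pipeline's ML models.
    """
    alpha = 2 / (span + 1)          # standard EWMA smoothing factor
    ewma = history[0]
    for v in history[1:]:
        ewma = alpha * v + (1 - alpha) * ewma
    baseline = sum(history[:span]) / min(span, len(history))
    return ewma / baseline if baseline else float("inf")
```

In practice a threshold on such a score (or a learned equivalent) is what triggers enrichment with schedule and maintenance-history context before an alert is raised.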
The real-time analytics layer delivered live dashboards showing equipment health, automated alerts for anomalies requiring maintenance intervention, and predictive maintenance recommendations with confidence intervals and suggested intervention windows.
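A recommendation like this is essentially a small structured record. The field names and the symmetric-margin interval below are hypothetical, chosen only to show the shape of an alert carrying a prediction, its uncertainty, and an intervention window.

```python
from dataclasses import dataclass

@dataclass
class MaintenanceAlert:
    equipment_id: str
    failure_prob: float          # model's failure probability over the horizon
    ci_low: float                # lower bound of the confidence interval
    ci_high: float               # upper bound of the confidence interval
    window_hours: tuple          # suggested intervention window (start, end)

def make_alert(equipment_id, prob, margin, horizon=(24, 72)):
    """Wrap a model score into the alert record pushed to dashboards.

    margin -- half-width of the confidence interval, clamped to [0, 1].
    Field names and interval construction are illustrative.
    """
    return MaintenanceAlert(
        equipment_id=equipment_id,
        failure_prob=prob,
        ci_low=max(0.0, prob - margin),
        ci_high=min(1.0, prob + margin),
        window_hours=horizon,
    )
```

Keeping the interval and window on the alert itself lets planners weigh uncertain predictions against scheduling cost instead of treating every alert as equally urgent.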
Results
The company achieved significant efficiency improvements and cost savings by using predictive data to drive maintenance decisions:
- 62% reduction in unplanned equipment downtime through proactive maintenance
- $8.7 million in annual savings from prevented production losses and maintenance optimization
- 34% reduction in maintenance costs through the shift from reactive to predictive methods
- 91% prediction accuracy for equipment failures 24 to 72 hours before occurrence
- Real-time production efficiency monitoring enabling dynamic schedule optimization
- 23% improvement in Overall Equipment Effectiveness (OEE) across all facilities
- A complete view of equipment health enabling data-driven capital investment decisions