Sponsored article
By Stéphane Marouani, Country Manager ANZ at MathWorks
The automotive manufacturing sector has long struggled to balance high product quality with low operating expenses. AI-based anomaly detection, an approach that identifies irregular patterns in machine data to predict potential problems before they occur, is an emerging strategy for any auto manufacturer that wants to improve process efficiency, reduce downtime, and raise product quality.
Many engineers and technicians rely solely on manual data inspection or on automated alerts triggered when sensor values cross defined thresholds. These approaches do not scale: engineers cannot analyse thousands of sensors simultaneously and inevitably miss anomalies that manifest as complex patterns. AI-based anomaly detection enables engineers to predict potential failures and optimise maintenance intervals, driving higher reliability, reducing operational costs, and increasing a machine’s lifespan. Creating a robust and accurate anomaly detection system requires a well-thought-out design workflow that involves data gathering, algorithm development, and thorough validation and testing.
Designing an AI-based anomaly detection solution is a comprehensive process, spanning planning and data gathering through deployment and integration. Auto engineers new to AI must understand both algorithm development and the operational environment to develop a solution that effectively identifies potential issues.
The design process for an AI-based anomaly detection system begins with defining the problem and assessing the available sensor data, components, processes, and possible anomalies. Engineers must first determine what constitutes an anomaly and the conditions that categorise data as anomalous.
Gathering data in automotive manufacturing involves using sensors for continuous monitoring and performing manual checks for accuracy. In-line measurement systems collect extensive data during vehicle production, often linked to vehicle identification numbers (VINs). Engineers should use this operational data to train anomaly detection systems for predictive maintenance and quality control. However, processing large data volumes can be costly and time-consuming, and anomalous data can be difficult to collect.
Engineers can also consider generating synthetic data from detailed simulations of machines and their operating environments. With a deep understanding of the system physics, engineers can generate anomalous data from simulations that can be difficult or impossible to obtain from real hardware. Synthetic data is especially useful when real operational or test data is scarce, difficult to obtain, or subject to privacy concerns. However, it’s important to ensure the simulation represents the operational system and models anomalies accurately – engineering expertise is essential to this process.
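As a small sketch of this idea, the snippet below generates a synthetic vibration signal in pure Python. The 50 Hz shaft tone, the injected 157 Hz "fault" harmonic, and the noise level are illustrative assumptions, not a real machine model; in practice a detailed physics-based simulation (for example in Simulink) would play this role.

```python
import math
import random

def simulate_vibration(n_samples, anomalous=False, seed=0):
    """Generate a synthetic vibration signal: a base shaft tone plus noise.

    The injected 157 Hz harmonic is a stand-in for a bearing-fault model;
    all frequencies and amplitudes here are illustrative assumptions.
    """
    rng = random.Random(seed)
    signal = []
    for i in range(n_samples):
        t = i / 1000.0                                   # assume 1 kHz sampling
        value = math.sin(2 * math.pi * 50 * t)           # 50 Hz shaft rotation
        value += 0.1 * rng.gauss(0, 1)                   # sensor noise
        if anomalous:
            value += 0.5 * math.sin(2 * math.pi * 157 * t)  # fault harmonic
        signal.append(value)
    return signal

normal = simulate_vibration(2000)
faulty = simulate_vibration(2000, anomalous=True)
```

Because the fault harmonic adds energy, the anomalous signal carries more power than the normal one, which is exactly the kind of separation a simulation is meant to provide for training.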
Designing an anomaly detection algorithm
The first step in designing an anomaly detection algorithm is organising and preprocessing the data to make it suitable for analysis. This includes reformatting and restructuring the data, extracting the pieces relevant to the problem, handling missing values, and removing outliers.
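A minimal Python sketch of these preprocessing steps, assuming scalar sensor readings with None marking missing values. The median-and-MAD rule used for outlier removal is one common robust choice, not the only one, and real pipelines also handle timestamps, resampling, and sensor-specific scaling.

```python
import statistics

def preprocess(readings, k=5.0):
    """Fill missing sensor readings and drop gross outliers.

    Missing values (None) are filled with the median of observed values;
    samples more than k median-absolute-deviations from the median are
    removed. A sketch only; k=5.0 is an illustrative assumption.
    """
    observed = [v for v in readings if v is not None]
    med = statistics.median(observed)
    filled = [v if v is not None else med for v in readings]
    mad = statistics.median(abs(v - med) for v in filled) or 1.0
    return [v for v in filled if abs(v - med) <= k * mad]

clean = preprocess([1.0, 1.2, None, 0.9, 50.0, 1.1])
```

The median and MAD are used instead of mean and standard deviation because a single extreme outlier (like the 50.0 above) would otherwise inflate the statistics enough to hide itself.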
Next, engineers must select an anomaly detection technique, which requires assessing the data’s characteristics, the nature of the anomalies, and available computational resources.
Daihatsu used AI to identify knocking sounds from its engines
Experimenting with different training approaches for an AI model is crucial to finding the best fit for a specific dataset. At a high level, AI techniques can be divided into supervised and unsupervised learning approaches, depending on the type of data available.
Supervised learning
Supervised learning is used for anomaly detection when chunks of historical data can be clearly labeled as normal or anomalous, such as during dynamometer testing. Engineers who can align data with maintenance logs or historical observations often label the data manually. By training on this labeled dataset, the supervised learning model learns relationships between patterns in the data and their corresponding labels.
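As a deliberately tiny illustration of the supervised idea, the sketch below learns a scalar decision boundary from labeled feature values. The feature values and labels are made up, and a production system would train a richer classifier (for example with MATLAB's machine learning tools); only the principle of learning from labeled normal/anomalous examples is shown.

```python
def fit_threshold(features, labels):
    """Learn a scalar decision threshold from labeled examples.

    Uses the midpoint between the mean feature value of "normal" and
    "anomalous" samples; a minimal stand-in for a trained classifier.
    """
    normal = [f for f, l in zip(features, labels) if l == "normal"]
    anomalous = [f for f, l in zip(features, labels) if l == "anomalous"]
    mean_normal = sum(normal) / len(normal)
    mean_anomalous = sum(anomalous) / len(anomalous)
    return (mean_normal + mean_anomalous) / 2

def predict(threshold, feature):
    """Classify a new feature value against the learned threshold."""
    return "anomalous" if feature > threshold else "normal"

# Illustrative training data: e.g. vibration RMS per test run
threshold = fit_threshold([0.1, 0.2, 0.9, 1.1],
                          ["normal", "normal", "anomalous", "anomalous"])
```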
Daihatsu, a Japanese automobile manufacturer, previously used skilled workers to assess engine knocking sounds, but later automated this process using AI. By utilising machine learning and feature extraction capabilities in MATLAB®, the company developed classification models that allowed for the rapid automated examination of audio signals. This approach enabled Daihatsu to create an AI system capable of identifying anomalous engine knocking sounds with the same accuracy as skilled workers.
Unsupervised learning
Many organisations lack the labeled anomalous data required for a supervised learning approach, either because anomalous data has not been archived or because anomalies are too infrequent to build an extensive training dataset. In these cases, where the training data consists mostly or entirely of normal examples, unsupervised learning is the better fit.
In an unsupervised learning approach, the model is trained to understand the characteristics of normal data, and any new data outside the defined normal range is flagged as an anomaly. Unsupervised models can analyse sensor data to identify unusual patterns that may indicate a problem, even if that type of failure has not been previously encountered or labeled.
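One minimal unsupervised scheme is to model normal data by its mean and standard deviation and flag readings that fall too far outside that range. The sketch below assumes a single scalar sensor; real detectors typically work on multivariate features, with autoencoders, isolation forests, and one-class SVMs as common choices.

```python
class ZScoreDetector:
    """Flag readings far from the distribution of normal training data.

    Trained only on normal data: anything beyond `threshold` standard
    deviations from the learned mean is reported as an anomaly, even if
    that failure mode was never seen before. The 3-sigma default is an
    illustrative convention, not a universal rule.
    """
    def __init__(self, threshold=3.0):
        self.threshold = threshold

    def fit(self, normal_data):
        n = len(normal_data)
        self.mean = sum(normal_data) / n
        self.std = (sum((v - self.mean) ** 2 for v in normal_data) / n) ** 0.5
        return self

    def is_anomaly(self, value):
        return abs(value - self.mean) > self.threshold * self.std

detector = ZScoreDetector().fit([1.0, 1.1, 0.9, 1.0, 1.05, 0.95])
```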
Feature engineering
Although some AI models are trained on raw sensor data, it is often more effective to extract useful features from the data before training a model. Feature engineering is the process of extracting meaningful quantities from raw data, which helps AI models learn more efficiently from the underlying patterns. Experienced engineers may already know the critical features to extract from the sensor data.
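A small Python sketch of feature extraction from one window of raw sensor samples. RMS, peak-to-peak range, and crest factor are standard vibration features, but the right feature set depends on the machine and is exactly where the domain expert's knowledge enters.

```python
def extract_features(window):
    """Condense a raw sensor window into a few descriptive features.

    Returns RMS energy, peak-to-peak range, and crest factor; these are
    common vibration features, chosen here for illustration.
    """
    n = len(window)
    rms = (sum(v * v for v in window) / n) ** 0.5
    peak_to_peak = max(window) - min(window)
    crest = max(abs(v) for v in window) / rms if rms else 0.0
    return {"rms": rms, "peak_to_peak": peak_to_peak, "crest_factor": crest}
```

Feeding a model three such numbers per window, instead of thousands of raw samples, is what lets it learn efficiently from the underlying patterns.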
Engineers use AI to process sensor data and extract patterns on edge devices during automotive manufacturing.
Validating, testing, and deploying AI models
Validating and testing AI models ensures their reliability and robustness. Typically, engineers split the data into three parts: training, validation, and test sets. Training and validation data are used to tune the model parameters during the training phase, and test data is used after the model is trained to determine its performance on unseen data. Engineers can also evaluate the model using performance metrics, such as precision and recall, and fine-tune it to meet the needs of the specific anomaly detection problem.
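The split-and-score procedure can be sketched as follows. The 60/20/20 split ratio is an illustrative assumption, and real workflows usually shuffle or stratify the data before splitting.

```python
def split_data(samples, train=0.6, val=0.2):
    """Partition samples into training, validation, and test sets."""
    n = len(samples)
    a = round(n * train)
    b = a + round(n * val)
    return samples[:a], samples[a:b], samples[b:]

def precision_recall(predicted, actual):
    """Compute precision and recall for binary anomaly labels (True = anomaly)."""
    tp = sum(p and a for p, a in zip(predicted, actual))
    fp = sum(p and not a for p, a in zip(predicted, actual))
    fn = sum(not p and a for p, a in zip(predicted, actual))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall
```

Precision penalises false alarms, recall penalises missed anomalies; which matters more depends on the cost of an unnecessary maintenance stop versus an undetected failure.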
A trained and tested AI model becomes valuable when deployed in operation and begins making predictions on new data. Engineers consider computational needs, latency, and scalability factors when selecting an appropriate deployment environment. This ranges from edge devices close to the manufacturing process for real-time anomaly detection to cloud platforms with nearly unlimited computational power but higher latencies.
Integration requires developing APIs to access the model’s predictions and establishing data pipelines to ensure the model receives properly formatted and preprocessed input. This ensures the model works with the application or system’s components and delivers its full value.
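A hypothetical request handler illustrating this integration step. The payload field names and the model's `is_anomaly` method are assumptions made for the sketch, not a real REST or MathWorks API; a production pipeline would add authentication, schema validation, and the preprocessing described earlier.

```python
def predict_endpoint(payload, model):
    """Minimal request handler: validate input, then query the model.

    `payload` mimics a JSON body like {"sensor_id": ..., "readings": [...]};
    `model` is any object exposing an is_anomaly(value) method. Both are
    illustrative assumptions, not a real API contract.
    """
    readings = payload.get("readings")
    if not readings:
        return {"error": "missing readings"}
    flagged = [v for v in readings if model.is_anomaly(v)]
    return {"sensor_id": payload.get("sensor_id"), "anomalies": flagged}
```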
Conclusion
AI-based anomaly detection is driving more efficient manufacturing operations as the automotive industry evolves. Engineers can use AI models to process sensor data on edge devices and extract patterns from that data, making it easier to identify issues before significant failures arise. This approach can reduce defects, extend machine lifespan, and lower operational costs.
Pictures: supplied
Stéphane Marouani is Country Manager ANZ at MathWorks.