Data Engineering Architectures for Real-Time Quality Monitoring in Paint Production Lines
Abstract
A wide range of industries, including automotive, electronics, consumer goods, and food, relies on manufacturing processes to produce their products, and quality monitoring is critical to guaranteeing the quality of the items coming off the production line. Various approaches, both model-based and data-driven, have been developed to monitor the quality of produced items. Most of these methods, however, assert quality only after the entire production process has finished, using the resulting inspection outcomes: each item that passes through the quality-control step is labeled either "approved" or "not approved", and the overall quality of the production process is judged from the proportion of approved items. Consequently, a significant number of faulty items may be produced before any are rejected. An automated, real-time quality monitoring approach that raises an alert as soon as a fault arises in the manufacturing process is therefore highly desirable. Although many quality monitoring approaches have been proposed and successfully deployed, they do not account for the streaming nature of production data and have not been evaluated in streaming environments, where samples are processed in the order they are generated and training is, in principle, never-ending. This article proposes the first real-time quality monitoring methodology built on the deep learning architecture known as Neural Networks with Dynamically Evolved Capacity (NADINE). The proposed method extends NADINE with 1-D and 2-D convolutional layers that process the time-series and visual data streams captured from the sensors and cameras of the production line, respectively.
The extended NADINE performs online quality monitoring for both streaming time-series and streaming visual data: it adaptively evolves its capacity by adding neurons and their connection weights on the fly, and it trains the newly added capacity gradually using a first-in-first-out replay buffer.
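The evolving-capacity and replay-buffer mechanism described above can be sketched as follows. This is a minimal illustrative sketch, not NADINE's implementation: the `EvolvingNet` class, the running-error growth trigger, and all thresholds are assumptions chosen for illustration, and the 1-D/2-D convolutional feature extractors are omitted.

```python
from collections import deque
import numpy as np

class EvolvingNet:
    """Tiny one-hidden-layer regressor whose capacity grows on the fly.

    Illustrative only: NADINE's actual growth/pruning criteria are not
    reproduced here.
    """
    def __init__(self, n_in, lr=0.05, seed=0):
        self.rng = np.random.default_rng(seed)
        self.W1 = self.rng.normal(scale=0.5, size=(1, n_in))  # start with one hidden unit
        self.w2 = np.zeros(1)                                 # output weights
        self.lr = lr

    def forward(self, x):
        h = np.tanh(self.W1 @ x)
        return float(self.w2 @ h), h

    def grow(self):
        # Add one hidden neuron; a zero output weight leaves predictions unchanged.
        new_row = self.rng.normal(scale=0.5, size=(1, self.W1.shape[1]))
        self.W1 = np.vstack([self.W1, new_row])
        self.w2 = np.concatenate([self.w2, np.zeros(1)])

    def sgd_step(self, x, y):
        pred, h = self.forward(x)
        err = pred - y
        grad_w2 = err * h
        grad_W1 = (err * self.w2 * (1.0 - h * h))[:, None] * x[None, :]
        self.w2 -= self.lr * grad_w2
        self.W1 -= self.lr * grad_W1
        return abs(err)

def stream_train(samples, buffer_size=32, grow_threshold=0.5):
    """Process samples strictly in arrival order, as in a data stream."""
    net = EvolvingNet(n_in=len(samples[0][0]))
    replay = deque(maxlen=buffer_size)   # first-in-first-out replay buffer
    error_ema = 0.0
    for x, y in samples:
        err = net.sgd_step(np.asarray(x, dtype=float), y)
        error_ema = 0.9 * error_ema + 0.1 * err
        if error_ema > grow_threshold:   # rising running error stands in for drift
            net.grow()
            error_ema = 0.0
        replay.append((x, y))
        # Replay a few buffered samples so newly added units are trained gradually.
        for bx, by in list(replay)[-4:]:
            net.sgd_step(np.asarray(bx, dtype=float), by)
    return net, replay
```

On a synthetic stream whose target shifts partway through, the running error rises, triggering neuron additions, while the bounded deque guarantees that old samples are evicted first-in-first-out rather than stored indefinitely.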
Article Details
License

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.