Quality management is a constant and significant concern in enterprises. Effectively determining correct solutions to complex quality problems helps avoid increased backtesting costs. This study proposes an intelligent quality control method for manufacturing processes based on a human-cyber-physical (HCP) knowledge graph. It is a systematic method that encompasses the following elements: data management and classification based on HCP ternary data, HCP ontology construction, knowledge extraction for constructing an HCP knowledge graph, and comprehensive application of quality control based on HCP knowledge. The proposed method implements case retrieval, automatic analysis, and assisted decision making based on an HCP knowledge graph, enabling quality monitoring, inspection, diagnosis, and maintenance strategies for quality control. In practical applications, the proposed modular and hierarchical HCP ontology exhibits significant superiority in terms of the shareability and reusability of the acquired knowledge. Moreover, the HCP knowledge graph deeply integrates the provided HCP data and effectively supports comprehensive decision making. The proposed method was implemented in cases involving an automotive production line and a gear manufacturing process, and its effectiveness was verified by the deployed application system. Furthermore, the proposed method can be extended to other manufacturing process quality control tasks.
Shilong Wang, Jinhan Yang, Bo Yang, Dong Li, Ling Kang.
An Intelligent Quality Control Method for Manufacturing Processes Based on a Human-Cyber-Physical Knowledge Graph.
Engineering, 2024, 41(10): 256-274. DOI: 10.1016/j.eng.2024.03.022
Customers' increasing demand for high product quality must be satisfied to guarantee the survival and development of enterprises; in addition, it is a crucial factor for market development. Reasonable quality management is essential for improving product quality and controlling costs, thereby strengthening market competitiveness and providing high profits. In the 20th century, quality management evolved through three stages: inspection quality control (IQC), statistical quality control (SQC), and total quality control. IQC and SQC are based on backtesting and sampling inspections for prevention, respectively; however, they are no longer effective at guaranteeing and improving product quality, nor are they sufficiently automated or intelligent.
Quality control is an essential part of total quality management (TQM), and robust quality control techniques and methods are critical for achieving TQM. The advancement of information technology has fostered the improvement of quality control methods, such as data processing and analysis, intelligent diagnostic methods, and quality management systems. Studies have extensively exploited manufacturing process data and computing power; however, the resulting approaches still lack sufficient intelligence. Presently, the intelligence industry is emerging from the fourth industrial revolution, engendering new possibilities and imposing more quality management requirements [1], [2], [3]. Facilitated by the new generation of artificial intelligence (AI), manufacturing systems in the intelligence industry incorporate human, cyber, and physical capabilities, thereby facilitating deep information perception, intelligent optimization, autonomous decision-making, precise control, and efficient execution [4]. Therefore, the automation and intelligence of quality control methods and quality management systems should be continuously improved to develop intelligent systems characterized by self-awareness, self-comparison, self-prediction, self-optimization, and functional self-restoration [5].
Statistical process control (SPC), first proposed by Shewhart, is widely recognized as an effective quality control tool [6]. SPC mines the connections between statistical features and targets based on statistics and provides a warning system to promptly monitor the process. In recent decades, based on research and practice, many advanced control charts, such as the cumulative sum, exponentially weighted moving average, and synthetic control charts, have emerged [7]. These techniques have enabled SPC systems to achieve significantly fewer defects per million opportunities and facilitate the application of engineering process controls (EPCs) [6]. EPC uses control equations to automatically monitor and frequently adjust parameters by employing feedforward or feedback control. Hence, it is necessary to specify inputs and outputs and construct an accurate mathematical or control model. However, complex manufacturing systems generate large amounts of process data with characteristics such as high dimensionality, dynamics, and uncertainty [8]. Model-based approaches, such as SPC and EPC, can no longer capture such features or the relationships between them for deeper optimization and control.
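For illustration, the following minimal Python sketch implements a textbook exponentially weighted moving average (EWMA) control chart of the kind referenced above; the smoothing constant, control-limit width, and in-control estimates are illustrative assumptions rather than values from this study.

```python
import numpy as np

def ewma_chart(x, lam=0.2, L=3.0):
    """Textbook EWMA control chart: z[t] = lam*x[t] + (1-lam)*z[t-1], with
    time-varying limits mu +/- L*sigma*sqrt(lam/(2-lam)*(1-(1-lam)^(2t)))."""
    x = np.asarray(x, dtype=float)
    mu, sigma = x.mean(), x.std(ddof=1)  # in practice, estimated from an in-control phase
    z = np.empty_like(x)
    z[0] = lam * x[0] + (1 - lam) * mu
    for t in range(1, len(x)):
        z[t] = lam * x[t] + (1 - lam) * z[t - 1]
    t = np.arange(1, len(x) + 1)
    width = L * sigma * np.sqrt(lam / (2 - lam) * (1 - (1 - lam) ** (2 * t)))
    out_of_control = np.where((z > mu + width) | (z < mu - width))[0]
    return z, out_of_control
```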
TQM is a modern quality management method that promotes the application of AI for quality control [7]. Deep learning, neural networks, support vector machines, genetic algorithms, machine vision, and several other intelligent algorithms have been applied to quality control tasks for feature extraction and identification, quality diagnosis, and quality prediction [9]. Data-driven quality control systems can effectively learn and mine historical data, construct agent models, predict potential quality problems, reduce reliance on expert experience, and provide references for companies to achieve greater intelligence [10]. Previous studies have demonstrated the ability of AI technologies to capture complex features hidden in manufacturing processes without requiring knowledge of the underlying principles, thus presenting flexible and robust performance.
Data-based analysis methods cannot effectively process incomplete information and thus underperform on unexpected problems and in comprehensive processing scenarios; expert systems fill this gap. Expert systems represent expert knowledge through computer symbols and process knowledge through a reasoning machine that imitates the human brain, thereby integrating data and knowledge [11]. However, the knowledge representation mechanisms of conventional expert systems require specific data structures, and the knowledge base is populated by humans. As more heterogeneous knowledge accumulates, accurately representing knowledge and reasoning over it efficiently becomes difficult, which hinders knowledge sharing and reuse. Therefore, ontologies are widely used for knowledge representation and exploitation. They not only provide clear descriptions of knowledge with explicit specifications but also enable knowledge inference using reasoning machines. Ontology-based knowledge graphs provide scalability and interpretability by organizing several heterogeneous concepts with interconnected nodes and defining their arbitrary semantic relationships using ontology-based schemas [12]. Many recent studies on quality control have investigated decision-making methods with knowledge assistance. The existing studies suggest that knowledge-assisted decision making is effective at reducing the reliance on manual decision making, enhancing the sharing and reuse of knowledge, and improving the ability of systems to solve complex problems [13], [14], [15].
Despite significant progress in IQC research, data-driven approaches cannot solve certain problems or conduct reasoning, and thus they still rely on experts for complete quality control. In addition, knowledge-driven approaches require a complete knowledge base, which is practically unattainable. The combination of knowledge- and data-driven approaches forms a new synergistic methodological path, which is expected to provide a broader scope for research and applications involving complex control systems. Accordingly, this study proposes a quality control method for manufacturing processes based on a human-cyber-physical (HCP) knowledge graph. First, a new concept called “HCP ternary data” is proposed, based on which an HCP ontology model and a method for constructing an HCP knowledge graph are developed. Subsequently, the process data, expert experience, and computer resources for quality control are deeply integrated and stored in the HCP knowledge graph. Finally, a reasoning machine is used to process, analyze, diagnose, and make decisions regarding quality problems.
The remainder of this paper is organized as follows: Section 2 briefly presents the existing studies on knowledge graphs in manufacturing. Section 3 describes the framework of the proposed quality control method for manufacturing processes based on an HCP knowledge graph, including the relevant concepts, methods, and processes. Section 4 describes the HCP ontology model and the development of the HCP knowledge graph. Section 5 explains the mechanism through which the proposed method can be applied to implement quality control. Section 6 presents the validation of the proposed methodology based on its application.
2. Related works
As discussed in Section 1, fusing an HCP ontology for use in quality control tasks is an essential problem in HCP knowledge graph-based methods. This section comprehensively reviews ontology-based knowledge representation methods for industrial knowledge graphs and the application of knowledge graphs to quality management based on the existing studies.
2.1. Knowledge representation and modeling
Industrial knowledge graphs are widely used in product design and development, workshop planning and scheduling, manufacturing quality control, equipment failure diagnosis, and maintenance, as presented in Table 1 [14], [16], [17]. Modern advanced technologies, such as the new generation of AI, the Internet of Things (IoT), sensors, and cyber-physical systems (CPSs), which are integrated into intelligent factories, promote deeper integration of HCP knowledge. Furthermore, the knowledge generated from heterogeneous data in different application domains exhibits variable patterns and knowledge granularity. Shaped by the individual behavior of participants, development processes, tools, or environmental factors, most early knowledge representation methods relied on particular data structures and semantic descriptions, which led to poor interoperability [18], [19].
In the era of Industry 4.0, ontologies and ontology-based knowledge graphs have become essential tools for representing knowledge and supporting integration and interoperability [20], [21]. Early research on manufacturing domain ontologies transformed previous cases and experiences regarding manufacturing problems and solutions into semantic associations by abstracting the resources of manufacturing scenarios. Kitamura et al. [22] developed an ontological modeling framework for functional knowledge based on a function-behavior-structure framework concerning the functionality of artifacts. Prestes et al. [23] defined a robotics and automation core ontology in generic terms as a standard method in robotics and automation. For higher-level integrated applications, integrating existing ontologies and developing modular ontologies provides opportunities to reuse domain knowledge. Cheng et al. [24] proposed a manufacturing ontology with five dimensions (bases, products, processes, equipment, and parameters) using a modular and reusable approach that covered the entire process. In addition, the hierarchical construction approach enables the development of scalable ontology models with arbitrary dimensions and granularity levels. Adopting an ontology with a conceptual hierarchy and logical reasoning for semantic modeling, Järvenpää et al. [25] imported a resource model into a capability model to obtain a generic formal resource-capability ontology that formed matching relationships between resources and capabilities. Lin et al. [26] proposed an ontology-based hierarchical and modularized resource-unified description model that guaranteed resource description consistency and independence and effectively eliminated resource heterogeneity and the isolation of whole-lifecycle data. Previous studies have revealed the practical effects of modular and hierarchical ontologies, such as resource, equipment, process, and predictive maintenance ontologies, on manufacturing domain knowledge. Various descriptive frameworks have been proposed to provide unified representations of complex and diverse knowledge in the manufacturing domain, supporting the process of constructing knowledge graphs for the entire manufacturing system and implementing decision-making for complex manufacturing tasks.
2.2. Applications of knowledge graphs in quality management
A typical feature of industrial knowledge graphs is the representation and storage of historical experiences and cases. Moreover, with the effective query and similarity-matching capabilities of reasoning machines, the knowledge required to solve manufacturing problems is provided in a timely, accurate, and comprehensive manner to assist in the generation of decision solutions.
Knowledge-based quality control requires a complete and accurate knowledge framework, as in the application areas of fault diagnosis and equipment operation and maintenance. Therefore, ontological modeling is performed to obtain theoretical knowledge and specific elements. Zhou et al. [27] proposed a method of extracting and semantically fusing the knowledge acquired from tables and then constructed a knowledge graph to build a historical case knowledge base, which enabled the identification and inference of potential causes of product processing defects. He and Jiang [28] verified that it was practical to clarify problem phenomena, analyze problem causes, and generate solutions for production problems by querying a knowledge graph, and used a parameter optimization case to demonstrate this notion. Zhou et al. [29] used a Java Expert System Shell (JESS) rule engine and established an ontology model with failure mode, effects, and criticality analysis mechanisms to determine the causes of wind turbine failures. These studies integrated knowledge from different sources and data in complex forms through sophisticated natural language processing technologies and supported companies in efficiently accessing the required knowledge. However, specialized knowledge models and frameworks do not provide reasonable shareability or reusability in complex decision-making applications.
The most significant aspect of existing industrial knowledge graphs is that they obtain the same or highly similar solutions through case similarity matching based on current empirical knowledge. Case-based reasoning (CBR) and rule-based reasoning (RBR) are key inference methods for expert systems and industrial knowledge graphs, particularly for quality control tasks. However, CBR-based fault diagnosis fails in the absence of a corresponding case. In addition, RBR does not perform satisfactorily when addressing low-frequency connections and low-connection graphs, that is, sparse data cases. Moreover, owing to the poor semantic understanding capabilities of CBR and RBR, combinations of CBR and RBR have been introduced to facilitate ontological reasoning. Tung et al. [30] developed a solution retrieval system based on a hybrid approach of RBR and CBR, called rule-based CBR (RCBR). Their experiments demonstrated that RCBR performed reasonably well in terms of reducing time costs and attaining improved accuracy during case retrieval. Xu et al. [31] used a reasoning machine that combined CBR and RBR to perform case retrieval in an ontology-based fault diagnosis model, thereby solving the problem of diagnostic failure in the absence of a retrieved case.
In addition, based on the knowledge services provided for specific manufacturing problems, query conditions or results must be further processed for complex problems. Hybrid methods combining knowledge- and data-driven approaches in manufacturing are advancing, addressing this challenge. Zhou et al. [32] implemented a hybrid diagnosis method that semantically mapped the features extracted from a signal onto entities in a knowledge graph. Chen et al. [33] proposed a data-knowledge hybrid-driven gas path analysis method, in which the features extracted from the data were mapped to a knowledge graph. Although the aforementioned studies significantly improved the completeness of domain knowledge and provided comprehensive decision-making capabilities based on knowledge graphs, automatic and complex decision-making is difficult to achieve using knowledge graphs, as presented in Table 2. To fill this gap, a knowledge graph that can combine manufacturing process data, experience, and methods is required to build a complete knowledge framework and automate the entire quality control process.
3. Framework of the proposed manufacturing process quality control method based on an HCP knowledge graph
This study proposes a quality control method for manufacturing processes based on an HCP knowledge graph. A new concept called “HCP ternary data” is proposed to classify and manage domain theories, methods, and manufacturing data. In addition, a systematic methodology and key technologies are proposed. Fig. 1 depicts the framework of the manufacturing process quality control method developed based on an HCP knowledge graph consisting of the data, HCP knowledge graph, and application layers.
3.1. Definition of HCP ternary data
The rapid development of emerging technologies, such as information technology and AI, is advancing the integration of human, cyber, and physical technologies. With the increasing demand for intelligent manufacturing, promoting the connection, collaboration, and integration of human, cyber, and physical data is conducive to building an intelligent industry with self-learning, self-adaptation, and continuous evolution capabilities. The deep integration of HCP data contributes to achieving intelligent, refined, and efficient manufacturing data management processes and higher utilization levels [4].
The analysis and management of manufacturing data are the bases for guaranteeing that knowledge sources are valuable and reliable. Categorizing and managing data according to their characteristics and intrinsic links facilitates efficient data management and utilization, and benefits the refinement and representation of structured knowledge. Furthermore, a comprehensive and structured framework for describing knowledge facilitates extensive data applications and solves complex manufacturing decision problems.
In manufacturing, humans process uncertain information while generating large amounts of empirical knowledge. Cyber systems, which primarily refer to computers and computing resources, can process, calculate, and store massive amounts of data; however, they cannot process uncertain information. Physical data are obtained objectively from existing physical objects, generally referring to the data of production equipment and the data generated by production processes; these are the primary data on which humans base decisions and computers perform calculations.
Therefore, knowledge graphs are important for fusing HCP ontologies to facilitate intelligent data management and utilization in complex manufacturing environments. In the HCP ternary data, human, cyber, and physical data are defined as follows:
(1) Human data refer to the expert knowledge accumulated when solving actual manufacturing process problems; they are often observed in processing records, technical manuals, and experience notes at manufacturing sites, and these data are primarily unstructured.
(2) Cyber data refer to the algorithms, mathematical models, methods, and tools used to analyze, identify, diagnose, and address manufacturing problems, together with their extracted characteristic data; these data are primarily semi-structured and unstructured.
(3) Physical data refer to the operating parameters of the equipment and the production data generated by a manufacturing process; these data are primarily semi-structured and structured.
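As a concrete, purely illustrative reading of these three definitions, the categories can be encoded as simple record types; the field names below are our own assumptions for a sketch and are not part of the proposed ontology.

```python
from dataclasses import dataclass, field
from typing import Any

@dataclass
class HumanData:
    """Unstructured expert knowledge: processing records, manuals, experience notes."""
    text: str
    source: str  # e.g., "processing record" or "technical manual"

@dataclass
class CyberData:
    """Semi-/unstructured algorithmic knowledge: models, methods, extracted features."""
    method_name: str
    input_features: list[str]
    output_features: list[str]

@dataclass
class PhysicalData:
    """Semi-/structured equipment and production data."""
    entity: str                                               # machine or sensor identifier
    attributes: dict[str, Any] = field(default_factory=dict)  # parameters and readings
```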
3.2. Data layer
The data layer, including data acquisition and management mechanisms, serves as the source for the database and knowledge base. According to the provided quality management content, the data primarily cover quality monitoring, quality inspection, quality diagnosis, and maintenance strategy information. Manufacturing process quality data are obtained from three primary sources: production, quality management, and documentation. The production site is the most direct source of data and generates data through sensors, equipment operation records, and production records. A quality management system is used to preprocess, count, process, and analyze raw data and obtain refined and new quality data. Technical documentation, basic equipment information, manuals, and design documents are key data sources.
The manufacturing process quality data are divided into three parts according to the definition of the HCP ternary data, which are used to construct an HCP ontology and a knowledge graph. Human data are primarily reflected in descriptions and records of quality problems, problem tracing, and solutions. Cyber data refer to computer algorithms and models used to analyze process data, product quality data, equipment status data, and various features extracted from multiple types of heterogeneous data. Physical data refer to data obtained from various sensors, controllers, and testing equipment, which represent processes, product quality, and equipment status with many data attributes and property values.
3.3. HCP knowledge graph layer
The HCP knowledge graph layer consists of three critical steps: ontology construction, knowledge processing, and knowledge graph construction. The HCP ontology is constructed based on expert experience, and the HCP knowledge graph is constructed in a top-down manner. The HCP knowledge graph adopts a hierarchical and modular description framework to produce a unified description of the quality management knowledge contained in the manufacturing process of interest. HCP knowledge is extracted from the HCP ternary data using sophisticated natural language processing, data analysis, and processing techniques. This HCP knowledge is semantically mapped to the HCP ontology, and the ontology model is instantiated to obtain the HCP knowledge graph.
3.4. Application layer
The application layer exploits the constructed HCP knowledge graph through a reasoning machine. This reasoning machine is developed based on a graph convolutional network (GCN) and ontological reasoning with interpretability and intelligent algorithms. By semantically correlating the phenomena and features of quality problems with those of matching historical cases, maintenance technicians can rapidly localize and process quality problems. For complex quality problems, the system can analyze the provided data, select an appropriate method or tool, and propose and execute the corresponding solution, thus obtaining accurate diagnoses and solutions. The HCP knowledge graph accumulates and reuses existing knowledge to help identify the causes of quality problems and provides solutions to these problems while updating the newly generated knowledge in the knowledge base.
4. Knowledge modeling for an HCP knowledge graph
An ontology-based knowledge graph can effectively store data and knowledge, and support quality control tasks. Therefore, a complete HCP ontology model is constructed based on the content and characteristics related to quality control. An effective knowledge-extraction method is developed to construct an HCP knowledge graph automatically.
4.1. HCP ontology construction
Manufacturing-domain knowledge graphs have high requirements in terms of the depth and accuracy of knowledge. A descriptive model that is appropriate for both human and computer understanding is vital for improving operational and human-computer interaction capabilities. Based on the HCP ternary data, a unified description model, the HCP ontology, is established for massive, heterogeneous, and independent manufacturing process quality data. The core of the HCP ontology defines the primary classes, properties, and hierarchy of the quality control domain. This framework produces a clear and accurate representation while ensuring that the knowledge graph is highly extensible and reusable. The HCP ontology comprises human, cyber, and physical modules. Each module contains hierarchical descriptive attributes and corresponding attribute values to guarantee an accurate description of each part of the input knowledge. Finally, all the modules are fused into a unified description framework.
Human knowledge consists of objective descriptions and subjective decisions associated with manufacturing process quality problems, the main contents of which are shown in Fig. 2. The core of the human ontology comprises four main elements of quality management content. As shown in Fig. 3, “hPhenomenon” is the core, and “hCause,” “hDiagnosticMethod,” “hMaintenance,” and “hPrevention” are the contents; this paradigm is equivalent to the “problem-cause-solution” model. Each core class has its corresponding attributes and values. For example, “hPhenomenon” includes descriptive characteristics, such as “hLocation,” “hTime,” and “hDegree,” as well as a special “hAttribute,” such as a fault code. The human knowledge ontology describes human knowledge in a unified manner based on the “central concept-main contents-attributes-values” framework, which makes the ontology expandable and universal.
A cyber ontology is a unified model that summarizes the core elements of the tools and methods used in the quality control process, as shown in Fig. 4. The abstract model has “cOutputFeature,” “cInputFeature,” and “cParameter” as user-oriented attributes and includes intrinsic attributes, such as “cMethodType” and “cAttribute.”
As illustrated in Fig. 5, the physical ontology comprises two parts: production resource entities and production data. The types of entities are distinguished by different relationships and their “pType” attributes. All entities are characterized by three aspects, “pIntrinsicAttribute,” “pOperationalAttribute,” and “pCondition,” which also serve as data sources for human analyses and computer operations.
The HCP ontology model integrates the human, cyber, and physical components to form a comprehensive ontology. Fig. 6 shows that this model represents knowledge of the respective HCP aspects and their interrelationships. One part of a class is mapped to another class with a corresponding relationship. For example, “hLocation” and “hCauseLocation” in the human ontology are mapped to “pEntity” in the physical ontology. All attributes and values in the human and physical ontologies can be used as inputs for the cyber ontology. The set {relations:Comparison} represents the relationships between all the attributes and values, as presented in Table 3.
The integrated ontology model defines basic classes, interclass relationships, and related attributes. The HCP ontology was constructed using the Protégé ontology editor and supplemented with functions and axioms. As shown in Fig. 7, the HCP ontology is centered on owl:Thing, which is divided into three HCP panels and hierarchically differentiated concept classes.
4.2. Development of reasoning rules in the HCP ontology
Semantic web rule language (SWRL), a semantic rule description language, can combine concepts described by a web ontology language (OWL) ontology to compensate for the limitations of OWL in rule description and reasoning tasks [34]. By invoking the established ontology and custom SWRL rules, the ontology can be further refined and the logical relationships between ontology terms defined until a reasonable rule base is formed. It has been proven and studied thoroughly that ontologies developed using Protégé with SWRL rules effectively enhance the reasoning process and allow interoperability between various reasoning engines; the basic theory and examples were investigated in detail [35], [36].
The constructed SWRL rules are presented based on an analysis of the classes and properties of the ontology model. To easily understand SWRL, two basic syntax elements are introduced as follows:
(1) C(?x): if ?x is an instance of class C, or the value of its data property, then C(?x) is established;
(2) P(?x, ?y): if ?x and ?y are associated with property P, then P(?x, ?y) is valid.
These basic atoms suggest that we can infer unknown entities or relationships from known entities and relationships. For example, we examined the reasoning rules presented in Appendix A.
These rules assist in interacting with entities in the three parts of the HCP ontology and aid in integrated reasoning and decision-making steps. The basic elements and rules are as follows:
Rule 1 is used to match “hPhenomenon” in the HCP knowledge graph by the description of the quality problem to obtain other concepts, such as “hCause” and “hDiagnosticMethod.”
Rule 2 represents the mapping relationship between the diagnostic method attributes of humans and cyber knowledge. Similarly, Rules 3 and 4 provide the mapping relationships between the input and output features in the HCP knowledge graph, respectively.
Rule 5 is used to select the appropriate diagnostic method and input features by retrieving the existing diagnostic methods. Rules 6, 7, and 8 define the reasoning process for selecting diagnostic methods for other features of “pEntity,” as in Rule 5.
Rule 9 defines the reasoning process for selecting an appropriate diagnostic method and input features through feature and data comparisons. Rules 10 and 11 define the reasoning process used to select diagnostic methods for the other features of “pEntity,” as in Rule 9.
Rule 12 matches the output of the diagnostic method with the historical data to determine an appropriate historical quality problem. Rules 13 and 14 match the output attributes or entities for inference.
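As a minimal sketch of how such rules can be implemented outside Protégé, the following Python fragment uses the owlready2 library to declare a few HCP classes and a Rule-1-style SWRL rule. The class and property names are simplified stand-ins for the paper's full ontology, and the rule body is our own illustrative approximation, not the authors' exact rule.

```python
from owlready2 import get_ontology, Thing, ObjectProperty, Imp  # pip install owlready2

onto = get_ontology("http://example.org/hcp.owl")  # placeholder IRI

with onto:
    class hPhenomenon(Thing): pass
    class hCause(Thing): pass
    class hDiagnosticMethod(Thing): pass

    class hasCause(ObjectProperty):        # hPhenomenon -> hCause
        domain = [hPhenomenon]; range = [hCause]
    class diagnosedBy(ObjectProperty):     # hCause -> hDiagnosticMethod
        domain = [hCause]; range = [hDiagnosticMethod]
    class suggestsMethod(ObjectProperty):  # inferred by the rule below
        domain = [hPhenomenon]; range = [hDiagnosticMethod]

    # Rule-1-style SWRL rule: a phenomenon matched in the graph suggests the
    # diagnostic method attached to its cause.
    rule = Imp()
    rule.set_as_rule(
        "hPhenomenon(?p), hasCause(?p, ?c), diagnosedBy(?c, ?m) -> suggestsMethod(?p, ?m)")

# Running a reasoner (e.g., owlready2's sync_reasoner_pellet) would then
# materialize the inferred suggestsMethod assertions.
```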
4.3. HCP knowledge graph construction
The ontology provides a standard descriptive framework for manufacturing quality knowledge, and an ontology-based knowledge graph is used to store and utilize this knowledge. Quality knowledge of the manufacturing process is collected, processed, and extracted using a knowledge extraction framework [37]. Each piece of the extracted knowledge is represented as a triple. Subsequently, all triples are fused to guarantee the consistency of knowledge. Finally, all the knowledge is stored in Neo4j (Neo Technology Inc., Sweden). As shown in Fig. 8, the HCP knowledge graph is constructed in the following order: data preprocessing, knowledge processing, and knowledge storage.
Unstructured data have corresponding internal structures and are frequently found in document and text formats. However, their data structure is irregular or incomplete, that is, not structured by a predefined data model or schema; in addition, representing data in the two-dimensional logical form of a relational database is inconvenient. Structured data follow data format requirements and length specifications, and are frequently stored and managed through relational databases. Structured data can be readily processed when constructing the knowledge graph. Semi-structured data lie between the structured and unstructured types and possess a highly variable structure.
Human data are unstructured. A bidirectional long short-term memory-conditional random field (Bi-LSTM-CRF) model is used for entity recognition, and a hidden Markov model is used to extract triples with verbs as their core relations. The obtained entities are aligned using similarity calculations and manual detection methods. The data processed and output by computers are generally semi-structured. Cyber data primarily include the methodological models and extracted features, which are aligned with entities from other data after manual annotation and extraction. Physical data comprise structured and semi-structured data that require preprocessing before knowledge processing.
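A minimal sketch of the Bi-LSTM-CRF tagger described here is shown below, using the third-party pytorch-crf package for the CRF layer; the hyperparameters and the tagging scheme are assumptions for illustration, not the configuration used in this study.

```python
import torch
import torch.nn as nn
from torchcrf import CRF  # pip install pytorch-crf

class BiLSTMCRF(nn.Module):
    """Sequence tagger for entity recognition (e.g., BIO tags over quality records)."""
    def __init__(self, vocab_size, num_tags, embed_dim=128, hidden_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.lstm = nn.LSTM(embed_dim, hidden_dim // 2,
                            bidirectional=True, batch_first=True)
        self.emit = nn.Linear(hidden_dim, num_tags)   # per-token tag scores
        self.crf = CRF(num_tags, batch_first=True)    # transition scores + Viterbi

    def loss(self, tokens, tags, mask):
        # mask is a bool tensor marking real (non-padding) tokens
        emissions = self.emit(self.lstm(self.embed(tokens))[0])
        return -self.crf(emissions, tags, mask=mask)  # negative log-likelihood

    def decode(self, tokens, mask):
        emissions = self.emit(self.lstm(self.embed(tokens))[0])
        return self.crf.decode(emissions, mask=mask)  # best tag sequence per sentence
```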
Neo4j has a specialized graph query language, Cypher, which enables the fast storage of entities and entity relationships in the database and the implementation of corresponding query operations. The named entities and relationships extracted from the HCP ternary data are saved in the CSV format. They are deposited in a Neo4j database through unified Cypher syntax to complete the knowledge graph construction procedure.
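The following sketch shows how such CSV-exported triples might be loaded into Neo4j with Cypher through the official Python driver. The file layout, connection details, and label names are assumptions; the relationship type is stored as a property because plain Cypher cannot parameterize relationship types.

```python
from neo4j import GraphDatabase  # pip install neo4j

# Placeholder connection details for a local Neo4j instance.
driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

def load_triples(tx):
    # Assumes triples.csv (columns: head, relation, tail) in Neo4j's import directory.
    tx.run(
        "LOAD CSV WITH HEADERS FROM 'file:///triples.csv' AS row "
        "MERGE (h:Entity {name: row.head}) "
        "MERGE (t:Entity {name: row.tail}) "
        "MERGE (h)-[:RELATED_TO {type: row.relation}]->(t)"
    )

with driver.session() as session:
    session.execute_write(load_triples)
driver.close()
```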
5. HCP knowledge graph-based quality control method
The established knowledge graph provides knowledge support for decision making such that a reasoning machine can be used to determine solutions in the knowledge graph. A reasoning machine is developed based on GCNs and ontological reasoning using interpretable and intelligent algorithms. Based on the existing reasoning rules in the HCP ontology, a decision-making model for the HCP knowledge graph-based method is developed to determine knowledge-based solutions for quality problems, and a decision-making process based on the HCP knowledge graph is developed to determine hybrid knowledge- and data-based solutions for quality problems.
5.1. Decision making model for the HCP knowledge graph-based method
A GCN improves the quality of node classification results by fusing local graph information with node features. In addition, it assigns importance weights to the neighbors of each node and subsequently uses the weighted feature sum of the neighboring nodes as the feature representation of each node. Hence, the node features are independent of the graph structure, which improves the portability of the model. Therefore, GCNs are widely used in inference-based decision-making tasks involving knowledge graphs.
The proposed method adopts a question-and-answer model based on a GCN. As shown in Fig. 9, this model can combine information from both semantic and hierarchical structures. The model consists of two parts: feature learning from textual knowledge and decision making. The feature-learning part uses a pretrained language model, bidirectional encoder representations from transformers (BERT), to obtain the semantic spatial relationships between the input questions and inference decisions. The decision-making part obtains the final information representation of the target node through the information aggregation and transfer operations of the GCN. For textual information, the entities in the provided quality questions and knowledge graph, as well as the paths between the question-and-answer entities, are represented by text vectors obtained through the BERT layer. The inference stage primarily considers the entity vectors learned from the BERT layer and obtains entity structure-level information after two GCN layers. Subsequently, the quality questions are combined with the entity vectors in the knowledge graph, and the candidate answers are scored by a fully connected layer, which is a multilayer perceptron. Finally, the result with the highest score is selected as the solution in the decision-making process.
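The following PyTorch sketch captures the shape of this question-and-answer model: BERT [CLS] vectors provide semantic features, two GCN layers propagate structural information over a normalized adjacency matrix, and a multilayer perceptron scores candidates against the question. The model name, dimensions, and layer choices are illustrative assumptions, not the authors' exact architecture.

```python
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer  # pip install transformers

class GCNLayer(nn.Module):
    """One GCN propagation step: H' = ReLU(A_hat @ H @ W)."""
    def __init__(self, dim):
        super().__init__()
        self.w = nn.Linear(dim, dim)
    def forward(self, h, a_hat):  # a_hat: symmetrically normalized adjacency (n, n)
        return torch.relu(a_hat @ self.w(h))

class KGQAScorer(nn.Module):
    def __init__(self, model_name="bert-base-uncased", dim=768):
        super().__init__()
        self.tokenizer = AutoTokenizer.from_pretrained(model_name)
        self.bert = AutoModel.from_pretrained(model_name)
        self.gcn1, self.gcn2 = GCNLayer(dim), GCNLayer(dim)
        self.mlp = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, 1))

    def encode(self, texts):
        batch = self.tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
        return self.bert(**batch).last_hidden_state[:, 0]  # [CLS] vector per text

    def forward(self, question, entity_texts, a_hat):
        q = self.encode([question])                # (1, dim) question vector
        h = self.encode(entity_texts)              # (n, dim) entity node vectors
        h = self.gcn2(self.gcn1(h, a_hat), a_hat)  # structure-aware entity vectors
        scores = self.mlp(torch.cat([q.expand_as(h), h], dim=-1))
        return scores.squeeze(-1)  # the highest-scoring entity is the answer
```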
5.2. Decision making process based on the HCP knowledge graph
Current decision-making systems can be classified into knowledge-based and data-driven types. Knowledge-based decision-making systems typically use operation manuals, fault manuals, and expert experience as knowledge sources that are manually or automatically converted into rule-based expert knowledge for decision support. These knowledge-based approaches cannot enumerate all fault conditions completely; the rules have a limited ability to express nondeterministic objects, and understanding various manuals, expert experience, and writing rules is a labor-intensive task. By contrast, data-driven approaches adopt deep learning to automatically extract knowledge from actual production data, learn from past data, and reveal unknown patterns in the available data.
Both knowledge-supported and data-driven approaches are included in the HCP knowledge graph, which is compatible with the advantages of both aspects and thus provides more comprehensive support for decision-making in production scenarios.
The data features mined by the analysis algorithm or model are combined with the relevant expert knowledge in the application ontologies to extract information from the graph network. Subsequently, the high-order information is fed into the classifier to obtain the final diagnostic result for the target product quality problem, as shown in Fig. 10.
After the quality problems are diagnosed, the diagnosis results and associated knowledge nodes are used to obtain candidate sets through semantic information matching using graph networks. Subsequently, the candidate decision set is evaluated to obtain a quality control decision solution, as illustrated in Fig. 11.
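A minimal version of this candidate evaluation step can be written as cosine-similarity ranking over node embeddings, as sketched below; the embedding source (e.g., the GCN above) and the scoring rule are assumptions for illustration.

```python
import numpy as np

def rank_candidates(query_vec, candidate_vecs, candidate_names, top_k=3):
    """Rank candidate decision nodes by cosine similarity to a diagnosis embedding."""
    q = query_vec / np.linalg.norm(query_vec)
    c = candidate_vecs / np.linalg.norm(candidate_vecs, axis=1, keepdims=True)
    scores = c @ q                        # cosine similarity per candidate
    order = np.argsort(-scores)[:top_k]   # best candidates first
    return [(candidate_names[i], float(scores[i])) for i in order]
```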
6. Case study
6.1. Development of an HCP knowledge graph-based quality control system
The proposed quality control system based on the HCP knowledge graph adopts the model-template-view (MTV) pattern of the Django web framework, which divides the system into model, template, and view layers, as shown in Fig. 12. In the MTV pattern, the view layer on the server receives requests from the browser and interacts with the model and template layers. The model layer extracts data from the underlying database and sends them to the view layer, while the template layer sends the prepared template to the view layer. Subsequently, the view layer renders the template with the received data, organizes it into a response, and sends it to the browser. Eventually, the browser parses the response and presents it to the user. A key advantage is that the deployed system can run in any browser over the Internet or a local area network (LAN), providing low coupling, high reusability, and low life-cycle costs.
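To make the MTV flow concrete, a hypothetical Django view might look as follows; the URL parameter, template name, node labels, and Cypher query are placeholders rather than the deployed system's actual code.

```python
# views.py -- illustrative sketch of the view layer in the MTV flow described above.
from django.shortcuts import render
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

def diagnose(request):
    # 1) The view receives the browser request and reads the query.
    phenomenon = request.GET.get("phenomenon", "")
    # 2) The "model" step: fetch matching causes from the knowledge graph.
    with driver.session() as session:
        records = session.run(
            "MATCH (p:hPhenomenon {name: $name})-[:hasCause]->(c:hCause) "
            "RETURN c.name AS cause", name=phenomenon)
        causes = [r["cause"] for r in records]
    # 3) The template is rendered with the data and returned to the browser.
    return render(request, "diagnosis.html",
                  {"phenomenon": phenomenon, "causes": causes})
```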
As illustrated in Fig. 13, the architecture of the HCP knowledge-graph-based quality control system is divided into the front-end user interaction, back-end processing, core algorithm, and support platform layers. The front-end browser accepts commands from a user and links them to the enterprise’s database. Subsequently, it interacts with the back-end processing architecture via Hypertext Transfer Protocol (HTTP) for data and functionality. The core algorithm layer provides fundamental computation and analysis for knowledge extraction, graph construction and management, troubleshooting, and other functional requirements. The system runs on open support platforms to support various operating systems and databases.
The experiments were conducted on a computing workstation with an AMD Ryzen Threadripper PRO 5965WX central processing unit (CPU; Advanced Micro Devices Inc., USA), 256 GB of random access memory (RAM; Samsung, Republic of Korea), and two NVIDIA RTX A6000 graphics processing units (GPUs; NVIDIA, USA) running the Ubuntu 22.04.3 long-term support (LTS; Canonical, UK) operating system. Subsequently, the system server was deployed on the CentOS 7 operating system of the enterprises.
6.2. HCP knowledge graph-based quality control for an automotive production line
A case study of the HCP knowledge graph-based method was conducted on an intelligent automobile production line. The discussed approach was utilized, and the knowledge contained in the enterprise production data was extracted to obtain a knowledge graph, which was stored in Neo4j. The effectiveness of the approach was demonstrated by deploying a quality management system at the production site.
Currently, large amounts of process, device, and statistical data are stored in enterprise databases. Automobile manufacturers have started to use these data to make decisions for quality improvements and equipment maintenance through big data analysis technology. For example, a manufacturing execution system can be used to obtain statistics concerning various indicators, such as production qualification rates and energy consumption levels. Machine vision and machine learning technologies can be used to detect product surface quality defects and monitor the status of key rotating equipment on an automotive production line, respectively.
However, selecting an appropriate solution often requires considerable time and effort. Decision-support systems built on various analytical techniques have remained isolated from one another. The inefficiency of existing decision-making methods significantly affects the quality control efficiency in automotive production scenarios. Therefore, potential links between various analytical techniques and decision-making systems must be explored and established to provide improved support for quality control, equipment operation, and maintenance.
Raw data for the automobile production line were obtained after thorough investigation and processing. According to the definition of HCP ternary data in Section 3.1, human knowledge is primarily reflected in descriptions of quality problem phenomena, problem tracing, and solutions. The data were primarily obtained from stamping and welding workshops involving various processes, such as welding, gluing, and edge rolling. Cyber knowledge was obtained from the algorithm models used for quality control and the extracted features, as shown in Fig. 14. Physical knowledge includes production equipment and data, as illustrated in Fig. 15. The production equipment was classified into five major categories (robot equipment, control equipment, process equipment, conveyor equipment, and jigs and fixtures), together with their related working parameters and equipment information. The data required for quality problem processing included six categories: pictures, vibration signals, noise signals, videos, electrical signals, and statistical data. The key attributes of the pictures were their shooting positions, pixel sizes, colors, and formats, whereas those of the vibration signals were their detection positions, amplitudes, frequencies, and phases. Fig. 15 presents the characteristics of several other data types.
Finally, an application was developed to validate the method and deployed in a production line. The system extracts knowledge from historical data and acquires more than 500 entities and ten algorithmic models, which are stored in a graph database. Moreover, the system can construct, view, edit, and query knowledge graphs, thereby visualizing and managing the knowledge base, as shown in Fig. 16. The system automatically adopts appropriate analysis methods to process and diagnose quality problem data and finally obtains diagnosis results and decision solutions. This diagnostic method has been applied to dozens of scenarios, such as resistance spot welding surface quality inspection, resistance spot welding nucleus diameter distribution prediction, and stamping part surface scratch defect identification.
Considering the solder joint appearance quality inspection function as an example, as shown in Fig. 17, the system performs the following steps:
(1) First, an appropriate feature extraction algorithm for the weld appearance picture entity (physical knowledge) is obtained from the knowledge graph. In this case, a fine-grained network extraction algorithm (cyber knowledge) is used to extract the fine-grained visual features of the weld appearance.
(2) By comparing the weld appearance defect body location entity and the weld appearance defect type entity in terms of quality knowledge, the probability dependency relationship knowledge (human knowledge) between the two is determined. In this respect, a weld position feature extraction algorithm (cyber knowledge) is identified. Subsequently, the probability values of the body position and defect class are input into a flexible graph convolution algorithm for feature calculation to output the weld position Euclidean space features (cyber knowledge).
(3) The fine-grained visual features of the weld appearance and weld location Euclidean space features obtained from the above steps are simultaneously input into the cross-entropy classifier for classification. The final weld-appearance defect classification results are obtained.
(4) Based on the corresponding weld-appearance defect type, the corresponding solution and preventive measures (human knowledge) are identified based on a knowledge graph.
The real-time voltage, current, energy, power, and pulse width signals of the spot-welding robot were imported into the system, and the knowledge graph was used to analyze the quality problems of the nugget and provide the corresponding causes and solutions, as shown in Fig. 18. First, the energy, power, voltage, current, pulse width, and data entities (physical knowledge) were selected as inputs for the knowledge graph query. An appropriate feature extraction algorithm was queried through the knowledge graph: a generalized regression neural network (cyber knowledge) was optimally selected, and the grasshopper optimization algorithm (GOA) (cyber knowledge) was invoked for nugget feature extraction. Nugget quality characteristics, including the nugget diameter and indentation value (cyber knowledge), were obtained. The predicted nugget diameter served as an input to the knowledge graph query for quality diagnosis. Finally, the causes of and solutions for the quality problems with abnormal nugget diameters were determined in relation to the process parameters.
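For reference, the core of a generalized regression neural network is a kernel-weighted average of training targets; the minimal sketch below illustrates this, with the smoothing parameter sigma standing in for the value that the GOA would tune (the data and parameter values are illustrative assumptions).

```python
import numpy as np

def grnn_predict(x_train, y_train, x_query, sigma=0.5):
    """GRNN prediction: a Gaussian-kernel weighted average of training targets.
    In the pipeline described above, sigma would be tuned by the grasshopper
    optimization algorithm rather than fixed by hand."""
    d2 = np.sum((np.asarray(x_train) - np.asarray(x_query)) ** 2, axis=1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))          # kernel weight per training sample
    return float(w @ np.asarray(y_train) / (w.sum() + 1e-12))
```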
In high-volume, fast-paced welding processes, the strategy of performing offline process adjustment after manual sampling is generally adopted; however, this approach cannot realize comprehensive quality control. The system can realize real-time data monitoring and diagnosis with a signal acquisition frequency of up to 200 Hz and an inspection rate of 100% for all welded joints in the vehicle. Experimental validation was carried out on 1500 sets of data; Table 4 lists a portion of the actual data.
The prediction accuracy of the nugget diameter was 95.3%, and the average error was less than 5%. The evaluation metric for knowledge-assisted decision making was Hits@n, which indicates that the candidate nodes are ranked according to their similarity scores, with the correct result appearing in the top n items. Hits@1, Hits@3, and Hits@10 are generally used for performance comparisons, and our experimental results were 51.8%, 97.8%, and 99.7%, respectively. As presented in Table 5, our Hits@1 outperformed the large multilingual mT5 model [38] and the graph-embedding-based EmbedKGQA model [39], and the Hits@3 performance was sufficient for production control.
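The Hits@n metric described here reduces to a simple membership check over ranked candidate lists, as in the following sketch (the data structures are assumed for illustration):

```python
def hits_at_n(ranked_candidates, gold_answers, n):
    """Fraction of queries whose correct answer appears in the top-n candidates.
    ranked_candidates[i] is the candidate list for query i, sorted by score."""
    hits = sum(gold in cands[:n]
               for cands, gold in zip(ranked_candidates, gold_answers))
    return hits / len(gold_answers)

# Example: hits_at_n(ranked, gold, 1), hits_at_n(ranked, gold, 3), hits_at_n(ranked, gold, 10)
```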
6.3. HCP knowledge graph-based quality control for gear manufacturing
With the continuous improvement in the digitalization and intelligence of gear manufacturing equipment, solving the problems encountered in conventional quality control scenarios significantly promotes the transformation and upgrading of the manufacturing industry. Many factors, such as the tooth blank material, process parameters, worker skills, and processing environment, affect gear machining quality. These data and information are also accumulated by enterprises, and the potential relationships between the data and quality problems are determined through big data mining. Enterprises must guide production processes and improve their machining quality by mining correlations from large amounts of manufacturing process quality data and obtaining potential quality control strategies.
Therefore, in this study, the HCP knowledge-graph-based method was validated using quality data derived from the gear manufacturing process of an enterprise. Furthermore, knowledge extraction was performed on the existing data to construct an HCP knowledge graph, as presented in Fig. 19. Finally, the effectiveness of the method was verified using a generic application system, as shown in Fig. 20.
The deployed system transforms the production process data into knowledge integrated into a knowledge graph and enables the utilization of this knowledge through an intelligent reasoning engine. The developed application system can be used to analyze and diagnose quality problems and provide decision-making solutions that can aid enterprises in improving and enhancing their quality management systems. Moreover, the developed system can be used to conduct real-time monitoring and diagnostic analyses of quality problems to achieve timely problem detection and rapid responses. The proposed method is implemented using an HCP knowledge graph, based on which historical problems can be quickly retrieved to obtain solutions, and data analysis can be automatically performed in combination with intelligent algorithms. The real-time and comprehensive monitoring of quality problems facilitates the timely repair of defective products to reduce scrap rates. Other potential problems, such as equipment failures and production accidents, can also be detected in time, thereby preventing accidents from causing greater economic losses.
7. Conclusions and future works
This study proposes a quality control method based on an HCP knowledge graph. The proposed method is a systematic approach that uses knowledge to obtain assisted decisions and includes four aspects: data management, ontology model construction, knowledge graph construction, and application development. The proposed method introduces a new concept called “HCP ternary data,” which facilitates the effective classification and management of production process data. Moreover, this study proposes an HCP ontology model; its hierarchical and modular structure enables knowledge integration, sharing, and reuse. In particular, the structured ontology model is characterized by solid semantic correctness, retrieval effectiveness, portability, and extensibility, enabling the knowledge graph construction and management processes to be completed more effectively. In addition, analysis, diagnosis, and assisted decision-making tasks with regard to quality issues use a GCN reasoning machine based on a combination of CBR and RBR. The reasoning machine is constructed on the basis of knowledge-based information matching while uncovering additional implicit relationships, thus avoiding retrieval hindrances caused by knowledge limitations. Therefore, we designed and developed a generic quality control system and deployed it in a production plant to verify the effectiveness and practicality of the approach in automotive production line and gear manufacturing scenarios.
In conclusion, the proposed HCP knowledge graph methodology represents a significant improvement in IQC by deeply integrating HCP collaboration when applied to the manufacturing process. The proposed method implements case retrieval, automatic analysis, and assisted decision making based on an HCP knowledge graph. The structured system consists of functional modules that can be used independently or integrated with other systems that possess the relevant standard data structures. The proposed method can be extended to other manufacturing process quality control tasks by directly extracting the relevant knowledge sources owing to the portability of the knowledge model, the versatility of the reasoning machine, and the applicability of the system approach. To maximize its utility in real-world scenarios, practical applications should focus on customizing the HCP ontology framework to suit specific industry requirements, such as intelligent early warning, production scheduling, and production line reconfiguration. In addition, integrating more knowledge modes, such as graphics and formulas, will provide the possibility of attaining a higher level of assisted decision-making.
Robust ontology structures can efficiently represent industrial knowledge, and powerful reasoning algorithms can make complex decisions. A more comprehensive knowledge base can lead to improved applications. Thus, more research on automated and intelligent knowledge extraction methods is required to achieve a higher level of intelligently assisted decision-making. However, multimodal data and knowledge are characterized by cross-fertilization, which challenges knowledge extraction methods. Thus, the existing approaches do not have the ability to automatically acquire, integrate, and represent this complex, multimodal industrial knowledge and guarantee the coherence and consistency of knowledge elements. To achieve a greater degree of automated and accurate HCP ternary knowledge extraction: ➀ Methods and algorithms that can automatically construct algorithmic models of knowledge graphs (machine knowledge) must be developed to achieve a greater degree of automation in the knowledge graph development and construction processes, and ➁ more accurate knowledge extraction methods must be developed to handle new HCP ternary data models.
Acknowledgments
This study was supported by the National Science and Technology Innovation 2030 of China Next-Generation Artificial Intelligence Major Project (2018AAA0101800), the National Natural Science Foundation of China (52375482), and the Regional Innovation Cooperation Project of Sichuan Province (2023YFQ0019).
Compliance with ethics guidelines
Shilong Wang, Jinhan Yang, Bo Yang, Dong Li, and Ling Kang declare that they have no conflict of interest or financial conflicts to disclose.
[1]
J.Zhou, P.Li, Y.Zhou, B.Wang, J.Zang, L.Meng. Toward new-generation intelligent manufacturing. Engineering, 4 (1) (2018), pp. 11-20.
[2]
G.Büchi, M.Cugno, R.Castagnoli. Smart factory performance and Industry 4.0. Technol Forecast Soc Change, 150 (2020), Article 119790.
[3]
Li B, Chai X, Hou B, Zhang L, Zhou J, Liu Y. New generation artificial intelligence-driven intelligent manufacturing (NGAIIM). In: Proceedings of the 2018 IEEE SmartWorld, Ubiquitous Intelligence & Computing, Advanced & Trusted Computing, Scalable Computing & Communications, Cloud & Big Data Computing, Internet of People and Smart City Innovation (SmartWorld/SCALCOM/UIC/ATC/CBDCom/IOP/SCI); 2018 Oct 8-12; Guangzhou, China. Piscataway: IEEE; 2018. p. 1864-9.
[4]
J.Zhou, Y.Zhou, B.Wang, J.Zang. Human-cyber-physical systems (HCPSs) in the context of new-generation intelligent manufacturing. Engineering, 5 (4) (2019), pp. 624-636.
[5]
X.Li, P.Zheng, J.Bao, L.Gao, X.Xu. Achieving cognitive mass personalization via the self-X cognitive manufacturing network: an industrial knowledge graph- and graph embedding-enabled pathway. Engineering, 22 (3) (2021), pp. 11-19.
[6]
C.A.Escobar, M.E.McGovern, R.Morales-Menendez. Quality 4.0: a review of big data challenges in manufacturing. J Intell Manuf, 32 (8) (2021), pp. 2319-2334.
[7]
A.Haq, M.B.C.Khoo. A new synthetic control chart for monitoring process mean using auxiliary information. J Stat Comput Simul, 86 (15) (2016), pp. 3068-3092.
[8]
J.Lee, H.A.Kao, S.Yang. Service innovation and smart analytics for Industry 4.0 and big data environment. Procedia CIRP, 16 (2014), pp. 3-8.
[9]
H.Tercan, T.Meisen. Machine learning and deep learning based predictive quality in manufacturing: a systematic review. J Intell Manuf, 33 (7) (2022), pp. 1879-1905.
[10]
P.Tambare, C.Meshram, C.C.Lee, R.J.Ramteke, A.L.Imoize. Performance measurement system and quality management in data-driven Industry 4.0: a review. Sensors, 22 (1) (2021), p. 224.
[11]
N.Yang, C.Yang, L.Wu, X.Shen, J.Jia, Z.Li, et al. Intelligent data-driven decision-making method for dynamic multisequence: an E-Seq2Seq-based SCUC expert system. IEEE Trans Industr Inform, 18 (5) (2022), pp. 3126-3137.
[12]
X.Li, C.H.Chen, P.Zheng, Z.Wang, Z.Jiang, Z.Jiang. A knowledge graph-aided concept-knowledge approach for evolutionary smart product-service system development. J Mech Des, 142 (10) (2020), Article 101403.
[13]
B.Yang, S.Wang, S.Li, F.Bi. Digital thread-driven proactive and reactive service composition for cloud manufacturing. IEEE Trans Industr Inform, 19 (3) (2023), pp. 2952-2962.
[14]
M.Yahya, J.G.Breslin, M.I.Ali. Semantic web and knowledge graphs for Industry 4.0. Appl Sci, 11 (11) (2021), p. 5110.
[15]
G.Buchgeher, D.Gabauer, J.Martinez-Gil, L.Ehrlinger. Knowledge graphs in manufacturing and production: a systematic literature review. IEEE Access, 9 (2020), pp. 55537-55554.
[16]
E.Maleki, F.Belkadi, N.Boli, B.J.van der Zwaag, K.Alexopoulos, S.Koukas, et al. Ontology-based framework enabling smart product-service systems: application of sensing systems for machine health monitoring. IEEE Internet Things J, 5 (6) (2018), pp. 4496-4505.
[17]
R.Wang, C.F.Cheung. Knowledge graph embedding learning system for defect diagnosis in additive manufacturing. Comput Ind, 149 (2023), Article 103912.
[18]
A.Matsokis, D.Kiritsis. An ontology-based approach for product lifecycle management. Comput Ind, 61 (8) (2010), pp. 787-797.
[20]
Lemaignan S, Siadat A, Dantan JY, Semenenko A. MASON: a proposal for an ontology of manufacturing domain. In: Proceedings of the IEEE Workshop on Distributed Intelligent Systems: Collective Intelligence and Its Applications (DIS'06); 2006 Jun 15-16; Prague, Czech Republic. Piscataway: IEEE; 2006. p. 195-200.
[21]
J.Wan, B.Chen, M.Imran, F.Tao, D.Li, C.Liu, et al. Toward dynamic resources management for IOT-based manufacturing. IEEE Commun Mag, 56 (2) (2018), pp. 52-59.
[22]
Y.Kitamura, Y.Koji, R.Mizoguchi. An ontological model of device function: industrial deployment and lessons learned. Appl Ontol, 1 (2006), pp. 237-262.
[23]
E.Prestes, J.L.Carbonera, S.R.Fiorini, V.A.M.Jorge, M.Abel, R.Madhavan, et al. Towards a core ontology for robotics and automation. Robot Auton Syst, 61 (11) (2013), pp. 1193-1204.
[24]
Cheng H, Zeng P, Xue L, Shi Z, Wang P, Yu H. Manufacturing ontology development based on Industry 4.0 demonstration production line. In: Proceedings of the 2016 Third International Conference on Trustworthy Systems and Their Applications (TSA); 2016 Sep 18-22; Wuhan, China. Piscataway: IEEE; 2016. p. 42-7.
[25]
E.Järvenpää, N.Siltala, O.Hylli, M.Lanz. The development of an ontology for describing the capabilities of manufacturing resources. J Intell Manuf, 30 (2) (2019), pp. 959-978.
[26]
J.Lin, C.Wu, R.Yared. An effective resource matching scheme based on a novel unified descriptive model for modern manufacturing industry systems. Electronics, 11 (8) (2022), p. 1187.
[27]
B.Zhou, B.Hua, X.Gu, Y.Lu, T.Peng, Y.Zheng, et al. An end-to-end tabular information-oriented causality event evolutionary knowledge graph for manufacturing documents. Adv Eng Inform, 50 (2021), Article 101441.
[28]
L.He, P.Jiang. Manufacturing knowledge graph: a connectivism to answer production problems query with knowledge reuse. IEEE Access, 7 (2019), pp. 101231-101244.
[29]
A.Zhou, D.Yu, W.Zhang. A research on intelligent fault diagnosis of wind turbines based on ontology and FMECA. Adv Eng Inform, 29 (1) (2015), pp. 115-125.
[30]
Y.H.Tung, S.S.Tseng, J.F.Weng, T.P.Lee, A.Y.H.Liao, W.N.Tsai, et al. A rule-based CBR approach for expert finding and problem diagnosis. Expert Syst Appl, 37 (3) (2010), pp. 2427-2438.
[31]
F.Xu, X.Liu, W.Chen, C.Zhou, B.Cao. Ontology-based method for fault diagnosis of loaders. Sensors, 18 (3) (2018), p. 729.
[32]
Q.Zhou, P.Yan, H.Liu, Y.Xin. A hybrid fault diagnosis method for mechanical components based on ontology and signal analysis. J Intell Manuf, 30 (4) (2019), pp. 1693-1715.
[33]
J.Chen, Z.Hu, J.Lu, X.Zheng, H.Zhang, D.Kiritsis. A data-knowledge hybrid driven method for gas turbine gas path diagnosis. Appl Sci, 12 (12) (2022), p. 5961.
[34]
J.Li, J.Xiang, J.Cheng. EARR: using rules to enhance the embedding of knowledge graph. Expert Syst Appl, 232 (2023), Article 120831.
[35]
T.M.de Farias, A.Roxin, C.Nicolle. SWRL rule-selection methodology for ontology interoperability. Data Knowl Eng, 105 (2016), pp. 53-72.
[36]
Li Z, Li Z, Guo X, Liang P, He KQ, Huang B. A transformation approach from informal descriptions of SWRL to built-in elements of Protégé 4.1. In: Proceedings of the International Conference on Modelling, Identification and Control; 2012 Jun 24-26; Wuhan, China. Piscataway: IEEE; 2012. p. 322-7.
[37]
K.Du, B.Yang, S.Wang, Y.Chang, S.Li, G.Yi. Relation extraction for manufacturing knowledge graphs based on feature fusion of attention mechanism and graph convolution network. Knowl Base Syst, 255 (2022), Article 109703.
[38]
Xue L, Constant N, Roberts A, Kale M, Al-Rfou R, Siddhant A, et al. mT5: a massively multilingual pre-trained text-to-text transformer. In: Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies; 2021 Jun 6-11; online. Kerrville: Association for Computational Linguistics; 2021. p. 483-98.
[39]
Saxena A, Tripathi A, Talukdar P. Improving multi-hop question answering over knowledge graphs using knowledge base embeddings. In: Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics (ACL 2020); 2020 Jul 5-10; online. Kerrville: Association for Computational Linguistics; 2020. p. 4498-507.