
Engineering, 2022, Volume 19, Issue 12. doi: 10.1016/j.eng.2021.07.033

One-Variable Attack on the Industrial Fault Classification System and Its Defense

a State Key Laboratory of Industrial Control Technology, College of Control Science and Engineering, Zhejiang University, Hangzhou 310027, China
b Department of Automation Engineering, Technische Universität Ilmenau, Ilmenau D-98684, Germany

Received: 2021-01-28 Revised: 2021-06-19 Accepted: 2021-07-13 Available online: 2022-06-03


Abstract

Recently developed fault classification methods for industrial processes are mainly data-driven. Notably, models based on deep neural networks have significantly improved fault classification accuracy owing to the inclusion of a large number of data patterns. However, these data-driven models are vulnerable to adversarial attacks: small perturbations of the samples can cause the models to produce incorrect fault predictions. Several recent studies have demonstrated the vulnerability of machine learning methods and the existence of adversarial samples. This paper proposes a black-box attack method with an extreme constraint for a safety-critical industrial fault classification system: only one variable may be perturbed to craft an adversarial sample. Moreover, to hide the adversarial samples in the visualization space, a Jacobian matrix is used to guide the selection of the perturbed variable, making the adversarial samples imperceptible to the human eye in the dimensionality-reduced space. Using the one-variable attack (OVA) method, we explore the vulnerability of individual industrial variables and fault types, which helps in understanding the geometric characteristics of fault classification systems. Based on the attack method, a corresponding adversarial training defense method is also proposed, which efficiently defends against OVAs and improves the prediction accuracy of the classifiers. In experiments, the proposed method was tested on two datasets, from the Tennessee–Eastman process (TEP) and Steel Plates (SP). We explore the vulnerability of, and correlation within, variables and faults, and we verify the effectiveness of OVAs and defenses for various classifiers and datasets. For industrial fault classification systems, the attack success rate of our method is close to (on TEP) or even higher than (on SP) that of the current most effective first-order white-box attack method, which requires perturbation of all variables.
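The OVA pipeline summarized above, gradient-guided selection of a single vulnerable variable followed by a black-box search for the smallest perturbation of that variable that flips the classifier's prediction, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the `ova_attack` function, the `saliency` surrogate, and the toy linear classifier and weights are all illustrative assumptions.

```python
import numpy as np

def ova_attack(predict, saliency, x, y_true, eps_grid):
    """One-variable attack (OVA) sketch: the gradient (Jacobian row)
    magnitude selects the single most vulnerable variable; a black-box
    line search on that variable then looks for a label flip."""
    j = int(np.argmax(np.abs(saliency(x))))   # variable to perturb
    for eps in eps_grid:                      # grow the perturbation budget
        for sign in (1.0, -1.0):              # try both directions
            x_adv = x.copy()
            x_adv[j] += sign * eps            # only one variable changes
            if predict(x_adv) != y_true:      # query the black-box model
                return x_adv, j
    return None, j                            # no flip within the budget

# Toy surrogate: a linear classifier standing in for the fault model.
w, b = np.array([0.1, 2.0, 0.05]), -1.0
predict = lambda x: int(w @ x + b > 0)
saliency = lambda x: w                        # score gradient w.r.t. x

x = np.array([1.0, 0.6, 1.0])                 # classified as fault class 1
x_adv, j = ova_attack(predict, saliency, x, 1, [0.05, 0.1, 0.2, 0.4])
```

Because the search queries only the model's predicted labels, no gradient access to the attacked classifier is required at attack time; the saliency surrogate is needed only for choosing which variable to perturb.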

