
Engineering >> 2022, Vol. 19, Issue 12. doi: 10.1016/j.eng.2021.07.033

One-Variable Attack on the Industrial Fault Classification System and Its Defense

a State Key Laboratory of Industrial Control Technology, College of Control Science and Engineering, Zhejiang University, Hangzhou 310027, China
b Department of Automation Engineering, Technische Universität Ilmenau, Ilmenau D-98684, Germany

Received: 2021-01-28    Revised: 2021-06-19    Accepted: 2021-07-13    Available online: 2022-06-03

Abstract

In recent years, industrial process fault classification systems have been predominantly data-driven. Benefiting from massive amounts of data, deep-neural-network-based models have significantly improved fault classification accuracy. However, these data-driven models are vulnerable to adversarial attacks: a small perturbation of a sample can cause the model to return an incorrect fault prediction. Recent studies have demonstrated the vulnerability of machine learning models and the widespread existence of adversarial samples. This paper proposes a black-box attack with an extreme constraint for safety-critical industrial fault classification systems: only one variable is perturbed to craft an adversarial sample. Moreover, to hide the adversarial sample in the visualization space, the Jacobian matrix is used to guide the selection of the perturbed variable, so that the adversarial sample is imperceptible to the human eye in the dimensionality-reduced space. Using the one-variable attack (OVA) method, this paper explores the vulnerability of different industrial variables and fault classes, which helps in understanding the geometric characteristics of fault classification systems. Based on the attack method, a corresponding adversarial-training defense method is also proposed, which effectively defends against one-variable attacks and improves the prediction accuracy of the classifier. In the experiments, the proposed methods are tested on the Tennessee-Eastman process (TEP) and steel plates (SP) fault datasets. The vulnerability correlations between variables and fault classes are explored, and the effectiveness of the one-variable attack and defense methods is verified for various classifiers and datasets. For industrial fault classification systems, the attack success rate of the one-variable attack approaches (on TEP) or even exceeds (on SP) that of the currently most effective first-order white-box attack method, which requires perturbing all variables.
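
To make the attack setting above concrete, the following is a minimal, hypothetical sketch of a one-variable black-box attack, not the authors' implementation: exactly one variable of a scaled sample is perturbed over a grid of candidate values, and the attack succeeds once the classifier's predicted fault class changes. The `predict` interface, the candidate grid `deltas`, and the externally supplied variable ranking `var_order` are illustrative assumptions; the paper's Jacobian-guided variable selection and visualization-space constraint are only alluded to in the comments.

```python
import numpy as np

def one_variable_attack(predict, x, var_order, deltas):
    """Minimal sketch of a black-box one-variable attack (illustrative only).

    predict   : callable mapping a 1D sample to a predicted class label
    x         : 1D numpy array, one (min-max scaled) industrial sample
    var_order : variable indices to try, e.g., ranked by a Jacobian-based score
                of the visualization mapping as the paper describes
                (the ranking itself is assumed to be supplied by the caller)
    deltas    : candidate perturbation values for the single chosen variable
    """
    y_orig = predict(x)
    for j in var_order:                  # perturb exactly one variable at a time
        for d in deltas:                 # search over perturbation magnitudes
            x_adv = x.copy()
            x_adv[j] = np.clip(x_adv[j] + d, 0.0, 1.0)  # keep the variable in its valid range
            if predict(x_adv) != y_orig:                 # label flipped: attack succeeded
                return x_adv, j, d
    return None                           # no single-variable perturbation fooled the classifier

# Hypothetical usage with a scikit-learn-style classifier:
# predict = lambda s: clf.predict(s.reshape(1, -1))[0]
# result = one_variable_attack(predict, x_sample,
#                              var_order=range(x_sample.size),
#                              deltas=np.linspace(-0.5, 0.5, 21))
```

The grid search over `deltas` merely stands in for whatever search strategy the paper actually uses; the one constraint the sketch preserves is that `x_adv` differs from `x` in a single variable.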
