CORMAND2: A Deception Attack Against Industrial Robots (Article)
Hongyi Pu, Liang He, Peng Cheng, Jiming Chen, Youxian Sun
Engineering 2024, Volume 32, Issue 1, Pages 186-202 doi: 10.1016/j.eng.2023.01.013
Industrial robots are becoming increasingly vulnerable to cyber incidents and attacks, particularly with the dawn of the Industrial Internet of Things (IIoT). To gain a comprehensive understanding of these cyber risks, the vulnerabilities of industrial robots were analyzed empirically, using more than three million communication packets collected with testbeds of two ABB IRB120 robots and five other robots from various original equipment manufacturers (OEMs). This analysis, guided by the confidentiality–integrity–availability (CIA) triad, uncovers robot vulnerabilities in three dimensions: confidentiality, integrity, and availability. These vulnerabilities were used to design Covering Robot Manipulation via Data Deception (CORMAND2), an automated cyber–physical attack against industrial robots. CORMAND2 manipulates robot operation while deceiving the Supervisory Control and Data Acquisition (SCADA) system into believing that the robot is operating normally, by modifying the robot's movement data. CORMAND2 and its capability of degrading manufacturing were validated experimentally using the aforementioned seven robots from six different OEMs. CORMAND2 unveils a limitation of existing anomaly detection systems, namely their assumption that SCADA-received movement data are authentic, for which we propose mitigations.
Keywords: Industrial robots; Vulnerability analysis; Deception attacks; Defenses
Adversarial Attacks and Defenses in Deep Learning (Feature Article)
Kui Ren, Tianhang Zheng, Zhan Qin, Xue Liu
Engineering 2020, Volume 6, Issue 3, Pages 346-360 doi: 10.1016/j.eng.2019.12.012
With the rapid development of artificial intelligence (AI) and deep learning (DL) techniques, it is critical to ensure the security and robustness of the deployed algorithms. Recently, the security vulnerability of DL algorithms to adversarial samples has been widely recognized. The fabricated samples can lead to various misbehaviors of the DL models while being perceived as benign by humans. Successful implementations of adversarial attacks in real physical-world scenarios further demonstrate their practicality. Hence, adversarial attack and defense techniques have attracted increasing attention from both the machine learning and security communities and have become a hot research topic in recent years. In this paper, we first introduce the theoretical foundations, algorithms, and applications of adversarial attack techniques. We then describe a few research efforts on defense techniques, which cover the broad frontier in the field. Several open problems and challenges are subsequently discussed, which we hope will provoke further research efforts in this critical area.
Keywords: Machine learning; Deep neural network; Adversarial example; Adversarial attack; Adversarial defense