
Frontiers of Information Technology & Electronic Engineering >> 2023, Volume 24, Issue 10 doi: 10.1631/FITEE.2300059

Towards robust neural networks via a global and monotonically decreasing robustness training strategy

Affiliation(s): Institute for Quantum Information & State Key Laboratory of High Performance Computing, National University of Defense Technology, Changsha 410073, China; State Key Laboratory of Computer Science, Institute of Software, Chinese Academy of Sciences, Beijing 100190, China; School of Computer Science and Technology, University of Chinese Academy of Sciences, Beijing 100190, China; College of Computer Science and Technology, National University of Defense Technology, Changsha 410073, China; Key Laboratory of Software Engineering for Complex Systems, National University of Defense Technology, Changsha 410073, China

Received: 2023-02-01 Accepted: 2023-10-27 Available online: 2023-10-27


Abstract

Robustness of deep neural networks (DNNs) has raised great concern in the academic and industrial communities, especially in safety-critical domains. Instead of verifying whether a robustness property holds or not in certain neural networks, this paper focuses on training robust networks with respect to given perturbations. The state-of-the-art certified training methods, interval bound propagation (IBP) and CROWN-IBP, perform well with respect to small perturbations, but their performance declines significantly in large perturbation cases; we refer to this as the decline phenomenon in this paper. Specifically, the decline phenomenon means that IBP-family training methods cannot provide the expected robustness in larger perturbation cases, as they do in smaller perturbation cases. To alleviate this unexpected decline, we propose a global and monotonically decreasing robustness training strategy: multiple perturbations are taken into account during each training epoch (global robustness training), and the corresponding robustness losses are combined with monotonically decreasing weights (monotonically decreasing robustness training). Experiments demonstrate that the presented strategy maintains performance on small perturbations while alleviating the decline on large perturbations to a great extent. It is also noteworthy that our method achieves higher model accuracy than the original training methods, i.e., the presented training strategy gives more balanced consideration to robustness and accuracy.
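For concreteness, the sketch below shows one way the combined loss described above could be implemented in a PyTorch-style setting. It is a minimal illustration, not the paper's implementation: ibp_robust_loss, decreasing_weights, and eps_list are hypothetical names, and the exact weighting scheme the authors use may differ from the assumed normalized 1/(i+1) sequence.

import torch

def decreasing_weights(num_eps: int) -> torch.Tensor:
    # Monotonically decreasing weights, normalized to sum to 1 (an assumed form).
    w = torch.tensor([1.0 / (i + 1) for i in range(num_eps)])
    return w / w.sum()

def global_robust_loss(model, x, y, eps_list, ibp_robust_loss):
    # eps_list is assumed sorted in increasing order, so smaller radii receive
    # larger weights. Every radius contributes in every epoch ("global"), and
    # the weights decrease monotonically ("monotonically decreasing").
    # ibp_robust_loss is a placeholder for an IBP/CROWN-IBP robustness loss
    # that returns a scalar tensor for one perturbation radius.
    weights = decreasing_weights(len(eps_list))
    losses = torch.stack([ibp_robust_loss(model, x, y, eps) for eps in eps_list])
    return (weights * losses).sum()

A typical epoch would then backpropagate through global_robust_loss instead of a single-radius IBP loss, which is how the strategy keeps small-perturbation performance while still exposing the model to larger radii.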
