
AI-Enabled Cyberspace Attacks: Security Risks and Countermeasures
Artificial intelligence (AI) not only drives significant societal progress but is also transforming the field of cyberspace security; studying the security problems arising from the combination of the two is therefore an urgent task. Using a top-down approach, this article analyzes the major national security issues induced by this combination, covering political, economic, social, and national defense security, from two perspectives: the aggravation of existing security threats and the emergence of new ones. New attack scenarios are then distilled, including autonomous and large-scale denial-of-service attacks, intelligent and highly realistic social engineering attacks, and intelligent and precisely targeted malicious code attacks, and future trends such as environment-adaptive covert attacks, distributed autonomous-collaboration attacks, and self-evolving attacks are summarized. To effectively counter the security threats posed by AI-enabled cyber attacks, we suggest that intelligent network attack and defense systems be built and upgraded, both to guard against these threats and to establish equivalent capabilities; that the sharing and utilization of AI security data assets be encouraged so as to follow a data-centric development path for AI-enabled network attack and defense technologies; and that adversarial evaluation, testing, and verification be strengthened so that these technologies become practical as soon as possible.
artificial intelligence (AI) / cyber attack and defense / national security / autonomous collaboration / self-evolution