
Strategic Study of CAE >> 2021, Volume 23, Issue 3 doi: 10.15302/J-SSCAE-2021.03.005

Technical Countermeasures for Security Risks of Artificial General Intelligence

Department of Computer Science and Technology, Peking University, Beijing 100871, China

Funding project: Chinese Academy of Engineering consulting project "Research on the Development Strategy of New-Generation Artificial Intelligence Security and Autonomous Controllability" (2019-ZD-01) Received: 2021-04-07 Revised: 2021-04-25 Available online: 2021-06-01


Abstract

Human beings might face significant security risks after entering the artificial general intelligence (AGI) era. By summarizing the differences between AGI and traditional artificial intelligence, we analyze the sources of AGI security risks from the aspects of model uninterpretability, unreliability of algorithms and hardware, and uncontrollability of autonomous consciousness. Moreover, we propose a security risk assessment system for AGI covering ability, motivation, and behavior. Subsequently, we discuss defense countermeasures for the research and application stages. In the research stage, theoretical verification should be improved to develop interpretable models, the basic values of AGI should be rigorously constrained, and technologies should be standardized. In the application stage, man-made risks should be prevented, appropriate motivations should be selected for AGI, and human values should be instilled in AGI. Furthermore, it is necessary to strengthen international cooperation and the education of AGI professionals, so as to prepare well for the coming era of AGI.

