
Engineering >> 2020, Volume 6, Issue 3. doi: 10.1016/j.eng.2019.12.015

Ethical Principles and Governance Technology Development of AI in China

a State Key Laboratory of Software Development Environment, Beihang University, Beijing 100191, China
b School of Electronics Engineering and Computer Science, Peking University, Beijing 100871, China
c Chinese Institute of New Generation Artificial Intelligence Development Strategies, Nankai University, Tianjin 300071, China

Received: 2019-09-19 Revised: 2019-11-18 Accepted: 2019-12-25


Abstract

Ethical principles and governance technologies are essential to the healthy and sustainable development of artificial intelligence (AI). To achieve the long-term goal of ensuring that AI benefits human society, the Chinese government, research institutions, and enterprises have released ethical principles for AI and launched projects on AI governance technologies. This article reviews these efforts and highlights China's preliminary outcomes in this field. It also summarizes the major challenges facing AI governance research and discusses future research directions.

Figures

Fig. 1

Fig. 2

References

[1] National Governance Committee for the New Generation Artificial Intelligence. Governance principles for the new generation artificial intelligence—developing responsible artificial intelligence [Internet]. Beijing: China Daily; c1995–2019 [updated 2019 Jun 17; cited 2019 Dec 18]. Available from: https://www.chinadaily.com.cn/a/201906/17/WS5d07486ba3103dbf14328ab7.html?from=timeline&isappinstalled=0.

[2] Beijing AI principles [Internet]. Beijing: Beijing Academy of Artificial Intelligence; c2019 [updated 2019 May 28; cited 2019 Dec 18]. Available from: https://www.baai.ac.cn/blog/beijing-ai-principles.

[3] Zeng Y, Lu E, Huangfu C. Linking artificial intelligence principles. 2018. arXiv:1812.04814.

[4] Yang Q, Liu Y, Chen T, Tong Y. Federated machine learning: concept and applications. ACM Trans Intell Syst Technol 2019;10(2):12.

[5] Guide for architectural framework and application of federated machine learning [Internet]. New York: IEEE P3652.1 Federated Machine Learning Working Group; c2019 [cited 2019 Dec 18]. Available from: https://sagroups.ieee.org/3652-1/.

[6] Xiao C, Li B, Zhu J, He W, Liu M, Song D. Generating adversarial examples with adversarial networks. 2018. arXiv:1801.02610.

[7] Liu A, Liu X, Fan J, Ma Y, Zhang A, Xie H, et al. Perceptual-sensitive GAN for generating adversarial patches. In: Proceedings of the 33rd AAAI Conference on Artificial Intelligence; 2019 Jan 27–Feb 1; Honolulu, HI, USA; 2019.

[8] Yan Z, Guo Y, Zhang C. Deep defense: training DNNs with improved adversarial robustness. 2018. arXiv:1803.00404v3.

[9] Pang T, Du C, Dong Y, Zhu J. Towards robust detection of adversarial examples. 2018. arXiv:1706.00633v4.

[10] Ling X, Ji S, Zou J, Wang J, Wu C, Li B, et al. DEEPSEC: a uniform platform for security analysis of deep learning model. In: Proceedings of the 40th IEEE Symposium on Security and Privacy; 2019 May 20–22; San Francisco, CA, USA; 2019.

[11] Pulina L, Tacchella A. Challenging SMT solvers to verify neural networks. AI Commun 2012;25(2):117–35.

[12] Katz G, Barrett C, Dill DL, Julian K, Kochenderfer MJ. Reluplex: an efficient SMT solver for verifying deep neural networks. In: Proceedings of the International Conference on Computer Aided Verification; 2017 Jul 24–28; Heidelberg, Germany; 2017. p. 97–117.

[13] Gehr T, Mirman M, Drachsler-Cohen D, Tsankov P, Chaudhuri S, Vechev M. AI2: safety and robustness certification of neural networks with abstract interpretation. In: Proceedings of the 2018 IEEE Symposium on Security and Privacy; 2018 May 20–24; San Francisco, CA, USA; 2018.

[14] Singh G, Gehr T, Mirman M, Püschel M, Vechev M. Fast and effective robustness certification. In: Proceedings of the Advances in Neural Information Processing Systems 31; 2018 Dec 3–8; Montreal, QC, Canada; 2018. p. 10802–13.

[15] Lin W, Yang Z, Chen X, Zhao Q, Li X, Liu Z, et al. Robustness verification of classification deep neural networks via linear programming. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition; 2019 Jun 16–20; Long Beach, CA, USA; 2019. p. 11418–27.

[16] Yang P, Liu J, Li J, Chen L, Huang X. Analyzing deep neural networks with symbolic propagation: towards higher precision and faster verification. 2019. arXiv:1902.09866.

[17] Ribeiro MT, Singh S, Guestrin C. "Why should I trust you?": explaining the predictions of any classifier. In: Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining; 2016 Aug 13–17; San Francisco, CA, USA; 2016. p. 1135–44.

[18] Zhang Q, Yang Y, Ma H, Wu YN. Interpreting CNNs via decision trees. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition; 2019 Jun 16–20; Long Beach, CA, USA; 2019. p. 6261–70.

[19] Liu S, Wang X, Liu M, Zhu J. Towards better analysis of machine learning models: a visual analytics perspective. Visual Inf 2017;1(1):48–56.

[20] Ma S, Aafer Y, Xu Z, Lee WC, Zhai J, Liu Y, et al. LAMP: data provenance for graph based machine learning algorithms through derivative computation. In: Proceedings of the 2017 11th Joint Meeting on Foundations of Software Engineering; 2017 Sept 4–8; Paderborn, Germany; 2017. p. 786–97.

[21] Xuan X, Peng B, Dong J, Wang W. On the generalization of GAN image forensics. 2019. arXiv:1902.11153.

[22] Gajane P, Pechenizkiy M. On formalizing fairness in prediction with machine learning. 2017. arXiv:1710.03184.

[23] Kusner MJ, Loftus J, Russell C, Silva R. Counterfactual fairness. 2017. arXiv:1703.06856.

[24] Bolukbasi T, Chang KW, Zou J, Saligrama V, Kalai A. Man is to computer programmer as woman is to homemaker? Debiasing word embeddings. 2016. arXiv:1607.06520.

[25] Weng P. Fairness in reinforcement learning. 2019. arXiv:1907.10323.

[26] Bellamy RKE, Dey K, Hind M, Hoffman SC, Houde S, Kannan K, et al. AI fairness 360: an extensible toolkit for detecting, understanding, and mitigating unwanted algorithmic bias. 2018. arXiv:1810.01943.

[27] High-Level Expert Group on AI. Ethics guidelines for trustworthy AI [Internet]. Brussels: European Commission; 2019 Apr 8 [cited 2019 Dec 18]. Available from: https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai.

[28] Trump DJ. Executive order on maintaining American leadership in artificial intelligence [Internet]. Washington, DC: The White House; 2019 Feb 11 [cited 2019 Dec 18]. Available from: https://www.whitehouse.gov/presidential-actions/executive-order-maintaining-american-leadership-artificial-intelligence/.

[29] Tencent AI Lab. Technological ethics at intelligent era—reshape trustworthiness in digital society [Internet]. Beijing: Tencent Research Institute; 2019 Jul 8 [cited 2019 Dec 18]. Available from: https://tisi.org/10890. Chinese.

[30] Meet the Partners [Internet]. San Francisco: Partnership on AI; c2016–18 [cited 2019 Dec 18]. Available from: https://www.partnershiponai.org/partners/.

[31] Li Q, Wen Z, Wu Z, Hu S, Wang N, He B. Federated learning systems: vision, hype and reality for data privacy and protection. 2019. arXiv:1907.09693.

[32] Szegedy C, Zaremba W, Sutskever I, Bruna J, Erhan D, Goodfellow I, et al. Intriguing properties of neural networks. 2013. arXiv:1312.6199.

[33] Kurakin A, Goodfellow I, Bengio S, Dong Y, Liao F, Liang M. Adversarial attacks and defences competition. In: Escalera S, Weimer M, editors. The NIPS'17 competition: building intelligent systems. Cham: Springer; 2018. p. 195–231.

[34] Cao Y, Xiao C, Yang D, Fang J, Yang R, Liu M, et al. Adversarial objects against LiDAR-based autonomous driving systems. 2019. arXiv:1907.05418.

[35] Arya V, Bellamy RK, Chen PY, Dhurandhar A, Hind M, Hoffman SC, et al. One explanation does not fit all: a toolkit and taxonomy of AI explainability techniques. 2019. arXiv:1909.03012.

[36] Yu H, Shen Z, Miao C, Leung C, Lesser VR, Yang Q. Building ethics into artificial intelligence. In: Proceedings of the 27th International Joint Conference on Artificial Intelligence; 2018 Jul 13–19; Stockholm, Sweden; 2018. p. 5527–33.

[37] Everitt T, Kumar R, Krakovna V, Legg S. Modeling AGI safety frameworks with causal influence diagrams. 2019. arXiv:1906.08663.

[38] Awad E, Dsouza S, Kim R, Schulz J, Henrich J, Shariff A, et al. The moral machine experiment. Nature 2018;563(7729):59–64.

[39] Conitzer V, Sinnott-Armstrong W, Borg JS, Deng Y, Kramer M. Moral decision making frameworks for artificial intelligence. In: Proceedings of the 31st AAAI Conference on Artificial Intelligence; 2017 Feb 4–10; San Francisco, CA, USA; 2017. p. 4831–5.

[40] Kim R, Kleiman-Weiner M, Abeliuk A, Awad E, Dsouza S, Tenenbaum JB, et al. A computational model of commonsense moral decision making. In: Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society; 2018 Feb 2–3; New Orleans, LA, USA; 2018. p. 197–203.

[41] National Artificial Intelligence Standardization Steering Committee. Report on artificial intelligence ethical risk analysis [Internet]. [cited 2019 Dec 18]. Available from: http://www.cesi.ac.cn/images/editor/20190425/20190425142632634001.pdf. Chinese.

[42] Crawford K, Calo R. There is a blind spot in AI research. Nature 2016;538(7625):311–3.
