
From Brain Science to Artificial Intelligence
Jingtao Fan, Lu Fang, Jiamin Wu, Yuchen Guo, Qionghai Dai
Engineering ›› 2020, Vol. 6 ›› Issue (3): 248–252.
Reviewing the history of artificial intelligence (AI) clearly shows that brain science has driven major breakthroughs in AI, such as deep learning. Although AI and its applications have developed faster than expected, an insurmountable gap remains between AI and human intelligence. It is urgent to build a bridge between brain science and AI research: a link from brain science to AI, and a path from understanding the brain to simulating it. The first steps toward this goal are to explore the secrets of the brain by developing new brain-imaging technologies, to establish a dynamic connection map of the brain, and to integrate neuroscience experiments with theory, models, and statistics. On this foundation, a new generation of AI theories and methods can be developed, establishing disruptive models and working modes that extend from machine perception and machine learning to machine thinking and machine decision-making. This article also discusses the opportunities and challenges that arise as brain science inspires the next generation of AI.
Keywords: Artificial intelligence / Brain science