Strategic Study of CAE >> 2018, Volume 20, Issue 6 doi: 10.15302/J-SSCAE-2018.06.015

Intelligence Originating from Human Beings and Expanding in Industry— A View on the Development of Artificial Intelligence

Key Laboratory of Embedded System and Service Computing, Ministry of Education, Tongji University, Shanghai 201804, China

Funding project: CAE Advisory Project “Strategic Research on Disruptive Technologies for Engineering Science and Technology” (2017-ZD-10); Major Research Plan Integrated Project of the National Natural Science Foundation of China (91218301). Received: 2018-10-22; Revised: 2018-10-30; Available online: 2018-12-31

Abstract

Artificial intelligence (AI) aims to simulate the information storage and processing mechanisms and other intelligent behaviors of the human brain, so that machines can exhibit a certain level of intelligence. With the rapid development of a new generation of information technologies such as the Internet, big data, cloud computing, and deep learning, research on AI and its applications has made, and continues to make, important progress. This paper first analyzes in depth the historical integration and evolution of computer science, control science, brain-inspired intelligence, human brain intelligence, and other disciplines and fields closely related to AI, and points out that findings on the structure and functional mechanisms of the brain from neuroscience, brain science, and cognitive science provide important inspiration for constructing intelligent computing models. It then discusses the driving forces behind AI and its development in terms of logic models and systems, neural network models, the hierarchical mechanism of the visual nervous system, and related topics. Finally, the development trend of AI is projected along five directions: the computational theory of the Internet, the integration of AI calculus and computation, models and mechanisms of brain-inspired intelligence, the impetus AI gives to neuroscience, and the algorithm design of feedback computation together with the energy level of control systems.
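
As a concrete illustration of the neuron network models the paper traces, from the McCulloch-Pitts neuron [5] to the perceptron [7], the following minimal Python sketch implements a single threshold neuron trained with the perceptron learning rule. The function name, learning rate, and toy AND-gate data are illustrative assumptions rather than material from the paper.

    # Minimal sketch of a single threshold neuron trained with the perceptron
    # rule [5,7]. Function name, learning rate, and toy AND-gate data are
    # illustrative assumptions, not taken from the paper.
    import numpy as np

    def perceptron_train(X, y, lr=0.1, epochs=20):
        """Learn weights w and bias b so that step(w.x + b) reproduces y."""
        w, b = np.zeros(X.shape[1]), 0.0
        for _ in range(epochs):
            for xi, target in zip(X, y):
                pred = 1 if np.dot(w, xi) + b > 0 else 0   # threshold activation
                error = target - pred
                w += lr * error * xi                        # perceptron update rule
                b += lr * error
        return w, b

    # Toy usage: learn the logical AND function.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([0, 0, 0, 1])
    w, b = perceptron_train(X, y)
    print([1 if np.dot(w, xi) + b > 0 else 0 for xi in X])  # expected: [0, 0, 0, 1]

Stacking many such units and replacing the hard threshold with differentiable activations yields the multi-layer networks trained by back-propagation [12], the line of development the paper follows through to modern deep learning [4].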

References

[ 1 ] Lin C D, Yang Z L, Huang X T. A dictionary of psychology [M]. Shanghai: Shanghai Education Publishing House, 2003. Chinese.

[ 2 ] Turing A M. Computing machinery and intelligence [J]. Mind, 1950, 59: 433–460.

[ 3 ] Chomsky N. Three models for the description of language [J]. IEEE Transactions on Information Theory, 1956, 2(3): 113–124.

[ 4 ] Hinton G E, Salakhutdinov R R. Reducing the dimensionality of data with neural networks [J]. Science, 2006, 313(5786): 504–507.

[ 5 ] McCulloch W S, Pitts W. A logical calculus of the ideas immanent in nervous activity [J]. The Bulletin of Mathematical Biophysics, 1943, 5(4): 115–133.

[ 6 ] Hebb D O. The organization of behavior [M]. New York: Wiley, 1949.

[ 7 ] Rosenblatt F. The perceptron: A probabilistic model for information storage and organization in the brain [J]. Psychological Review, 1958, 65(6): 386–408.

[ 8 ] Minsky M, Papert S. Perceptrons [M]. Cambridge: MIT Press, 1969.

[ 9 ] Werbos P J. Backpropagation through time: What it does and how to do it [J]. Proceedings of the IEEE, 1990, 78(10): 1550–1560.

[10] Hopfield J J. Neural networks and physical systems with emergent collective computational abilities [J]. Proceedings of the National Academy of Sciences, 1982, 79(8): 2554–2558.

[11] Ackley D H, Hinton G E, Sejnowski T J. A learning algorithm for Boltzmann machines [J]. Cognitive Science, 1985, 9(1): 147–169.

[12] Rumelhart D E, Hinton G E, Williams R J. Learning representations by back-propagating errors [J]. Nature, 1986, 323: 533–536.

[13] Hubel D H, Wiesel T N. Early exploration of the visual cortex [J]. Neuron, 1998, 20: 401–412.

[14] LeCun Y, Boser B, Denker J S, et al. Backpropagation applied to handwritten zip code recognition [J]. Neural Computation, 1989, 1(4): 541–551.

[15] Krizhevsky A, Sutskever I, Hinton G E. ImageNet classification with deep convolutional neural networks [C]. Nevada: International Conference on Neural Information Processing Systems, 2012.

[16] Chen T, Du Z, Sun N, et al. DianNao: A small-footprint high-throughput accelerator for ubiquitous machine learning [C]. Salt Lake City: Proceedings of the 19th International Conference on Architectural Support for Programming Languages and Operating Systems, 2014.

[17] Mead C. Analog VLSI and neural systems [M]. Reading: Addison-Wesley, 1989.

[18] Maass W. Networks of spiking neurons: The third generation of neural network models [J]. Neural Networks, 1997, 10(9): 1659–1671.

[19] Merolla P A, Arthur J V, Alvarez-Icaza R, et al. A million spiking-neuron integrated circuit with a scalable communication network and interface [J]. Science, 2014, 345(6197): 668–673.

[20] Chua L O. Memristor—The missing circuit element [J]. IEEE Transactions on Circuit Theory, 1971, 18(5): 507–519.

[21] Strukov D B, Snider G S, Stewart D R, et al. The missing memristor found [J]. Nature, 2008, 453: 80–83.

[22] Borghetti J, Snider G S, Kuekes P J, et al. Memristive switches enable stateful logic operations via material implication [J]. Nature, 2010, 464: 873–876.

[23] Boybat I, Le Gallo M, Nandakumar S R, et al. Neuromorphic computing with multi-memristive synapses [J]. Nature Communications, 2018, 9: 2514.

[24] Hochreiter S, Schmidhuber J. Long short-term memory [J]. Neural Computation, 1997, 9(8): 1735–1780.

[25] Banino A, Barry C, Uria B, et al. Vector-based navigation using grid-like representations in artificial agents [J]. Nature, 2018, 557: 429–433.

[26] Collins A M, Quillian M R. Retrieval time from semantic memory [J]. Journal of Verbal Learning and Verbal Behavior, 1969, 8(2): 240–248.

[27] McClelland J, Rumelhart D. An interactive activation model of context effects in letter perception: Part 1. An account of basic findings [J]. Psychological Review, 1981, 88: 375–407.

[28] Bengio Y, Ducharme R, Vincent P, et al. A neural probabilistic language model [J]. Journal of Machine Learning Research, 2003, 3: 1137–1155.

[29] Mikolov T, Chen K, Corrado G, et al. Efficient estimation of word representations in vector space [C]. Scottsdale: Proceedings of Workshop at ICLR, 2013.

[30] Testolin A, Stoianov I, Zorzi M. Letter perception emerges from unsupervised deep learning and recycling of natural image features [J]. Nature Human Behaviour, 2017, 1(9): 657–664.

[31] Wiener N. Cybernetics: Or control and communication in the animal and the machine [M]. Cambridge: The MIT Press, 1948.

[32] Minsky M L. Theory of neural-analog reinforcement systems and its application to the brain-model problem [D]. Princeton: Princeton University (Doctoral dissertation), 1954.

[33] Zhuang Y, Wu F, Chen C, et al. Challenges and opportunities: From big data to knowledge in AI 2.0 [J]. Frontiers of Information Technology & Electronic Engineering, 2017, 18(1): 3–14.

[34] Li W, Wu W, Wang H, et al. Crowd intelligence in AI 2.0 era [J]. Frontiers of Information Technology & Electronic Engineering, 2017, 18(1): 15–43.

[35] Peng Y, Zhu W, Zhao Y, et al. Cross-media analysis and reasoning: Advances and directions [J]. Frontiers of Information Technology & Electronic Engineering, 2017, 18(1): 44–57.

[36] Zheng N, Liu Z, Ren P, et al. Hybrid-augmented intelligence: Collaboration and cognition [J]. Frontiers of Information Technology & Electronic Engineering, 2017, 18(2): 153–179.

[37] Tian Y, Chen X, Xiong H, et al. Towards human-like and transhuman perception in AI 2.0: A review [J]. Frontiers of Information Technology & Electronic Engineering, 2017, 18(1): 58–67.

[38] Zhang T, Li Q, Zhang C, et al. Current trends in the development of intelligent unmanned autonomous systems [J]. Frontiers of Information Technology & Electronic Engineering, 2017, 18(1): 68–85.

[39] Li B, Hou B, Yu W, et al. Applications of artificial intelligence in intelligent manufacturing: A review [J]. Frontiers of Information Technology & Electronic Engineering, 2017, 18(1): 86–96.

[40] Marblestone A H, Wayne G, Kording K P. Toward an integration of deep learning and neuroscience [J]. Frontiers in Computational Neuroscience, 2016, 10(5): 1–60.
