Collaborative Multiple Autonomous Systems

Xue Jianru, Fang Jianwu, Pang Shanmin, Zheng Nanning

Strategic Study of CAE, 2024, Vol. 26, Issue 1: 101-116. DOI: 10.15302/J-SSCAE-2024.01.013

Special column: Research on the Development Strategy of New-Generation Artificial Intelligence and Industrial Clusters

Abstract

Collaborative intelligence, formed through the information and behavioral interactions of multiple autonomous systems, represents an inevitable trend for future intelligent systems. It is a primary focus of China's new-generation artificial intelligence plan, a core technology underpinning national defense and public security, and a necessary path for transforming the manufacturing industry from large to strong. Research aimed at breaking through the bottlenecks of collaborative multiple autonomous systems is therefore significant for advancing military intelligence, promoting the high-quality development of intelligent industries, and accelerating industrial transformation and upgrading in China. Starting from the key challenge that current collaborative multiple autonomous systems struggle to adapt to complex tasks, this study systematically reviews their research status at the levels of fundamental theory and core key technologies, analyzes the major bottlenecks constraining the development of both, and, taking multi-robot collaborative intelligent manufacturing as a typical application, examines the problems that remain in theoretical and technological development. The study concludes that collaborative multiple autonomous systems will inevitably evolve toward human-machine teaming. To seize this opportunity, it is critical to lay out fundamental theoretical research on human-machine teaming as early as possible, accelerate breakthroughs in core technologies, and speed up demonstration applications.

Keywords

collaborative multiple autonomous systems / swarm intelligence / human-machine teaming / multi-robot collaborative intelligent manufacturing / all-domain perception

Cite this article

Xue Jianru, Fang Jianwu, Pang Shanmin. Collaborative multiple autonomous systems. Strategic Study of CAE, 2024, 26(1): 101-116. https://doi.org/10.15302/J-SSCAE-2024.01.013

Funding

Consulting project of the Chinese Academy of Engineering "Research on the Development Strategy of New-Generation Artificial Intelligence and Industrial Clusters" (2022-PP-07); Key Program of the National Natural Science Foundation of China (62036008)