
Strategic Study of CAE, 2024, Volume 26, Issue 1. DOI: 10.15302/J-SSCAE-2024.01.013

Collaborative Multiple Autonomous Systems

1. Institute of Artificial Intelligence and Robotics, Xi’an Jiaotong University, Xi’an 710049, China;
2. National Key Laboratory of Human‒Machine Hybrid Augmented Intelligence, Xi’an 710049, China;
3. College of Control Science and Engineering, Zhejiang University, Hangzhou 310027, China;
4. School of Software Engineering, Xi’an Jiaotong University, Xi’an 710049, China

Funding project: Chinese Academy of Engineering project “Strategic Research on New Generation of Artificial Intelligence and Its Industrial Cluster” (2022-PP-07); National Natural Science Foundation of China project (62036008)

Received: 2023-12-20  Revised: 2024-01-19  Available online: 2024-02-18


Abstract

Collaborative intelligence, formed through the information and behavioral interactions of multiple autonomous systems, is an inevitable trend for future intelligent systems. It is a focus of China's planning for next-generation artificial intelligence and is crucial for supporting national security and strengthening the manufacturing industry. Research that overcomes the bottlenecks of collaborative multiple autonomous systems will significantly advance intelligent industries and accelerate industrial transformation and upgrading in China. Focusing on the challenge that collaborative multiple autonomous systems cannot yet adapt to complex tasks, this study analyzes their research status and major bottlenecks from the perspectives of fundamental research and engineering practice. Using multi-robot collaborative intelligent manufacturing as an example, we provide an in-depth analysis of the relevant theoretical and technical problems. Our research indicates that collaborative multiple autonomous systems will inevitably evolve toward human‒machine teaming. To seize this opportunity, it is critical to proactively lay the groundwork for theoretical exploration and technological breakthroughs in human‒machine teaming and to conduct exemplary applications.

Figures

Fig. 1 ‒ Fig. 11

