
车路协同感知技术研究进展及展望
Vehicle‒Infrastructure Cooperative Sensing: Progress and Prospect
近年来,我国自动驾驶研究逐步从聚焦于单车智能技术向车路协同技术转变,为智能交通产业发展带来了重大机遇;我国在车路协同感知领域的研究虽处于起步阶段,但注重技术推动,未来发展前景广阔。本文致力于深入探讨车路协同感知技术的发展动态,梳理了车路协同感知基础支撑技术的特性和发展现状,厘清了车路协同感知技术的研究进展,探讨了其技术发展趋势,并针对推动车路协同感知技术发展提出了一系列建议。研究表明,车路协同感知技术正朝着多源数据融合方向发展,主要集中在纯视觉协同感知技术优化、激光雷达点云处理技术升级、多传感器时空信息匹配与数据融合技术发展以及车路协同感知技术标准体系构建等方面。为进一步促进我国车路协同自动驾驶产业的迅速成长,研究建议,加大对多模态车路协同感知技术的研发投入、深化行业间的合作、制定统一的感知数据处理技术标准并加速技术应用普及,以期推动我国在全球自动驾驶竞争中赢得主动,推动自动驾驶行业稳定持续发展。
Recently, the autonomous driving industry in China has been gradually shifting its focus from individual-vehicle intelligence to vehicle‒infrastructure cooperation. This shift has brought significant opportunities for the intelligent transportation industry. Although research on vehicle‒infrastructure cooperative sensing is still in its early stage in China, it shows a strong dedication to technological innovation, indicating significant potential for future growth. This study examines the development status of vehicle‒infrastructure cooperative sensing and thoroughly explores the characteristics and status of the core technologies that support it. It discusses ongoing advancements in this field, investigates future technology trends, and proposes a range of recommendations for further development. Research indicates that vehicle‒infrastructure cooperative sensing is evolving toward the integration of multi-source data. Current development efforts mainly focus on the optimization of pure visual cooperative sensing, upgrades in LiDAR point cloud processing, advancements in multi-sensor spatiotemporal information matching and data fusion, as well as the establishment of a standards system for vehicle‒infrastructure cooperative sensing technologies. To further boost the rapid growth of vehicle‒infrastructure cooperation in China, increasing investment in the research and development of relevant technologies is advised. Enhancing partnerships among different industry sectors, establishing unified standards for processing perception data, and expediting the broad application of these technologies are also key recommendations. These strategies aim to position China advantageously in the global autonomous driving market, contributing to the sustainable development of the industry.
自动驾驶 / 车路协同感知 / 多源数据 / 激光雷达 / 视频摄像机 / 标准体系
autonomous driving / vehicle‒infrastructure cooperative sensing / multi-source data / LiDAR / video camera / standards system