
Engineering, 2018, Volume 4, Issue 4. doi: 10.1016/j.eng.2018.07.010

A Co-Point Mapping-Based Approach to Drivable Area Detection for Self-Driving Cars

a Institute of Artificial Intelligence and Robotics, Xi’an Jiaotong University, Xi’an 710049, China

b National Engineering Laboratory for Visual Information Processing and Applications, Xi’an Jiaotong University, Xi’an 710049, China

Received: 2017-04-06 Revised: 2017-11-13 Accepted: 2017-12-28 Available online: 2018-07-18


Abstract

The randomness and complexity of urban traffic scenes make drivable area detection a difficult task for self-driving cars. Inspired by human driving behavior, we propose a novel method for drivable area detection that fuses pixel information from a monocular camera with spatial information from a light detection and ranging (LIDAR) scanner. Analogous to the bijection of collineation, we introduce a new concept called co-point mapping: a bijection that maps points from the LIDAR scanner to points on the edge of the image segmentation. Our method locates candidate drivable areas through self-learning models built on initial drivable areas, which are obtained by fusing obstacle information with superpixels. In addition, four features are fused in order to achieve more robust performance. In particular, a feature called drivable degree (DD) is proposed to quantify how drivable each LIDAR point is. After the initial drivable area is characterized by the features obtained through self-learning, a Bayesian framework is used to compute the final probability map of the drivable area. Our approach introduces no common hypotheses and requires no training steps, yet it achieves state-of-the-art performance on the ROAD-KITTI benchmark. Experimental results demonstrate that the proposed method is a general and efficient approach to drivable area detection.
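Any camera–LIDAR fusion of the kind described above presupposes that LIDAR points can be projected into the camera image. The sketch below is illustrative only, not the paper's implementation: it assumes a hypothetical 3×4 projection matrix `P` (intrinsics times extrinsics, in the style of KITTI calibration files) and maps 3D points to pixel coordinates.

```python
import numpy as np

def project_lidar_to_image(points_xyz, P):
    """Project Nx3 LIDAR points into pixel coordinates using a 3x4 matrix P.

    Returns an Nx2 array of (u, v) pixel coordinates; points behind the
    camera (depth <= 0) come back as NaN so they can be filtered out.
    """
    n = points_xyz.shape[0]
    homo = np.hstack([points_xyz, np.ones((n, 1))])  # Nx4 homogeneous coords
    proj = homo @ P.T                                # Nx3 projective coords
    uv = np.full((n, 2), np.nan)
    in_front = proj[:, 2] > 0                        # keep points with positive depth
    uv[in_front] = proj[in_front, :2] / proj[in_front, 2:3]  # perspective divide
    return uv

# Toy example: focal length 1, no principal-point offset.
P = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.0]])
pts = np.array([[2.0, 4.0, 2.0],    # in front of the camera
                [1.0, 1.0, -1.0]])  # behind the camera -> NaN
uv = project_lidar_to_image(pts, P)
print(uv[0])  # [1. 2.]
```

Once each LIDAR point has a pixel location, its spatial attributes (height, obstacle label, a drivable-degree score) can be associated with image regions such as superpixels, which is the kind of pixel/point correspondence that co-point mapping formalizes.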

