A Co-Point Mapping-Based Approach to Drivable Area Detection for Self-Driving Cars

Ziyi Liu, Siyu Yu, Nanning Zheng

Engineering ›› 2018, Vol. 4 ›› Issue (4): 479-490. DOI: 10.1016/j.eng.2018.07.010

Research | Robotics—Article

Abstract

The randomness and complexity of urban traffic scenes make drivable area detection a difficult task for self-driving cars. Inspired by human driving behavior, we propose a novel method for drivable area detection that fuses pixel information from a monocular camera with spatial information from a light detection and ranging (LIDAR) scanner. Analogous to the bijection of a collineation, the method introduces a new concept called co-point mapping: a bijection that maps points from the LIDAR scanner to points on the edge of the image segmentation. Candidate drivable areas are located by self-learning models built on the initial drivable areas, which are obtained by fusing obstacle information with superpixels. A fusion of four features is then applied to achieve more robust performance; in particular, a feature called drivable degree (DD) is proposed to quantify how drivable each LIDAR point is. Once the initial drivable area is characterized by the self-learned features, a Bayesian framework is used to compute the final probability map of the drivable area. Our approach relies on no common scene hypotheses and requires no training steps, yet it achieves state-of-the-art performance on the ROAD-KITTI benchmark. Experimental results demonstrate that the proposed method is a general and efficient approach for detecting drivable areas.
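
To make the pipeline concrete, the minimal Python sketch below illustrates the data flow the abstract describes: LIDAR points are projected into the image, matched to segmentation-edge pixels (a nearest-neighbor stand-in for the co-point bijection), scored with a toy drivable-degree feature, and fused into a probability map with a naive Bayesian combination. The function names, the projection matrix P, and the feature definitions are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.spatial import cKDTree

def project_lidar_to_image(points_xyz, P):
    """Project Nx3 LIDAR points into the image plane using a 3x4 camera
    projection matrix P (assumed known from calibration; points are
    assumed to lie in front of the camera, i.e., positive depth)."""
    pts_h = np.hstack([points_xyz, np.ones((len(points_xyz), 1))])
    uvw = pts_h @ P.T
    return uvw[:, :2] / uvw[:, 2:3]  # perspective divide -> (u, v)

def co_point_mapping(lidar_uv, edge_pixels):
    """Map each projected LIDAR point to its nearest segmentation-edge
    pixel -- a simplified stand-in for the paper's co-point bijection."""
    d2 = ((lidar_uv[:, None, :] - edge_pixels[None, :, :]) ** 2).sum(-1)
    return edge_pixels[np.argmin(d2, axis=1)]

def drivable_degree(points_xyz, k=8):
    """Toy drivable degree (DD): a LIDAR point whose local neighborhood
    is flat (low height variance) scores close to 1."""
    tree = cKDTree(points_xyz[:, :2])
    _, idx = tree.query(points_xyz[:, :2], k=k)
    return np.exp(-points_xyz[:, 2][idx].var(axis=1))

def posterior_map(likelihoods, prior=0.5):
    """Fuse per-feature likelihood maps (each valued in [0, 1]) into a
    drivable-area probability map via a naive Bayes combination."""
    p_road = prior * np.prod(likelihoods, axis=0)
    p_bg = (1.0 - prior) * np.prod([1.0 - l for l in likelihoods], axis=0)
    return p_road / (p_road + p_bg + 1e-9)
```

A faithful implementation would enforce a one-to-one matching in co_point_mapping, learn the four feature likelihoods online from the initial drivable area, and use the paper's actual DD definition; the sketch only fixes the overall data flow.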

Keywords

Drivable area / Self-driving / Data fusion / Co-point mapping

Cite this article

Ziyi Liu, Siyu Yu, Nanning Zheng. A Co-Point Mapping-Based Approach to Drivable Area Detection for Self-Driving Cars. Engineering, 2018, 4(4): 479–490. https://doi.org/10.1016/j.eng.2018.07.010

Acknowledgements

This research was partially supported by the National Natural Science Foundation of China (61773312), the National Key Research and Development Plan (2017YFC0803905), and the Program of Introducing Talents of Discipline to University (B13043).
Compliance with ethics guidelines

Ziyi Liu, Siyu Yu, and Nanning Zheng declare that they have no conflicts of interest or financial conflicts to disclose.
