
LIDAR and stereo combination for traversability assessment of off-road robotic vehicles

Published online by Cambridge University Press:  15 June 2015

Giulio Reina*
Affiliation:
Department of Engineering for Innovation, University of Salento, Via Arnesano, 73100 Lecce, Italy
Annalisa Milella
Affiliation:
Institute of Intelligent Systems for Automation, National Research Council, via G. Amendola 122 D/O, 70126, Bari, Italy. E-mail: [email protected]
Rainer Worst
Affiliation:
Fraunhofer IAIS, Schloss Birlinghoven, 53757 Sankt Augustin, Germany. E-mail: [email protected]
*Corresponding author. E-mail: [email protected]

Summary

Reliable assessment of terrain traversability using multi-sensory input is a key issue for driving automation, particularly when the domain is unstructured or semi-structured, as in natural environments. In this paper, a LIDAR-stereo combination is proposed to detect traversable ground in outdoor applications. The system integrates two self-learning classifiers, one based on LIDAR data and one based on stereo data, to detect the broad class of drivable ground. Each single-sensor classifier features two main stages: an adaptive training stage and a classification stage. During the training stage, the classifier automatically learns to associate the geometric appearance of 3D data with class labels; it then makes predictions based on past observations. The outputs of the single-sensor classifiers are statistically combined to exploit their individual strengths and reach an overall better performance than either could achieve separately. Experimental results, obtained with a test bed platform operating in rural environments, are presented to validate and assess the performance of this approach, showing its effectiveness and potential applicability to autonomous navigation in outdoor contexts.
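To make the combination step concrete, the sketch below shows one common way to fuse per-point "ground" posteriors produced by two independent classifiers. It assumes a naive-Bayes style combination with an assumed class prior; the function name, the prior value, and the use of Python/NumPy are illustrative assumptions, not the exact statistical scheme used in the paper.

```python
import numpy as np

def fuse_ground_probabilities(p_lidar, p_stereo, prior_ground=0.5):
    """Fuse per-point 'ground' posteriors from two classifiers.

    Assumes the LIDAR-based and stereo-based classifiers are
    conditionally independent given the class (naive-Bayes assumption).

    p_lidar, p_stereo : arrays of P(ground | sensor) in [0, 1]
    prior_ground      : assumed prior probability of the ground class
    """
    p_lidar = np.asarray(p_lidar, dtype=float)
    p_stereo = np.asarray(p_stereo, dtype=float)

    # Combine the two posteriors for each class, dividing out the shared
    # prior so that it is not counted twice.
    ground = (p_lidar * p_stereo) / prior_ground
    non_ground = ((1.0 - p_lidar) * (1.0 - p_stereo)) / (1.0 - prior_ground)

    # Normalized fused posterior for the 'ground' class.
    return ground / (ground + non_ground)

if __name__ == "__main__":
    # Hypothetical example: three terrain points seen by both sensors.
    p_lidar = [0.9, 0.4, 0.7]
    p_stereo = [0.8, 0.3, 0.2]
    print(fuse_ground_probabilities(p_lidar, p_stereo))
    # Points where both classifiers agree yield confident fused labels;
    # disagreements produce intermediate probabilities.
```

In this kind of scheme, each sensor contributes where it is strong (e.g., LIDAR geometry at short range, stereo density at mid range), and the fused posterior can be thresholded to declare a point traversable.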

Type: Articles
Copyright: © Cambridge University Press 2015


References

1. Aycard, O., Baig, Q., Bota, S., Nashashibi, F., Nedevschi, S., Pantilie, C. D., Parent, M., Resende, P. and Vu, T.-D., "Intersection Safety using Lidar and Stereo Vision Sensors," IEEE Intelligent Vehicles Symposium, Baden-Baden, Germany (2011) pp. 863–869.
2. Badino, H., Huber, D. and Kanade, T., "Integrating Lidar into Stereo for Fast and Improved Disparity Computation," International Conference on 3D Imaging, Modeling, Processing, Visualization and Transmission (3DIMPVT), Hangzhou, China (2011) pp. 1–8.
3. Bajracharya, M., Maimone, M. W. and Helmick, D., "Autonomy for Mars rovers: Past, present, and future," Computer 41 (12), 44–50 (2008).
4. Bradski, G. and Kaehler, A., Learning OpenCV: Computer Vision with the OpenCV Library (O'Reilly Media, 2008).
5. Broggi, A., Cappalunga, A., Caraffi, C., Cattani, S., Ghidoni, S., Grisleri, P., Porta, P., Posterli, M. and Zani, P., "TerraMax vision at the Urban Challenge 2007," IEEE Trans. Intell. Transp. Syst. 11 (1), 194–205 (2010).
6. Broggi, A., Cardarelli, E., Cattani, S. and Sabbatelli, M., "Terrain Mapping for Off-Road Autonomous Ground Vehicles using Rational B-Spline Surfaces and Stereo Vision," IEEE Intelligent Vehicles Symposium, Gold Coast, Australia (2013) pp. 648–653.
7. Broggi, A., Cattani, S., Patander, M., Sabbatelli, M. and Zani, P., "A Full-3D Voxel-Based Dynamic Obstacle Detection for Urban Scenario using Stereo Vision," International IEEE Annual Conference on Intelligent Transportation Systems (ITSC 2013) (2013) pp. 71–76.
8. Dahlkamp, H., Kaehler, A., Stavens, D., Thrun, S. and Bradski, G., "Self-Supervised Monocular Road Detection in Desert Terrain," Robotics: Science and Systems Conference, Philadelphia, PA, USA (2006) pp. 1–6.
9. Dima, C., Vandapel, N. and Hebert, M., "Classifier Fusion for Outdoor Obstacle Detection," IEEE International Conference on Robotics and Automation, New Orleans, LA, USA (2004) pp. 665–671.
10. Foo, P. and Ng, G., "High-level information fusion: An overview," J. Adv. Inform. Fusion 8 (1), 33–72 (2013).
11. Hadsell, R., Sermanet, P., Ben, J., Erkan, A., Scoffier, M. and Kavukcuoglu, K., "Learning long-range vision for autonomous off-road driving," J. Field Robot. 26 (2), 120–144 (2009).
12. Hague, T., Marchant, J. and Tillett, N., "Ground-based sensing systems for autonomous agricultural vehicles," Comput. Electron. Agric. 25 (1–2), 11–28 (2000).
13. Hastie, T., Tibshirani, R. and Friedman, J., The Elements of Statistical Learning (Springer-Verlag, New York, 2003).
14. Konolige, K., Agrawal, M., Blas, M. R., Bolles, R. C., Gerkey, B. P., Solà, J. and Sundaresan, A., "Mapping, navigation, and learning for off-road traversal," J. Field Robot. 26 (1), 88–113 (2009).
15. Konolige, K., Bowman, J., Chen, J., Mihelich, P., Calonder, M., Lepetit, V. and Fua, P., "View-based maps," Int. J. Robot. Res. 29 (8), 941–957 (2010).
16. Lalonde, J., Vandapel, N., Huber, D. and Hebert, M., "Natural terrain classification using three-dimensional ladar data for ground robot mobility," J. Field Robot. 23 (10), 839–861 (2006).
17. Manduchi, R., Castano, A., Talukder, A. and Matthies, L., "Obstacle detection and terrain classification for autonomous off-road navigation," Auton. Robots 18, 81–102 (2004).
18. Milella, A. and Reina, G., "3D reconstruction and classification of natural environments by an autonomous vehicle using multi-baseline stereo," Intell. Serv. Robot. 7, 79–92 (2014).
19. Milella, A., Reina, G. and Underwood, J., "A self-learning framework for statistical ground classification using radar and monocular vision," J. Field Robot. 32 (1), 20–41 (2015).
20. Mousazadeh, H., "A technical review on navigation systems of agricultural autonomous off-road vehicles," J. Terramech. 50 (3), 211–232 (2013).
21. Nüchter, A., Lingemann, K., Hertzberg, J. and Surmann, H., "6D SLAM–3D mapping outdoor environments," J. Field Robot. 24, 699–722 (2007).
22. Nedevschi, S., Danescu, R., Frentiu, D., Marita, T., Oniga, F., Pocol, C., Graf, T. and Schmidt, R., "High Accuracy Stereovision Approach for Obstacle Detection on Non-Planar Roads," Proceedings of IEEE INES, Cluj-Napoca, Romania (2004) pp. 211–216.
23. Nickels, K., Castano, A. and Cianci, C. M., "Fusion of Lidar and Stereo Range for Mobile Robots," International Conference on Advanced Robotics, Coimbra, Portugal (2003) pp. 1–6.
24. Perrollaz, M., Yoder, J. D. and Laugier, C., "Using Obstacles and Road Pixels in the Disparity-Space Computation of Stereo-Vision Based Occupancy Grids," IEEE Conference on Intelligent Transportation Systems (ITSC) (2010) pp. 1147–1152.
25. Point Grey, "Triclops software development kit," Available at: http://www.ptgrey.com/triclops [accessed June 1, 2015].
26. Poppinga, J., Birk, A. and Pathak, K., "Hough-based terrain classification for realtime detection of drivable ground," J. Field Robot. 25 (1–2), 67–88 (2008).
27. Reina, G., Ishigami, G., Nagatani, K. and Yoshida, K., "Vision-Based Estimation of Slip Angle for Mobile Robots and Planetary Rovers," Proceedings of IEEE International Conference on Robotics and Automation, Pasadena, CA, USA (2008) pp. 486–491.
28. Reina, G. and Milella, A., "Toward autonomous agriculture: Automatic ground detection using trinocular stereovision," Sensors 12 (9), 12405–12423 (2012).
29. Reina, G., Milella, A., Halft, W. and Worst, R., "LIDAR and Stereo Imagery Integration for Safe Navigation in Outdoor Settings," IEEE International Symposium on Safety, Security, and Rescue Robotics (SSRR) (2013) pp. 1–6.
30. Reina, G., Milella, A. and Underwood, J., "Self-learning classification of radar features for scene understanding," Robot. Auton. Syst. 60 (11), 1377–1388 (2012).
31. Rohmer, E., Reina, G. and Yoshida, K., "Dynamic simulation-based action planner for a reconfigurable hybrid leg-wheel planetary exploration rover," Adv. Robot. 24 (8–9), 1219–1238 (2010).
32. Santana, P., Guedes, M., Correia, L. and Barata, J., "A Saliency-Based Solution for Robust Off-Road Obstacle Detection," IEEE International Conference on Robotics and Automation, Anchorage, Alaska, USA (2010) pp. 3096–3101.
33. Tax, D., "One-Class Classification: Concept Learning in the Absence of Counter Examples," Ph.D. Thesis (Delft University of Technology, Delft, Netherlands, 2001).
34. Wedel, A., Badino, H., Rabe, C., Loose, H., Franke, U. and Cremers, D., "B-spline modeling of road surfaces with an application to free-space estimation," IEEE Trans. Intell. Transp. Syst. 10 (4), 572–583 (2009).
35. Weiss, U. and Biber, P., "Plant detection and mapping for agricultural robots using a 3D LIDAR sensor," Robot. Auton. Syst. 59 (5), 265–273 (2011).
36. Wellington, C. and Stentz, A., "Online Adaptive Rough-Terrain Navigation in Vegetation," Proceedings of the International Conference on Robotics and Automation, New Orleans, LA, USA (2004) pp. 96–101.
37. Wurm, K. M., Hornung, A., Bennewitz, M., Stachniss, C. and Burgard, W., "OctoMap: A Probabilistic, Flexible, and Compact 3D Map Representation for Robotic Systems," ICRA Workshop on Best Practice in 3D Perception and Modeling for Mobile Manipulation, Anchorage, Alaska, USA (2010) pp. 1–8.
38. Zhang, Q. and Pless, R., "Extrinsic calibration of a camera and laser range finder (improves camera calibration)," Proceedings of IEEE/RSJ International Conference on Intelligent Robots and Systems, Sendai, Japan, vol. 3 (2004) pp. 2301–2306.
39. Zhao, J., Katupitiya, J. and Ward, J., "Global Correlation Based Ground Plane Estimation using v-Disparity Image," International Conference on Robotics and Automation, Rome, Italy (2007) pp. 529–534.
40. Zhou, S., Xi, J., McDaniel, M. W., Nishihata, T., Salesses, P. and Iagnemma, K., "Self-supervised learning to visually detect terrain surfaces for autonomous robots operating in forested terrain," J. Field Robot. 29 (2), 277–297 (2012).