
Mind the gap: detection and traversability analysis of terrain gaps using LIDAR for safe robot navigation

Published online by Cambridge University Press: 14 May 2013

Arnab Sinha
Affiliation:
ALCOR, Vision, Perception and Cognitive Robotics Laboratory, Department of Computer, Control and Management Engineering, University of Rome, “La Sapienza,” Italy
Panagiotis Papadakis*
Affiliation:
ALCOR, Vision, Perception and Cognitive Robotics Laboratory, Department of Computer, Control and Management Engineering, University of Rome, “La Sapienza,” Italy
*Corresponding author. E-mail: [email protected]

Summary

Safe navigation of robotic vehicles is considered a key prerequisite for successful mission operations within highly adverse and unconstrained environments. While there has been extensive research on the perception of positive obstacles, comparatively little progress has been made in the field of negative obstacles. This paper constitutes an elaborate attempt to address the problem of negative obstacle detection and traversability analysis in the form of gaps by processing three-dimensional range data. The domain of application concerns Urban Search and Rescue scenarios, which reflect environments of increased complexity in terms of diverse terrain irregularities. To allow real-time performance and, in turn, timely prevention of unrecoverable robotic states, the proposed approach is based on the application of efficient image morphological operations for noise reduction and border following for the detection and grouping of gaps. Furthermore, we reason about gap traversability, a concept that is novel within the field. Traversability assessments are based on features extracted through Principal Component Analysis, by exploring either the spatial distribution of the interior of an individual gap or the orientation distribution of the corresponding contour. The proposed approach is evaluated within a realistic scenario of a tunnel car-accident site and a challenging outdoor scenario. Using a contemporary Search and Rescue robot, we performed extensive experiments under various parameter settings; the robot always detected the real gaps and either optimally crossed over those that were traversable or otherwise avoided them.
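To make the pipeline above concrete, the Python sketch below (using NumPy and the OpenCV library listed in the references) illustrates how morphological filtering followed by Suzuki-Abe border following (reference 20, the algorithm behind OpenCV's findContours) might extract and group gap contours from a binarized range image, and how Principal Component Analysis of a gap's interior points could yield simple traversability features. All function names, parameters, and thresholds here are illustrative assumptions, not the authors' actual implementation.

    import cv2
    import numpy as np

    def detect_gap_contours(gap_mask, kernel_size=5, min_area=50.0):
        """Extract candidate gap contours from a binary gap mask.

        gap_mask: uint8 image whose non-zero pixels mark projected
        LIDAR returns classified as potential gaps (negative obstacles).
        kernel_size and min_area are illustrative tuning parameters.
        """
        kernel = cv2.getStructuringElement(
            cv2.MORPH_ELLIPSE, (kernel_size, kernel_size))
        # Morphological opening suppresses isolated noise responses;
        # closing fills pinholes so each gap forms a single blob.
        cleaned = cv2.morphologyEx(gap_mask, cv2.MORPH_OPEN, kernel)
        cleaned = cv2.morphologyEx(cleaned, cv2.MORPH_CLOSE, kernel)
        # Suzuki-Abe border following groups the remaining pixels
        # into closed contours (OpenCV >= 4 return signature).
        contours, _ = cv2.findContours(
            cleaned, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        return [c for c in contours if cv2.contourArea(c) >= min_area]

    def pca_gap_features(interior_points):
        """PCA over the 2D interior points of one detected gap.

        Returns the principal axes (as columns) and the eigenvalues
        in descending order; the eigenvalue ratio measures the gap's
        elongation and the dominant axis gives its orientation, from
        which a crossing direction could be reasoned about.
        """
        pts = np.asarray(interior_points, dtype=np.float64)
        centered = pts - pts.mean(axis=0)
        eigvals, eigvecs = np.linalg.eigh(np.cov(centered.T))
        order = np.argsort(eigvals)[::-1]  # largest variance first
        return eigvecs[:, order], eigvals[order]

An analogous analysis could, as the summary suggests, be applied to the orientation distribution of the contour itself, for instance by running the same PCA on the unit direction vectors of successive contour segments.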

Type
Articles
Copyright
Copyright © Cambridge University Press 2013 

References

1. Bellutta, P., Manduchi, R., Matthies, L., Owens, K. and Rankin, A., "Terrain Perception for Demo III," Proceedings of the IEEE Intelligent Vehicles Symposium (2000).
2. BlueBotics, Mobile Robot. Patent No. PCT/EP2011/060937 (BlueBotics SA, Switzerland, Jun. 2011).
3. Intel Corporation, Open Source Computer Vision Library. Available at: http://opencv.willowgarage.com/wiki/ (Aug. 2011). (Accessed April 2013)
4. Crane, C. D. III, Armstrong, D. G. II, Touchton, R., Galluzzo, T., Solanki, S., Lee, J., Kent, D., Ahmed, M., Montane, R., Ridgeway, S., Velat, S., Garcia, G., Griffis, M., Gray, S., Washburn, J. and Routson, G., "Team CIMAR's NaviGATOR: An unmanned ground vehicle for the 2005 DARPA Grand Challenge," J. Field Robot. 23(8), 599–623 (2006).
5. Dima, C., Vandapel, N. and Hebert, M., "Classifier Fusion for Outdoor Obstacle Detection," Proceedings of the International Conference on Robotics and Automation (2004).
6. Dubbelman, G., van der Mark, W., van den Heuvel, J. C. J. and Groen, F. C. A., "Obstacle Detection During Day and Night Conditions Using Stereo Vision," Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (2007).
7. Heckman, N., Lalonde, J.-F., Vandapel, N. and Hebert, M., "Potential Negative Obstacle Detection by Occlusion Labeling," Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (2007) pp. 2168–2173.
8. Kelly, A., Stentz, A., Amidi, O., Bode, M., Bradley, D., Diaz-Calderon, A., Happold, M., Herman, H., Mandelbaum, R., Pilarski, T., Rander, P., Thayer, S., Vallidis, N. and Warner, R., "Toward Reliable Off Road Autonomous Vehicles Operating in Challenging Environments," Int. J. Robot. Res. 25(5–6), 449–483 (2006).
9. Kruijff, G.-J., Janicek, M., Keshavdas, S., Larochelle, B., Zender, H., Smets, N., Mioch, T., Neerincx, M., van Diggelen, J., Colas, F., Liu, M., Pomerleau, F., Siegwart, R., Hlavac, V., Svoboda, T., Petricek, T., Reinstein, M., Zimmerman, K., Pirri, F., Gianni, M., Papadakis, P., Sinha, A., Patrick, B., Tomatis, N., Worst, R., Linder, T., Surmann, H. and Tretyakov, V., "Experience in System Design for Human-Robot Teaming in Urban Search & Rescue," Proceedings of the International Conference on Field and Service Robotics (2012).
10. Lalonde, J. F., Vandapel, N., Huber, D. and Hebert, M., "Natural terrain classification using three-dimensional ladar data for ground robot mobility," J. Field Robot. 23(10), 839–861 (2006).
11. Larson, J. and Trivedi, M., "Lidar Based Off-Road Negative Obstacle Detection and Analysis," Proceedings of the IEEE International Conference on Intelligent Transportation Systems (2011).
12. Larson, J., Trivedi, M. and Bruch, M., "Off-Road Terrain Traversability Analysis and Hazard Avoidance for UGVs," Technical Report, Department of Electrical Engineering, University of California San Diego (2010).
13. Matthies, L., Kelly, A., Litwin, T. and Tharp, G., "Obstacle Detection for Unmanned Ground Vehicles: A Progress Report," Proceedings of the IEEE Intelligent Vehicles Conference (1995) pp. 66–71.
14. Matthies, L. and Rankin, A., "Negative Obstacle Detection by Thermal Signature," Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (2003).
15. Mioch, T., Smets, N. J. J. M. and Neerincx, M. A., "Assessing Human-Robot Performances in Complex Situations with Unit Task Tests," Proceedings of the 21st IEEE International Symposium on Robot and Human Interactive Communication, Paris, France, pp. 621–626.
16. Papadakis, P. and Pirri, F., "3D Mobility Learning and Regression of Articulated, Tracked Robotic Vehicles by Physics-Based Optimization," Virtual Reality Interaction and Physical Simulation (2012) pp. 147–156.
17. Papadakis, P., Content-Based 3D Model Retrieval Considering the User's Relevance Feedback, PhD Thesis (University of Athens, Athens, Greece, 2009).
18. Papadakis, P., Pratikakis, I., Perantonis, S. and Theoharis, T., "Efficient 3D shape matching and retrieval using a concrete radialized spherical projection representation," Pattern Recognit. 40(9), 2437–2452 (2007).
19. Shevtsov, M., Soupikov, A. and Kapustin, A., "Highly parallel fast KD-tree construction for interactive ray tracing of dynamic scenes," Comput. Graph. Forum 26(3), 395–404 (2007).
20. Suzuki, S. and Abe, K., "Topological structural analysis of digitized binary images by border following," Comput. Vis. Graph. Image Process. 30(1), 32–46 (1985).