
Vision-based mobile robot motion control combining T2 and ND approaches

Published online by Cambridge University Press:  06 September 2013

Francisco Bonin-Font*, Javier Antich Tobaruela, Alberto Ortiz Rodriguez and Gabriel Oliver

Affiliation (all authors): Systems, Robotics and Vision Group, Department of Mathematics and Computer Sciences, University of the Balearic Islands, Palma de Mallorca, Islas Baleares, Spain

*Corresponding author. E-mail: [email protected]

Summary

Navigating through a set of programmed waypoints in a completely unknown environment is a challenging task that depends largely on how the robot perceives and symbolizes its surroundings, and on the decisions it makes to avoid obstacles while attempting to reach successive goals. Strategies based on Tenacity and Traversability (T2) [1] have proven highly effective for reactive navigation, extending the benefits of the artificial potential field method to complex situations such as trapping zones or mazes. This paper presents a new approach to reactive mobile robot behavior control that governs the actions performed to avoid unexpected obstacles while the robot executes a mission between several predefined sites. The new strategy combines the T2 principles for escaping from trapping zones with additional criteria, based on the Nearness Diagram (ND) strategy [13], for moving through cluttered or densely occupied scenarios. The success of a complete set of experiments, using a mobile robot equipped with a single camera, demonstrates the wide range of environmental conditions in which the strategy can be applied.
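To make the combination concrete, below is a minimal, illustrative sketch of such a hybrid reactive controller, assuming a robot that perceives a ring of range sectors spanning [-pi, pi] around its heading. Every name, threshold, and the sector model itself are assumptions made for illustration; this is not the authors' implementation, which is vision-based and includes the full T2 tenacity mechanism. The sketch shows only the selector idea: a potential-field-style command in open space, and an ND-style closest-gap command in cluttered scenes.

import math

N_SECTORS = 36           # assumed angular resolution of the range scan
OBSTACLE_DIST = 1.0      # assumed range (m) below which a sector counts as occupied
CLUTTER_RATIO = 0.4      # assumed occupancy fraction that flags a cluttered scene
K_ATT, K_REP = 1.0, 0.3  # assumed attraction/repulsion gains

def sector_bearing(i, n=N_SECTORS):
    # Bearing (rad) of the centre of sector i; sector 0 starts at -pi.
    return -math.pi + (i + 0.5) * (2.0 * math.pi / n)

def is_cluttered(ranges):
    # ND-style trigger: the scene is cluttered when many sectors are occupied.
    occupied = sum(1 for r in ranges if r < OBSTACLE_DIST)
    return occupied / len(ranges) >= CLUTTER_RATIO

def potential_field_steering(goal_bearing, ranges):
    # Potential-field mode: attraction toward the goal bearing plus a
    # repulsive turn away from each occupied sector, stronger when closer.
    steer = K_ATT * goal_bearing
    for i, r in enumerate(ranges):
        if r < OBSTACLE_DIST:
            steer -= K_REP * (OBSTACLE_DIST - r) * math.copysign(1.0, sector_bearing(i))
    return steer

def closest_gap_steering(goal_bearing, ranges):
    # ND-style mode: group consecutive free sectors into gaps and steer to
    # the centre of the gap closest to the goal bearing (wrap-around between
    # the first and last sector, and angle wrapping, are ignored for brevity).
    free = [i for i, r in enumerate(ranges) if r >= OBSTACLE_DIST]
    if not free:
        return math.pi  # assumed fallback: turn around
    gaps, current = [], [free[0]]
    for i in free[1:]:
        if i == current[-1] + 1:
            current.append(i)
        else:
            gaps.append(current)
            current = [i]
    gaps.append(current)
    centres = [sector_bearing(sum(g) / len(g)) for g in gaps]
    return min(centres, key=lambda c: abs(c - goal_bearing))

def reactive_steering(goal_bearing, ranges):
    # Top-level selector: ND-style criteria in clutter, potential field otherwise.
    if is_cluttered(ranges):
        return closest_gap_steering(goal_bearing, ranges)
    return potential_field_steering(goal_bearing, ranges)

For example, reactive_steering(0.0, [2.0] * N_SECTORS) returns 0.0 (open space: head straight for the goal), while a scan with most sectors under OBSTACLE_DIST switches to the gap-based rule. A real T2 layer would additionally add memory ("tenacity") to commit to a boundary-following escape direction inside trapping zones; that state machine is omitted here.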

Type: Articles
Copyright: © Cambridge University Press 2013


References

1. Antich, J. and Ortiz, A., “Extending the Potential Fields Approach to Avoid Trapping Situations,” Proceedings of the IEEE International Conference on Intelligent Robots and Systems (IROS), Edmonton, Alberta, Canada, Aug. 2–6 (2005).
2. Bonin, F., Ortiz, A. and Oliver, G., “Visual navigation for mobile robots: A survey,” J. Intell. Robot. Syst. 53 (3), 263–296 (2008).
3. Bonin, F., Burguera, A., Ortiz, A. and Oliver, G., “Concurrent visual navigation and localization using the inverse perspective transformation,” Electron. Lett. 48 (5), 264–266 (2012).
4. Bonin, F., Ortiz, A. and Oliver, G., “A Novel Inverse Perspective Transformation-based Reactive Navigation Strategy,” Proceedings of the 4th European Conference on Mobile Robots (ECMR), Mlini/Dubrovnik, Croatia, Sep. 23–25 (2009).
5. Bonin, F., Ortiz, A. and Oliver, G., “Experimental Assessment of Different Feature Tracking Strategies for an IPT-Based Navigation Task,” Proceedings of the IFAC Intelligent Autonomous Vehicles Conference (IAV), Lecce, Italy, Sep. 6–8 (2010).
6. Burguera, A., Gonzalez, Y. and Oliver, G., “The UspIC: performing scan matching localization using an imaging sonar,” Sensors 12 (6), 7855–7885 (2012).
7. Duda, R. O. and Hart, P., Pattern Classification and Scene Analysis (John Wiley, New York, NY, 1973).
8. Durham, J. W. and Bullo, F., “Smooth Nearness-Diagram Navigation,” Proceedings of the IEEE International Conference on Intelligent Robots and Systems (IROS), Nice, France (2008).
9. Hartley, R. and Zisserman, A., Multiple View Geometry in Computer Vision (Cambridge University Press, Cambridge, UK, 2003).
10. Kuo, C. H., Syu, Y. S., Tsai, T. C. and Chen, T. S., “An Embedded Robotic Wheelchair Control Architecture with Reactive Navigations,” Proceedings of the IEEE Conference on Automation Science and Engineering (2011).
11. Gerkey, B., Vaughan, R. and Howard, A., “The Player Project,” available at: http://playerstage.sourceforge.net/
12. Koren, Y. and Borenstein, J., “Potential Field Methods and Their Inherent Limitations for Mobile Robot Navigation,” Proceedings of the IEEE International Conference on Robotics and Automation (ICRA) (1991).
13. Minguez, J. and Montano, L., “Nearness diagram (ND) navigation: Collision avoidance in troublesome scenarios,” IEEE Trans. Robot. Autom. 20 (1), 45–59 (2004).
14. Mujahad, M., Fischer, D., Mertsching, B. and Jaddu, H., “Closest Gap Based (CG) Reactive Obstacle Avoidance Navigation for Highly Cluttered Environments,” Proceedings of the IEEE International Conference on Intelligent Robots and Systems (IROS) (2010).
15. Minguez, J., Osuna, J. and Montano, L., “A Divide and Conquer Strategy Based on Situations to Achieve Reactive Collision Avoidance in Troublesome Scenarios,” Proceedings of the IEEE International Conference on Robotics and Automation (ICRA) (2004).
16. Correa, D. S. O., Sciotti, D. F., Prado, M. G., Sales, D. O., Wolf, D. F. and Osorio, F. S., “Mobile Robots Navigation in Indoor Environments Using Kinect Sensor,” Proceedings of the IEEE Brazilian Conference on Critical Embedded Systems (2012).
17. Bouguet, J.-Y., “Pyramidal Implementation of the Lucas Kanade Feature Tracker,” Intel Corporation Microprocessor Research Labs (2000), available at: http://robots.stanford.edu/cs223b04/algo_tracking.pdf