
Scale robust IMU-assisted KLT for stereo visual odometry solution

Published online by Cambridge University Press:  30 August 2016

L. Chermak*
Affiliation: Centre of Electronic Warfare, Cranfield University, Shrivenham, SN6 8LA, UK
N. Aouf
Affiliation: Centre of Electronic Warfare, Cranfield University, Shrivenham, SN6 8LA, UK
M. A. Richardson
Affiliation: Centre of Electronic Warfare, Cranfield University, Shrivenham, SN6 8LA, UK

*Corresponding author. E-mail: [email protected]

Summary

We propose a novel IMU-assisted (Inertial Measurement Unit) stereo visual technique that extends the use of the KLT (Kanade–Lucas–Tomasi) tracker to large inter-frame motion. The constrained and coherent inter-frame motion measured by the IMU is applied to the detected features through a homogeneous transform, using 3D geometry and stereoscopic properties. This efficiently predicts the projection of the optical flow in subsequent images. Accurate adaptive tracking windows limit the tracking areas, keeping feature loss to a minimum and also preventing the tracking of dynamic objects. This new feature tracking approach is adopted as part of a fast and robust visual odometry algorithm based on the double dogleg trust region method. Comparisons with gyro-aided KLT and variant approaches show that our technique maintains minimal feature loss and low computational cost, even on image sequences presenting significant scale change. The visual odometry solution based on this IMU-assisted KLT gives more accurate results than an INS/GPS solution for trajectory generation in certain contexts.
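To make the prediction step concrete, the following is a minimal sketch of how IMU-measured inter-frame motion can seed KLT search windows, in the spirit of the summary above. It is not the authors' implementation: the function name, the assumption that stereo triangulation has already supplied metric depths, and the assumption that the IMU delta pose (R, t) has been mapped into the camera frame are all illustrative choices.

```python
import numpy as np

def predict_feature_locations(pts_uv, depths, K, R_imu, t_imu):
    """Predict where tracked features should reappear in the next frame.

    pts_uv : (N, 2) pixel coordinates of features in the current frame
    depths : (N,) metric depths from stereo triangulation (assumed given)
    K      : (3, 3) camera intrinsic matrix
    R_imu  : (3, 3) inter-frame rotation from the IMU, in the camera frame
    t_imu  : (3,) inter-frame translation from the IMU, in the camera frame
    """
    K_inv = np.linalg.inv(K)
    # Back-project each pixel to a 3D point in the current camera frame.
    pts_h = np.column_stack([pts_uv, np.ones(len(pts_uv))])
    X = (K_inv @ pts_h.T).T * depths[:, None]
    # Apply the IMU-derived rigid-body motion (a homogeneous transform).
    X_next = (R_imu @ X.T).T + t_imu
    # Re-project into the next image to centre the KLT search windows.
    proj = (K @ X_next.T).T
    return proj[:, :2] / proj[:, 2:3]
```

Under these assumptions, each predicted location would centre a small adaptive tracking window around the expected feature position, so the KLT search stays local even under large inter-frame motion or scale change.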

Type
Articles
Copyright
Copyright © Cambridge University Press 2016 

