Closed-form solution of visual-inertial structure from motion. (English) Zbl 1328.68248
Summary: This paper investigates the visual-inertial structure-from-motion problem. A simple closed-form solution to this problem is introduced. Special attention is devoted to identifying the conditions under which the problem has a finite number of solutions. Specifically, it is shown that the problem can have a unique solution, two distinct solutions, or infinitely many solutions, depending on the trajectory, on the number of point features and their layout, and on the number of camera images. The investigation is also performed for the case of biased inertial data, showing that in this case more camera images and more restrictive conditions on the trajectory are required for the problem to be solvable.
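The closed-form character of the solution can be illustrated with a small linear-algebra sketch. The following Python snippet is an illustrative reconstruction under simplifying assumptions (known attitude, unbiased and noise-free inertial data, simulated measurements), not the paper's actual algorithm: integrating the accelerometer expresses every camera position in terms of the unknown initial velocity and gravity, each bearing observation of a point feature then contributes a linear equation in those unknowns and the feature depths, and the rank of the resulting system indicates whether the solution is unique. All variable names and numerical values are assumptions made for this example.

```python
# Illustrative sketch of a closed-form visual-inertial SfM solve.
# Assumptions (not from the paper): known attitude, unbiased and
# noise-free IMU, K images of N point features, simulated data.
import numpy as np

rng = np.random.default_rng(0)
K, N = 4, 3                          # camera images, point features
t = np.array([0.0, 0.5, 1.0, 1.5])   # image timestamps

v_true = np.array([1.0, 0.2, 0.0])     # initial velocity (unknown)
g_true = np.array([0.0, 0.0, -9.81])   # gravity (unknown)
P = rng.uniform(2.0, 6.0, (N, 3))      # feature positions (unknown)

# Double integral of the IMU specific force in the initial frame: with
# known attitude this is computable from the IMU measurements alone,
# so the estimator may treat it as known data.
S = np.array([0.1 * ti**2 * np.array([1.0, 0.0, 0.5]) for ti in t])

# Camera positions implied by v, g and S (p(0) = 0 fixes the origin):
#   p(t_i) = v*t_i + 0.5*g*t_i**2 + S_i
pos = v_true * t[:, None] + 0.5 * g_true * t[:, None] ** 2 + S

# Camera data: unit bearing vectors u[i, j] from image i to feature j.
u = P[None, :, :] - pos[:, None, :]
u /= np.linalg.norm(u, axis=2, keepdims=True)

# Observing feature j at both t_0 and t_i and eliminating the unknown
# feature position gives
#   v*(t_i - t_0) + 0.5*g*(t_i^2 - t_0^2)
#     + lam_ij*u_ij - lam_0j*u_0j = S_0 - S_i,
# which is linear in x = [v, g, lam_00, ..., lam_{K-1,N-1}].
A = np.zeros((3 * N * (K - 1), 6 + K * N))
b = np.zeros(3 * N * (K - 1))
r = 0
for j in range(N):
    for i in range(1, K):
        A[r:r + 3, 0:3] = (t[i] - t[0]) * np.eye(3)
        A[r:r + 3, 3:6] = 0.5 * (t[i]**2 - t[0]**2) * np.eye(3)
        A[r:r + 3, 6 + i * N + j] = u[i, j]
        A[r:r + 3, 6 + j] = -u[0, j]
        b[r:r + 3] = S[0] - S[i]
        r += 3

# Full column rank -> unique closed-form solution; rank deficiency
# signals a degenerate trajectory / feature layout or too few images.
x, *_ = np.linalg.lstsq(A, b, rcond=None)
print("rank:", np.linalg.matrix_rank(A), "/", A.shape[1])
print("v estimate:", x[0:3].round(3), " true:", v_true)
print("g estimate:", x[3:6].round(3), " true:", g_true)
```

In this linear formulation a rank-deficient matrix A corresponds to the degenerate cases the paper characterizes: for instance, a trajectory with no accelerometer excitation makes b vanish and leaves the metric scale, and hence the feature depths, unobservable, so infinitely many solutions remain.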
MSC:
68T45 Machine vision and scene understanding
68T40 Artificial intelligence for robotics
Software:
MonoSLAM