
Intrackability: characterizing video statistics and pursuing video representations. (English) Zbl 1235.68265
Summary: Videos of natural environments contain a wide variety of motion patterns of varying complexity, which are represented by many different models in the vision literature. In many situations, a tracking algorithm is formulated as maximizing a posterior probability. In this paper, we propose to measure video complexity by the entropy of this posterior probability, called the intrackability, to characterize video statistics and pursue optimal video representations. Based on the definition of intrackability, our study pursues three objectives. First, we characterize video clips of natural scenes by intrackability. We calculate the intrackabilities of image points to measure the local inferential uncertainty, and collect the histogram of the intrackabilities over the video in space and time as the global video statistics. We find that a PCA scatter plot based on the first two principal components of the intrackability histograms reflects the major variations in natural video clips, i.e., image scaling and object density. Second, we show that different video representations, including deformable contours, tracking kernels with various appearance features, dense motion fields, and dynamic texture models, are connected by changes in intrackability, and we thus develop a simple criterion for model transitions and for pursuing the optimal video representation. Third, we derive the connections between the intrackability measure and other criteria in the literature, such as the Shi-Tomasi texturedness measure, the condition number, and the Harris-Stephens \(R\) score, and compare with the Shi-Tomasi measure in tracking experiments.
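
To make the definition concrete, the following minimal Python sketch (not the authors' implementation) approximates the posterior over candidate displacements of an image patch with a Gaussian matching likelihood, takes its entropy as a local intrackability proxy, and computes the Shi-Tomasi and Harris-Stephens scores from the same patch for comparison. The patch size, search radius, and noise level sigma are illustrative assumptions.

import numpy as np

def local_intrackability(prev, curr, y, x, half=7, radius=6, sigma=10.0):
    # Entropy of the displacement posterior for the patch centred at (y, x);
    # (y, x) must lie at least half + radius pixels from the image border.
    tpl = prev[y - half:y + half + 1, x - half:x + half + 1].astype(float)
    log_p = []
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            cand = curr[y + dy - half:y + dy + half + 1,
                        x + dx - half:x + dx + half + 1].astype(float)
            ssd = np.sum((tpl - cand) ** 2)
            log_p.append(-ssd / (2.0 * sigma ** 2))   # Gaussian matching likelihood
    log_p = np.array(log_p)
    p = np.exp(log_p - log_p.max())
    p /= p.sum()                                      # posterior over displacements
    return -np.sum(p * np.log(p + 1e-12))             # entropy = intrackability proxy

def corner_scores(img, y, x, half=7, k=0.04):
    # Shi-Tomasi (smallest eigenvalue) and Harris-Stephens R from the structure tensor.
    patch = img[y - half:y + half + 1, x - half:x + half + 1].astype(float)
    gy, gx = np.gradient(patch)
    M = np.array([[np.sum(gx * gx), np.sum(gx * gy)],
                  [np.sum(gx * gy), np.sum(gy * gy)]])
    eigs = np.linalg.eigvalsh(M)
    return eigs.min(), np.linalg.det(M) - k * np.trace(M) ** 2

Accumulating these entropies over all points and frames of a clip yields the histogram used as the global video statistic, whose first two principal components produce the scatter plot described above.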

MSC:
68T45 Machine vision and scene understanding
References:
[1] Ali, S., & Shah, M. (2007). A Lagrangian particle dynamics approach for crowd flow segmentation and stability analysis. In CVPR.
[2] Ali, S., & Shah, M. (2008). Floor fields for tracking in high density crowd scenes. In ECCV.
[3] Badrinarayanan, V., Perez, P., Le Clerc, F., & Oisel, L. (2007). On uncertainties, random features and object tracking. In ICIP.
[4] Black, M. J., & Fleet, D. J. (2000). Probabilistic detection and tracking of motion boundaries. Int. J. Comput. Vis., 38(3), 231–245. · Zbl 1012.68694 · doi:10.1023/A:1008195307933
[5] Collins, R., Liu, Y., & Leordeanu, M. (2005). Online selection of discriminative tracking features. IEEE Trans. Pattern Anal. Mach. Intell., 27(10), 1631–1643. · Zbl 05112482 · doi:10.1109/TPAMI.2005.205
[6] Collins, R. T. (2003). Mean-shift blob tracking through scale space. In CVPR.
[7] Comaniciu, D., Ramesh, V., & Meer, P. (2003). Kernel-based object tracking. IEEE Trans. Pattern. Anal. Mach. Intell., 25(5), 564–577. · Zbl 05111294 · doi:10.1109/TPAMI.2003.1195991
[8] Cong, Y., Gong, H., Zhu, S. C., & Tang, Y. (2009). Flow mosaicking: real-time pedestrian counting without scene-specific learning. In CVPR (pp. 1093–1100).
[9] Dreschler, L., & Nagel, H. H. (1981). Volumetric model and 3D trajectory of a moving car derived from monocular TV frame sequences of a street scene. In IJCAI (pp. 692–697).
[10] Fan, Z., Yang, M., Wu, Y., Hua, G., & Yu, T. (2006). Efficient optimal kernel placement for reliable visual tracking. In CVPR.
[11] Fitzgibbon, A. (2001). Stochastic rigidity: image registration for nowhere-static scenes. In ICCV.
[12] Han, T. X., Ramesh, V., Zhu, Y., & Huang, T. S. (2005). On optimizing template matching via performance characterization. In ICCV.
[13] Harris, C., & Stephens, M. (1988). A combined corner and edge detector. In Proceedings of the fourth Alvey vision conference, Manchester, UK (pp. 147–151).
[14] Horn, B., & Schunck, B. (1981). Determining optical flow. Artif. Intell., 17, 185–203. · doi:10.1016/0004-3702(81)90024-2
[15] Kadir, T., & Brady, M. (2001). Saliency, scale and image description. Int. J. Comput. Vis., 45(2), 83–105. · Zbl 0987.68597 · doi:10.1023/A:1012460413855
[16] Koenderink, J. J. (1984). The structure of images. Biol. Cybern., 50, 363–370. · Zbl 0537.92011 · doi:10.1007/BF00336961
[17] Kwon, J., Lee, K. M., & Park, F. C. (2009). Visual tracking via geometric particle filtering on the affine group with optimal importance function. In CVPR.
[18] Li, Z., Gong, H., Sang, N., & Zhu, G. (2007a). Intrackability theory and application. In SPIE MIPPR.
[19] Li, Z., Gong, H., Zhu, S. C., & Sang, N. (2007b). Dynamic feature cascade for multiple object tracking with trackability analysis. In EMMCVPR.
[20] Lindeberg, T. (1993). Detecting salient blob-like image structures and their scales with a scale-space primal sketch: a method for focus-of-attention. Int. J. Comput. Vis., 11(3), 283–318. · doi:10.1007/BF01469346
[21] Maccormick, J., & Blake, A. (2000). A probabilistic exclusion principle for tracking multiple objects. Int. J. Comput. Vis., 39(1), 57–71. · Zbl 1060.68629 · doi:10.1023/A:1008122218374
[22] Marr, D., Poggio, T., & Ullman, S. (1979). Bandpass channels, zero-crossings, and early visual information processing. J. Opt. Soc. Am., 69, 914–916. · doi:10.1364/JOSA.69.000914
[23] Nickels, K., & Hutchinson, S. (2002). Estimating uncertainty in SSD-based feature tracking. Image Vis. Comput., 20(1), 47–68. · doi:10.1016/S0262-8856(01)00076-2
[24] Pan, P., Porikli, F., & Schonfeld, D. (2009). Recurrent tracking using multifold consistency. In IEEE workshop on VS-PETS.
[25] Pylyshyn, Z. W. (2004). Some puzzling findings in multiple object tracking (MOT): I. Tracking without keeping track of object identities. Vis. Cogn., 11(7), 801–822. · doi:10.1080/13506280344000518
[26] Pylyshyn, Z. W. (2006). Some puzzling findings in multiple object tracking (MOT): II. Inhibition of moving nontargets. Vis. Cogn., 14(2), 175–198. · doi:10.1080/13506280544000200
[27] Pylyshyn, Z. W., & Vidal Annan, J. (2006). Dynamics of target selection in multiple object tracking (MOT). Spat. Vis., 19(6), 485–504. · doi:10.1163/156856806779194017
[28] Ross, D. A., Lim, J., Lin, R. S., & Yang, M. H. (2008). Incremental learning for robust visual tracking. Int. J. Comput. Vis., 77, 125–141. · Zbl 05322217 · doi:10.1007/s11263-007-0075-7
[29] Sato, K., & Aggarwal, J. K. (2004). Temporal spatio-velocity transform and its application to tracking and interaction. Comput. Vis. Image Underst., 96, 100–128. · Zbl 02238399 · doi:10.1016/j.cviu.2004.02.003
[30] Segvic, S., Remazeilles, A., & Chaumette, F. (2006). Enhancing the point feature tracker by adaptive modelling of the feature support. In ECCV.
[31] Serby, D., Koller-Meier, E., & Van Gool, L. (2004). Probabilistic object tracking using multiple features. In ICPR (pp. 184–187).
[32] Shi, J., & Tomasi, C. (1994). Good features to track. In CVPR.
[33] Soatto, S., Doretto, G., & Wu, Y. (2001). Dynamic textures. In ICCV. · Zbl 1030.68646
[34] Srivastava, A., Lee, A., Simoncelli, E., & Zhu, S. (2003). On advances in statistical modeling of natural images. J. Math. Imaging Vis., 18(1), 17–33. · Zbl 1033.68133 · doi:10.1023/A:1021889010444
[35] Szummer, M., & Picard, R. W. (1996). Temporal texture modeling. In ICIP.
[36] Tommasini, T., Fusiello, A., Trucco, E., & Roberto, V. (1998). Making good features track better. In CVPR.
[37] Veenman, C., Reinders, M., & Backer, E. (2001). Resolving motion correspondence for densely moving points. IEEE Trans. Pattern Anal. Mach. Intell., 23, 54–72. · Zbl 05110351 · doi:10.1109/34.899946
[38] Wang, Y., & Zhu, S. C. (2003). Modeling textured motion: particle, wave and sketch. In ICCV (pp. 213–220).
[39] Wang, Y., & Zhu, S. (2008). Perceptual scale space and its applications. Int. J. Comput. Vis., 80(1), 143–165. · Zbl 05322268 · doi:10.1007/s11263-008-0138-4
[40] Wang, Y., Bahrami, S., & Zhu, S. C. (2005). Perceptual scale space and its applications. In ICCV (pp. 58–65).
[41] Witkin, A. (1983). Scale space filtering. In IJCAI.
[42] Wu, Y., Zhu, S., & Guo, C. (2008). From information scaling of natural images to regimes of statistical models. Q. Appl. Math., 66(1), 81–122. · Zbl 1169.62345
[43] Yilmaz, A., Javed, O., & Shah, M. (2006). Object tracking: a survey. ACM Comput. Surv., 38(4), 13. · doi:10.1145/1177352.1177355
[44] Zhou, X. S., Comaniciu, D., & Gupta, A. (2005). An information fusion framework for robust shape tracking. IEEE Trans. Pattern Anal. Mach. Intell., 27(1), 115–123. · Zbl 05110407 · doi:10.1109/TPAMI.2005.3