Multifocus image fusion using the nonsubsampled contourlet transform. (English) Zbl 1178.94035

Summary: A novel image fusion algorithm based on the nonsubsampled contourlet transform (NSCT) is proposed in this paper, aimed at solving the fusion problem of multifocus images. The selection principles for the different subband coefficients obtained by the NSCT decomposition are discussed in detail. For the lowpass subband coefficients, a ‘selecting’ scheme combined with an ‘averaging’ scheme, based on the directional vector norm, is presented. For the bandpass directional subband coefficients, a selection principle based on the directional bandlimited contrast and the directional vector standard deviation is put forward. Experimental results demonstrate that the proposed algorithm can not only extract more important visual information from the source images, but also effectively avoid the introduction of artificial information. It significantly outperforms the traditional discrete wavelet transform-based and discrete wavelet frame transform-based image fusion methods in terms of both visual quality and objective evaluation, especially when the source images are not perfectly registered.
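The flavour of the two selection principles can be conveyed by a minimal sketch. The NSCT decomposition itself is omitted here (the authors rely on the NSCT toolbox); a plain local standard deviation stands in for the paper's directional vector norm and directional-std measures, and the function names, window size, and threshold below are illustrative assumptions, not the authors' exact formulas.

```python
# Illustrative sketch only: coefficient-selection rules in the spirit of the
# summary, applied to precomputed "subband" arrays (plain nested lists).
# local_std() is a simple stand-in for the directional activity measures.

def local_std(band, r, c, win=1):
    """Standard deviation over a (2*win+1)^2 neighbourhood, clipped at borders."""
    vals = []
    for i in range(max(0, r - win), min(len(band), r + win + 1)):
        for j in range(max(0, c - win), min(len(band[0]), c + win + 1)):
            vals.append(band[i][j])
    m = sum(vals) / len(vals)
    return (sum((v - m) ** 2 for v in vals) / len(vals)) ** 0.5

def fuse_lowpass(a, b, thresh=0.8):
    """'Selecting' combined with 'averaging': take the coefficient from the
    source whose local activity clearly dominates; average when the two
    activities are comparable.  thresh is an assumed tuning parameter."""
    fused = [[0.0] * len(a[0]) for _ in a]
    for r in range(len(a)):
        for c in range(len(a[0])):
            sa, sb = local_std(a, r, c), local_std(b, r, c)
            hi, lo = max(sa, sb), min(sa, sb)
            if hi > 0 and lo / hi < thresh:          # one source clearly sharper
                fused[r][c] = a[r][c] if sa >= sb else b[r][c]
            else:                                    # comparable activity: average
                fused[r][c] = 0.5 * (a[r][c] + b[r][c])
    return fused

def fuse_bandpass(a, b):
    """Choose-max on local activity, a crude stand-in for the paper's
    directional-bandlimited-contrast / directional-std selection rule."""
    return [[a[r][c] if local_std(a, r, c) >= local_std(b, r, c) else b[r][c]
             for c in range(len(a[0]))] for r in range(len(a))]
```

In a full pipeline these rules would be applied per subband after NSCT decomposition of each registered source image, followed by the inverse NSCT on the fused coefficients.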


94A08 Image processing (compression, reconstruction, etc.) in information and communication theory
65T50 Numerical methods for discrete and fast Fourier transforms


NSCT toolbox
Full Text: DOI


[1] Pajares, G.; De La Cruz, J. M.: A wavelet-based image fusion tutorial, Pattern recognition 37, No. 9, 1855-1872 (2004)
[2] Li, S. T.; Kwok, J. T.; Wang, Y. N.: Multifocus image fusion using artificial neural networks, Pattern recognition letters 23, No. 8, 985-997 (2002) · Zbl 1032.68722
[3] Z.H. Li, Z.L. Jing, G. Liu, S.Y. Sun, H. Leung, Pixel visibility based multifocus image fusion, in: IEEE International Conference on Neural Networks and Signal Processing, Nanjing, China, 14 – 17 December, 2003, pp. 1050 – 1053.
[4] Eltoukhy, H. A.; Kavusi, S.: A computationally efficient algorithm for multi-focus image reconstruction, Proceedings of SPIE electronic imaging, 332-341 (June 2003)
[5] Zhang, Z.; Blum, R. S.: A categorization of multiscale-decomposition-based image fusion schemes with a performance study for a digital camera application, Proceedings of the IEEE 87, No. 8, 1315-1326 (1999)
[6] Burt, P. J.; Adelson, E. H.: The Laplacian pyramid as a compact image code, IEEE transactions on communications 31, No. 4, 532-540 (1983)
[7] Toet, A.: Image fusion by a ratio of low-pass pyramid, Pattern recognition letters 9, No. 4, 245-253 (1989) · Zbl 0800.68746
[8] Petrović, V. S.; Xydeas, C. S.: Gradient-based multiresolution image fusion, IEEE transactions on image processing 13, No. 2, 228-237 (2004)
[9] Li, H.; Manjunath, B. S.; Mitra, S. K.: Multisensor image fusion using the wavelet transform, Graphical models and image processing 57, No. 3, 235-245 (1995)
[10] De, I.; Chanda, B.: A simple and efficient algorithm for multifocus image fusion using morphological wavelets, Signal processing 86, No. 5, 924-936 (2006) · Zbl 1163.94325
[11] Li, M.; Cai, W.; Tan, Z.: A region-based multi-sensor image fusion scheme using pulse-coupled neural network, Pattern recognition letters 27, No. 16, 1948-1956 (2006)
[12] Do, M. N.; Vetterli, M.: The contourlet transform: an efficient directional multiresolution image representation, IEEE transactions on image processing 14, No. 12, 2091-2106 (2005)
[13] Do, M. N.; Vetterli, M.: Framing pyramids, IEEE transactions on signal processing 51, No. 9, 2329-2342 (2003) · Zbl 1369.94023
[14] Bamberger, R. H.; Smith, M. J. T.: A filter bank for the directional decomposition of images: theory and design, IEEE transactions on signal processing 40, No. 4, 882-893 (1992)
[15] Da Cunha, A. L.; Zhou, J. P.; Do, M. N.: The nonsubsampled contourlet transform: theory, design, and applications, IEEE transactions on image processing 15, No. 10, 3089-3101 (2006)
[16] A.L. da Cunha, J.P. Zhou, M.N. Do, Nonsubsampled contourlet transform: filter design and application in denoising, in: IEEE International Conference on Image Processing, Genoa, Italy, 11 – 14 September, 2005, pp. 749 – 752.
[17] J.P. Zhou, A.L. da Cunha, M.N. Do, Nonsubsampled contourlet transform: construction and application in enhancement, in: IEEE International Conference on Image Processing, Genoa, Italy, 11 – 14 September, 2005, pp. 469 – 476.
[18] Shensa, M. J.: The discrete wavelet transform: wedding the à trous and Mallat algorithms, IEEE transactions on signal processing 40, No. 10, 2464-2482 (1992) · Zbl 0825.94053
[19] Tay, D. B. H.; Kingsbury, N. G.: Flexible design of multidimensional perfect reconstruction FIR 2-band filters using transformation of variables, IEEE transactions on image processing 2, No. 4, 466-480 (1993)
[20] Sweldens, W.: The lifting scheme: A custom-design construction of biorthogonal wavelets, Applied and computational harmonic analysis 3, No. 2, 186-200 (1996) · Zbl 0874.65104
[21] M.I. Sezan, G. Pavlović, A.M. Tekalp, A.T. Erdem, On modeling the focus blur in image restoration, in: IEEE International Conference on Acoustics, Speech, and Signal Processing, Toronto, Canada, 14 – 17 April, 1991, pp. 2485 – 2488.
[22] Toet, A.; Van Ruyven, L. J.; Valeton, J. M.: Merging thermal and visual images by a contrast pyramid, Optical engineering 28, No. 7, 789-792 (1989)
[23] De Valois, R. L.; Yund, E. W.; Hepler, N.: The orientation and direction selectivity of cells in macaque visual cortex, Vision research 22, No. 5, 531-544 (1982)
[24] Wang, Z.; Bovik, A. C.; Sheikh, H. R.; Simoncelli, E. P.: Image quality assessment: from error visibility to structural similarity, IEEE transactions on image processing 13, No. 4, 600-612 (2004)
[25] V. Petrović, C. Xydeas, Objective image fusion performance characterisation, in: IEEE International Conference on Computer Vision, Beijing, China, 17 – 21 October, 2005, pp. 1866 – 1871.
This reference list is based on information provided by the publisher or from digital mathematics libraries. Its items are heuristically matched to zbMATH identifiers and may contain data conversion errors. It attempts to reflect the references listed in the original paper as accurately as possible without claiming the completeness or perfect precision of the matching.