
This is an Open Access publication published under CSC-OpenAccess Policy.
Fusion of Multispectral and Full Polarimetric SAR Images in NSST Domain
Ghada S. El-Taweel, Ashraf Khaled Helmy
Pages - 497 - 513     |    Revised - 01-12-2014     |    Published - 31-12-2014
Volume - 8   Issue - 6    |    Publication Date - November / December 2014
Multi-spectral Data Fusion, POLSAR, NSST, m-PCNN.
Polarimetric SAR (POLSAR) and multispectral images capture different characteristics of the imaged objects: multispectral imagery provides information about surface material, while POLSAR provides information about the geometrical and physical properties of the objects. Merging both should resolve many of the object-recognition problems that arise when either is used separately. In this paper, we propose a new scheme for fusing a full-polarization radar image (POLSAR) with a multispectral optical satellite image (Egyptsat). The proposed scheme is based on the Non-Subsampled Shearlet Transform (NSST) and a multi-channel Pulse Coupled Neural Network (m-PCNN). NSST decomposes each image into low-frequency and band-pass sub-band coefficients. For the low-frequency coefficients, a fusion rule is proposed based on local energy and the dispersion index. For the band-pass sub-band coefficients, the m-PCNN guides how the fused coefficients are computed from image textural information.
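The exact low-frequency fusion formulas are given in the paper body; as an illustration only, the sketch below shows one plausible rule of this kind, assuming local energy is a windowed sum of squared coefficients and the dispersion index is the windowed variance-to-mean ratio. The window size and the way the two measures are combined are assumptions, not the authors' definitions.

```python
import numpy as np

def window_sums(img, win=3):
    """Sum of values in a win x win neighborhood around each pixel."""
    pad = win // 2
    p = np.pad(img, pad, mode="reflect")
    h, w = img.shape
    out = np.zeros((h, w), dtype=float)
    for dy in range(win):
        for dx in range(win):
            out += p[dy:dy + h, dx:dx + w]
    return out

def local_energy(band, win=3):
    """Windowed sum of squared coefficients (an activity measure)."""
    return window_sums(band.astype(float) ** 2, win)

def dispersion_index(band, win=3, eps=1e-12):
    """Windowed variance-to-mean ratio of coefficient magnitudes."""
    a = np.abs(band).astype(float)
    n = win * win
    mean = window_sums(a, win) / n
    var = window_sums(a ** 2, win) / n - mean ** 2
    return var / (mean + eps)

def fuse_lowpass(lp_a, lp_b, win=3):
    """Per pixel, keep the source whose combined activity is larger."""
    act_a = local_energy(lp_a, win) * (1.0 + dispersion_index(lp_a, win))
    act_b = local_energy(lp_b, win) * (1.0 + dispersion_index(lp_b, win))
    return np.where(act_a >= act_b, lp_a, lp_b)
```

Under this (assumed) rule, a pixel surrounded by strong, scattered coefficients in one source wins over a flat region in the other, which is the usual intent of energy-plus-dispersion activity measures.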

The proposed method is applied to three batches of Egyptsat (red, green and near-infrared bands) and RADARSAT-2 (C-band full-polarimetric HH, HV and VV polarization) images. The batches are selected to respond differently to the different polarizations. Visual assessment of the fused images shows excellent clarity and delineation of the different objects, and quantitative evaluation shows that the proposed method outperforms the other data fusion methods.
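The m-PCNN details (channel coupling, linking strengths, iteration count) are likewise in the paper itself; the sketch below only illustrates the general PCNN-guided selection idea, using a simplified single-channel PCNN per source and per sub-band, with all parameter values chosen arbitrarily rather than taken from the authors' model:

```python
import numpy as np

def neighbor_sum(Y):
    """8-neighbor sum of the firing map (zero-padded borders)."""
    p = np.pad(Y, 1)
    h, w = Y.shape
    return sum(p[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
               for dy in (-1, 0, 1) for dx in (-1, 0, 1)
               if (dy, dx) != (0, 0))

def pcnn_fire_counts(stim, iters=30, beta=0.2, decay=0.7, V=20.0):
    """Cumulative firing counts of a simplified PCNN driven by |stim|."""
    s = np.abs(stim).astype(float)
    s = (s - s.min()) / (np.ptp(s) + 1e-12)  # normalize stimulus to [0, 1]
    Y = np.zeros_like(s)                      # firing map
    theta = np.ones_like(s)                   # dynamic threshold
    counts = np.zeros_like(s)
    for _ in range(iters):
        L = neighbor_sum(Y)                   # linking input from neighbors
        U = s * (1.0 + beta * L)              # internal activity
        Y = (U > theta).astype(float)         # fire where activity beats threshold
        theta = decay * theta + V * Y         # decay, then recharge fired neurons
        counts += Y
    return counts

def fuse_bandpass(band_a, band_b):
    """Per pixel, keep the coefficient whose PCNN fired more often."""
    ca = pcnn_fire_counts(band_a)
    cb = pcnn_fire_counts(band_b)
    return np.where(ca >= cb, band_a, band_b)
```

Strong, textured coefficients fire earlier and more often (neighboring firings raise the linking input L), so firing counts act as the texture-driven activity measure that decides which source supplies each fused sub-band coefficient.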
Associate Professor Ghada S. El-Taweel
Computer Science Dept., Faculty of Computers and Informatics, Suez Canal University, Ismailia - Egypt
Dr. Ashraf Khaled Helmy
Data Reception, Analysis and Receiving Station Affairs Division, National Authority for Remote Sensing and Space Sciences, Cairo - Egypt