Document Type: Original Research Paper


Department of Surveying Engineering, Faculty of Civil Engineering, Shahid Rajaee Teacher Training University, Tehran, Iran


Background and Objectives: High-resolution multi-spectral (HRMS) images are essential for many practical remote sensing applications. Pan-sharpening is an effective mechanism for producing an HRMS image by integrating the significant structural details of the panchromatic (PAN) image with the rich spectral features of the multi-spectral (MS) image.
Methods: Traditional pan-sharpening methods suffer from drawbacks such as spectral distortion, spatial artifacts, and poor detail preservation in the fused image. The pan-sharpening approach proposed in this paper integrates wavelet decomposition with convolutional sparse representation (CSR). Wavelet decomposition is performed on the PAN and MS images to obtain low-frequency and high-frequency bands. The low-frequency bands are fused using a CSR-based activity-level measurement.
Results: The HRMS image is restored by applying the inverse transform to the fused bands. The fusion rules are constructed so that the crucial details of the source images are transferred effectively to the fused image.
Conclusion: The proposed method produces HRMS images with satisfactory spatial and spectral quality. The visual outcomes and quantitative measures confirm the effectiveness of the proposed fusion framework.
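The pipeline described above (decompose, fuse low-frequency bands by an activity measure, take the inverse transform) can be sketched in a minimal, self-contained form. This is an illustrative simplification, not the paper's implementation: it uses a single-level Haar wavelet, treats one MS band, and substitutes a windowed l1 energy for the paper's CSR-based activity measure; the max-absolute rule for the high-frequency bands and all function names (`haar_decompose`, `local_energy`, `pansharpen`) are assumptions for the sake of the example.

```python
import numpy as np

def haar_decompose(img):
    """Single-level 2-D Haar decomposition: returns the low-frequency
    (LL) band and the three high-frequency bands (LH, HL, HH)."""
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    ll = (a + b + c + d) / 4.0
    lh = (a + b - c - d) / 4.0
    hl = (a - b + c - d) / 4.0
    hh = (a - b - c + d) / 4.0
    return ll, (lh, hl, hh)

def haar_reconstruct(ll, highs):
    """Exact inverse of haar_decompose."""
    lh, hl, hh = highs
    h, w = ll.shape
    out = np.empty((2 * h, 2 * w))
    out[0::2, 0::2] = ll + lh + hl + hh
    out[0::2, 1::2] = ll + lh - hl - hh
    out[1::2, 0::2] = ll - lh + hl - hh
    out[1::2, 1::2] = ll - lh - hl + hh
    return out

def local_energy(x, win=3):
    """Windowed l1 energy -- a simple stand-in (assumption) for the
    CSR coefficient-based activity-level measure used in the paper."""
    pad = win // 2
    xp = np.pad(np.abs(x), pad, mode="edge")
    out = np.zeros_like(x)
    for dy in range(win):
        for dx in range(win):
            out += xp[dy:dy + x.shape[0], dx:dx + x.shape[1]]
    return out

def pansharpen(pan, ms_band):
    """Fuse one MS band with the PAN image: activity-driven selection
    for the low band, max-absolute selection for the high bands."""
    ll_p, hi_p = haar_decompose(pan)
    ll_m, hi_m = haar_decompose(ms_band)
    ll_f = np.where(local_energy(ll_p) >= local_energy(ll_m), ll_p, ll_m)
    hi_f = tuple(np.where(np.abs(hp) >= np.abs(hm), hp, hm)
                 for hp, hm in zip(hi_p, hi_m))
    return haar_reconstruct(ll_f, hi_f)
```

In the full method, this per-pixel selection would be replaced by CSR coefficients learned over the whole low-frequency band, which avoids the blocking artifacts that patch-wise or window-wise rules can introduce.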

©2019 The author(s). This is an open access article distributed under the terms of the Creative Commons Attribution (CC BY 4.0), which permits unrestricted use, distribution, and reproduction in any medium, as long as the original authors and source are cited. No permission is required from the authors or the publishers.





Journal of Electrical and Computer Engineering Innovations (JECEI) welcomes letters to the editor for post-publication discussion and corrections, enabling debate after publication on its site through the Letters to the Editor section. Letters pertaining to a manuscript published in JECEI should be sent to the editorial office of JECEI within three months of online publication or before print publication, except for critiques of original research. The following points should be considered before sending letters (comments) to the editor.

[1] Letters that include statements of statistics, facts, research, or theories should include appropriate references, although more than three are discouraged.

[2] Letters that are personal attacks on an author rather than thoughtful criticism of the author’s ideas will not be considered for publication.

[3] Letters can be no more than 300 words in length.

[4] Letter writers should include a statement at the beginning of the letter stating that it is being submitted either for publication or not.

[5] Anonymous letters will not be considered.

[6] Letter writers must include their city and state of residence or work.

[7] Letters will be edited for clarity and length.