
  Comparative Analysis of Multimodal Image Fusion Using Self Organizing Feature Maps  
  Authors : Dr. Anna Saro Vijendran; G. Paramasivam

 

Multi-sensor image fusion is a challenging task that is fundamental to several modern image processing applications, such as security systems, defence applications, and intelligent machines. Image fusion techniques have been actively investigated and are widely applied across many fields. Fusion is often a vital pre-processing step for computer vision and image processing tasks that depend on imaging data acquired through sensors such as infrared (IR) and visible cameras. This paper proposes a modified image fusion algorithm based on the self-organizing feature map (SOFM), aiming to produce high-quality images. Image fusion integrates complementary and redundant information from multiple images of the same scene into a single composite image that contains all the important features of the original images; the resulting fused image is thus more suitable for human and machine perception and for further image processing tasks. The basic idea is to segment only the far-infrared image, add the information of each segmented region to the visible image, and then determine different fusion parameters for each region. Because the relationship between the fusion parameters and the image features is nonlinear, an artificial neural network is used to handle the varying conditions, so that the fusion parameters can be produced automatically for different states. The experimental results show that the proposed method has good adaptive capacity, with fusion parameters determined automatically, and that the architecture can be used for many applications. In this paper we compare three image fusion methods that combine an IR image with a visible image to produce a quality image containing features of both.
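As a rough illustration of the region-based pipeline sketched above, the following is a minimal sketch in Python, assuming grayscale uint8 NumPy arrays of equal size. The function name, the intensity-quantization segmentation, and the fixed per-region weights are hypothetical simplifications; the paper's method derives its fusion parameters from a trained SOFM rather than from a fixed heuristic.

```python
import numpy as np

def fuse_ir_visible(ir, vis, n_levels=4):
    """Region-based IR/visible fusion sketch (hypothetical, simplified).

    Segments the IR image into intensity regions and blends each region
    into the visible image with a region-dependent weight, mimicking the
    per-region fusion parameters described in the paper. The real method
    would derive these weights from a trained SOFM; here they are fixed.
    """
    ir = ir.astype(np.float64) / 255.0
    vis = vis.astype(np.float64) / 255.0

    # Crude segmentation of the IR image: quantize into n_levels bands.
    labels = np.minimum((ir * n_levels).astype(int), n_levels - 1)

    fused = np.zeros_like(vis)
    for region in range(n_levels):
        mask = labels == region
        # Assumed heuristic: hotter IR regions receive more IR weight.
        w_ir = (region + 1) / (n_levels + 1)
        fused[mask] = w_ir * ir[mask] + (1.0 - w_ir) * vis[mask]

    return (fused * 255.0).clip(0, 255).astype(np.uint8)
```

In this sketch, hotter IR regions simply receive a larger IR weight; substituting SOFM-predicted parameters for the fixed w_ir would recover the adaptive behaviour the abstract describes.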

 

Published In : IJCSN Journal Volume 3, Issue 4

Date of Publication : 01 August 2014

Pages : 207 - 214

Figures : 08

Tables : 05

Publication Link : Comparative Analysis of Multimodal Image Fusion Using Self Organizing Feature Maps


G. Paramasivam : Ph.D. (part-time) research scholar. He is currently an Assistant Professor at SNR Sons College, Coimbatore, Tamilnadu, India. He has 11 years of teaching experience. His technical interests include image fusion and artificial neural networks.

Dr. Anna Saro Vijendran : received the Ph.D. degree in Computer Science from Mother Teresa Women’s University, Tamilnadu, India, in 2009. She has 23 years of teaching experience. She is currently working as the Director, Department of MCA, SNR Sons College, Coimbatore, Tamilnadu, India. She has presented and published many papers at international and national conferences, and has authored or co-authored more than 30 refereed papers. Her professional interests are image processing, image fusion, data mining, and artificial neural networks.


Keywords : Image Fusion, Neural Network, Multi-Sensor, Self-Organizing Feature Maps, Histogram Equalization, Mutual Information

In this paper, a comparative analysis of three fusion methods for visible and infrared images was presented. By taking full account of the complementary characteristics of the two modalities, namely that the infrared image captures targets accurately while the visible image offers high resolution, the proposed method produces fused images that perform better in both visual effect and quantitative evaluation. The paper describes an application of artificial neural networks to the multi-sensor image fusion problem. The advantage of the self-organizing feature map is that, after training, the weight vectors not only represent the image-block cluster centroids but also preserve two main properties: topologically neighbouring blocks in the input space are mapped to topologically neighbouring neurons in the codebook, and the distribution of the neurons' weight vectors reflects the distribution of the input vectors. Hence there is no loss of information in the process.
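To make the topology-preserving property concrete, here is a minimal sketch of 2-D Kohonen SOFM training on flattened image blocks, written in plain NumPy. The grid size, decay schedules, and function name are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

def train_sofm(blocks, grid=(8, 8), epochs=20, lr0=0.5, sigma0=3.0, seed=0):
    """Minimal 2-D Kohonen SOFM sketch for image-block vectors.

    `blocks` is an (n_samples, dim) array of flattened image blocks.
    Returns a (grid_h, grid_w, dim) codebook whose weight vectors act as
    block cluster centroids, with neighbouring neurons mapped to similar
    blocks (the topology-preserving property discussed above).
    """
    rng = np.random.default_rng(seed)
    h, w = grid
    codebook = rng.random((h, w, blocks.shape[1]))
    ys, xs = np.mgrid[0:h, 0:w]  # neuron grid coordinates

    for epoch in range(epochs):
        lr = lr0 * (1.0 - epoch / epochs)              # decaying learning rate
        sigma = sigma0 * (1.0 - epoch / epochs) + 0.5  # shrinking neighbourhood
        for x in blocks[rng.permutation(len(blocks))]:
            # Best-matching unit: neuron whose weights are closest to x.
            d = np.linalg.norm(codebook - x, axis=2)
            by, bx = np.unravel_index(np.argmin(d), d.shape)
            # Gaussian neighbourhood pulls nearby neurons toward x as well.
            g = np.exp(-((ys - by) ** 2 + (xs - bx) ** 2) / (2 * sigma ** 2))
            codebook += lr * g[..., None] * (x - codebook)

    return codebook
```

After training, neighbouring neurons in the returned codebook hold similar block prototypes, and the density of the weight vectors follows the density of the input blocks, which are exactly the two properties the conclusion invokes.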

