Abstract:
Image fusion is the process of combining relevant information from several images into a single
image. The final output image provides more information than any of the individual input images.
The benefits of image fusion include extended spatial and temporal coverage, extended range
of operation, increased reliability, robust system performance, reduced uncertainty, higher
signal-to-noise ratio, compact representation of information, and a reduction in the amount of
data to be processed. Fusion also creates new images that are better suited to human or machine
perception, to further image-processing tasks (such as segmentation, object detection or
target recognition) and to applications such as remote sensing, medical imaging and surveillance
systems.
This research is concerned with the problem of multisensor pixel-level image fusion. The
aim was to develop reliable schemes that represent the visual information obtained from
a number of different imaging sensors in a single fused image, without introducing
distortion or losing information. In all, six different pixel-level image fusion schemes are
proposed. The first scheme uses compressive sensing to fuse visible and infrared
images, with the aim of achieving reasonable fusion performance at very low complexity.
This novel entropy-dependent fusion scheme adaptively adjusts the number of compressive
measurements according to the information content of the source images (a brief illustrative
sketch appears after this overview). The second approach is an improved guided image
fusion scheme for magnetic resonance and computed tomography imaging. The next three fusion
schemes address another important aspect of pixel-level image fusion algorithm design, namely
robustness to noise. The aim was to design multi-focus image fusion algorithms that provide
acceptable performance in the presence of noise. Three algorithms are proposed for three
multi-focus fusion scenarios: static, dynamic and all-in-focus fusion. The proposed schemes not
only preserve the details in the fused image but also reduce noise effectively. The
last fusion scheme uses a guided filter together with an intensity-hue-saturation (IHS) based
pan-sharpening approach to fuse satellite images. It combines high-resolution uni-spectral
and low-resolution multispectral images while taking both intensity levels and spatial
information into account.
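The abstract does not detail how the guided filter and IHS components of the last scheme are combined, so the following Python sketch shows one common arrangement: a generalised IHS-style detail injection in which a guided filter (in the sense of He et al.) smooths the multispectral intensity under the guidance of the panchromatic band. The function names, the parameters radius and eps, and the simple mean-based intensity are illustrative assumptions, not the exact design proposed in the thesis.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(guide, src, radius=4, eps=1e-3):
    """Grey-scale guided filter implemented with box filters."""
    size = 2 * radius + 1
    mean_g = uniform_filter(guide, size)
    mean_s = uniform_filter(src, size)
    var_g = uniform_filter(guide * guide, size) - mean_g * mean_g
    cov_gs = uniform_filter(guide * src, size) - mean_g * mean_s
    a = cov_gs / (var_g + eps)          # local linear coefficients
    b = mean_s - a * mean_g
    return uniform_filter(a, size) * guide + uniform_filter(b, size)

def ihs_guided_pansharpen(ms, pan, radius=4, eps=1e-3):
    """IHS-style detail injection; ms is HxWx3 in [0, 1], upsampled to the pan grid."""
    intensity = ms.mean(axis=2)                        # simple intensity component
    smoothed = guided_filter(pan, intensity, radius, eps)
    detail = pan - smoothed                            # high-frequency spatial detail
    return np.clip(ms + detail[..., None], 0.0, 1.0)   # inject detail into every band

# Toy usage with random arrays standing in for co-registered satellite data.
rng = np.random.default_rng(1)
ms = rng.random((64, 64, 3))     # low-resolution multispectral, already upsampled
pan = rng.random((64, 64))       # high-resolution uni-spectral (panchromatic) band
print(ihs_guided_pansharpen(ms, pan).shape)   # -> (64, 64, 3)
```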
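To make the entropy-dependent measurement allocation of the first scheme more concrete, the sketch below computes the Shannon entropy of an image block and maps it to a number of random Gaussian compressive measurements. The block-wise processing, the linear entropy-to-sampling-ratio mapping and the parameter values are assumptions made purely for illustration; the thesis's actual allocation rule is not given in the abstract.

```python
import numpy as np

def block_entropy(block):
    """Shannon entropy (bits) of an 8-bit image block."""
    hist, _ = np.histogram(block, bins=256, range=(0, 256))
    p = hist[hist > 0] / block.size
    return float(-np.sum(p * np.log2(p)))

def num_measurements(entropy, n_pixels, min_ratio=0.1, max_ratio=0.5, max_entropy=8.0):
    """Map entropy to a measurement count via an (assumed) linear sampling-ratio rule."""
    ratio = min_ratio + (max_ratio - min_ratio) * min(entropy / max_entropy, 1.0)
    return max(1, int(round(ratio * n_pixels)))

def compressive_measure(block, rng):
    """Take entropy-dependent random Gaussian measurements of a flattened block."""
    x = block.astype(np.float64).ravel()
    m = num_measurements(block_entropy(block), x.size)
    phi = rng.standard_normal((m, x.size)) / np.sqrt(m)  # random measurement matrix
    return phi @ x

# A flat block is assigned few measurements; a textured block is assigned many.
rng = np.random.default_rng(0)
flat = np.full((16, 16), 120, dtype=np.uint8)
busy = rng.integers(0, 256, size=(16, 16), dtype=np.uint8)
for name, blk in (("flat", flat), ("busy", busy)):
    print(name, "entropy=%.2f" % block_entropy(blk),
          "measurements:", compressive_measure(blk, rng).size)
```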
When analysed both visually and quantitatively, the simulation results demonstrate the advantages
of the proposed schemes over existing ones. The performance of every proposed image fusion scheme
is illustrated through examples of fused imagery and through objective tests against conventional
image fusion schemes.