Abstract:
Image fusion techniques merge the complementary information of several source images
(multi-focus, multi-exposure or multi-modal). Each of these scenarios poses different
challenges for image fusion techniques, which are being extensively researched.
However, most of these works assume that the source images are pre-registered, an
assumption that often does not hold in practice. This thesis considers both registered
and unregistered image fusion algorithms. Registration involves the geometric/spatial
alignment of source images acquired by different sensors, or by the same sensor under
different operating conditions. This research is concerned with reliable fusion schemes
for several scenarios (including multi-focus, Infrared (IR) and visible, Computed
Tomography (CT) and Magnetic Resonance (MR), and multi-exposure images), demonstrating
high-quality fused results without loss of useful information.
The first scheme is a texture-based registration scheme for multi-focus images that uses
Gabor filtering (at a specific frequency and orientation) to extract texture features
from the images. The filtered images are then aligned/registered using an affine
transformation.
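As a rough illustration of this idea (not the exact implementation in the thesis), a texture image can be obtained with an OpenCV Gabor kernel and the affine parameters estimated from keypoints matched on the filtered images; the ORB/RANSAC matching step and the Gabor parameters below are assumptions made for the sketch.

```python
import cv2
import numpy as np

def gabor_texture(img, theta, lambd=10.0, sigma=4.0, gamma=0.5):
    """Response of a grayscale image to a Gabor kernel at one orientation."""
    kernel = cv2.getGaborKernel((31, 31), sigma, theta, lambd, gamma, 0)
    return cv2.filter2D(img, cv2.CV_32F, kernel)

def register_to_reference(ref, mov):
    """Warp `mov` onto `ref` with an affine transform estimated from
    keypoints detected on the Gabor-filtered (texture) images."""
    ref_t = cv2.normalize(gabor_texture(ref, 0.0), None, 0, 255,
                          cv2.NORM_MINMAX).astype(np.uint8)
    mov_t = cv2.normalize(gabor_texture(mov, 0.0), None, 0, 255,
                          cv2.NORM_MINMAX).astype(np.uint8)
    orb = cv2.ORB_create(1000)
    kp1, des1 = orb.detectAndCompute(ref_t, None)
    kp2, des2 = orb.detectAndCompute(mov_t, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
    src = np.float32([kp2[m.trainIdx].pt for m in matches])  # points in mov
    dst = np.float32([kp1[m.queryIdx].pt for m in matches])  # points in ref
    M, _ = cv2.estimateAffine2D(src, dst, method=cv2.RANSAC)
    return cv2.warpAffine(mov, M, (ref.shape[1], ref.shape[0]))
```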
Noise and blur strongly affect fusion quality and need to be classified and treated.
The next two fusion schemes deal with noisy (both real and synthetic) and blurred
multi-exposure images. In the first algorithm, noisy, blurred and clean images are
classified using a Laplacian filter and the histogram spread. Noise is reduced in the
frequency domain, larger weights are assigned to noise-free pixels, and blurred images
are passed through a Wiener filter.
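A minimal sketch of the classification step, assuming Laplacian variance as the sharpness cue and the inter-quartile histogram spread as the noise cue; the thresholds are illustrative placeholders, not the values used in the thesis.

```python
import cv2
import numpy as np

def histogram_spread(img):
    """Quartile distance of the intensity histogram over the dynamic range."""
    q1, q3 = np.percentile(img, [25, 75])
    return float(q3 - q1) / (float(img.max() - img.min()) + 1e-6)

def classify_exposure_image(img, blur_thr=100.0, noise_thr=0.5):
    """Label a grayscale image as 'blurred', 'noisy' or 'clean'."""
    if cv2.Laplacian(img, cv2.CV_64F).var() < blur_thr:
        return "blurred"   # little high-frequency energy -> likely blur
    if histogram_spread(img) > noise_thr:
        return "noisy"     # wide, flat histogram -> likely noise
    return "clean"
```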
In the second algorithm, a noise-resistant image fusion scheme for multi-exposure
sensors is proposed, using color dissimilarity (for motion detection and removal)
together with median and noise maps. A well-exposed image is obtained as a weighted
average of the multi-exposure source images, with larger weights assigned to pixels
that have low noise values and high values in the color-dissimilarity and median maps.
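The fusion rule itself reduces to a per-pixel weighted average once the maps are available; the sketch below assumes the noise, color-dissimilarity and median maps are already computed and normalised to [0, 1], and the exact weight definition in the thesis may differ.

```python
import numpy as np

def fuse_exposures(images, noise_maps, dissim_maps, median_maps, eps=1e-6):
    """Weighted average of multi-exposure images; larger weights go to
    pixels with low noise and high dissimilarity/median responses."""
    images = np.stack(images).astype(np.float64)          # (N, H, W[, C])
    w = (1.0 - np.stack(noise_maps)) * np.stack(dissim_maps) * np.stack(median_maps)
    w = w + eps                                            # avoid all-zero weights
    if images.ndim == 4:                                   # broadcast over colour channels
        w = w[..., None]
    return (w * images).sum(axis=0) / w.sum(axis=0)
```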
The next two schemes involve pre-registered visible and IR images. In the first, a
three-stage image fusion scheme based on a Genetic Algorithm (GA) is presented. The
first stage segments the input images into homogeneous regions and generates
segmentation maps. The second stage combines the segmentation maps through an adaptive
weight-adjustment procedure. The third stage fuses the input images and segmentation
maps via a GA-based multi-objective optimization strategy.
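A highly simplified sketch of the third stage: a GA searches per-region blend weights that maximise a quality score (entropy of the fused image is used here as a stand-in for the thesis's multi-objective criterion), with generic selection, crossover and mutation operators.

```python
import numpy as np

def fuse_with_weights(img_a, img_b, labels, weights):
    """Blend two co-registered images region by region; `labels` is an
    integer segmentation map and `weights[r]` the blend factor of region r."""
    w_map = weights[labels]
    return w_map * img_a + (1.0 - w_map) * img_b

def entropy(img, bins=64):
    hist, _ = np.histogram(img, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def ga_fuse(img_a, img_b, labels, pop_size=20, generations=30, seed=0):
    rng = np.random.default_rng(seed)
    n_regions = int(labels.max()) + 1
    population = rng.random((pop_size, n_regions))
    for _ in range(generations):
        scores = np.array([entropy(fuse_with_weights(img_a, img_b, labels, w))
                           for w in population])
        parents = population[np.argsort(scores)[-pop_size // 2:]]    # selection
        idx_a = rng.integers(0, len(parents), pop_size // 2)
        idx_b = rng.integers(0, len(parents), pop_size // 2)
        children = (parents[idx_a] + parents[idx_b]) / 2.0           # crossover
        children += rng.normal(0.0, 0.05, children.shape)            # mutation
        population = np.clip(np.vstack([parents, children]), 0.0, 1.0)
    best = max(population,
               key=lambda w: entropy(fuse_with_weights(img_a, img_b, labels, w)))
    return fuse_with_weights(img_a, img_b, labels, best)
```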
The second image fusion scheme uses the Un-Decimated Dual-Tree Complex Wavelet
Transform (UDTCWT) for astronomical images. The UDTCWT reduces noise effects and
improves object classification due to its inherent shift-invariance property. Local
standard deviation and distance transforms are used to extract useful information,
especially small objects.
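The two cues can be sketched as follows (the UDTCWT decomposition itself is not reproduced here); the neighbourhood size and the brightness threshold are illustrative assumptions.

```python
import numpy as np
from scipy import ndimage

def local_std(img, size=7):
    """Standard deviation of each pixel's size x size neighbourhood."""
    img = img.astype(np.float64)
    mean = ndimage.uniform_filter(img, size)
    mean_sq = ndimage.uniform_filter(img ** 2, size)
    return np.sqrt(np.maximum(mean_sq - mean ** 2, 0.0))

def small_object_map(img, thr=None):
    """Distance-transform based emphasis of small bright objects."""
    thr = img.mean() + 2 * img.std() if thr is None else thr
    background = img < thr
    dist = ndimage.distance_transform_edt(background)   # distance to nearest bright pixel
    return 1.0 / (1.0 + dist)                            # high near bright structures
```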