Abstract:
A scene to be photographed usually includes objects at varying distances from the camera.
The depth of field (DOF) of a camera is the range of distances within which objects appear
sharp in the photograph. One way to extend the effective DOF is to acquire several images of
a scene, each focused on objects at a different distance, and then integrate these images so
that all objects appear sharp in the output image. This process is known as multi-focus image
fusion (MFIF). Before fusion, the constituent images must be registered so that corresponding
objects are properly overlaid. MFIF techniques fall into three broad categories:
pixel-based, block-based, and region-based.
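As a minimal illustration of the pixel-based category, the following sketch fuses two
pre-registered grayscale source images by keeping, at each pixel, the value from whichever
image has the stronger local sharpness. The choice of the Laplacian as the sharpness measure
and the 7-pixel smoothing window are illustrative assumptions, not part of the method
described in this work.

    import numpy as np
    from scipy import ndimage

    def fuse_pixelwise(img_a, img_b):
        """Pixel-based MFIF sketch: select each pixel from the sharper source."""
        # Absolute Laplacian response approximates local sharpness (assumption).
        sharp_a = np.abs(ndimage.laplace(img_a.astype(np.float64)))
        sharp_b = np.abs(ndimage.laplace(img_b.astype(np.float64)))
        # Smooth the sharpness maps so isolated noisy pixels do not flip the decision.
        sharp_a = ndimage.uniform_filter(sharp_a, size=7)
        sharp_b = ndimage.uniform_filter(sharp_b, size=7)
        # Keep the pixel from whichever source image is locally sharper.
        return np.where(sharp_a >= sharp_b, img_a, img_b)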
The key motivation for MFIF is to obtain a single image that carries comprehensive information
about the source scene. The method extracts the useful focal content from images captured at
different focal planes and fuses it into a unified, all-in-focus image. The prime objective is
to compensate for the limitations of a lens whose DOF is too shallow, under constrained
acquisition conditions, to capture the entire scene in focus. The goal of this research is to
provide an efficient and cost-effective algorithm for this purpose.
In this research, I propose and practically demonstrate an effective multi-focus image fusion
technique with military and commercial applications. Its hallmark is the use of gradients and
morphological filtering to construct refined focus maps, which improves the quality of the
fused image. In my algorithm, source images exhibiting aberrant illumination variance first
pass through a pre-processing stage that corrects it. To speed up processing, images that do
not fall into this category proceed directly to the stages of clarity calculation, focus-map
construction, and consistency verification.
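The sketch below outlines these stages for two registered source images. It is an
illustrative interpretation only: the use of squared gradient magnitude as the clarity
measure, binary opening and closing as the morphological refinement, a local majority vote
as the consistency check, and the window and structuring-element sizes are all assumptions,
since the abstract does not fix the exact operators.

    import numpy as np
    from scipy import ndimage

    def fuse_gradient_morph(img_a, img_b, win=9, struct_size=5):
        """Sketch of the described pipeline: clarity -> focus map ->
        morphological refinement -> consistency verification -> fusion."""
        a = img_a.astype(np.float64)
        b = img_b.astype(np.float64)

        # 1) Clarity: local energy of the gradient magnitude in each source.
        def clarity(img):
            gy, gx = np.gradient(img)
            return ndimage.uniform_filter(gx**2 + gy**2, size=win)

        # 2) Initial focus map: True where source A is judged sharper.
        focus = clarity(a) > clarity(b)

        # 3) Morphological refinement: opening then closing removes small
        #    misclassified islands and holes in the focus map.
        st = np.ones((struct_size, struct_size), dtype=bool)
        focus = ndimage.binary_opening(focus, structure=st)
        focus = ndimage.binary_closing(focus, structure=st)

        # 4) Consistency verification: local majority vote keeps the final
        #    decision map spatially coherent (assumed check).
        focus = ndimage.uniform_filter(focus.astype(np.float64), size=win) > 0.5

        # 5) Fuse: select pixels from whichever image the refined map favors.
        return np.where(focus, img_a, img_b)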
Existing techniques used in industry compromise accuracy and cost-effectiveness, producing
inferior results. To support the significance of the proposed technique, a quantitative and
visual comparison is also presented, showing more accurate simulation results than those of
existing methods.