Abstract:
Ophthalmological diseases cause various vascular disorders through abnormal variations in the vascular pattern of the human retina, which appear as lesions, exudates, or new abnormal vessels. Early detection of these diseases can reduce their severity. The presence of such abnormal structures makes blood vessel segmentation difficult: newly grown vessels, exudates, and lesions can appear as false positives and degrade the performance of diagnosis systems. This can be avoided only by removing these false structures during vessel extraction. Within this framework, automated retinal image analysis is required, and it depends on accurate segmentation of blood vessels, which is a prerequisite in most computer-aided diagnosis systems for ophthalmological diseases. Vessel segmentation has been studied for years, but traditional techniques perform poorly on diseased images and produce false positives in the presence of lesions. To address this issue, this paper proposes two improved methods for blood vessel segmentation. Method-I is a classification-based method that performs region-based analysis of the retinal image: it first extracts eight shape-based features and three intensity-based features from the centerlines of regions, then selects the best of the extracted features, and finally applies an SVM classifier to categorize each region as true vessel or false vessel. Method-II is an unsupervised segmentation technique that mainly aims to reduce false positives; this is achieved by filling exudate regions using a novel Neighborhood-Based Region Filling (NBRF) algorithm. The validity of the proposed techniques is tested on our own AFIO database and on two publicly available databases, STARE and DRIVE. Experimental results show the efficiency of the proposed approaches.
The proposed system achieves accuracies of 91.49%, 95.81%, and 96.50% on the AFIO, STARE, and DRIVE datasets, respectively.
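The final step of Method-I, classifying candidate regions as true or false vessels with an SVM, could be sketched as follows. This is a minimal illustration, not the paper's implementation: the feature names (elongation, tortuosity, mean intensity) and all values are hypothetical stand-ins for the eight shape-based and three intensity-based features described above.

```python
# Hypothetical sketch of Method-I's classification step: an SVM
# separates true-vessel regions from false positives using a small
# feature vector per region. Features and values are illustrative only.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Toy feature vectors: [elongation, tortuosity, mean_intensity].
# Assume true vessels are elongated with moderate intensity, while
# false positives (e.g. exudate borders) are compact and bright.
true_vessels = rng.normal([0.9, 0.3, 0.4], 0.05, size=(50, 3))
false_regions = rng.normal([0.2, 0.1, 0.9], 0.05, size=(50, 3))

X = np.vstack([true_vessels, false_regions])
y = np.array([1] * 50 + [0] * 50)  # 1 = true vessel, 0 = false positive

clf = SVC(kernel="rbf").fit(X, y)

# An elongated, moderately bright candidate region is labelled a vessel.
print(clf.predict([[0.85, 0.25, 0.45]])[0])
```

In practice the classifier would be trained on labelled regions from the AFIO, STARE, or DRIVE images after the feature-selection stage, rather than on synthetic clusters as here.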