Abstract:
COVID-19 has become a significant global challenge, claiming numerous lives every day. The outbreak is not confined to any single country; the whole world has suffered from the coronavirus. Computed tomography (CT) and X-ray images of the lungs are among the most valuable resources for COVID-19 screening. It is essential to quickly and accurately identify and segment COVID-19 infection in CT and X-ray images to aid diagnosis and patient monitoring. Artificial intelligence has transformed many fields by replacing manual processes with automated systems that imitate human decision-making by learning from experience. Motivated by this, our work proposes convolutional neural network (CNN) based models for designing a computer-aided diagnosis (CAD) system that differentiates COVID-19 from normal, healthy lungs in both CT and X-ray images.
An automated system is also proposed that segments COVID-19 infection from radiological lung images using deep learning networks. For classification, two lung X-ray datasets and two CT image datasets have been utilized; similarly, two lung CT image datasets are used for segmentation. Various pre-trained networks are employed for classification, including VGG (16, 19), DenseNet (121), ResNet (50, 50 V2, 101 V2), MobileNet (V2), Xception, Inception (V3, ResNet V2), EfficientNet (B0), and NASNet (Large). For segmentation, the main architectures used are U-Net, LinkNet, Pyramid Scene Parsing Network (PSPNet), and Feature Pyramid Network (FPN). The pre-trained feature extraction networks used as encoders with these segmentation architectures are EfficientNet, MobileNet V2, SE-ResNet 101, DenseNet 121, VGG-19, and Inception ResNet V2. All of these well-known deep architectures have been thoroughly tested, and a comparative analysis of the architectures has been performed. ResNet V2 and VGG-16 have proven effective in accurately classifying COVID-19 from healthy images, achieving average accuracies in the range of 95% to 98% on the four X-ray
and CT image datasets. In segmentation, U-Net, FPN, and LinkNet with MobileNet V2, DenseNet 121, and Inception ResNet V2 backbones report the highest F1-scores, ranging from 77% to 98.6% on the two CT image datasets. Our achieved results are competitive with, and higher than, previously reported results in the literature on the four classification datasets and the two segmentation datasets.