Abstract:
Deep neural networks (DNNs) are the most powerful and widely used machine learning models for many image analysis tasks, including 3D analysis, image retrieval, image classification, and object detection, and they have achieved near human-level performance. Building on their success on natural images (e.g., images captured from natural scenes, as in ImageNet and CIFAR-10), DNNs have become very popular for tasks such as medical image processing, organ/landmark localization, cancer diagnosis, diabetic retinopathy detection, and Covid-19 identification. Despite the surge in smart intelligent systems built on deep structured networks, such systems can be undermined by the mere presence of an adversarial perturbation. Adversarial perturbations may be imperceptible to the naked eye, yet they are picked up by deep neural network classifiers. In this study, a novel methodology is proposed that crafts perturbations using transform domain techniques and a classical approach such as FGSM on Covid-19 images, thereby fooling state-of-the-art DNN models.
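The FGSM attack mentioned above can be sketched as follows. This is a minimal NumPy illustration, not the paper's actual pipeline: the logistic "classifier" (chosen so the loss gradient has a closed form), the 8x8 image, and the epsilon value are all illustrative assumptions.

```python
import numpy as np

def fgsm_perturb(x, grad, eps=0.03):
    """Fast Gradient Sign Method: step in the sign of the loss gradient.
    x    : input image (float array with pixel values in [0, 1])
    grad : gradient of the classification loss w.r.t. x
    eps  : perturbation budget (L-infinity norm)
    """
    x_adv = x + eps * np.sign(grad)
    return np.clip(x_adv, 0.0, 1.0)  # keep pixels in the valid range

# Toy demo with a logistic-regression "classifier" so the gradient is
# analytic: for cross-entropy loss and logit z = sum(w * x),
# dL/dx = (sigmoid(z) - y) * w.
rng = np.random.default_rng(0)
w = rng.normal(size=(8, 8))    # hypothetical model weights
x = rng.uniform(size=(8, 8))   # hypothetical 8x8 "image"
y = 1.0                        # true label
z = float(np.sum(w * x))
grad = (1.0 / (1.0 + np.exp(-z)) - y) * w

x_adv = fgsm_perturb(x, grad, eps=0.03)
```

Because each pixel moves by at most eps before clipping back into [0, 1], the adversarial image stays within the L-infinity budget while remaining visually close to the original, which is what makes the perturbation hard to spot by eye.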