NUST Institutional Repository

Generating Adversarial Attacks On Medical Images Using Classical And Transform Domain Techniques


dc.contributor.author Khan, Asad
dc.date.accessioned 2023-08-31T13:10:24Z
dc.date.available 2023-08-31T13:10:24Z
dc.date.issued 2021
dc.identifier.other 319401
dc.identifier.uri http://10.250.8.41:8080/xmlui/handle/123456789/38048
dc.description Supervisor: Dr. Shahzad Younas en_US
dc.description.abstract Deep neural networks (DNNs) are the most powerful and widely used machine learning models for many image analysis tasks, such as 3D analysis, image retrieval, image classification, and object detection, and they have achieved near-human performance. Building on the success of DNNs on natural images (e.g., images captured from natural scenes, as in ImageNet and CIFAR-10), they have become very popular for tasks such as medical image processing, organ/landmark localization, cancer diagnosis, diabetic retinopathy detection, and COVID-19 identification. Despite the surge in the development of smart intelligent systems built on deep structured networks, such systems can be undermined by the mere introduction of an adversarial perturbation. Adversarial perturbations may be imperceptible to the naked eye, yet they are perceived by deep neural network classifiers. In this study, a novel methodology is proposed that crafts perturbations using transform domain techniques and a classical approach, FGSM, on COVID-19 images, thereby fooling state-of-the-art DNN models (a minimal FGSM sketch follows this record). en_US
dc.language.iso en en_US
dc.publisher School of Electrical Engineering and Computer Science, (SEECS), NUST en_US
dc.title Generating Adversarial Attacks On Medical Images Using Classical And Transform Domain Techniques en_US
dc.type Thesis en_US
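For context on the classical approach named in the abstract, below is a minimal sketch of the Fast Gradient Sign Method (FGSM) in PyTorch. The function name fgsm_attack, the epsilon value, and the use of cross-entropy loss are illustrative assumptions, not the thesis's actual implementation; the transform-domain component of the proposed method is not shown.

    import torch
    import torch.nn.functional as F

    def fgsm_attack(model, image, label, epsilon=0.03):
        # FGSM: x_adv = x + epsilon * sign(grad_x loss(model(x), y))
        image = image.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(image), label)
        model.zero_grad()
        loss.backward()
        # Step along the sign of the input gradient, then clamp
        # back to the valid pixel range [0, 1].
        x_adv = image + epsilon * image.grad.sign()
        return torch.clamp(x_adv, 0.0, 1.0).detach()

Because the perturbation is bounded element-wise by epsilon, the adversarial image stays visually close to the original while shifting the classifier's decision, which is the vulnerability the abstract describes.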



This item appears in the following Collection(s)

  • MS [882]

