NUST Institutional Repository

Impact of Adversarial and Backdoor Attacks on Deep Learning Models on HealthCare 4.0


dc.contributor.author Imran, Muhammad
dc.date.accessioned 2023-08-31T14:21:27Z
dc.date.available 2023-08-31T14:21:27Z
dc.date.issued 2022
dc.identifier.other 277641
dc.identifier.uri http://10.250.8.41:8080/xmlui/handle/123456789/38056
dc.description Supervisor: Dr. Hassaan Khaliq Qureshi en_US
dc.description.abstract Recent developments in Machine Learning (ML) are reshaping the future of intelligent systems and enabling applications across multiple domains, but backdoor attacks pose a serious threat to the integrity of Artificial Intelligence (AI) technologies. Because a trainer can recognise the presence of poisoned samples, a backdoor attack needs to be as elusive as possible. Much of the existing literature on sample poisoning assumes that the labels of the poisoned samples are also changed. This compromises the elusiveness of the backdoor attack, since poisoned samples can then be identified by visual inspection through the mismatch between the samples and their labels. This thesis presents an effective new backdoor attack that does not poison the labels. We also introduce a hybrid attack composed of FGSM and backdoor attacks (sketched after this record) that is highly effective and partially elusive. We evaluated these attacks on a CNN-based MRI classification system, showing that attacks without label poisoning are indeed possible and raising concerns about the use of AI in sensitive applications. en_US
dc.language.iso en en_US
dc.publisher School of Electrical Engineering and Computer Science (SEECS), NUST en_US
dc.title Impact of Adversarial and Backdoor Attacks on Deep Learning Models on HealthCare 4.0 en_US
dc.type Thesis en_US
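
The abstract refers to two ingredients: FGSM adversarial perturbations and a clean-label backdoor trigger (poisoned samples whose labels are left untouched). Below is a minimal illustrative sketch in PyTorch of both ideas, assuming a generic image classifier; the function names, the bottom-right square trigger, and the epsilon value are assumptions for illustration and not the thesis's actual implementation.

    # Illustrative sketch only: FGSM perturbation and a clean-label backdoor trigger,
    # assuming a PyTorch image classifier with inputs in [0, 1].
    import torch
    import torch.nn.functional as F

    def fgsm_perturb(model, x, y, epsilon=0.03):
        """Fast Gradient Sign Method: one step along the sign of the loss gradient."""
        x = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x), y)
        loss.backward()
        # Move the input in the direction that increases the loss, then clip to valid range.
        return (x + epsilon * x.grad.sign()).clamp(0, 1).detach()

    def stamp_trigger(x, trigger_value=1.0, size=4):
        """Clean-label backdoor: stamp a small patch; the label is not modified."""
        x = x.clone()
        x[..., -size:, -size:] = trigger_value  # bottom-right square as the trigger
        return x

In a clean-label setting, the trigger would be stamped only on training images that already belong to the attacker's target class, so visual inspection finds no label mismatch; at inference time the same trigger can steer the model's prediction toward that class.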



This item appears in the following Collection(s)

  • MS [882]
