NUST Institutional Repository

Analyzing the Security Vulnerabilities of Deep Neural Networks: Attacks and Defenses


dc.contributor.author Ali, Hassan
dc.date.accessioned 2023-08-18T09:23:09Z
dc.date.available 2023-08-18T09:23:09Z
dc.date.issued 2019
dc.identifier.other 205631
dc.identifier.uri http://10.250.8.41:8080/xmlui/handle/123456789/36860
dc.description Supervisor: Dr. Rehan Ahmed en_US
dc.description.abstract Over the past few years, Machine Learning algorithms, and Deep Neural Networks (DNNs) in particular, have emerged as a primary solution for handling large amounts of data through statistical inference in many applications, e.g., autonomous vehicles, image recognition, biometric systems, and transportation management. However, DNNs are inherently vulnerable to several security attacks, primarily because of their high dependency on the training dataset. One of the most common ways to fool a DNN is to poison its input data at inference time. Such security attacks fall into two broad categories: perceptible and imperceptible attacks. Imperceptible attacks are the more critical of the two because they cannot be detected through manual inspection. Several imperceptible attacks have been introduced in the literature, e.g., gradient-based attacks (which estimate the gradients of the loss function), and decision-based and score-based attacks (which analyze the probabilistic behavior of individual input components to estimate the gradients of the DNN). To mitigate these attacks, multiple defenses have been proposed based on feature squeezing, adversarial data augmentation, and Generative Adversarial Networks (GANs). However, most state-of-the-art defenses are computationally very expensive and have been broken by stronger attacks built with the same approaches. Therefore, in this thesis, we first analyze the state-of-the-art imperceptible attacks from a well-known DNN attack library, i.e., Cleverhans, to identify their respective limitations. Then, based on these limitations, we investigate simple, computationally efficient defenses to increase the robustness of deep neural networks against such attacks. Furthermore, building on this analysis, we explore further security vulnerabilities in DNNs in order to develop more powerful attacks (a minimal illustrative sketch of these ideas follows the record fields below). en_US
dc.language.iso en en_US
dc.publisher School of Electrical Engineering and Computer Science NUST SEECS en_US
dc.title Analyzing the Security Vulnerabilities of Deep Neural Networks: Attacks and Defenses en_US
dc.type Thesis en_US
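
As a concrete illustration of the two ideas the abstract names, the sketch below pairs the canonical gradient-based attack (FGSM, one of the attacks implemented in Cleverhans) with a cheap feature-squeezing defense (bit-depth reduction). It is a minimal sketch in plain PyTorch rather than the thesis's actual code; model, x, and y are hypothetical stand-ins for a trained classifier and a labelled input batch, and pixel values are assumed to lie in [0, 1].

    import torch
    import torch.nn.functional as F

    def fgsm(model, x, y, eps=0.03):
        # Fast Gradient Sign Method: take one step of size eps along the
        # sign of the loss gradient w.r.t. the input, keeping the
        # perturbation small enough to stay visually imperceptible.
        x_adv = x.clone().detach().requires_grad_(True)
        F.cross_entropy(model(x_adv), y).backward()
        return (x_adv + eps * x_adv.grad.sign()).clamp(0.0, 1.0).detach()

    def squeeze_bit_depth(x, bits=4):
        # Feature squeezing: quantize pixels to 2**bits levels, which
        # erases small adversarial perturbations at negligible cost.
        levels = 2 ** bits - 1
        return torch.round(x * levels) / levels

A defended prediction is then model(squeeze_bit_depth(fgsm(model, x, y))); the quantization step is essentially free compared to adversarial retraining, which is in the spirit of the computationally efficient defenses the thesis investigates.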


This item appears in the following Collection(s)

  • MS [882]
