NUST Institutional Repository

Adversarial Attacks and Defenses using transform domain techniques


dc.contributor.author Maryam
dc.date.accessioned 2023-03-30T04:43:27Z
dc.date.available 2023-03-30T04:43:27Z
dc.date.issued 2023
dc.identifier.uri http://10.250.8.41:8080/xmlui/handle/123456789/32642
dc.description.abstract Deep neural networks (DNNs) have shown impressive performance on a diverse set of visual tasks when deployed in real-world environments. However, even state-of-the-art neural networks are susceptible to adversarial attacks, which can compromise their performance and accuracy, so it is important to explore their vulnerability to such attacks. These attacks are small, well-constructed perturbations that alter the predictions of deep neural networks and are mostly constructed in the spatial (pixel) domain. To assess the robustness of these networks, we propose a transform-based steganography technique to fool deep neural networks on images. Our approach embeds a secret image in a natural image to create an adversarial example and proceeds in two stages: first, we select effective secret images by segregating a large-scale dataset (ImageNet) into different edge regimes; second, we apply two steganography techniques, the Discrete Cosine Transform (DCT) and the Discrete Wavelet Transform (DWT), to fool the neural networks. We have also combined DWT with PCA and analyzed the results. The technique is well suited to the black-box setting, as it requires no information about the target model, and, unlike gradient-based attacks, the perturbation is computationally efficient to compute because the technique is non-iterative. The findings demonstrate that by embedding adversarial perturbations within DWT and PCA coefficients, neural networks can be fooled into misclassifying input data. We show successful fooling rates on four different state-of-the-art models. Overall, this thesis contributes to the ongoing research on adversarial attacks and offers insights into improving the robustness of deep neural networks. en_US
dc.description.sponsorship Dr. Muhammad Shahzad Younis en_US
dc.language.iso en en_US
dc.publisher School of Electrical Engineering and Computer Sciences (SEECS) NUST en_US
dc.title Adversarial Attacks and Defenses using transform domain techniques en_US
dc.type Thesis en_US
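The abstract describes embedding a secret image into a cover image through a wavelet transform to produce an adversarial example. The thesis's actual pipeline is not given here, so the following is only a minimal NumPy sketch under stated assumptions: a one-level Haar DWT, embedding the secret's low-frequency (LL) band into the cover's high-frequency (HH) sub-band with an assumed blending factor `alpha`. The sub-band choice, wavelet, and `alpha` value are illustrative, not the thesis's parameters.

```python
import numpy as np

def haar_dwt2(x):
    # One-level 2D orthonormal Haar transform: rows, then columns.
    a = (x[0::2, :] + x[1::2, :]) / np.sqrt(2)   # row averages
    d = (x[0::2, :] - x[1::2, :]) / np.sqrt(2)   # row differences
    ll = (a[:, 0::2] + a[:, 1::2]) / np.sqrt(2)
    lh = (a[:, 0::2] - a[:, 1::2]) / np.sqrt(2)
    hl = (d[:, 0::2] + d[:, 1::2]) / np.sqrt(2)
    hh = (d[:, 0::2] - d[:, 1::2]) / np.sqrt(2)
    return ll, lh, hl, hh

def haar_idwt2(ll, lh, hl, hh):
    # Exact inverse of haar_dwt2 (the transform is orthonormal).
    a = np.zeros((ll.shape[0], ll.shape[1] * 2))
    a[:, 0::2] = (ll + lh) / np.sqrt(2)
    a[:, 1::2] = (ll - lh) / np.sqrt(2)
    d = np.zeros_like(a)
    d[:, 0::2] = (hl + hh) / np.sqrt(2)
    d[:, 1::2] = (hl - hh) / np.sqrt(2)
    x = np.zeros((a.shape[0] * 2, a.shape[1]))
    x[0::2, :] = (a + d) / np.sqrt(2)
    x[1::2, :] = (a - d) / np.sqrt(2)
    return x

def embed(cover, secret, alpha=0.05):
    # Blend the secret image's LL band into the cover's HH sub-band,
    # then invert the transform; the result is a subtly perturbed cover.
    # alpha trades off imperceptibility against attack strength (assumed value).
    ll, lh, hl, hh = haar_dwt2(cover)
    secret_ll, _, _, _ = haar_dwt2(secret)
    hh = (1 - alpha) * hh + alpha * secret_ll
    return haar_idwt2(ll, lh, hl, hh)
```

Because the embedding is a single closed-form transform rather than an iterative gradient loop, it matches the abstract's claim of a non-iterative, model-agnostic (black-box) perturbation: no queries to the target network are needed to construct the adversarial example.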



This item appears in the following Collection(s)

  • MS [375]

