Abstract:
Over the past few years, Machine Learning algorithms, especially Deep Neural
Networks (DNNs), have emerged as a primary solution for handling large amounts of data
using statistical inference in many applications, e.g., autonomous vehicles,
image recognition, biometric systems, and transportation management. However, these DNNs are inherently vulnerable to several security attacks, primarily because of their high dependency on the training dataset. One of the most
common attacks used to fool DNNs is data poisoning during inference. Such
security attacks can be classified into two broad categories: perceptible and
imperceptible attacks. Imperceptible attacks are more critical
because they cannot be detected through manual inspection. In the literature,
several imperceptible attacks have been introduced, e.g., Gradient-Based Attacks (which estimate the gradients of the loss function), and Decision-Based and Score-Based Attacks (which analyze the probabilistic behavior of
the individual input components to estimate the gradients of the DNNs).
To mitigate these attacks, multiple defenses have been proposed based on feature squeezing, adversarial data augmentation, and Generative Adversarial
Networks (GANs). However, most state-of-the-art defenses are computationally very expensive and have been broken by generating stronger
attacks using the same approaches. Therefore, in this thesis, we first analyze
the state-of-the-art imperceptible attacks from a well-known DNN attack library, Cleverhans, to identify their respective limitations. Then, based on
these limitations, we investigate simple, computationally efficient defenses
to increase the robustness of deep neural networks against such attacks. Furthermore, based on this analysis, we explore further security vulnerabilities in
DNNs for developing more powerful attacks.