Abstract:
Recent developments in Machine Learning (ML) are driving a resurgence of intelligent systems across many application domains, but backdoor attacks represent a serious threat to the integrity of Artificial Intelligence (AI) technologies. However, a trainer can recognise the presence of poisoned samples, so a backdoor attack needs to be as elusive as possible. In the literature, much work has been done on poisoning samples, and most authors assume that the labels of the poisoned samples are also altered. This compromises the elusiveness of the backdoor attack, because poisoned samples can be identified by visual inspection through the mismatch between a sample and its label.
This thesis presents an effective new backdoor attack without poisoning the labels.
We also introduce a hybrid attack composed of the Fast Gradient Sign Method (FGSM) and a backdoor attack that is highly effective and partially elusive. We evaluated these attacks on a CNN-based system for MRI, showing that attacks without poisoning the labels are indeed possible, thus raising concerns regarding the use of AI in sensitive applications.