dc.contributor.author | Imtiaz, Aqsa |
dc.date.accessioned | 2023-07-20T11:31:13Z |
dc.date.available | 2023-07-20T11:31:13Z |
dc.date.issued | 2021 |
dc.identifier.other | 206660 |
dc.identifier.uri | http://10.250.8.41:8080/xmlui/handle/123456789/34879 |
dc.description | Supervisor: Dr. Syed Taha Ali | en_US
dc.description.abstract | Deep learning has achieved major success in various domains, e.g. medical imaging, self-driving cars, face recognition, and industrial automation. However, the discovery of adversarial examples has raised serious concerns about the deployment of deep learning systems in the real world. The goal of this study is to explore the generation of text-based adversarial examples and the behavior of a deep-learning-based OCR and an object detector (Faster R-CNN) against these adversaries. The vulnerabilities of deep detection and recognition models to both black-box and white-box adversaries are successfully demonstrated. This study also explores the transferability of adversarial attacks between a deep classifier and a deep detector. The evaluation of the adversarial examples is conducted in two phases: first, the perturbed image is passed to a deep OCR for text recognition; second, the same adversarial examples are passed to an object detector to detect and recognize text. Results show that the perturbed images successfully fooled the OCR but failed against the object detector, which supports the view that the transferability of adversarial attacks between a deep classifier and an object detector deserves further exploration. | en_US
dc.language.iso | en | en_US
dc.publisher | School of Electrical Engineering and Computer Science (SEECS), NUST | en_US
dc.title | Using Classical Tools to Counter Adversarial Attacks on Deep Learning Networks | en_US
dc.type | Thesis | en_US
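
The two-phase evaluation described in the abstract can be illustrated with a short sketch. This is a minimal, hypothetical illustration rather than the thesis code: it assumes pytesseract as a stand-in for the deep OCR, torchvision's pretrained Faster R-CNN as the object detector, and a local file `adversarial.png` as the perturbed image.

```python
# Minimal sketch of the two-phase evaluation (assumed tools, not the thesis code).
import pytesseract
import torch
import torchvision
from PIL import Image
from torchvision.transforms import functional as F

image = Image.open("adversarial.png").convert("RGB")

# Phase 1: pass the perturbed image to an OCR for text recognition.
recognized_text = pytesseract.image_to_string(image)
print("OCR output:", recognized_text)

# Phase 2: pass the same image to a Faster R-CNN object detector.
detector = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
detector.eval()
with torch.no_grad():
    predictions = detector([F.to_tensor(image)])[0]

# Keep only confident detections; a fooled detector would miss the text regions.
for box, score in zip(predictions["boxes"], predictions["scores"]):
    if score > 0.5:
        print(box.tolist(), float(score))
```

Comparing the OCR transcription with the detector's surviving boxes on the same perturbed input is what reveals whether an attack crafted against one model transfers to the other.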