NUST Institutional Repository

Adversarial Attack on Visual Place Recognition and 3D Reconstruction


dc.contributor.author Hashir, Muhammad
dc.date.accessioned 2024-09-23T10:10:51Z
dc.date.available 2024-09-23T10:10:51Z
dc.date.issued 2024
dc.identifier.other 400036
dc.identifier.uri http://10.250.8.41:8080/xmlui/handle/123456789/46762
dc.description Supervisor: Dr. Latif Anjum en_US
dc.description.abstract This thesis examines the vulnerability of widely used computer vision algorithms and systems to adversarial attacks, focusing on three areas: handcrafted feature detection, visual place recognition, and 3D reconstruction. We target two feature detection algorithms, SIFT and ORB, which have become foundational to many computer vision tasks because of their speed and reliability in detecting and matching keypoints between images; despite their popularity, both remain vulnerable to adversarial perturbations. We also study visual place recognition, which is essential to methods such as SLAM and navigation, and attack two prominent place recognition systems: FAB-MAP, which is probabilistic, and DLoopDetector, which is known for detecting previously visited places in large-scale environments. By injecting adversarial noise into the images these systems process, we aim to prevent them from recognizing previously visited locations and thereby assess their robustness under adversarial conditions. Finally, we extend our analysis to 3D reconstruction, concentrating on COLMAP, a popular photogrammetric tool for generating accurate 3D models from image collections. We introduce noise into a fraction of the image dataset and evaluate its effect on the pose accuracy of the reconstructed models, measured as the pose error added by the adversarial attack. Throughout, we use the well-known HopSkipJump attack to generate the adversarial noise. While the attacks succeeded against handcrafted features and visual place recognition, COLMAP proved robust to the adversarial noise. en_US
dc.language.iso en en_US
dc.publisher School of Electrical Engineering and Computer Science (SEECS), NUST Islamabad en_US
dc.title Adversarial Attack on Visual Place Recognition and 3D Reconstruction en_US
dc.type Thesis en_US
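The HopSkipJump attack named in the abstract is a decision-based attack: it needs only the model's hard label, and one of its core operations is a binary search along the segment between a clean input and an adversarial one to land just past the decision boundary. The sketch below illustrates that boundary-projection step in pure NumPy with a hypothetical toy classifier `is_adversarial`; it is an illustrative simplification, not the thesis's implementation or the full algorithm.

```python
import numpy as np

def boundary_bisect(x_clean, x_adv, is_adversarial, tol=1e-3):
    """Binary search between a clean input and an adversarial one to
    find a point just on the adversarial side of the decision boundary
    (the projection step used by decision-based attacks like HopSkipJump)."""
    lo, hi = 0.0, 1.0  # lo: mixing weight still classified clean, hi: adversarial
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        x_mid = (1.0 - mid) * x_clean + mid * x_adv
        if is_adversarial(x_mid):
            hi = mid  # still adversarial: move toward the clean input
        else:
            lo = mid  # crossed back to clean: move toward the adversarial input
    return (1.0 - hi) * x_clean + hi * x_adv

# Toy one-pixel "image" with a hypothetical threshold classifier at 0.5
is_adv = lambda x: float(x) > 0.5
x_boundary = boundary_bisect(np.array(0.0), np.array(1.0), is_adv)
```

In the full attack this projection alternates with a gradient-direction estimate from random label queries and a geometric step-size search, which together drive the perturbation's norm down while keeping the input misclassified.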

