dc.description.abstract |
This research examines the vulnerability of several algorithms and systems used widely in computer vision to adversarial attacks. The study focuses on three main areas: handcrafted feature detection, visual place recognition, and 3D reconstruction.
We focus on two common feature detection algorithms, SIFT and ORB, which have become foundational to many computer vision tasks because of their speed and reliability in identifying and matching keypoints between images. Despite their popularity, these algorithms remain vulnerable to adversarial perturbations. We also emphasize visual place recognition, which is essential to methods such as SLAM and navigation. We conduct adversarial attacks on two prominent place recognition systems: FAB-MAP, which takes a probabilistic appearance-based approach, and DLoopDetector, which is known for detecting previously visited places in large-scale environments. To assess how robust these systems are under adversarial conditions, we introduce adversarial noise into the images they process so that they fail to recognize previously visited locations.
our research extends to 3D reconstruction, where we concentrate on COLMAP, a popular photogrammetric tool for generating accurate 3D models from image collections. To this end, we introduce noise into a fraction of the image dataset and evaluate the effect on the pose accuracy of the reconstructed models, measured as the pose error added by the adversarial attack. We use the popular HopSkipJump attack to introduce noise into the images. While the attacks were successful against handcrafted features and visual place recognition, COLMAP remained robust to the adversarial noise. |
en_US |