Abstract:
Traditionally, adversarial attacks on handcrafted features have operated in a black-box paradigm, designing adversarial noise without regard to the internal structure of the features themselves. This thesis departs from that paradigm by proposing a white-box adversarial attack against handcrafted features such as ORB, SIFT, and SURF. By integrating feature detection and descriptor formulation into the adversarial example generation process, the proposed method broadens the adversarial landscape. Although adversarial examples have exposed vulnerabilities in deep networks, the weaknesses of handcrafted features in adversarial settings have gone largely unexamined. Through a structural analysis of these feature algorithms, this work introduces adversarial perturbations designed specifically for handcrafted features.
These subtle yet powerful perturbations generalize across different features, viewpoints, and lighting conditions, and severely compromise the performance of features such as ORB, FAST, SIFT, and SURF. This research advances white-box adversarial attacks on handcrafted features and deepens our understanding of the vulnerabilities these features present. By probing the complexities of feature detection and descriptor formulation, our attack opens up new avenues for the secure and reliable use of handcrafted features in computer vision and machine learning.