dc.description.abstract |
This thesis examines the vulnerability of three video recognition models (C3D, P3D, and Q3D) to adversarial attacks using the Crimes Scene dataset. Through rigorous testing with seven distinct attack strategies, the study investigates each attack's impact on model accuracy, identifying attacks that consistently degrade accuracy and others whose effect remains constant. Comparative analyses benchmark the three models against other models in the domain, using accuracy as the key performance metric; the findings highlight variations in susceptibility and robustness among the models. The thesis then proposes and evaluates defensive strategies aimed at strengthening the models' resilience to adversarial attacks. This examination contributes insights to the field of video recognition model security, offering a nuanced understanding of vulnerabilities, comparative performance, and effective defense mechanisms. |
en_US |