NUST Institutional Repository

Adversarial Attacks on Video Recognition Models

Show simple item record

dc.contributor.author Gul, Namra
dc.date.accessioned 2024-05-02T07:59:15Z
dc.date.available 2024-05-02T07:59:15Z
dc.date.issued 2024
dc.identifier.other 328634
dc.identifier.uri http://10.250.8.41:8080/xmlui/handle/123456789/43218
dc.description Supervisor: Dr. Arbab Latif en_US
dc.description.abstract This thesis explores the vulnerability of video recognition models, specifically C3D, P3D, and Q3D, to adversarial attacks using the Crimes Scene dataset. Through rigorous testing with seven distinct attack strategies, the study investigates the impact on model accuracy, revealing that certain attacks consistently lower accuracy while others produce a constant effect across models. Comparative analyses benchmark the three models against others in the domain, using accuracy as the key performance metric; the findings highlight variations in susceptibility and robustness among the models. The study then proposes and evaluates defensive strategies aimed at strengthening the models' resilience to adversarial attacks. This comprehensive examination contributes valuable insights to the field of video recognition model security, offering a nuanced understanding of vulnerabilities, comparative performance, and effective defense mechanisms. en_US
dc.language.iso en en_US
dc.publisher School of Electrical Engineering and Computer Science (SEECS), NUST en_US
dc.title Adversarial Attacks on Video Recognition Models en_US
dc.type Thesis en_US
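The gradient-based adversarial attacks the abstract refers to can be illustrated with the classic fast gradient sign method (FGSM). The thesis itself attacks trained C3D/P3D/Q3D networks, which are not reproduced here; this is only a minimal sketch of the perturbation rule on a toy linear "classifier" over a flattened video clip, with all model details (weights, labels, epsilon) being illustrative assumptions.

```python
import numpy as np

# Toy stand-in for a video classifier: a linear model over a flattened clip.
# FGSM perturbs the input in the direction of the sign of the loss gradient:
#   x_adv = clip(x + eps * sign(dL/dx))
# so that a small, bounded change maximally increases the loss.

rng = np.random.default_rng(0)

x = rng.random(8 * 16 * 16)              # fake clip: 8 frames of 16x16, in [0, 1]
w = rng.standard_normal(x.size) * 0.01   # assumed toy model weights
y = 1.0                                  # assumed true binary label

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss_grad_wrt_input(x, w, y):
    # Binary cross-entropy L = -[y log p + (1-y) log(1-p)] with p = sigmoid(w.x)
    # has the standard input gradient dL/dx = (p - y) * w.
    p = sigmoid(w @ x)
    return (p - y) * w

eps = 0.05  # illustrative perturbation budget
x_adv = np.clip(x + eps * np.sign(loss_grad_wrt_input(x, w, y)), 0.0, 1.0)

p_clean = sigmoid(w @ x)
p_adv = sigmoid(w @ x_adv)
print(f"clean confidence: {p_clean:.3f}, adversarial: {p_adv:.3f}")
```

Because the perturbation follows the loss gradient, the model's confidence in the true class drops even though each pixel moves by at most `eps`; the same principle, applied through backpropagation in a deep 3D CNN, underlies the accuracy degradation the thesis measures.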
This item appears in the following Collection(s)

  • MS [881]