Abstract:
DF is a relatively new domain of vision research that has emerged in recent years. The term "DF" was introduced in late 2017 by a Reddit user who claimed to have developed a learning-based algorithm to incorporate celebrity faces into pornographic videos [3]. Regarding this attempt as a success, many malicious users tried to apply similar principles to sow social chaos [4,5]. Later, free-to-access smartphone applications such as ZAO and FaceApp also played a prominent role in popularizing DF tools, enabling mass users to create DFs without any prior experience. Most notably, social media served as the platform through which such DF-driven conspiracies and misinformation spread within a short span of time [1]. The growing interest in DF techniques alarmed the research community and raised security concerns. As soon as DFs began to gain popularity, tech giants such as Facebook, Google, and Apple took immediate action to counter DF generators. Simultaneously, well-known agencies such as the Defense Advanced Research Projects Agency (DARPA) and the National Institute of Standards and Technology (NIST) commenced developing DF forensic systems and organizing DF detection competitions such as the Media Forensics Challenge (MFC2018) and the DF Detection Challenge (DFDC). All these efforts encouraged this study to investigate the DF domain further, with the aim of giving a substantial push to forensic systems. This research aims to develop a framework that systematically addresses the prevention and detection of such DFs. A mechanism will be studied and developed to prevent critical image/video data from being used by forgers who employ AI algorithms to generate DFs.