dc.description.abstract |
Islamophobia, or anti-Muslim antagonism, is a potent yet understudied form of racism
in today’s world. The last couple of years have witnessed an immense surge in Islamophobic
hate speech on social media, fueling violence and prejudice against Muslims and
Islam. An increasingly common form of online hate speech is multimodal (text + image)
in nature and is known as a meme. Despite ample literature on hate speech detection
on social media, only a few papers address Islamophobic hate speech detection. Our
goal is to automatically detect and classify the content of memes that are hostile to
Islam and spread extremist views against Muslims.
Detecting hateful memes is inherently a multimodal problem, relying on both textual
and visual cues: a meme conveys its message through a combination of images and text,
demanding a holistic understanding of the photo, the words embedded in it, and the
surrounding context, and thus a kind of multifaceted reasoning that encompasses both
visual and linguistic comprehension. Identifying Islamophobic content that employs
multiple modes of communication is therefore complex and remains an open challenge.
When we encounter a meme, we naturally process the words and images in tandem,
grasping their collective significance. For machines, this presents a formidable
obstacle: they cannot simply analyze the text and images separately, but must
instead fuse these diverse modalities and discern how the meaning changes when they
are presented together.
In this work, we seek to advance this line of research and develop
a multimodal framework for the detection of Islamophobic memes. Data will be
collected from Facebook and Instagram and manually annotated to train the system
to automatically classify Islamophobic cyber-hate instances into the given categories.
Specific keywords that refer to Islamophobic content will be used as search criteria,
covering different manifestations of hatred against Muslims, such as portrayals of
Muslims as terrorists or extremists, stereotyping, objectification, destruction, and violence |
en_US |