Abstract:
This work investigates hallucination in Large Language Models (LLMs) and its consequences for applications in mental health. Hallucinations, characterized by the generation of factually
incorrect or contextually irrelevant information, pose substantial risks in high-stakes
applications such as mental health therapy, counseling, and information dissemination. The primary aim of this study is to develop effective strategies to mitigate hallucinations, thereby enhancing the reliability, safety, and trustworthiness of LLMs
in mental health interventions. The research begins by examining the underlying
mechanisms that lead to hallucinations in LLMs, including their reliance on probabilistic language patterns and lack of grounding in factual knowledge. By identifying
these triggers, the study aims to propose targeted solutions that address both the semantic and factual deficiencies in LLM outputs. A key focus is placed on integrating
retrieval-based approaches, combining vector store retrieval for semantic understanding with knowledge graph retrieval for factual accuracy. Specifically, the research
leverages the GENA (Graph for Enhanced Neuropsychiatric Analysis) knowledge
graph to ground LLM outputs in structured, authoritative mental health information. Through rigorous experimentation and evaluation, this study assesses the effectiveness of the proposed methods in reducing hallucination rates and improving
the contextual and factual relevance of LLM-generated content. By addressing this critical issue, the research aspires to create a robust framework for deploying LLMs
in mental health contexts, ensuring their efficacy in therapeutic processes and their
capability to deliver accurate, trustworthy information to individuals seeking mental health support. This work contributes to advancing the safe and effective use of
artificial intelligence in sensitive, high-impact domains.
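
As a concrete illustration of the hybrid retrieval strategy outlined in the abstract, the following minimal Python sketch combines vector-store retrieval for semantic recall with knowledge-graph lookup for factual grounding, and composes the results into a grounded prompt. It is a simplified sketch under stated assumptions: the toy embedding, the in-memory VECTOR_STORE and KG_TRIPLES data, and the helper names (embed, retrieve, build_grounded_prompt) are all illustrative and are not the system or the GENA graph described in this work.

# Hypothetical sketch of the hybrid retrieval step: semantic recall from a
# small in-memory vector store combined with factual triples from a
# GENA-style knowledge graph, merged into a grounded prompt for the LLM.
# All names here are illustrative assumptions, not the thesis implementation.

from math import sqrt

def embed(text: str, dim: int = 64) -> list[float]:
    # Toy bag-of-words embedding hashed into a fixed-size, L2-normalized
    # vector; a real system would use a sentence-embedding model instead.
    vec = [0.0] * dim
    for token in text.lower().split():
        vec[hash(token) % dim] += 1.0
    norm = sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def cosine(a: list[float], b: list[float]) -> float:
    # Cosine similarity of two already-normalized vectors.
    return sum(x * y for x, y in zip(a, b))

# Vector store: (passage, embedding) pairs used for semantic retrieval.
VECTOR_STORE = [
    (p, embed(p)) for p in [
        "Cognitive behavioural therapy is a first-line treatment for anxiety disorders.",
        "Sleep hygiene and regular exercise can reduce mild depressive symptoms.",
    ]
]

# Knowledge-graph triples: (subject, relation, object) used for factual grounding.
KG_TRIPLES = [
    ("generalized anxiety disorder", "treated_by", "cognitive behavioural therapy"),
    ("major depressive disorder", "has_symptom", "persistent low mood"),
]

def retrieve(query: str, k: int = 2):
    # Return the top-k semantically similar passages plus any knowledge-graph
    # facts whose subject appears verbatim in the query.
    q_vec = embed(query)
    ranked = sorted(VECTOR_STORE, key=lambda pe: cosine(q_vec, pe[1]), reverse=True)
    passages = [p for p, _ in ranked[:k]]
    facts = [t for t in KG_TRIPLES if t[0] in query.lower()]
    return passages, facts

def build_grounded_prompt(query: str) -> str:
    # Compose a prompt that instructs the LLM to answer only from the
    # retrieved evidence, which is the hallucination-mitigation step.
    passages, facts = retrieve(query)
    evidence = "\n".join(f"- {p}" for p in passages)
    fact_lines = "\n".join(f"- {s} {r.replace('_', ' ')} {o}" for s, r, o in facts)
    return (
        "Answer using only the evidence below; say 'I don't know' otherwise.\n"
        f"Passages:\n{evidence}\nKnowledge-graph facts:\n{fact_lines}\n"
        f"Question: {query}"
    )

if __name__ == "__main__":
    print(build_grounded_prompt("What helps with generalized anxiety disorder?"))

In a deployed pipeline, the bag-of-words embedding would be replaced by a learned sentence-embedding model and the triple list by queries against the actual GENA knowledge graph; the constraint to answer only from the supplied evidence is the mechanism intended to suppress hallucinated content.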