NUST Institutional Repository

Enhancing Deep Learning Models with Automated Knowledge Graphs for Improved Classification Performance and Explainability


dc.contributor.author Khalid, Mutahira
dc.date.accessioned 2023-02-21T08:18:29Z
dc.date.available 2023-02-21T08:18:29Z
dc.date.issued 2023
dc.identifier.uri http://10.250.8.41:8080/xmlui/handle/123456789/32435
dc.description.abstract Medical coding works by assigning standardized medical codes to the diagnoses, prognoses, and prescriptions in clinical records. These codes are necessary for accurate medical billing and claims processing, both of which are vital for sustaining effective revenue cycles. Computer Assisted Coding (CAC) automates the assignment of medical codes with the aid of Artificial Intelligence (AI) models. Despite extraordinary results, these models have certain limitations. They rely solely on training data and, lacking domain-specific knowledge, break down in ways that produce false-positive predictions or no predictions at all. In addition, the black-box nature of deep learning models hampers users’ ability to trust these AI applications; even the attention mechanism, often presented as explainable, cannot account for some of its predictions. These limitations can be addressed by consolidating Symbolic AI with deep learning, leading to explainable and trustworthy predictions and an overall increase in accuracy. The hybrid AI approach has a number of benefits, but creating knowledge graphs, the brain behind Symbolic AI, is a laborious process. Thus, I have automated the construction of knowledge graphs through a small set of steps: data preprocessing, ontology mapping, concept enrichment, and Neo4j knowledge graph creation. Additionally, I propose two distinct NeuroSymbolic AI approaches to overcome some of deep learning’s drawbacks. The first approach, “Domain-specific knowledge infusion”, enriches the medical terms, leading to an overall increase in classification accuracy to nearly 81%. The second approach, “Explainable Deep Learning Predictions”, explains the attention mechanism’s results by visualizing word-to-word and word-to-code connections, with accuracies of 64% and 53%, respectively. This research is novel in that knowledge graph creation in a few easy steps has not been done before. Additionally, it is the earliest study to apply knowledge graphs for explainability and domain-specific knowledge infusion in medical coding. en_US
dc.description.sponsorship Dr. Hasan Ali Khattak en_US
dc.language.iso en en_US
dc.publisher School of Electrical Engineering and Computer Sciences (SEECS) NUST en_US
dc.title Enhancing Deep Learning Models with Automated Knowledge Graphs for Improved Classification Performance and Explainability en_US
dc.type Thesis en_US
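
The abstract describes a knowledge-graph construction pipeline of data preprocessing, ontology mapping, concept enrichment, and Neo4j knowledge graph creation. Below is a minimal, illustrative sketch of the final loading step only, assuming the official neo4j Python driver; the node labels (Term, Concept), the MAPS_TO relationship, the connection settings, and the sample enriched_concepts rows are assumptions made for illustration and are not taken from the thesis itself.

    # Illustrative sketch (not thesis code): load ontology-mapped, enriched
    # concepts into Neo4j. Labels, relationship type, and sample data are assumed.
    from neo4j import GraphDatabase

    # Hypothetical output of the preprocessing / ontology-mapping / enrichment steps:
    # each row links a raw clinical term to a standardized ontology concept.
    enriched_concepts = [
        {"term": "heart attack", "cui": "C0027051", "preferred": "Myocardial Infarction"},
        {"term": "myocardial infarction", "cui": "C0027051", "preferred": "Myocardial Infarction"},
    ]

    driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

    with driver.session() as session:
        for row in enriched_concepts:
            # MERGE keeps the load idempotent: re-running the pipeline does not duplicate nodes.
            session.run(
                "MERGE (t:Term {text: $term}) "
                "MERGE (c:Concept {cui: $cui}) ON CREATE SET c.name = $preferred "
                "MERGE (t)-[:MAPS_TO]->(c)",
                term=row["term"], cui=row["cui"], preferred=row["preferred"],
            )

    driver.close()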


This item appears in the following Collection(s)

  • MS [432]
