Abstract:
Recent advancements in Artificial Intelligence (AI) and its sub-fields have shown promising results in almost every field of life. Owing to the availability of medical data and high-performance algorithms, AI has reshaped research in clinical care. The rapid advancement of AI in healthcare raises an urgent need to understand the decision-making process of these models and algorithms. The black-box nature of these models hinders their application in clinical practice, as healthcare stakeholders (doctors, practitioners, radiologists, patients, etc.) can only trust such algorithms when the origin of the results is explained. Explainable AI (XAI), a sub-field of AI, promotes the interpretability of these models, allowing healthcare experts to comprehend the decision-making process of AI techniques. The explainability of AI algorithms and techniques
is vital in accurately diagnosing and treating many diseases, particularly brain tumors.
We have developed a highly accurate knowledge distillation-based architecture that performs multi-class brain tumor classification and provides a visual explanation/interpretation of each classification. We evaluate the model on the Br35H:2020 (binary-class) and Brain Tumor MRI (multi-class) datasets.
The architecture achieves 95% accuracy in comparison with existing techniques. The comparative classification and visual-explanation results of our architecture will help healthcare experts understand the black-box method, fostering trust in deep learning models and supporting accurate diagnosis in brain tumor identification and treatment.