Abstract:
The rapid upsurge in network intrusions has driven research into AI techniques for intrusion detection systems (IDS). A major challenge is ensuring that AI models are understandable to security analysts, which has led to the adoption of explainable AI (XAI) methods. This study presents a framework for evaluating black-box XAI methods for IDS, focusing on both global and local interpretability and tested on three well-known intrusion datasets and multiple AI models. Specifically, the research enhances IDS using the XAI techniques LIME and SHAP, applied to the UNSW-NB15, NSL-KDD, and CICIDS2019 datasets as well as a merged dataset. Preprocessing steps such as normalization and feature alignment were used to standardize the data. The findings show that integrating XAI improves the interpretability and trustworthiness of IDS, helping analysts understand system decisions. This research advances more interpretable and resilient IDS capable of countering evolving cyber threats, and provides a foundational XAI evaluation tool for the network security community.