dc.description.abstract |
The need for computer network intrusion detection systems (IDSs) has grown alongside the proliferation of Internet of Things (IoT) based networks. Over the past few years, as traditional cyber security methods have become less practical for IoT, IDSs for IoT networks have come to rely increasingly on machine learning (ML) techniques, algorithms, and models. ML methods make it possible to build and deploy high-performance IDSs. However, many ML implementations are 'black boxes' in which human comprehension, transparency, explanation, and the logic behind prediction outputs are largely unavailable, which may have hampered the general acceptance of and trust in these systems. UNSW-NB15 is a dataset of IoT-based network traffic intended for differentiating between benign and malicious activity. Decision Tree, Multi-Layer Perceptron, and XGBoost classifiers were trained on this dataset. The models' accuracies show that these ML classifiers, together with the accompanying algorithm for building a network forensic system from network flow identifiers and features, are highly effective at tracing the suspicious behavior of botnets. We then improved the classifiers' explainability by visualizing their decision-making processes with established Explainable AI (XAI) methods using the Scikit-Learn, LIME, ELI5, and SHAP libraries. The findings indicate that XAI is practical and useful: there is much to be gained by combining classic ML systems with XAI techniques in the field of cyber security. |
en_US |