Abstract:
The advancement of technology has reshaped various domains, particularly cybersecurity,
where increasingly sophisticated cyberattacks pose significant threats.
Explainable Artificial Intelligence (XAI) addresses the crucial need for transparency
in AI systems. This thesis investigates XAI’s application to Intrusion Detection Systems
(IDS) using Machine Learning (ML) and Deep Learning (DL) on Network-based
(NIDS) and Host-based (HIDS) datasets. Specifically, the study uses the UNSW-NB15
and CIC-IDS2018 datasets to evaluate the performance of Artificial Neural
Networks (ANN) and XGBoost algorithms. Both algorithms achieve high accuracy and
robust detection across a wide range of attack types.
The thesis further explores the use of the model-agnostic methods LIME (Local
Interpretable Model-agnostic Explanations), for local explanations, and SHAP (SHapley
Additive exPlanations), for global explanations, to enhance the interpretability of
the AI models. These XAI methods provide
detailed insights into model decisions, making the AI-driven processes more transparent
and trustworthy. Empirical evidence shows that LIME and SHAP not only
improve the understanding of model behavior but also highlight the strengths and
weaknesses of ANN and XGBoost in different scenarios.
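As a minimal, illustrative sketch of the workflow summarized above (not taken from the thesis itself), the following Python example applies SHAP for a global view and LIME for a local explanation of an XGBoost classifier. The synthetic data, feature names, and hyperparameters are placeholder assumptions standing in for the preprocessed UNSW-NB15 / CIC-IDS2018 feature matrices and the thesis's actual settings.

```python
# Sketch: explaining an XGBoost-based IDS classifier with SHAP (global) and LIME (local).
# Synthetic data stands in for the preprocessed IDS datasets (assumption).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
import xgboost as xgb
import shap
from lime.lime_tabular import LimeTabularExplainer

# Placeholder for a labelled flow-feature matrix (benign = 0, attack = 1).
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
feature_names = [f"flow_feat_{i}" for i in range(X.shape[1])]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# Train the detector (hyperparameters are illustrative only).
model = xgb.XGBClassifier(n_estimators=200, max_depth=6, eval_metric="logloss")
model.fit(X_train, y_train)

# Global explanation: mean |SHAP value| per feature over the test set.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)
global_importance = np.abs(shap_values).mean(axis=0)
for name, score in sorted(zip(feature_names, global_importance), key=lambda p: -p[1])[:5]:
    print(f"{name}: {score:.4f}")

# Local explanation: why one specific flow was classified as it was.
lime_explainer = LimeTabularExplainer(
    X_train, feature_names=feature_names,
    class_names=["benign", "attack"], mode="classification")
local_exp = lime_explainer.explain_instance(X_test[0], model.predict_proba, num_features=5)
print(local_exp.as_list())
```

The global SHAP ranking and the per-instance LIME weights correspond, respectively, to the global and local interpretability analyses discussed in the thesis.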
The study offers valuable insights for cybersecurity professionals and policymakers,
demonstrating that integrating ML- and DL-based IDS with XAI techniques can significantly
strengthen cybersecurity by making AI-driven decisions more transparent and trustworthy.
These findings underscore the potential of combining advanced algorithms with
interpretability techniques to develop more reliable and effective intrusion detection
systems.
Keywords: Explainable Artificial Intelligence (XAI), Intrusion Detection System (IDS), Machine