International Journal of Computer Applications
Foundation of Computer Science (FCS), NY, USA

Volume 187 - Issue 74
Published: January 2026
Authors: Rupal Vitthalbhai Panchal, Rupal Snehkunj, Vinaykumar V. Panchal
DOI: 10.5120/ijca2026926262
Rupal Vitthalbhai Panchal, Rupal Snehkunj, Vinaykumar V. Panchal . Explainable Artificial Intelligence (XAI) for Intelligent Intrusion Detection Systems and Threat Response Automation. International Journal of Computer Applications. 187, 74 (January 2026), 51-55. DOI=10.5120/ijca2026926262
@article{ 10.5120/ijca2026926262,
author = { Rupal Vitthalbhai Panchal and Rupal Snehkunj and Vinaykumar V. Panchal },
title = { Explainable Artificial Intelligence (XAI) for Intelligent Intrusion Detection Systems and Threat Response Automation },
journal = { International Journal of Computer Applications },
year = { 2026 },
volume = { 187 },
number = { 74 },
pages = { 51-55 },
doi = { 10.5120/ijca2026926262 },
publisher = { Foundation of Computer Science (FCS), NY, USA }
}
%0 Journal Article
%D 2026
%A Rupal Vitthalbhai Panchal
%A Rupal Snehkunj
%A Vinaykumar V. Panchal
%T Explainable Artificial Intelligence (XAI) for Intelligent Intrusion Detection Systems and Threat Response Automation
%J International Journal of Computer Applications
%V 187
%N 74
%P 51-55
%R 10.5120/ijca2026926262
%I Foundation of Computer Science (FCS), NY, USA
Artificial Intelligence (AI) and Deep Learning (DL) have advanced Intrusion Detection Systems (IDS) by improving detection accuracy and adaptability to novel attacks. However, the "black-box" nature of many high-performing models reduces operational trust, complicates incident triage, and hinders automated response orchestration. Explainable AI (XAI) offers interpretability methods (e.g., SHAP, LIME, attention mechanisms) that can bridge the gap between high detection performance and human-centered decision making. This article proposes an integrated XAI-driven IDS and Threat Response Automation (XAI-IDR) architecture that couples a hybrid detection engine (feature-aware DL + tree-based learner) with model-agnostic explanation modules and a policy-driven response orchestrator. The article discusses design considerations, evaluation methodology, how XAI aids SOC analysts and automated playbooks, security and adversarial concerns for XAI pipelines, and an experimental plan using benchmark IDS datasets.
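To make the abstract's pipeline concrete, the sketch below illustrates one plausible reading of the XAI-IDR flow: a tree-based detector (standing in for the hybrid detection engine), a SHAP explanation module, and a toy policy-driven response step keyed on the top contributing feature of each alert. This is not the authors' implementation; the synthetic flow data, the feature names, and the playbook mapping are all illustrative assumptions.

```python
# Minimal sketch of an XAI-driven IDS + response loop (illustrative only).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
import shap  # SHAP explanation library

# --- Synthetic flow data: 1000 flows, 5 hypothetical features ---
rng = np.random.default_rng(0)
feature_names = ["duration", "src_bytes", "dst_bytes", "failed_logins", "syn_rate"]
X = rng.normal(size=(1000, 5))
y = (X[:, 3] + X[:, 4] > 1.5).astype(int)  # label "attack" when logins + SYN rate are high

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# --- Detection engine (tree-based learner component of the hybrid engine) ---
detector = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)

# --- Explanation module: per-alert SHAP attributions ---
explainer = shap.TreeExplainer(detector)
shap_values = explainer.shap_values(X_te)
# Older SHAP versions return a list (one array per class); newer ones stack classes last.
attack_shap = shap_values[1] if isinstance(shap_values, list) else shap_values[..., 1]

# --- Policy-driven response orchestrator (toy rule set, purely illustrative) ---
def respond(flow_idx: int) -> str:
    """Map the highest-attribution feature of a flagged flow to a playbook action."""
    top_feature = feature_names[int(np.argmax(np.abs(attack_shap[flow_idx])))]
    playbook = {
        "failed_logins": "lock account and notify SOC",
        "syn_rate": "rate-limit source IP",
    }
    return playbook.get(top_feature, "escalate to analyst for manual triage")

for i in np.where(detector.predict(X_te) == 1)[0][:3]:
    print(f"alert on flow {i}: top driver -> {respond(i)}")
```

Because the explanation step is model-agnostic in spirit (a KernelExplainer or LIME explainer could replace TreeExplainer), the same policy mapping could sit on top of the deep-learning branch of the hybrid engine; the key design point is that response actions are keyed to attributed features rather than to an opaque score alone.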