Abstract

The growing sophistication of cyberattacks calls for defense mechanisms beyond conventional approaches. Signature-based security mechanisms are largely ineffective against novel threats such as zero-day exploits and advanced persistent threats. Artificial Intelligence (AI), and machine learning in particular, can move defense beyond this reactive posture by sifting through vast datasets, identifying subtle patterns indicative of an attack, and forecasting vulnerabilities before they are exploited. Early AI deployments, however, suffered from poor adaptability and high false-positive rates. Despite its potential, integrating AI into cybersecurity frameworks is beset by significant challenges, including data scarcity, model robustness against adversarial attacks, explainability, and ethical concerns such as bias and privacy. Here, we consolidate current research to present a holistic picture of AI's incorporation in cybersecurity, examining its applications, methodologies, ethical considerations, and prospects. Our review indicates that AI substantially strengthens threat detection (including detection of zero-day attacks), automates large-scale data analysis, and enables predictive security measures that classical approaches cannot provide. It also emphasizes the importance of explainable AI (XAI) for analyst trust and the evolving role of human experts working alongside AI systems, while noting persistent issues with data quality and adversarial robustness. Understanding these dynamics is essential for crafting effective, robust, and ethical AI-powered cybersecurity strategies, and should inform practitioners and policymakers operating in the challenging environment of cyber defense.

Keywords

  • Cybersecurity
  • Artificial Intelligence
  • Machine Learning
  • Data Privacy
  • Human-AI Collaboration
