Stacking Ensemble Artificial Intelligence Model for Heart Disease Diagnosis
Personalized treatment plans, predictive analytics, and artificial intelligence (AI)-driven diagnostics are increasingly adopted to improve decision-making, streamline operations, and enhance patient care. However, substantial barriers remain, including, but not limited to, issues of user adoption, trust, bias, and fairness arising from resistance among healthcare providers and a lack of confidence in the system's recommendations. To overcome these challenges and realize the full potential of AI-driven solutions, ensuring the system's accuracy and safety through meticulous testing and validation of AI algorithms is indispensable. This research presents a hybrid AI model that blends three base models with a meta-model to diagnose heart disease effectively. The essence is to revalidate existing AI diagnostic models for cardiac disease and to address the concerns impeding full utilization of available AI diagnostic systems. The study exploits each model's strengths by merging these diverse and complementary algorithms into a stacking ensemble, producing a more potent diagnostic system. On publicly available heart disease data, the model performs remarkably well, achieving 89% accuracy, 85% recall (sensitivity), 92% specificity, and 89% precision. The demonstrated performance and efficacy of this hybrid model are expected to boost trust in the system's recommendations and encourage broader adoption in clinical practice.
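The stacking design described above (three base models whose out-of-fold predictions feed a meta-model) can be sketched as follows. The abstract does not name the specific base learners or meta-learner, so the choices here (random forest, gradient boosting, and k-nearest neighbors bases with a logistic-regression meta-model) and the synthetic tabular data are illustrative assumptions, not the paper's actual pipeline.

```python
# Minimal sketch of a stacking ensemble for binary heart-disease-style
# classification, using scikit-learn on synthetic tabular data.
from sklearn.datasets import make_classification
from sklearn.ensemble import (GradientBoostingClassifier,
                              RandomForestClassifier, StackingClassifier)
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, precision_score, recall_score
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Synthetic stand-in for a 13-feature tabular heart-disease dataset
X, y = make_classification(n_samples=600, n_features=13, n_informative=8,
                           random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                          random_state=42, stratify=y)

# Three base models; a logistic-regression meta-model combines their
# out-of-fold predictions (cv=5 keeps the meta-model from overfitting
# to predictions the base models made on their own training data).
stack = StackingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=100, random_state=42)),
        ("gb", GradientBoostingClassifier(random_state=42)),
        ("knn", KNeighborsClassifier(n_neighbors=7)),
    ],
    final_estimator=LogisticRegression(max_iter=1000),
    cv=5,
)
stack.fit(X_tr, y_tr)
pred = stack.predict(X_te)
print(f"accuracy={accuracy_score(y_te, pred):.2f} "
      f"recall={recall_score(y_te, pred):.2f} "
      f"precision={precision_score(y_te, pred):.2f}")
```

Specificity, also reported in the abstract, can be computed the same way as recall of the negative class, e.g. `recall_score(y_te, pred, pos_label=0)`.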
Copyright (c) 2024 International Journal of Engineering and Computer Science

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.