Abstract

Medical imaging, particularly chest radiography, remains a critical tool for identifying lung-related conditions such as pneumonia, tuberculosis, and COVID-19. However, interpreting chest X-rays can be time-consuming and subject to variability among radiologists. This study introduces PneuVision, an explainable deep learning framework designed to classify chest X-ray images into four severity categories—Normal, Mild, Moderate, and Severe—while maintaining high interpretability. The system uses a pre-trained DenseNet-121 model fine-tuned on a custom-labeled dataset, where Kaggle samples served only as references for defining severity criteria. Grad-CAM visualization is employed to enhance transparency by highlighting lung regions that contribute to each prediction. Experimental findings demonstrate improved performance compared to standard CNN and ResNet models. By integrating explainable AI, PneuVision supports reliable and interpretable automated analysis for radiological applications.

Keywords

  • Explainable AI
  • DenseNet-121
  • Chest X-ray classification
  • Grad-CAM
  • Deep learning
  • Medical image analysis

References

  1. Rajpurkar, P., Irvin, J., Zhu, K., Yang, B., Mehta, H., Duan, T., ... & Ng, A. Y. (2017). CheXNet: Radiologist-level pneumonia detection on chest X-rays with deep learning. arXiv preprint arXiv:1711.05225.
  2. Irvin, J., Rajpurkar, P., Ko, M., Yu, Y., Ciurea-Ilcus, S., Chute, C., ... & Ng, A. Y. (2019). CheXpert: A large chest radiograph dataset with uncertainty labels and expert comparison. In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 33, No. 01, pp. 590-597).
  3. Selvaraju, R. R., Cogswell, M., Das, A., Vedantam, R., Parikh, D., & Batra, D. (2017). Grad-CAM: Visual explanations from deep networks via gradient-based localization. In Proceedings of the IEEE International Conference on Computer Vision (pp. 618-626).
  4. Tjoa, E., & Guan, C. (2020). A survey on explainable artificial intelligence (XAI): Toward medical XAI. IEEE Transactions on Neural Networks and Learning Systems, 32(11), 4793-4813.
  5. Liang, G., et al. (2022). Hybrid CNN models for pneumonia detection in chest X-ray images. Computers in Biology and Medicine, 144, 105-113.
  6. Hussain, E., et al. (2022). Explainable deep learning-based COVID-19 severity detection using chest radiographs. IEEE Access, 10, 38956-38968.
  7. Xie, J., et al. (2023). Evaluating Grad-CAM interpretability in medical imaging. Scientific Reports, 13, 1421.
  8. Dosovitskiy, A., et al. (2020). An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929.
  9. Alsharif, M., et al. (2023). Federated learning for medical imaging: Challenges and opportunities. IEEE Journal of Biomedical and Health Informatics.