Abstract
The increasing deployment of Artificial Intelligence (AI) models on edge devices—such as Raspberry Pi, NVIDIA Jetson Nano, and Google Coral TPU—has revolutionized real-time decision-making in critical domains including healthcare, autonomous vehicles, and surveillance. However, these edge-based AI systems often function as opaque "black boxes," making it difficult for end-users to understand, verify, or trust their decisions. This lack of interpretability not only undermines user confidence but also poses serious challenges for ethical accountability, regulatory compliance (e.g., GDPR, HIPAA), and safety in mission-critical applications.
To address these limitations, this study proposes a lightweight, modular framework that enables the integration of Explainable AI (XAI) techniques into resource-constrained edge environments. We benchmark several state-of-the-art XAI methods, namely SHAP (SHapley Additive exPlanations), LIME (Local Interpretable Model-agnostic Explanations), and Saliency Maps, evaluating their inference latency, memory usage, interpretability score, and user trust on real-world edge devices. Multiple lightweight AI models (MobileNetV2, TinyBERT, and XGBoost) are trained and deployed on three benchmark datasets: CIFAR-10, EdgeMNIST, and UCI Human Activity Recognition.
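To illustrate the kind of benchmarking involved, the sketch below times a single LIME explanation for an XGBoost classifier and records peak traced memory. It is a minimal, hypothetical example: the synthetic tabular data stands in for UCI HAR features, and the model settings are illustrative assumptions rather than the study's exact experimental configuration.

```python
# Hedged sketch: measure latency and peak memory of one LIME explanation
# for an XGBoost classifier on synthetic tabular data (stand-in for UCI HAR).
import time
import tracemalloc

from sklearn.datasets import make_classification
from xgboost import XGBClassifier
from lime.lime_tabular import LimeTabularExplainer

# Synthetic 6-class tabular problem as a placeholder for the HAR feature set.
X, y = make_classification(n_samples=2000, n_features=60, n_informative=20,
                           n_classes=6, n_clusters_per_class=1, random_state=0)
model = XGBClassifier(n_estimators=50, max_depth=4).fit(X, y)

explainer = LimeTabularExplainer(
    X, mode="classification",
    feature_names=[f"f{i}" for i in range(X.shape[1])],
)

tracemalloc.start()
t0 = time.perf_counter()
# Explain one prediction; LIME perturbs the instance and fits a local surrogate.
explanation = explainer.explain_instance(X[0], model.predict_proba, num_features=10)
latency_s = time.perf_counter() - t0
_, peak_bytes = tracemalloc.get_traced_memory()
tracemalloc.stop()

print(f"LIME explanation latency: {latency_s * 1000:.1f} ms, "
      f"peak traced memory: {peak_bytes / 1e6:.1f} MB")
```

The same timing harness could, in principle, wrap a SHAP explainer or a gradient-based saliency method so that all three techniques are compared under identical conditions.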
Experimental results demonstrate that while SHAP offers high-quality explanations, it imposes significant computational overhead, limiting its use to moderately powered platforms such as the Jetson Nano. In contrast, LIME achieves a balanced trade-off between transparency and resource efficiency, making it the most viable option for real-time inference on lower-end devices such as the Raspberry Pi. Saliency Maps, though computationally lightweight, deliver limited interpretability, particularly for non-visual tasks.
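The cost gap follows from how the methods work: SHAP and LIME rely on many perturbed forward passes, whereas a vanilla gradient saliency map needs only one forward and one backward pass. The hypothetical PyTorch fragment below (not the study's exact pipeline, and using untrained placeholder weights and a random input) shows that single-pass computation for MobileNetV2.

```python
# Hedged sketch: vanilla gradient saliency for MobileNetV2.
# Cost is essentially one forward pass plus one backward pass.
import torch
from torchvision.models import mobilenet_v2

model = mobilenet_v2(weights=None).eval()  # untrained weights, placeholder only

# Random input as a stand-in for a preprocessed image (e.g., CIFAR-10 resized).
x = torch.randn(1, 3, 224, 224, requires_grad=True)

scores = model(x)
top_class = scores.argmax(dim=1).item()
scores[0, top_class].backward()  # gradient of the top-class score w.r.t. the input

# Per-pixel saliency: maximum absolute gradient across colour channels.
saliency = x.grad.abs().max(dim=1).values  # shape (1, 224, 224)
print(saliency.shape, float(saliency.max()))
```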
Furthermore, two real-world case studies—one in smart health monitoring and the other in drone-based surveillance—validate the framework's applicability. In both scenarios, the integration of XAI significantly enhanced user trust and decision reliability without breaching latency thresholds.
Ultimately, this paper contributes a scalable, device-agnostic solution for embedding explainability into edge intelligence, enabling transparent AI decisions at the point of data generation. This advancement is crucial for the future of trustworthy edge AI, particularly in regulated and high-risk environments.
Keywords
- Edge AI
- Explainable AI
- SHAP
- LIME
- Real-Time Inference
- IoT
- Embedded Intelligence
- Model Transparency