
Keywords:

Edge AI, Explainable AI, SHAP, LIME, Real-Time Inference, IoT, Embedded Intelligence, Model Transparency.

Explainable AI in Edge Devices: A Lightweight Framework for Real-Time Decision Transparency

Authors

Mohammed AlNusif1
1 Cybersecurity; Master's student, Duke University

Abstract

The increasing deployment of Artificial Intelligence (AI) models on edge devices—such as Raspberry Pi, NVIDIA Jetson Nano, and Google Coral TPU—has revolutionized real-time decision-making in critical domains including healthcare, autonomous vehicles, and surveillance. However, these edge-based AI systems often function as opaque "black boxes," making it difficult for end-users to understand, verify, or trust their decisions. This lack of interpretability not only undermines user confidence but also poses serious challenges for ethical accountability, regulatory compliance (e.g., GDPR, HIPAA), and safety in mission-critical applications.

To address these limitations, this study proposes a lightweight, modular framework that enables the integration of Explainable AI (XAI) techniques into resource-constrained edge environments. We explore and benchmark several state-of-the-art XAI methods—including SHAP (SHapley Additive exPlanations), LIME (Local Interpretable Model-agnostic Explanations), and Saliency Maps—evaluating their performance in terms of inference latency, memory usage, interpretability score, and user trust across real-world edge devices. Multiple lightweight AI models (such as MobileNetV2, TinyBERT, and XGBoost) are trained and evaluated on three benchmark datasets: CIFAR-10, EdgeMNIST, and UCI Human Activity Recognition, then deployed to the target devices.
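To make the benchmarking dimensions concrete, the following is a minimal, hypothetical harness (not the paper's code) that times a single LIME image explanation and records peak memory with Python's tracemalloc. The classifier_fn stub is an assumption for illustration; it stands in for a deployed model such as MobileNetV2.

import time
import tracemalloc

import numpy as np
from lime import lime_image

def classifier_fn(batch):
    # Hypothetical stand-in for a deployed model such as MobileNetV2:
    # takes a batch of HxWx3 images, returns class probabilities.
    rng = np.random.default_rng(0)
    logits = rng.random((len(batch), 10))
    return logits / logits.sum(axis=1, keepdims=True)

image = np.random.rand(32, 32, 3)  # one CIFAR-10-sized input

explainer = lime_image.LimeImageExplainer()
tracemalloc.start()
t0 = time.perf_counter()
explanation = explainer.explain_instance(
    image, classifier_fn, top_labels=1, num_samples=200
)
latency = time.perf_counter() - t0
_, peak = tracemalloc.get_traced_memory()
tracemalloc.stop()

print(f"LIME latency: {latency:.2f} s, peak memory: {peak / 1e6:.1f} MB")

On a Raspberry Pi-class device, num_samples is the main latency knob: fewer perturbation samples shorten explanation time at the cost of explanation stability.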

Experimental results demonstrate that while SHAP offers high-quality explanations, it imposes significant computational overhead, making it suitable only for moderately powered platforms such as the Jetson Nano. In contrast, LIME achieves a balanced trade-off between transparency and resource efficiency, making it the most viable option for real-time inference on lower-end devices such as the Raspberry Pi. Saliency Maps, though computationally lightweight, deliver limited interpretability, particularly for non-visual data tasks.
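The "computationally lightweight" claim for Saliency Maps follows from how they are computed: a single backward pass through the network. The sketch below shows vanilla gradient saliency in PyTorch; it is illustrative only (an untrained MobileNetV2 stands in for the deployed model) and is not the framework's implementation.

import torch
import torchvision.models as models

model = models.mobilenet_v2(weights=None)  # untrained stand-in for the deployed net
model.eval()

x = torch.rand(1, 3, 224, 224, requires_grad=True)  # one input image
score = model(x).max()   # top class score for this single input
score.backward()         # one backward pass: the whole cost of the method

# Per-pixel importance: absolute gradient, maxed over color channels.
saliency = x.grad.abs().max(dim=1).values  # shape (1, 224, 224)
print(saliency.shape)

By contrast, both LIME and SHAP's kernel-based explainers require hundreds of forward passes per explanation, which is the overhead the benchmark attributes to them.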

Furthermore, two real-world case studies—one in smart health monitoring and the other in drone-based surveillance—validate the framework's applicability. In both scenarios, the integration of XAI significantly enhanced user trust and decision reliability without breaching latency thresholds.

Ultimately, this paper contributes a scalable, device-agnostic solution for embedding explainability into edge intelligence, enabling transparent AI decisions at the point of data generation. This advancement is crucial for the future of trustworthy edge AI, particularly in regulated and high-risk environments.

Article Details

Published

2025-07-02

Section

Articles

License

Copyright (c) 2025 International Journal of Engineering and Computer Science

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.

How to Cite

AlNusif, M. (2025). Explainable AI in Edge Devices: A Lightweight Framework for Real-Time Decision Transparency. International Journal of Engineering and Computer Science, 14(07), 27447-27472. https://doi.org/10.18535/ijecs.v14i07.5181