LEMS: Optimized Large Model Framework for Edge-AI in Consumer Internet of Things Devices
IEEE Transactions on Consumer Electronics, 2025
Edge computing plays a critical role in enabling real-time deployment of artificial intelligence (AI) on resource-constrained consumer IoT devices. These devices face significant challenges in balancing energy efficiency, latency, accuracy, and security. To address these issues, an Optimized Lightweight Large Model Framework (LEMS) is proposed that applies advanced model compression techniques, including pruning and quantization, to reduce computational and memory demands. Additionally, LEMS employs a hybrid edge-cloud processing architecture that optimizes resource utilization by offloading complex tasks to the cloud while maintaining low-latency performance at the edge. To ensure data security, the framework integrates lightweight cryptographic protocols, preserving privacy without overwhelming the constrained devices. LEMS was evaluated on several IoT platforms, including the Raspberry Pi 4 and ESP32 microcontrollers, using a real-world dataset, MIMIC-III. Results show that LEMS reduces model size by up to 40% and cuts energy consumption by 15% while preserving 91% inference accuracy. Moreover, hybrid processing reduced latency by 60%, and the security mechanisms incurred less than 5% computational overhead.
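The abstract names pruning and quantization as the compression stage but does not publish code, so the following is only a minimal sketch of how such a stage could look in PyTorch. The model class, layer names, the 40% sparsity level, and the use of l1_unstructured pruning followed by dynamic int8 quantization are assumptions chosen to mirror the reported 40% size reduction, not the authors' actual implementation.

# Hypothetical compression sketch: magnitude pruning + post-training
# dynamic quantization. Names and hyperparameters are illustrative only.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

class TinyClassifier(nn.Module):
    """Small stand-in model for an edge inference task."""
    def __init__(self, in_features=64, hidden=128, classes=2):
        super().__init__()
        self.fc1 = nn.Linear(in_features, hidden)
        self.fc2 = nn.Linear(hidden, classes)

    def forward(self, x):
        return self.fc2(torch.relu(self.fc1(x)))

model = TinyClassifier()

# Step 1: unstructured L1 (magnitude) pruning, zeroing 40% of the weights
# in each linear layer, then making the sparsity permanent.
for module in (model.fc1, model.fc2):
    prune.l1_unstructured(module, name="weight", amount=0.4)
    prune.remove(module, "weight")

# Step 2: post-training dynamic quantization of the linear layers to int8,
# which shrinks the stored weights and speeds up CPU inference on devices
# such as a Raspberry Pi 4.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

# Dummy inference to confirm the compressed model still runs.
with torch.no_grad():
    logits = quantized(torch.randn(1, 64))
print(logits.shape)  # torch.Size([1, 2])

In practice the pruned and quantized model would be validated against a held-out split to confirm that accuracy stays near the reported 91% before deployment to the edge device.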