Minimizing Delay and Power Consumption at the Edge
Edge computing systems must offer low latency at low cost and low power consumption for sensors and other applications, including the IoT, smart vehicles, smart homes, and 6G. Thus, substantial research has been conducted to identify optimum task allocation schemes in this context using non-linear optimization, machine learning, and market-based algorithms.
Main Author: | Erol Gelenbe |
Format: | Article |
Language: | English |
Published: | MDPI AG, 2025-01-01 |
Series: | Sensors |
Subjects: | edge computing; sensor networks; latency minimization; reducing energy consumption; G-networks |
Online Access: | https://www.mdpi.com/1424-8220/25/2/502 |
_version_ | 1832587535072624640 |
author | Erol Gelenbe |
author_facet | Erol Gelenbe |
author_sort | Erol Gelenbe |
collection | DOAJ |
description | Edge computing systems must offer low latency at low cost and low power consumption for sensors and other applications, including the IoT, smart vehicles, smart homes, and 6G. Thus, substantial research has been conducted to identify optimum task allocation schemes in this context using non-linear optimization, machine learning, and market-based algorithms. Prior work has mainly focused on two methodologies: (i) formulating non-linear optimizations that lead to NP-hard problems, which are processed via heuristics, and (ii) using AI-based formulations, such as reinforcement learning, that are then tested with simulations. These prior approaches have two shortcomings: (a) there is no guarantee that optimum solutions are achieved, and (b) they do not provide an explicit formula for the fraction of tasks that are allocated to the different servers to achieve a specified optimum. This paper offers a radically different and mathematically based principled method that explicitly computes the optimum fraction of jobs that should be allocated to the different servers to (1) minimize the average latency (delay) of the jobs that are allocated to the edge servers and (2) minimize the average energy consumption of these jobs at the set of edge servers. These results are obtained with a mathematical model of a multiple-server edge system that is managed by a task distribution platform, whose equations are derived and solved using methods from stochastic processes. This approach has low computational cost and provides simple linear complexity formulas to compute the fraction of tasks that should be assigned to the different servers to achieve minimum latency and minimum energy consumption. |
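The abstract's central claim, an explicit linear-complexity formula for the fraction of jobs to send to each server, can be illustrated with the classical closed-form split for parallel M/M/1 queues that minimizes mean response time. This is a sketch under textbook queueing assumptions, not the paper's G-network derivation; the function name and interface are hypothetical.

```python
import math

def optimal_split(mu, lam):
    """Closed-form load split across parallel M/M/1 servers that minimizes
    the mean response time (the classical square-root allocation rule).

    mu:  list of server service rates
    lam: total job arrival rate (must satisfy lam < sum(mu) for stability)

    Returns a list of per-server arrival rates; rates[i] / lam is the
    fraction of jobs that should be routed to server i.
    """
    assert 0 < lam < sum(mu), "system must be stable: lam < sum(mu)"
    # Consider servers fastest-first; servers slower than a threshold
    # receive zero load at the optimum.
    order = sorted(range(len(mu)), key=lambda i: mu[i], reverse=True)
    rates = [0.0] * len(mu)
    # Shrink the active set until every active server gets positive load.
    for k in range(len(order), 0, -1):
        active = order[:k]
        c = (sum(mu[i] for i in active) - lam) / \
            sum(math.sqrt(mu[i]) for i in active)
        if all(mu[i] - c * math.sqrt(mu[i]) > 0 for i in active):
            for i in active:
                # KKT condition gives lam_i = mu_i - c * sqrt(mu_i).
                rates[i] = mu[i] - c * math.sqrt(mu[i])
            break
    return rates
```

The rule sends disproportionately more traffic to faster servers and shuts slow servers out entirely below a load-dependent threshold; for example, with rates `[4.0, 1.0]` and total arrival rate `2.0`, all traffic goes to the fast server.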
format | Article |
id | doaj-art-ea1c7917b2224000a2176958f3c65c65 |
institution | Kabale University |
issn | 1424-8220 |
language | English |
publishDate | 2025-01-01 |
publisher | MDPI AG |
record_format | Article |
series | Sensors |
spelling | doaj-art-ea1c7917b2224000a2176958f3c65c65 | 2025-01-24T13:49:10Z | eng | MDPI AG | Sensors | 1424-8220 | 2025-01-01 | 25(2):502 | 10.3390/s25020502 | Minimizing Delay and Power Consumption at the Edge | Erol Gelenbe, Institute of Theoretical & Applied Informatics, Polish Academy of Sciences (IITiS-PAN), 44-100 Gliwice, Poland | https://www.mdpi.com/1424-8220/25/2/502 | edge computing; sensor networks; latency minimization; reducing energy consumption; G-networks |
spellingShingle | Erol Gelenbe; Minimizing Delay and Power Consumption at the Edge; Sensors; edge computing; sensor networks; latency minimization; reducing energy consumption; G-networks |
title | Minimizing Delay and Power Consumption at the Edge |
title_full | Minimizing Delay and Power Consumption at the Edge |
title_fullStr | Minimizing Delay and Power Consumption at the Edge |
title_full_unstemmed | Minimizing Delay and Power Consumption at the Edge |
title_short | Minimizing Delay and Power Consumption at the Edge |
title_sort | minimizing delay and power consumption at the edge |
topic | edge computing; sensor networks; latency minimization; reducing energy consumption; G-networks |
url | https://www.mdpi.com/1424-8220/25/2/502 |
work_keys_str_mv | AT erolgelenbe minimizingdelayandpowerconsumptionattheedge |