TY - GEN
T1 - Energy-Efficient Distributed Machine Learning in Cloud Fog Networks
AU - Alenazi, Mohammed M.
AU - Yosuf, Barzan A.
AU - Mohamed, Sanaa H.
AU - El-Gorashi, Taisir E.H.
AU - Elmirghani, Jaafar M.H.
N1 - Funding Information:
ACKNOWLEDGMENT: The authors would like to acknowledge funding from the Engineering and Physical Sciences Research Council (EPSRC) INTERNET (EP/H040536/1), STAR (EP/K016873/1), and TOWS (EP/S016570/1) projects. All data are provided in full in the results section of this paper. The first author would like to thank the University of Tabuk for funding his PhD scholarship.
Publisher Copyright:
© 2021 IEEE.
PY - 2021/6/14
Y1 - 2021/6/14
N2 - Massive amounts of data are expected to be generated by the billions of objects that form the Internet of Things (IoT). A variety of automated services such as monitoring will largely depend on the use of different Machine Learning (ML) algorithms. Traditionally, ML models are processed by centralized cloud data centers, where IoT readings are offloaded to the cloud via multiple networking hops in the access, metro, and core layers. This approach will inevitably lead to excessive networking power consumption as well as Quality-of-Service (QoS) degradation such as increased latency. Instead, in this paper, we propose a distributed ML approach where the processing can take place in intermediary devices such as IoT nodes and fog servers in addition to the cloud. We abstract the ML models into Virtual Service Requests (VSRs) to represent multiple interconnected layers of a Deep Neural Network (DNN). Using Mixed Integer Linear Programming (MILP), we design an optimization model that allocates the layers of a DNN in a Cloud/Fog Network (CFN) in an energy-efficient way. We evaluate the impact of DNN input distribution on the performance of the CFN and compare the energy efficiency of this approach to the baseline where all layers of DNNs are processed in the centralized Cloud Data Center (CDC).
AB - Massive amounts of data are expected to be generated by the billions of objects that form the Internet of Things (IoT). A variety of automated services such as monitoring will largely depend on the use of different Machine Learning (ML) algorithms. Traditionally, ML models are processed by centralized cloud data centers, where IoT readings are offloaded to the cloud via multiple networking hops in the access, metro, and core layers. This approach will inevitably lead to excessive networking power consumption as well as Quality-of-Service (QoS) degradation such as increased latency. Instead, in this paper, we propose a distributed ML approach where the processing can take place in intermediary devices such as IoT nodes and fog servers in addition to the cloud. We abstract the ML models into Virtual Service Requests (VSRs) to represent multiple interconnected layers of a Deep Neural Network (DNN). Using Mixed Integer Linear Programming (MILP), we design an optimization model that allocates the layers of a DNN in a Cloud/Fog Network (CFN) in an energy-efficient way. We evaluate the impact of DNN input distribution on the performance of the CFN and compare the energy efficiency of this approach to the baseline where all layers of DNNs are processed in the centralized Cloud Data Center (CDC).
KW - cloud/fog networks
KW - Deep Neural Network (DNN)
KW - energy efficiency
KW - Internet-of-Things (IoT)
KW - Mixed Integer Linear Programming (MILP)
UR - http://www.scopus.com/inward/record.url?scp=85119847363&partnerID=8YFLogxK
UR - https://wfiot2021.iot.ieee.org/
U2 - 10.1109/WF-IoT51360.2021.9595351
DO - 10.1109/WF-IoT51360.2021.9595351
M3 - Conference contribution
AN - SCOPUS:85119847363
SN - 9781665444323
T3 - 7th IEEE World Forum on Internet of Things, WF-IoT 2021
SP - 935
EP - 941
BT - 7th IEEE World Forum on Internet of Things, WF-IoT 2021
A2 - Abdelgawad, Ahmed
A2 - Dutta, Soumya Kanti
A2 - Prasad, RangaRao Venkatesha
PB - IEEE, Institute of Electrical and Electronics Engineers
CY - United States
T2 - 7th IEEE World Forum on Internet of Things, WF-IoT 2021
Y2 - 14 June 2021 through 31 July 2021
ER -