
Simple item record

dc.contributor.author: Heidari, Arash
dc.contributor.author: Navimipour, Nima Jafari
dc.contributor.author: Jamali, Mohammad Ali Jabraeil
dc.contributor.author: Akbarpour, Shahin
dc.date.accessioned: 2023-10-19T15:11:41Z
dc.date.available: 2023-10-19T15:11:41Z
dc.date.issued: 2023
dc.identifier.issn: 2210-5379
dc.identifier.issn: 2210-5387
dc.identifier.uri: https://doi.org/10.1016/j.suscom.2023.100859
dc.identifier.uri: https://hdl.handle.net/20.500.12469/5165
dc.description.abstract: To meet user expectations for smart, user-friendly Internet of Things (IoT) applications, the amount of processing is expanding rapidly and task latency constraints are becoming increasingly stringent. At the same time, the limited battery capacity of IoT objects severely degrades the user experience. Energy Harvesting (EH) technology allows green energy to provide a continuous power supply for IoT objects; combined with the maturation of edge platforms and advances in parallel computing, it offers solid assurance that resource-constrained IoT objects can function properly. This work uses the Markov Decision Process (MDP) and Deep Learning (DL) to solve dynamic online/offline IoT-edge offloading scenarios. The proposed system can operate in both offline and online contexts and meets the user's quality-of-service requirements. We also investigate a blockchain scenario in which the edge and the cloud cooperate on task offloading to address the tradeoff between limited processing power and high latency while ensuring data integrity during the offloading process. We provide a double Q-learning solution to the MDP that optimizes the admissible offline offloading policies. During exploration, Transfer Learning (TL) is employed to speed up convergence by avoiding pointless exploration. Although the recently popularized Deep Q-Network (DQN) can address the space-complexity issue by replacing the huge Q-table of standard Q-learning with a Deep Neural Network (DNN), its learning speed may still be insufficient for IoT applications. In light of this, our work introduces a novel learning algorithm, deep Post-Decision State (PDS)-learning, which combines the PDS-learning approach with the classic DQN. The system components can be chosen and adjusted dynamically to reduce object energy usage and delay. On average, the proposed technique outperforms multiple benchmarks, reducing delay by 4.5%, job failure rate by 5.7%, cost by 4.6%, computational overhead by 6.1%, and energy consumption by 3.9%. [en_US]
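The abstract names a double Q-learning solution to the MDP offloading problem. As a rough, self-contained sketch of the double Q-learning update rule it refers to (a toy model, not the paper's implementation: the state/action spaces, reward function, and hyperparameters below are placeholder assumptions), in Python:

import numpy as np

# Minimal sketch of tabular double Q-learning as referenced in the abstract.
# n_states/n_actions, the reward, and all hyperparameters are illustrative
# placeholders, not the paper's actual IoT-edge offloading model.
n_states, n_actions = 16, 4        # hypothetical offloading states/decisions
alpha, gamma, eps = 0.1, 0.95, 0.1

rng = np.random.default_rng(0)
Q_a = np.zeros((n_states, n_actions))
Q_b = np.zeros((n_states, n_actions))

def step(state, action):
    """Placeholder environment: random next state, toy offloading cost."""
    next_state = int(rng.integers(n_states))
    reward = -1.0 if action == 0 else -0.5   # e.g., local vs. edge execution
    return next_state, reward

state = 0
for _ in range(10_000):
    # epsilon-greedy over the sum of both estimators
    if rng.random() < eps:
        action = int(rng.integers(n_actions))
    else:
        action = int(np.argmax(Q_a[state] + Q_b[state]))
    next_state, reward = step(state, action)

    # Randomly update one estimator, evaluating its own greedy action with
    # the other estimator; this counters the overestimation bias of plain
    # single-estimator Q-learning.
    if rng.random() < 0.5:
        best = int(np.argmax(Q_a[next_state]))
        Q_a[state, action] += alpha * (reward + gamma * Q_b[next_state, best] - Q_a[state, action])
    else:
        best = int(np.argmax(Q_b[next_state]))
        Q_b[state, action] += alpha * (reward + gamma * Q_a[next_state, best] - Q_b[state, action])
    state = next_state

Per the abstract, the paper's deep PDS-learning variant goes further, replacing tables of this kind with a DNN (as in DQN) and exploiting post-decision state structure to speed up learning.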
dc.language.iso: eng [en_US]
dc.publisher: Elsevier [en_US]
dc.relation.ispartof: Sustainable Computing-Informatics & Systems [en_US]
dc.rights: info:eu-repo/semantics/closedAccess [en_US]
dc.subject: Computation [en_US]
dc.subject: Green Offloading [en_US]
dc.subject: Deep Learning [en_US]
dc.subject: IoT [en_US]
dc.subject: Smart Edge [en_US]
dc.subject: Blockchain [en_US]
dc.title: A green, secure, and deep intelligent method for dynamic IoT-edge-cloud offloading scenarios [en_US]
dc.type: article [en_US]
dc.authorid: Heidari, Arash/0000-0003-4279-8551
dc.authorid: Jafari Navimipour, Nima/0000-0002-5514-5536
dc.identifier.volume: 38 [en_US]
dc.department: N/A [en_US]
dc.identifier.wos: WOS:000996894100001 [en_US]
dc.identifier.doi: 10.1016/j.suscom.2023.100859 [en_US]
dc.identifier.scopus: 2-s2.0-85148948008 [en_US]
dc.institutionauthor: N/A
dc.relation.publicationcategory: Article - International Peer-Reviewed Journal - Institutional Faculty Member [en_US]
dc.authorwosid: Heidari, Arash/AAK-9761-2021
dc.authorwosid: Jafari Navimipour, Nima/AAF-5662-2021
dc.khas: 20231019-WoS [en_US]

