Browsing by Author "Akbarpour, Shahin"
Now showing 1 - 4 of 4
Article | Citation - WoS: 34 | Citation - Scopus: 38
Deep Q-Learning Technique for Offloading Offline/Online Computation in Blockchain-Enabled Green IoT-Edge Scenarios (MDPI, 2022)
Heidari, Arash; Navimipour, Nima Jafari; Jamali, Mohammad Ali Jabraeil; Akbarpour, Shahin

The number of Internet of Things (IoT)-related innovations has recently increased exponentially, with numerous IoT objects being invented one after another. Computation offloading determines where, and how many, resources can be transferred to carry out tasks or applications: in the IoT environment, the strategy is to transfer resource-intensive computational tasks to an external device in the network, such as a cloud, fog, or edge platform. Offloading is therefore one of the key technological enablers of the IoT, as it helps overcome the resource limitations of individual objects. A major shortcoming of previous research is the lack of an integrated offloading framework that can operate in an offline/online environment while preserving security. This paper offers a new deep Q-learning approach to the blockchain-enabled IoT-edge offloading problem, formulated as a Markov Decision Process (MDP). Secure online/offline offloading remains a substantial gap, with little published work in this area so far. The proposed system can be used online and offline while maintaining privacy and security; in online mode, it employs the Post-Decision State (PDS) mechanism. Additionally, edge/cloud platforms are integrated into blockchain-enabled IoT networks to extend the computational potential of IoT devices, enabling safe and secure cloud/edge/IoT offloading through blockchain. In this system, the master controller, offloading decision, block size, and processing nodes may be dynamically chosen and changed to reduce device energy consumption and cost.
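The offloading decision that the abstract above frames as an MDP can be illustrated with a tabular Q-learning toy. The load levels, the action set {local, edge, cloud}, and the cost model below are hypothetical placeholders, not the paper's actual system model.

```python
import numpy as np

# Toy sketch of a Q-learning offloading decision: states are illustrative
# device-load levels, actions are {0: local, 1: edge, 2: cloud}.
# All numbers are hypothetical, not taken from the paper.
rng = np.random.default_rng(0)
N_STATES, N_ACTIONS = 4, 3
Q = np.zeros((N_STATES, N_ACTIONS))
alpha, gamma, eps = 0.1, 0.9, 0.1   # learning rate, discount, exploration

def step(state, action):
    """Hypothetical environment: local execution grows costly under load."""
    cost = 0.5 * state * (action == 0) + 0.2 * action  # offloading has a flat price
    next_state = rng.integers(N_STATES)                # random next load level
    return next_state, -cost                           # reward = negative cost

state = 0
for _ in range(5000):
    # epsilon-greedy action selection
    action = rng.integers(N_ACTIONS) if rng.random() < eps else int(np.argmax(Q[state]))
    next_state, reward = step(state, action)
    # standard Q-learning temporal-difference update
    Q[state, action] += alpha * (reward + gamma * Q[next_state].max() - Q[state, action])
    state = next_state

policy = Q.argmax(axis=1)  # learned offloading choice per load level
```

Under this toy cost model the learned policy keeps computation local when the device is idle and offloads when load is high, which is the qualitative behavior the abstract describes.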
TensorFlow and Cooja simulation results demonstrated that the method can dramatically boost system efficiency relative to existing schemes. On average, the method beats four benchmarks in terms of cost by 6.6%, computational overhead by 7.1%, energy use by 7.9%, task failure rate by 6.2%, and latency by 5.5%.

Article | Citation - WoS: 50 | Citation - Scopus: 51
A Green, Secure, and Deep Intelligent Method for Dynamic IoT-Edge Offloading Scenarios (Elsevier, 2023)
Heidari, Arash; Navimipour, Nima Jafari; Jamali, Mohammad Ali Jabraeil; Akbarpour, Shahin

To fulfill people's expectations for smart and user-friendly Internet of Things (IoT) applications, the amount of processing is expanding rapidly, and task latency constraints are becoming extremely strict. At the same time, the limited battery capacity of IoT objects severely affects the user experience. Energy Harvesting (EH) technology lets green energy offer a continuous supply for IoT objects; combined with the maturation of edge platforms and the development of parallel computing, it provides a solid assurance for the proper functioning of resource-constrained IoT objects. This work uses the Markov Decision Process (MDP) and Deep Learning (DL) to solve dynamic online/offline IoT-edge offloading scenarios. The suggested system may be used in both offline and online contexts and meets the user's quality-of-service expectations. We also investigate a blockchain scenario in which edge and cloud cooperate on task offloading to address the tradeoff between limited processing power and high latency while ensuring data integrity during the offloading process. We provide a double Q-learning solution to the MDP that maximizes the acceptable offline offloading methods. During exploration, Transfer Learning (TL) is employed to speed convergence by reducing pointless exploration.
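The double Q-learning solution mentioned above can be sketched in its tabular form: two tables are kept, and each update selects the maximizing action with one table but evaluates it with the other, which reduces the overestimation bias of the single-table max. The two-state environment and its noisy rewards below are invented for illustration.

```python
import numpy as np

# Minimal tabular double Q-learning sketch; the 2-state, 2-action MDP
# and its reward model are hypothetical, not the paper's system.
rng = np.random.default_rng(1)
QA = np.zeros((2, 2))
QB = np.zeros((2, 2))
alpha, gamma = 0.1, 0.9

def env(state, action):
    """Hypothetical dynamics: action 1 (offload) earns a higher mean reward."""
    reward = rng.normal(loc=action, scale=0.5)  # noisy reward, mean = action
    return (state + 1) % 2, reward

state = 0
for _ in range(4000):
    # behave greedily w.r.t. the sum of both tables, with some exploration
    action = rng.integers(2) if rng.random() < 0.1 else int(np.argmax(QA[state] + QB[state]))
    next_state, reward = env(state, action)
    if rng.random() < 0.5:
        # update A: select the argmax with A, evaluate it with B
        a_star = int(np.argmax(QA[next_state]))
        QA[state, action] += alpha * (reward + gamma * QB[next_state, a_star] - QA[state, action])
    else:
        # symmetric update for B
        b_star = int(np.argmax(QB[next_state]))
        QB[state, action] += alpha * (reward + gamma * QA[next_state, b_star] - QB[state, action])
    state = next_state
```

Decoupling selection from evaluation is what distinguishes this from the single-table Q-learning update in the first abstract's setting.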
Although the recently promoted Deep Q-Network (DQN) can address this space-complexity issue by replacing the huge Q-table of standard Q-learning with a Deep Neural Network (DNN), its learning speed may still be insufficient for IoT applications. In light of this, the work introduces a novel learning algorithm, deep Post-Decision State (PDS)-learning, which combines the PDS-learning approach with the classic DQN. The system components can be dynamically chosen and modified to decrease object energy usage and delay. On average, the proposed technique outperforms multiple benchmarks in terms of delay by 4.5%, job failure rate by 5.7%, cost by 4.6%, computational overhead by 6.1%, and energy consumption by 3.9%.

Article | Citation - WoS: 37 | Citation - Scopus: 44
A hybrid approach for latency and battery lifetime optimization in IoT devices through offloading and CNN learning (Elsevier, 2023)
Navimipour, Nima Jafari; Jamali, Mohammad Ali Jabraeil; Akbarpour, Shahin

Offloading helps overcome the resource constraints of individual elements, making it one of the primary technical enablers of the Internet of Things (IoT). IoT devices with low battery capacity can use the edge to offload some operations, which can significantly reduce latency and lengthen battery lifetime. Because IoT devices have restricted battery capacity, deep learning (DL) techniques are energy-intensive to run on them. Many prior studies assumed energy-harvester modules, which numerous IoT devices lack in real-world circumstances. Using the Markov Decision Process (MDP), we describe the offloading problem in this study. Next, to facilitate partial offloading in IoT devices, we develop a Deep Reinforcement Learning (DRL) method that can efficiently learn the policy by adapting to network dynamics.
A Convolutional Neural Network (CNN) is then introduced and implemented on Mobile Edge Computing (MEC) devices to expedite learning. These two techniques operate together to offer the proper offloading approach throughout the system's operation. Moreover, transfer learning was employed to initialize the Q-table values, which increased the system's effectiveness. The simulation in this article, which employed Cooja and TensorFlow, revealed that the strategy outperformed five benchmarks on average in terms of latency by 4.1%, IoT device efficiency by 2.9%, energy utilization by 3.6%, and job failure rate by 2.6%.

Article
Securing and Optimizing IoT Offloading With Blockchain and Deep Reinforcement Learning in Multi-User Environments (Springer, 2025)
Heidari, Arash; Navimipour, Nima Jafari; Jamali, Mohammad Ali Jabraeil; Akbarpour, Shahin

The growth of Internet of Things (IoT)-related innovations has resulted in the invention of numerous IoT objects, yet the resource limitations of individual objects remain a challenge that offloading can overcome. A key limitation of previous research is the absence of an integrated offloading framework that can operate securely in offline/online environments. This article jointly investigates the security and computation of online/offline offloading in a multi-user IoT-fog-cloud system with blockchain. First, we provide a reliable access control system utilizing blockchain to enhance offloading security; this technique can guard cloud resources against unauthorized offloading practices. Next, we define a computation-offloading problem that jointly optimizes the offloading decisions, the allocation of computing resources and radio bandwidth, and smart contract usage to manage the computation of authorized mobile devices. This optimization focuses on the long-term system cost of latency, energy use, and smart contract charges across all mobile devices.
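The long-term system cost described above (latency, energy use, and smart contract charges across devices) can be sketched as a weighted, discounted sum. The weights and per-step numbers below are made-up placeholders, not values from the paper.

```python
# Illustrative scalarized cost of one offloading decision; the weights
# (w_lat, w_en, w_fee) and all inputs are hypothetical placeholders.
def system_cost(latency_s, energy_j, contract_fee, w_lat=0.4, w_en=0.4, w_fee=0.2):
    """Weighted per-decision cost for one mobile device."""
    return w_lat * latency_s + w_en * energy_j + w_fee * contract_fee

def discounted_cost(costs, gamma=0.95):
    """Long-term (discounted) cost over a trajectory of decisions."""
    return sum((gamma ** t) * c for t, c in enumerate(costs))

# Two hypothetical decisions: offload (low latency, low energy, pays a fee)
# vs. local execution (higher latency and energy, no fee would also fit here).
per_step = [system_cost(0.12, 0.8, 0.05), system_cost(0.30, 0.2, 0.05)]
total = discounted_cost(per_step)
```

A DRL agent minimizing this scalar is one standard way to trade the three objectives off against each other, which is the shape of the optimization the abstract describes.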
We create a new Deep Reinforcement Learning (DRL) technique employing a double-dueling Q-network to address the offloading problem, providing a Markov Decision Process (MDP)-based DRL solution to the blockchain-enabled IoT offloading problem. The proposed system works in both online and offline settings; when operating online, we use the Post-Decision State (PDS) method. The contributions of this work include a new integrated offloading framework that operates in offline/online environments while preserving security, and a novel approach that incorporates fog platforms into blockchain-enabled IoT networks for improved system efficiency. Our method outperforms four benchmarks on average in cost by 5.1%, computational overhead by 4.1%, energy use by 3.3%, task failure rate by 3.6%, and latency by 3.9%.
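The dueling head inside a double-dueling Q-network can be sketched with a few matrix operations: Q(s, a) = V(s) + A(s, a) - mean_a A(s, a), with the argmax taken by the online network and evaluated by a target network. The layer sizes and random weights below are illustrative only, not the paper's architecture.

```python
import numpy as np

# Sketch of a dueling Q-network head with a double-DQN target selection.
# Sizes and weights are hypothetical; a real system would train these.
rng = np.random.default_rng(2)
STATE_DIM, HIDDEN, N_ACTIONS = 6, 16, 3

W1 = rng.normal(size=(STATE_DIM, HIDDEN)) * 0.1
Wv = rng.normal(size=(HIDDEN, 1)) * 0.1          # value stream V(s)
Wa = rng.normal(size=(HIDDEN, N_ACTIONS)) * 0.1  # advantage stream A(s, a)

def dueling_q(state):
    h = np.maximum(0.0, state @ W1)                 # shared ReLU trunk
    v = h @ Wv                                      # scalar state value
    a = h @ Wa                                      # per-action advantages
    return v + a - a.mean(axis=-1, keepdims=True)   # mean-baseline aggregation

# Double-DQN target: select the action with the online net, evaluate it
# with a target net (here the same weights, purely for brevity).
s_next = rng.normal(size=(1, STATE_DIM))
q_online = dueling_q(s_next)
a_star = int(np.argmax(q_online, axis=-1)[0])
target_q = dueling_q(s_next)[0, a_star]
```

The mean-baseline aggregation makes the per-action mean of Q(s, a) equal V(s), so the value and advantage streams stay identifiable during training.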