Browsing by Author "Heidari, Arash"
Now showing 1 - 20 of 27
Review | Citation Count: 24
Adventures in data analysis: a systematic review of Deep Learning techniques for pattern recognition in cyber-physical-social systems (Springer, 2023)
Amiri, Zahra; Heidari, Arash; Navimipour, Nima Jafari; Unal, Mehmet; Mousavi, Ali

Machine Learning (ML) and Deep Learning (DL) have achieved high success in many textual, auditory, medical imaging, and visual recognition patterns. Given the importance and high accuracy of ML/DL in recognizing patterns, many researchers have proposed solutions for improving pattern recognition performance using ML/DL methods. Because machines require intelligent pattern recognition for image processing, and because big data plays an outstanding role in generating both state-of-the-art modern and classical approaches to pattern recognition, we conducted a thorough Systematic Literature Review (SLR) of DL approaches for big-data pattern recognition. We discuss different research issues and possible paths in which the abovementioned techniques might help materialize the pattern recognition notion. We classified 60 of the most cutting-edge articles addressing pattern recognition issues into ten categories based on the DL/ML method used: Convolutional Neural Network (CNN), Recurrent Neural Network (RNN), Generative Adversarial Network (GAN), Autoencoder (AE), Ensemble Learning (EL), Reinforcement Learning (RL), Random Forest (RF), Multilayer Perceptron (MLP), Long Short-Term Memory (LSTM), and hybrid methods. The SLR method was used to investigate each category in terms of influential properties such as the main idea, advantages, disadvantages, strategies, simulation environment, datasets, and security issues. The results indicate that most of the articles were published in 2021. Moreover, important parameters such as accuracy, adaptability, fault tolerance, security, scalability, and flexibility were involved in these investigations.

Article | Citation Count: 30
The applications of machine learning techniques in medical data processing based on distributed computing and the Internet of Things (Elsevier Ireland Ltd, 2023)
Aminizadeh, Sarina; Heidari, Arash; Toumaj, Shiva; Darbandi, Mehdi; Navimipour, Nima Jafari; Rezaei, Mahsa; Talebi, Samira

Medical data processing has grown into a prominent topic in recent decades, with the primary goal of maintaining patient data via new information technologies, including the Internet of Things (IoT) and sensor technologies, which generate patient indexes in hospital data networks. Innovations like distributed computing, Machine Learning (ML), blockchain, chatbots, wearables, and pattern recognition can adequately enable the collection and processing of medical data for decision-making in healthcare. In particular, distributed computing assists experts in the disease-diagnosis process by digesting huge volumes of data swiftly and producing personalized smart suggestions. Meanwhile, the world is confronting the COVID-19 outbreak, so an early diagnosis technique is crucial to lowering the fatality rate. ML systems can aid radiologists in examining the enormous volume of medical images. Nevertheless, they demand a huge quantity of training data that must be unified for processing.
Hence, developing Deep Learning (DL) confronts multiple issues, such as conventional data collection, quality assurance, knowledge exchange, privacy preservation, administrative laws, and ethical considerations. In this research, we present an inclusive analysis of the most recent studies in distributed computing platform applications across five categorized platforms: cloud computing, edge, fog, IoT, and hybrid platforms. We evaluated 27 articles in terms of the proposed framework, deployed methods, and applications, noting their advantages, drawbacks, and applied datasets, and screening for security mechanisms and the presence of the Transfer Learning (TL) method. The results show that most of the recent research (about 43%) used the IoT platform as the environment for the proposed architecture, and most of the studies (about 46%) were done in 2021. In addition, the most popular DL algorithm was the Convolutional Neural Network (CNN), at 19.4%. However technology changes, delivering appropriate therapy to patients remains the primary aim of healthcare-associated departments. Therefore, further studies are recommended to develop more functional architectures based on DL and distributed environments and to better evaluate the present healthcare data analysis models.

Review | Citation Count: 80
Applications of ML/DL in the management of smart cities and societies based on new trends in information technologies: A systematic literature review (Elsevier, 2022)
Heidari, Arash; Navimipour, Nima Jafari; Unal, Mehmet

The goal of managing smart cities and societies is to maximize the efficient use of finite resources while enhancing the quality of life. To establish a sustainable urban existence, smart cities use new technologies such as the Internet of Things (IoT), Internet of Drones (IoD), and Internet of Vehicles (IoV). The data created by these technologies are submitted to analytics to obtain new information that increases the efficiency and effectiveness of smart societies and cities. Smart traffic management, smart power and energy management, city surveillance, smart buildings, and patient healthcare monitoring are the most common applications in smart cities. Artificial Intelligence (AI), Machine Learning (ML), and Deep Learning (DL) approaches all hold a lot of promise for managing automated activities in smart cities. Therefore, we discuss different research issues and possible research paths in which the aforementioned techniques might help materialize the smart city notion. The goal of this research is to offer a better understanding of (1) the fundamentals of smart city and society management, (2) the most recent developments and breakthroughs in this field, (3) the benefits and drawbacks of existing methods, and (4) areas that require further investigation and consideration. IoT, cloud computing, edge computing, fog computing, IoD, IoV, and hybrid models are the seven key emerging developments in information technology used in this paper to categorize the state-of-the-art techniques. The results indicate that the Convolutional Neural Network (CNN) and Long Short-Term Memory (LSTM) are the most commonly used ML methods in the publications. Most papers address smart cities' power and energy management, and most concentrate on improving only one parameter, with accuracy receiving the most attention.
In addition, Python is the most frequently used language, appearing in 69.8% of the papers.

Article | Citation Count: 0
The applications of nature-inspired algorithms in Internet of Things-based healthcare service: A systematic literature review (Wiley, 2024)
Amiri, Zahra; Heidari, Arash; Zavvar, Mohammad; Navimipour, Nima Jafari; Esmaeilpour, Mansour

This study revolves around the intersection of nature-inspired algorithms and the IoT within the healthcare domain, addressing the emerging trends and potential synergies between nature-inspired computational approaches and IoT technologies for advancing healthcare services. Our research aims to fill gaps in addressing algorithmic integration challenges, real-world implementation issues, and the efficacy of nature-inspired algorithms in IoT-based healthcare. We provide insights into the practical aspects and limitations of such applications through a systematic literature review. Specifically, we address the need for a comprehensive understanding of the applications of nature-inspired algorithms in IoT-based healthcare, identifying gaps such as the lack of standardized evaluation metrics and of studies on integration challenges and security considerations. By bridging these gaps, our paper offers insights and directions for future research in this domain, exploring the diverse landscape of nature-inspired algorithms in healthcare. Our chosen methodology is a Systematic Literature Review (SLR) to investigate related papers rigorously. Categorizing these algorithms into groups such as genetic algorithms, particle swarm optimization, cuckoo algorithms, ant colony optimization, other approaches, and hybrid methods, we employ meticulous classification based on critical criteria. MATLAB emerges as the predominant programming language, constituting 37.9% of cases, showcasing a prevalent choice among researchers. Our evaluation emphasizes adaptability as the paramount parameter, accounting for 18.4% of considerations. By shedding light on attributes, limitations, and potential directions for future research and development, this review aims to contribute to a comprehensive understanding of nature-inspired algorithms in the dynamic landscape of IoT-based healthcare services. Highlights:
- Provides a complete overview of the current issues associated with nature-inspired algorithms in IoT-based healthcare services.
- Provides a thorough overview of present methodologies for IoT-based healthcare services in research studies.
- Evaluates each nature-inspired algorithm from many perspectives, such as advantages, restrictions, datasets, security involvement, and simulation settings.
- Outlines the critical aspects that motivate the cited approaches to enhance future research.
- Illustrates descriptions of certain IoT-based healthcare services used in various studies.
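To make the entry above concrete, here is a minimal particle swarm optimization (PSO) loop, one of the nature-inspired algorithm families the review categorizes. Everything in it (the sphere objective, the inertia and attraction constants, the healthcare framing in the final comment) is an illustrative assumption, not a method taken from any surveyed paper.

```python
import numpy as np

def pso(objective, dim=2, n_particles=20, iters=100, bounds=(-5.0, 5.0)):
    """Minimal particle swarm optimization: minimizes `objective`."""
    lo, hi = bounds
    rng = np.random.default_rng(0)
    pos = rng.uniform(lo, hi, (n_particles, dim))      # particle positions
    vel = np.zeros((n_particles, dim))                 # particle velocities
    pbest = pos.copy()                                 # personal best positions
    pbest_val = np.array([objective(p) for p in pos])
    gbest = pbest[pbest_val.argmin()].copy()           # global best position
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        # Inertia term plus attraction toward personal and global bests.
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        vals = np.array([objective(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

# Toy use: minimize the sphere function as a stand-in for a hypothetical
# healthcare scheduling cost.
best, best_val = pso(lambda x: float((x ** 2).sum()))
```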
Review
Botnets Unveiled: a Comprehensive Survey on Evolving Threats and Defense Strategies (Wiley, 2024)
Asadi, Mehdi; Jamali, Mohammad Ali Jabraeil; Heidari, Arash; Navimipour, Nima Jafari

Botnets have emerged as a significant internet security threat, comprising networks of compromised computers under the control of command and control (C&C) servers. These malevolent entities enable a range of malicious activities, from denial of service (DoS) attacks to spam distribution and phishing. Each bot operates as malicious binary code on vulnerable hosts, granting remote control to attackers, who can harness the combined processing power of these compromised hosts for synchronized, highly destructive attacks while maintaining anonymity. This survey explores botnets and their evolution, covering aspects such as their life cycles, C&C models, botnet communication protocols, detection methods, the unique environments botnets operate in, and strategies to evade detection tools. It analyzes research challenges and future directions related to botnets, with a particular focus on evasion and detection techniques, including methods like encryption, the use of covert channels for detection, and the reinforcement of botnets. By reviewing existing research, the survey provides a comprehensive overview of botnets, from their origins to their evolving tactics, and evaluates how botnets evade detection and how to counteract their activities. Its primary goal is to inform the research community about the changing landscape of botnets and the challenges in combating these threats, offering guidance on addressing security concerns effectively by highlighting evasion and detection methods. The survey concludes by presenting future research directions, including the use of encryption and covert channels for detection and strategies by which botnets are strengthened, so as to guide researchers in developing more robust security measures to combat botnets effectively. In short: this survey dives into botnets, covering life cycles, communication, and evasion tactics, and highlights challenges and future strategies for combating cyber threats.

Article | Citation Count: 0
Comprehensive survey of artificial intelligence techniques and strategies for climate change mitigation (Pergamon-Elsevier Science Ltd, 2024)
Amiri, Zahra; Heidari, Arash; Navimipour, Nima Jafari

With climate change progressing rapidly around the world, Machine Learning (ML) approaches have been widely studied in this area. ML is a robust tool for acquiring insights from data. In this paper, we elaborate on climate change mitigation issues and the ML approaches leveraged to solve them and to aid the improvement and operation of sustainable energy systems. ML has been employed in many applications and scopes of climate subjects, such as ecosystems, agriculture, buildings and cities, industry, and transportation. A Systematic Literature Review (SLR) is applied to explore and evaluate findings from related research. We propose a novel taxonomy of Deep Learning (DL) method applications for climate change mitigation, a comprehensive analysis that has not been conducted before. We evaluated these methods based on critical parameters such as accuracy, scalability, and interpretability and quantitatively compared their results. This analysis provides new insights into the effectiveness and reliability of DL methods in addressing climate change challenges. We classified climate change ML methods into six key customizable groups: ecosystems, industry, buildings and cities, transportation, agriculture, and hybrid applications. Afterward, state-of-the-art research on ML mechanisms and applications for climate change mitigation is highlighted.
In addition, many problems and issues related to ML implementation for climate change have been mapped, which should stimulate more researchers to manage the future disastrous effects of climate change. Based on the findings, Python was the most common simulation environment, used 38.5% of the time, and most methods were analyzed and evaluated in terms of accuracy, latency, adaptability, and scalability. Classification is the most frequent ML task within climate change mitigation, accounting for 40% of the total, and Convolutional Neural Networks (CNNs) are the most widely utilized approach across a variety of applications.

Article | Citation Count: 1
The deep learning applications in IoT-based bio- and medical informatics: a systematic literature review (Springer London Ltd, 2024)
Amiri, Zahra; Heidari, Arash; Navimipour, Nima Jafari; Esmaeilpour, Mansour; Yazdani, Yalda

Nowadays, machine learning (ML) has attained a high level of achievement in many contexts. Considering the significance of ML in medical and bioinformatics owing to its accuracy, many investigators have discussed solutions for addressing medical and bioinformatics challenges using deep learning (DL) techniques. The importance of DL in Internet of Things (IoT)-based bio- and medical informatics lies in its ability to analyze and interpret large amounts of complex and diverse data in real time, providing insights that can improve healthcare outcomes and increase efficiency in the healthcare industry. Applications of DL in IoT-based bio- and medical informatics include diagnosis, treatment recommendation, clinical decision support, image analysis, wearable monitoring, and drug discovery. This review aims to comprehensively evaluate and synthesize the existing literature on applying deep learning at the intersection of the IoT with bio- and medical informatics. We categorized the most cutting-edge DL solutions for medical and bioinformatics issues into five categories based on the DL technique utilized: convolutional neural network, recurrent neural network, generative adversarial network, multilayer perceptron, and hybrid methods. A systematic literature review was applied to study each one in terms of effective properties, like the main idea, benefits, drawbacks, methods, simulation environment, and datasets. Cutting-edge research on DL approaches and applications for bioinformatics concerns is then emphasized. In addition, several challenges surrounding DL implementation for medical and bioinformatics have been addressed, which are predicted to motivate progressively more medical and bioinformatics research. According to the findings, most articles are evaluated using features like accuracy, sensitivity, specificity, F-score, latency, adaptability, and scalability.

Article | Citation Count: 27
Deep Q-Learning Technique for Offloading Offline/Online Computation in Blockchain-Enabled Green IoT-Edge Scenarios (MDPI, 2022)
Heidari, Arash; Jamali, Mohammad Ali Jabraeil; Navimipour, Nima Jafari; Akbarpour, Shahin

The number of Internet of Things (IoT)-related innovations has recently increased exponentially, with numerous IoT objects being invented one after the other. Computation offloading concerns where, and how many, resources can be transferred to carry out tasks or applications: resource-intensive computational tasks are transferred to a different external device in the network, such as a cloud, fog, or edge platform. Offloading is one of the key technological enablers of the IoT, as it helps overcome the resource limitations of individual objects. A major shortcoming of previous research is the lack of an integrated offloading framework that can operate in an offline/online environment while preserving security. This paper offers a new deep Q-learning approach to the blockchain-enabled IoT-edge offloading problem, modeled as a Markov Decision Process (MDP). There is a substantial gap in secure online/offline offloading systems, and no work had been published in this arena thus far. The proposed system can be used online and offline while maintaining privacy and security; it employs the Post-Decision State (PDS) mechanism in online mode. Additionally, we integrate edge/cloud platforms into IoT blockchain-enabled networks to encourage the computational potential of IoT devices, enabling safe and secure cloud/edge/IoT offloading by employing blockchain. In this system, the master controller, offloading decision, block size, and processing nodes may be dynamically chosen and changed to reduce device energy consumption and cost. Simulation results from TensorFlow and Cooja demonstrated that the method can dramatically boost system efficiency relative to existing schemes. The method beats four benchmarks in terms of cost by 6.6%, computational overhead by 7.1%, energy use by 7.9%, task failure rate by 6.2%, and latency by 5.5% on average.
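As a sketch of the Q-learning machinery the entry above builds on, the snippet below runs tabular Q-learning on a toy binary offloading MDP (compute locally vs. offload to edge). The state space, dynamics, and reward model are invented for illustration; the paper itself replaces the Q-table with a deep network and adds a Post-Decision State mechanism, blockchain, and a much richer action space.

```python
import numpy as np

# Tabular Q-learning for a toy offloading MDP: state = discretized task
# queue length, action = 0 (compute locally) or 1 (offload to edge).
# The cost model below is hypothetical: offloading drains the queue
# faster but pays a fixed transmission cost.
rng = np.random.default_rng(1)
n_states, n_actions = 10, 2
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.95, 0.1     # learning rate, discount, exploration

def step(state, action):
    cost = 1.0 * state if action == 0 else 0.4 * state + 0.5
    # Local processing removes 1 task, offloading removes 2; 0-2 tasks arrive.
    next_state = max(0, min(n_states - 1, state - (1 + action) + rng.integers(0, 3)))
    return next_state, -cost            # reward = negative cost

state = 5
for t in range(5000):
    # Epsilon-greedy action selection.
    action = rng.integers(n_actions) if rng.random() < eps else int(Q[state].argmax())
    nxt, reward = step(state, action)
    # Standard temporal-difference update.
    Q[state, action] += alpha * (reward + gamma * Q[nxt].max() - Q[state, action])
    state = nxt
```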
Review | Citation Count: 5
Deepfake detection using deep learning methods: A systematic and comprehensive review (Wiley Periodicals, Inc., 2024)
Heidari, Arash; Navimipour, Nima Jafari; Dağ, Hasan; Unal, Mehmet

Deep Learning (DL) has been effectively utilized in various complicated challenges in healthcare, industry, and academia for purposes including thyroid diagnosis, lung nodule recognition, computer vision, big data analytics, and human-level control. Nevertheless, developments in digital technology have been used to produce software that poses a threat to democracy, national security, and confidentiality. Deepfake is one of those DL-powered applications that has lately surfaced. Deepfake systems can create fake images, movies, and sounds, primarily by replacing scenes or images, that humans cannot tell apart from real ones. Various technologies have put the capacity to alter synthetic speech, images, or video at our fingertips. Furthermore, video and image frauds are now so convincing that it is hard to distinguish between false and authentic content with the naked eye. This can result in various issues, ranging from deceiving public opinion to introducing doctored evidence in court. For these reasons, it is critical to have technologies that can assist us in discerning reality. This study gives a complete assessment of the literature on deepfake detection strategies using DL-based algorithms. We categorize deepfake detection methods based on their applications: video detection, image detection, audio detection, and hybrid multimedia detection.
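Since the review above finds CNNs to be the dominant detection method, a minimal frame-level deepfake classifier might look like the Keras sketch below. The input size, architecture, and training setup are placeholders for illustration, not any of the surveyed models.

```python
import tensorflow as tf

# Minimal sketch of a frame-level deepfake classifier, assuming
# 128x128 RGB face crops labeled real (0) / fake (1).
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(128, 128, 3)),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # P(frame is fake)
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(frames, labels, ...) once a labeled frame dataset is loaded.
```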
The objective of this paper is to give the reader a better knowledge of (1) how deepfakes are generated and identified, (2) the latest developments and breakthroughs in this realm, (3) weaknesses of existing security methods, and (4) areas requiring more investigation and consideration. The results suggest that the Convolutional Neural Network (CNN) methodology is the most often employed DL method in publications. The majority of the articles address video deepfake detection, and most focus on enhancing only one parameter, with accuracy receiving the most attention. This article is categorized under:
- Technologies > Machine Learning
- Algorithmic Development > Multimedia
- Application Areas > Science and Technology

Article | Citation Count: 17
An Efficient Design of Multiplier for Using in Nano-Scale IoT Systems Using Atomic Silicon (IEEE-Inst Electrical Electronics Engineers Inc, 2023)
Ahmadpour, Seyed-Sajad; Heidari, Arash; Navimipour, Nima Jafari; Asadi, Mohammad-Ali; Yalcin, Senay

Because of recent technological developments, such as Internet of Things (IoT) devices, power consumption has become a major issue. Atomic silicon quantum dot (ASiQD) technology is one of the most impressive technologies for developing low-power processing circuits, which are critical for efficient transmission and power management in micro IoT devices. Multipliers, in turn, are essential computational circuits used in a wide range of digital circuits, so a multiplier design with a low occupied area and low energy consumption is a critical goal in designing any micro IoT circuit. This article introduces a low-power atomic-silicon-based multiplier circuit for effective power management in the micro IoT. Based on this design, a 4×4-bit multiplier array with low power consumption and small size is presented. The suggested circuit is designed and validated using the SiQAD simulation tool. The proposed ASiQD-based circuit significantly reduces energy consumption and occupied area in the micro IoT compared to the most recent designs.

Publication | Citation Count: 0
Everything you wanted to know about ChatGPT: Components, capabilities, applications, and opportunities (John Wiley & Sons Ltd, 2024)
Heidari, Arash; Navimipour, Nima Jafari; Zeadally, Sherali; Chamola, Vinay

Conversational Artificial Intelligence (AI) and Natural Language Processing have advanced significantly with the creation of the Generative Pre-trained Transformer (ChatGPT) by OpenAI. ChatGPT uses deep learning techniques like the transformer architecture and self-attention mechanisms to replicate human speech and provide coherent, contextually appropriate replies. The model depends mainly on the patterns discovered in its training data, which might result in incorrect or illogical conclusions. In the context of open-domain chats, we investigate the components, capabilities, constraints, and potential applications of ChatGPT, along with future opportunities. We begin by describing the components of ChatGPT, followed by a definition of chatbots. We present a new taxonomy to classify them: rule-based chatbots, retrieval-based chatbots, generative chatbots, and hybrid chatbots. Next, we describe the capabilities and constraints of ChatGPT. Finally, we present potential applications of ChatGPT and future research opportunities. The results showed that ChatGPT, a transformer-based chatbot model, utilizes encoders to produce coherent responses.
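The self-attention mechanism mentioned in the ChatGPT entry above can be stated in a few lines. Below is a single-head scaled dot-product attention function in NumPy; real GPT-style models add causal masking, multiple heads, and learned projection matrices, all omitted here for brevity.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Single-head scaled dot-product attention, the core operation of
    the transformer architecture (no masking, no learned projections)."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                   # query-key similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)    # softmax over keys
    return weights @ V                                # weighted mix of values

# Toy check: 4 tokens with 8-dimensional embeddings; Q = K = V gives
# self-attention over the sequence.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
out = scaled_dot_product_attention(x, x, x)
```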
Article | Citation Count: 15
A Fuzzy-Based Method for Objects Selection in Blockchain-Enabled Edge-IoT Platforms Using a Hybrid Multi-Criteria Decision-Making Model (MDPI, 2022)
Gardas, Bhaskar B.; Heidari, Arash; Navimipour, Nima Jafari; Unal, Mehmet

The broad availability of connected and intelligent devices has increased the demand for Internet of Things (IoT) applications that require more intense data storage and processing. However, cloud-based IoT systems are typically located far from end-users and face several issues, including high cloud server load, slow response times, and a lack of global mobility. Some of these flaws can be addressed with edge computing. In addition, node selection helps avoid common IoT difficulties, including network lifespan, resource allocation, and trust in the acquired data, by selecting the correct nodes at a suitable time. The interconnection of edge and blockchain technologies in the IoT also gives a fresh perspective on access control framework design. This article provides a novel node selection approach for blockchain-enabled edge IoT that delivers quick and dependable node selection. Fuzzy logic, an approximation logic, is used to manage numerical and linguistic data simultaneously. In addition, the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS), a powerful tool for examining Multi-Criteria Decision-Making (MCDM) problems, is used. The suggested fuzzy-based technique employs three input criteria to select the correct IoT node for a given mission in IoT-edge situations. The outcomes of the experiments indicate that the proposed framework enhances the parameters under consideration.
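For readers unfamiliar with TOPSIS, the sketch below implements the plain (non-fuzzy) version of the ranking: normalize the decision matrix, weight it, and score each candidate by its closeness to the ideal solution. The three criteria, weights, and node values are made-up illustrations; the paper combines this step with fuzzy logic to handle linguistic inputs.

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Plain TOPSIS ranking. `matrix` is candidates x criteria;
    `benefit[j]` is True if criterion j is better when larger."""
    m = matrix / np.linalg.norm(matrix, axis=0)   # vector-normalize each column
    v = m * weights                               # weighted normalized matrix
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_pos = np.linalg.norm(v - ideal, axis=1)     # distance to ideal solution
    d_neg = np.linalg.norm(v - anti, axis=1)      # distance to anti-ideal
    return d_neg / (d_pos + d_neg)                # closeness: higher is better

# Hypothetical IoT nodes scored on energy reserve, latency, and trust.
nodes = np.array([[0.8, 20.0, 0.90],
                  [0.5, 10.0, 0.70],
                  [0.9, 30.0, 0.95]])
scores = topsis(nodes, weights=np.array([0.3, 0.3, 0.4]),
                benefit=np.array([True, False, True]))  # latency: lower is better
best_node = int(scores.argmax())
```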
Article | Citation Count: 28
A green, secure, and deep intelligent method for dynamic IoT-edge-cloud offloading scenarios (Elsevier, 2023)
Heidari, Arash; Navimipour, Nima Jafari; Jamali, Mohammad Ali Jabraeil; Akbarpour, Shahin

To fulfill people's expectations for smart and user-friendly Internet of Things (IoT) applications, the quantity of processing is expanding fast, and task latency constraints are becoming extremely strict. On the other hand, the limited battery capacity of IoT objects severely affects the user experience. Energy Harvesting (EH) technology enables green energy to offer a continuous energy supply for IoT objects; combined with the maturation of edge platforms and the development of parallel computing, it provides solid assurance for the proper functioning of resource-constrained IoT objects. The Markov Decision Process (MDP) and Deep Learning (DL) are used in this work to solve dynamic online/offline IoT-edge offloading scenarios. The suggested system may be used in both offline and online contexts and meets the user's quality-of-service expectations. We also investigate a blockchain scenario in which edge and cloud work together on task offloading to address the tradeoff between limited processing power and high latency while ensuring data integrity during the offloading process. We provide a double Q-learning solution to the MDP issue that maximizes the acceptable offline offloading methods. During exploration, Transfer Learning (TL) is employed to quicken convergence by reducing pointless exploration. Although the recently promoted Deep Q-Network (DQN) may address the space complexity issue by replacing the huge Q-table of standard Q-learning with a Deep Neural Network (DNN), its learning speed may still be insufficient for IoT applications. In light of this, our work introduces a novel learning algorithm known as deep Post-Decision State (PDS)-learning, which combines the PDS-learning approach with the classic DQN. System components in the proposed scheme can be dynamically chosen and modified to decrease object energy usage and delay. On average, the proposed technique outperforms multiple benchmarks in terms of delay by 4.5%, job failure rate by 5.7%, cost by 4.6%, computational overhead by 6.1%, and energy consumption by 3.9%.

Article | Citation Count: 1
A GSO-based multi-objective technique for performance optimization of blockchain-based industrial Internet of things (Wiley, 2024)
Zanbouri, Kouros; Darbandi, Mehdi; Nassr, Mohammad; Heidari, Arash; Navimipour, Nima Jafari; Yalcin, Senay

The latest developments in the industrial Internet of things (IIoT) have opened up a collection of possibilities for many industries. To solve the massive IIoT data security and efficiency problems, blockchain is considered a potential approach for satisfying the main needs of IIoT, such as high throughput, high security, and high efficiency, and the blockchain mechanism is a significant approach to boosting data protection and performance. In the quest to amplify the capabilities of blockchain-based IIoT, a pivotal role is accorded to the Glowworm Swarm Optimization (GSO) algorithm. Inspired by the collaborative brilliance of glowworms in nature, GSO offers a unique approach to harmonizing blockchain's conflicting aims, so this paper proposes a new approach to the performance optimization of blockchain-based IIoT using the GSO algorithm. The proposed system addresses the scalability challenges typically associated with blockchain technology by efficiently managing interactions among nodes and dynamically adapting to network demands, while GSO optimizes resource allocation and decision-making, reducing inefficiencies and bottlenecks. Extensive simulation and computational study show that, compared to traditional algorithms, the method considerably improves the objective function and the performance of blockchain-based IIoT systems, providing more efficient and secure systems for industries and corporations. The algorithm is motivated by glowworms' behavior, their probabilistic movement toward each other, and their luciferin quantity; the proposed approach significantly improves the four-way trade-off among scalability, decentralization, cost, and latency.
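A bare-bones version of the Glowworm Swarm Optimization loop referenced above is sketched below: each agent's luciferin tracks its fitness, and agents take a small step toward a probabilistically chosen brighter neighbor within a sensing range. The objective, constants, and fixed sensing range are simplifications (standard GSO also adapts the neighborhood radius), and none of this reflects the paper's blockchain-specific objective function.

```python
import numpy as np

def gso(objective, dim=2, n=25, iters=100, r=2.0, rho=0.4, gamma=0.6, step=0.03):
    """Minimal glowworm swarm optimization; minimizes `objective`."""
    rng = np.random.default_rng(0)
    pos = rng.uniform(-3, 3, (n, dim))
    luc = np.full(n, 5.0)                             # luciferin levels
    for _ in range(iters):
        # Luciferin update: decay plus fitness deposit (negated to minimize).
        luc = (1 - rho) * luc + gamma * np.array([-objective(p) for p in pos])
        new_pos = pos.copy()
        for i in range(n):
            d = np.linalg.norm(pos - pos[i], axis=1)
            nbrs = np.where((d < r) & (luc > luc[i]))[0]  # brighter neighbors in range
            if len(nbrs) == 0:
                continue
            probs = (luc[nbrs] - luc[i]) / (luc[nbrs] - luc[i]).sum()
            j = rng.choice(nbrs, p=probs)                 # pick one stochastically
            direction = (pos[j] - pos[i]) / (np.linalg.norm(pos[j] - pos[i]) + 1e-12)
            new_pos[i] = pos[i] + step * direction
        pos = new_pos
    return pos[np.array([-objective(p) for p in pos]).argmax()]

best = gso(lambda x: float((x ** 2).sum()))  # toy: minimize the sphere function
```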
Review | Citation Count: 10
The History of Computing in Iran (Persia)-Since the Achaemenid Empire (MDPI, 2022)
Heidari, Arash; Navimipour, Nima Jafari; Unal, Mehmet

Persia was the early name for the territory that is currently recognized as Iran. Iran's proud history starts with the Achaemenid Empire, which began in the 6th century BCE (c. 550). From the Achaemenid Empire's early days, Iranians provided numerous innovative ideas, breakthroughs, and technologies that are often taken for granted today or whose origins are mostly unknown. To trace the history of computing systems in Iran, we must pay attention to everything that could perform computation. Because of Iran's historical position in the ancient world, studying the history of computing in this country is an exciting subject. The history of computing in Iran started very far from the digital systems of the 20th century: the Achaemenid Empire can be cited as the first recorded sign of computing systems in Persia, beginning with the invention of mathematical theories and methods for performing simple calculations. This paper attempts to shed light on the elements of Persia's computing heritage, dating back to 550 BC, looking at both the ancient and modern periods. In the ancient section, we go through the history of computing in the Achaemenid Empire, followed by a description of the tools used for calculations. In the modern section, the transition to the Internet era, the formation of a computer-related educational system, the evolution of data networks, the growth of the software and hardware industry, cloud computing, and the Internet of Things (IoT) are all discussed. We highlight the findings in each period that mark vital sparks of computing evolution, tracing the gradual growth of computing in Persia from its early stages to the present. The findings indicate that the development of computing and related technologies has been accelerating rapidly in recent times.

Article | Citation Count: 23
A hybrid approach for latency and battery lifetime optimization in IoT devices through offloading and CNN learning (Elsevier, 2023)
Heidari, Arash; Navimipour, Nima Jafari; Jamali, Mohammad Ali Jabraeil; Akbarpour, Shahin

Offloading assists in overcoming the resource constraints of specific elements, making it one of the primary technical enablers of the Internet of Things (IoT). IoT devices with low battery capacities can use the edge to offload some of their operations, which can significantly reduce latency and lengthen battery lifetime. Due to IoT devices' restricted battery capacity, deep learning (DL) techniques are energy-intensive to run on them, and although numerous studies have employed energy-harvester modules, many IoT devices lack such modules in real-world circumstances. Using the Markov Decision Process (MDP), we describe the offloading problem in this study. Next, to facilitate partial offloading in IoT devices, we develop a Deep Reinforcement Learning (DRL) method that can efficiently learn the policy by adjusting to network dynamics. A Convolutional Neural Network (CNN) is then implemented on Mobile Edge Computing (MEC) devices to expedite learning. These two techniques operate together to offer the proper offloading approach throughout the system's operation. Moreover, transfer learning was employed to initialize the Q-table values, which increased the system's effectiveness.
The simulation in this article, which employed Cooja and TensorFlow, revealed that the strategy outperformed five benchmarks in terms of latency by 4.1%, IoT device efficiency by 2.9%, energy utilization by 3.6%, and job failure rate by 2.6% on average.

Article | Citation Count: 13
Implementation of a Product-Recommender System in an IoT-Based Smart Shopping Using Fuzzy Logic and Apriori Algorithm (IEEE-Inst Electrical Electronics Engineers Inc, 2022)
Yan, Shu-Rong; Pirooznia, Sina; Heidari, Arash; Navimipour, Nima Jafari; Unal, Mehmet

The Internet of Things (IoT) has recently become important in accelerating various functions, from manufacturing and business to healthcare and retail. A recommender system can handle the problem of information and data buildup in IoT-based smart commerce systems. These technologies are designed to determine users' preferences and filter out irrelevant information. Identifying items and services that customers might be interested in, and then convincing them to buy, is one of the essential parts of effective IoT-based smart shopping systems. Given the relevance of product-recommender systems from both the consumer and shop perspectives, this article presents a new IoT-based smart product-recommender system based on the Apriori algorithm and fuzzy logic. The suggested technique employs association rules to display the interdependencies and linkages among many data objects. The most common use of association rule discovery is shopping cart analysis: customers' buying habits and behavior are studied based on the goods they place in their shopping carts. Accordingly, the association rules are generated using a fuzzy system, and the Apriori algorithm then selects products based on the provided fuzzy association rules. The results revealed that the suggested technique achieved acceptable results in terms of mean absolute error, root-mean-square error, precision, recall, diversity, novelty, and catalog coverage when compared to cutting-edge methods. Finally, the method helps increase recommender systems' diversity in IoT-based smart shopping.
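The Apriori step of such a recommender can be illustrated compactly: mine itemsets whose support clears a threshold, growing candidates level by level. The baskets and threshold below are toy values, and the fuzzy-rule layer the paper adds on top is omitted.

```python
from itertools import combinations

# Bare-bones Apriori frequent-itemset mining over shopping baskets.
def apriori(transactions, min_support=0.5):
    n = len(transactions)
    k_sets = {frozenset([i]) for t in transactions for i in t}  # 1-itemsets
    frequent = {}
    while k_sets:
        counts = {s: sum(1 for t in transactions if s <= t) for s in k_sets}
        k_frequent = {s: c / n for s, c in counts.items() if c / n >= min_support}
        frequent.update(k_frequent)
        # Candidate generation: join frequent k-itemsets into (k+1)-itemsets.
        prev = list(k_frequent)
        k_sets = {a | b for a, b in combinations(prev, 2)
                  if len(a | b) == len(a) + 1}
    return frequent  # maps each frequent itemset to its support

baskets = [frozenset(t) for t in
           [{"milk", "bread"}, {"milk", "bread", "eggs"},
            {"bread"}, {"milk", "eggs"}]]
print(apriori(baskets, min_support=0.5))
```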
Various problems and challenges linked with ML applications for this pandemic were also reviewed, and additional research is expected in the coming years to limit the spread and improve catastrophe management. According to the data, most papers are evaluated mainly on characteristics such as flexibility and accuracy, while other factors such as safety are overlooked. Keras was the most often used library in the studied research, used 24.4% of the time, and medical imaging systems are employed for diagnostic purposes in 20.4% of applications.

Review | Citation Count: 35
Machine Learning Applications in Internet-of-Drones: Systematic Review, Recent Deployments, and Open Issues (Assoc Computing Machinery, 2023)
Heidari, Arash; Navimipour, Nima Jafari; Unal, Mehmet; Zhang, Guodao

Deep Learning (DL) and Machine Learning (ML) are effectively utilized in various complicated challenges in healthcare, industry, and academia. The Internet of Drones (IoD) has lately emerged owing to its high adjustability to a broad range of unpredictable circumstances. Unmanned Aerial Vehicles (UAVs) can be utilized efficiently in a multitude of scenarios, including search and rescue missions, farming, mission-critical services, and surveillance systems, owing to technical and practical benefits such as low movement, the capacity to lengthen wireless coverage zones, and the ability to reach places unreachable to human beings; in many studies, IoD and UAV are used interchangeably. Drones also enhance the efficiency aspects of various network topologies, including delay, throughput, interconnectivity, and dependability. Nonetheless, the deployment of drone systems raises various challenges relating to the inherent unpredictability of the wireless medium, the high degree of mobility, and the limited battery life, which can result in rapid topological changes. In this paper, the IoD is first explained in terms of potential applications and comparative operational scenarios. Then, we classify ML in the IoD-UAV world according to its applications, including resource management, surveillance and monitoring, object detection, power control, energy management, mobility management, and security management. This research aims to supply readers with a better understanding of (1) the fundamentals of IoD/UAV, (2) the most recent developments and breakthroughs in this field, (3) the benefits and drawbacks of existing methods, and (4) areas that need further investigation and consideration. The results suggest that the Convolutional Neural Network (CNN) method is the most often employed ML method in publications. Most papers concern resource and mobility management, and most have focused on enhancing only one parameter, with accuracy receiving the most attention. Python is the most commonly used language, appearing 90% of the time, and 2021 saw the most papers published.

Article | Citation Count: 29
A new lung cancer detection method based on the chest CT images using Federated Learning and blockchain systems (Elsevier, 2023)
Heidari, Arash; Javaheri, Danial; Toumaj, Shiva; Navimipour, Nima Jafari; Rezaei, Mahsa; Unal, Mehmet

With an estimated five million fatal cases each year, lung cancer is one of the most significant causes of death worldwide. Lung diseases can be diagnosed with a Computed Tomography (CT) scan.
The scarcity and limited reliability of expert human readers is the fundamental issue in diagnosing lung cancer patients. The main goal of this study is to detect malignant lung nodules in a CT scan of the lungs and to categorize lung cancer by severity. Cutting-edge Deep Learning (DL) algorithms were used to detect the location of cancerous nodules. A further real-life issue is sharing data among hospitals around the world while respecting the organizations' privacy concerns, and the main problems in training a global DL model are creating a collaborative model and maintaining privacy. This study presents an approach that takes a modest amount of data from multiple hospitals and uses blockchain-based Federated Learning (FL) to train a global DL model. The data were authenticated using blockchain technology, and FL trained the model internationally while maintaining the organizations' anonymity. First, we present a data normalization approach that addresses the variability of data obtained from various institutions using various CT scanners. Then, using a CapsNet method, we classify lung cancer patients in local mode. Finally, we devise a way to train a global model cooperatively, utilizing blockchain technology and FL while maintaining anonymity. We also gathered data from real-life lung cancer patients for testing purposes. The suggested method was trained and tested on the Cancer Imaging Archive (CIA) dataset, the Kaggle Data Science Bowl (KDSB), LUNA 16, and a local dataset. Extensive experiments with Python and its well-known libraries, such as Scikit-Learn and TensorFlow, were performed to evaluate the suggested method. The findings showed that the method effectively detects lung cancer patients, delivering 99.69% accuracy with the smallest possible categorization error.
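The federated learning pattern this last entry relies on can be sketched as federated averaging: each site fits a model on its private data, and only the weights travel. Below, a linear least-squares model and synthetic per-hospital data stand in for the paper's CapsNet and CT scans, and the blockchain authentication layer is left out.

```python
import numpy as np

def local_update(w, X, y, lr=0.1, epochs=5):
    """One client's training pass: gradient descent on local data only."""
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)     # least-squares gradient
        w = w - lr * grad
    return w

# Synthetic private datasets for four "hospitals" (never shared directly).
rng = np.random.default_rng(0)
true_w = np.array([1.0, -2.0, 0.5])
hospitals = []
for _ in range(4):
    X = rng.normal(size=(50, 3))
    y = X @ true_w + 0.1 * rng.normal(size=50)
    hospitals.append((X, y))

w_global = np.zeros(3)
for rnd in range(20):                             # communication rounds
    local_ws = [local_update(w_global, X, y) for X, y in hospitals]
    sizes = np.array([len(y) for _, y in hospitals], dtype=float)
    # FedAvg: size-weighted average of the clients' weight vectors.
    w_global = np.average(local_ws, axis=0, weights=sizes)
```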