Browsing by Author "Heidari, Arash"
Now showing 1 - 20 of 37
Article (Citation - WoS: 32; Citation - Scopus: 37)
Implementation of a Product-Recommender System in an IoT-Based Smart Shopping Using Fuzzy Logic and Apriori Algorithm (IEEE-Inst Electrical Electronics Engineers Inc, 2022)
Yan, Shu-Rong; Pirooznia, Sina; Heidari, Arash; Navimipour, Nima Jafari; Unal, Mehmet
The Internet of Things (IoT) has recently become important in accelerating various functions, from manufacturing and business to healthcare and retail. A recommender system can handle the problem of information and data buildup in IoT-based smart commerce systems. These technologies are designed to determine users' preferences and filter out irrelevant information. Identifying items and services that customers might be interested in, and then convincing them to buy, is one of the essential parts of effective IoT-based smart shopping systems. Given the relevance of product-recommender systems from both the consumer and shop perspectives, this article presents a new IoT-based smart product-recommender system based on an apriori algorithm and fuzzy logic. The suggested technique employs association rules to display the interdependencies and linkages among many data objects. The most common use of association-rule discovery is shopping cart analysis: customers' buying habits and behavior are studied based on the goods they place in their shopping carts. Accordingly, the association rules are generated using a fuzzy system, and the apriori algorithm then selects the product based on the resulting fuzzy association rules. The results revealed that the suggested technique achieved acceptable results in terms of mean absolute error, root-mean-square error, precision, recall, diversity, novelty, and catalog coverage when compared to cutting-edge methods. Finally, the method helps increase recommender systems' diversity in IoT-based smart shopping.
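As a rough illustration of the Apriori-style association-rule mining this entry describes, here is a minimal Python sketch over toy shopping carts. The item names and thresholds are invented, and the paper's fuzzy rule-generation step is only noted in a comment; this is not the authors' implementation.

```python
# Minimal Apriori pass over toy shopping carts (illustrative only).
# The paper generates rules from a fuzzy system first; that step is omitted here.
from itertools import combinations

carts = [
    {"milk", "bread", "butter"},
    {"milk", "bread"},
    {"bread", "butter"},
    {"milk", "butter"},
    {"milk", "bread", "butter"},
]
min_support, min_confidence = 0.4, 0.6

def support(itemset):
    """Fraction of carts containing every item in the itemset."""
    return sum(set(itemset).issubset(cart) for cart in carts) / len(carts)

# Level-wise candidate generation: frequent 1-itemsets, then 2-itemsets, ...
items = sorted({i for cart in carts for i in cart})
frequent = [frozenset([i]) for i in items if support({i}) >= min_support]
for size in (2, 3):
    candidates = {frozenset(c) for c in combinations(items, size)}
    frequent += [c for c in candidates if support(c) >= min_support]

# Derive rules A -> B from frequent itemsets, keeping confident ones.
for itemset in (s for s in frequent if len(s) > 1):
    for k in range(1, len(itemset)):
        for lhs in map(frozenset, combinations(itemset, k)):
            conf = support(itemset) / support(lhs)
            if conf >= min_confidence:
                print(f"{set(lhs)} -> {set(itemset - lhs)} (conf={conf:.2f})")
```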
Article (Citation - WoS: 49; Citation - Scopus: 56)
A hybrid approach for latency and battery lifetime optimization in IoT devices through offloading and CNN learning (Elsevier, 2023)
Heidari, Arash; Navimipour, Nima Jafari; Jamali, Mohammad Ali Jabraeil; Akbarpour, Shahin
Offloading assists in overcoming the resource constraints of specific elements, making it one of the primary technical enablers of the Internet of Things (IoT). IoT devices with low battery capacities can use the edge to offload some of their operations, which can significantly reduce latency and lengthen battery lifetime. Because IoT devices have restricted battery capacity, Deep Learning (DL) techniques are energy-intensive to run on them, and many earlier studies assumed energy-harvester modules that IoT devices lack in real-world circumstances. Using the Markov Decision Process (MDP), we describe the offloading problem in this study. Next, to facilitate partial offloading in IoT devices, we develop a Deep Reinforcement Learning (DRL) method that can efficiently learn the policy by adjusting to network dynamics. A Convolutional Neural Network (CNN) is then offered and implemented on Mobile Edge Computing (MEC) devices to expedite learning. These two techniques operate together to offer the proper offloading approach throughout the system's operation. Moreover, transfer learning was employed to initialize the Q-table values, which increased the system's effectiveness. The simulation in this article, which employed Cooja and TensorFlow, revealed that the strategy outperformed five benchmarks in terms of latency by 4.1%, IoT device efficiency by 2.9%, energy utilization by 3.6%, and job failure rate by 2.6% on average.

Review (Citation - WoS: 90; Citation - Scopus: 115)
Adventures in Data Analysis: A Systematic Review of Deep Learning Techniques for Pattern Recognition in Cyber-Physical Systems (Springer, 2023)
Amiri, Zahra; Heidari, Arash; Navimipour, Nima Jafari; Unal, Mehmet; Mousavi, Ali
Machine Learning (ML) and Deep Learning (DL) have achieved high success in many textual, auditory, medical imaging, and visual recognition patterns. Given the importance of ML/DL in recognizing patterns due to its high accuracy, many researchers have proposed solutions for improving pattern recognition performance using ML/DL methods. Because of the intelligent pattern recognition required of machines in image processing, and the outstanding role of big data in generating state-of-the-art modern and classical approaches to pattern recognition, we conducted a thorough Systematic Literature Review (SLR) of DL approaches for big data pattern recognition. We discuss different research issues and possible paths in which the abovementioned techniques might help materialize the pattern recognition notion. We classified 60 of the most cutting-edge articles addressing pattern recognition issues into ten categories based on the DL/ML method used: Convolutional Neural Network (CNN), Recurrent Neural Network (RNN), Generative Adversarial Network (GAN), Autoencoder (AE), Ensemble Learning (EL), Reinforcement Learning (RL), Random Forest (RF), Multilayer Perceptron (MLP), Long Short-Term Memory (LSTM), and hybrid methods. The SLR method was used to investigate each article in terms of influential properties such as the main idea, advantages, disadvantages, strategies, simulation environment, datasets, and security issues. The results indicate that most of the articles were published in 2021. Moreover, important parameters such as accuracy, adaptability, fault tolerance, security, scalability, and flexibility were involved in these investigations.
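The offloading study above ("A hybrid approach for latency and battery lifetime optimization...") frames offloading as an MDP solved with reinforcement learning and a transfer-initialized Q-table. The following toy tabular Q-learning loop sketches that idea only; the states, rewards, and warm-start values are invented, and the paper itself uses deep RL with a CNN rather than a table.

```python
# Toy tabular Q-learning for a binary offload/local decision (illustrative).
import random

states = range(5)            # coarse battery levels 0..4 (assumed state space)
actions = (0, 1)             # 0 = compute locally, 1 = offload to edge
alpha, gamma, eps = 0.1, 0.9, 0.2

# "Transfer learning" style warm start: bias Q toward offloading at low battery.
Q = {(s, a): (1.0 if (s in (0, 1) and a == 1) else 0.0)
     for s in states for a in actions}

def step(s, a):
    """Hypothetical environment: offloading saves energy but adds latency."""
    reward = 1.0 - (0.8 if a == 0 else 0.3)           # local compute drains more
    reward -= 0.2 if a == 1 else 0.0                  # network latency penalty
    s_next = max(0, min(4, s + (1 if a == 1 else -1)))
    return s_next, reward

s = 4
for _ in range(10_000):
    # epsilon-greedy action selection
    a = random.choice(actions) if eps > random.random() else max(actions, key=lambda x: Q[(s, x)])
    s_next, r = step(s, a)
    best_next = max(Q[(s_next, x)] for x in actions)
    Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])  # Bellman update
    s = s_next

print({s: max(actions, key=lambda a: Q[(s, a)]) for s in states})
```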
Article (Citation - WoS: 185; Citation - Scopus: 217)
A Secure Intrusion Detection Platform Using Blockchain and Radial Basis Function Neural Networks for Internet of Drones (IEEE-Inst Electrical Electronics Engineers Inc, 2023)
Heidari, Arash; Navimipour, Nima Jafari; Unal, Mehmet
The Internet of Drones (IoD) is built on the Internet of Things (IoT) by replacing things with drones while retaining incomparable features. Because of its vital applications, IoD technologies have attracted much attention in recent years. Nevertheless, gaining the necessary degree of public acceptance of IoD without demonstrating safety and security for human life is exceedingly difficult. In addition, Intrusion Detection Systems (IDSs) in IoD confront several obstacles because of the dynamic network architecture, particularly in balancing detection accuracy and efficiency. To increase the performance of the IoD network, we propose a blockchain-based Radial Basis Function Neural Network (RBFNN) model in this article. The proposed method can improve data integrity and storage for smart decision-making across different IoDs. We discuss the usage of blockchain to create decentralized predictive analytics and a model for effectively applying and sharing Deep Learning (DL) methods in a decentralized fashion. We also assessed the model using a variety of datasets to demonstrate the viability and efficacy of implementing the blockchain-based DL technique in IoD contexts. The findings showed that the suggested model is an excellent option for developing classifiers while adhering to the constraints placed by network intrusion detection, and that it can outperform cutting-edge methods in terms of specificity, F1-score, recall, precision, and accuracy.

Article (Citation - WoS: 27; Citation - Scopus: 33)
A Fuzzy-Based Method for Objects Selection in Blockchain-Enabled Edge-IoT Platforms Using a Hybrid Multi-Criteria Decision-Making Model (MDPI, 2022)
Gardas, Bhaskar B.; Heidari, Arash; Navimipour, Nima Jafari; Unal, Mehmet
The broad availability of connected and intelligent devices has increased the demand for Internet of Things (IoT) applications that require more intense data storage and processing. However, cloud-based IoT systems are typically located far from end-users and face several issues, including high cloud-server load, slow response times, and a lack of global mobility. Some of these flaws can be addressed with edge computing. In addition, node selection helps avoid common IoT difficulties, including network lifespan, resource allocation, and trust in the acquired data, by selecting the correct nodes at a suitable time. On the other hand, the interconnection of edge and blockchain technologies in the IoT gives a fresh perspective on access-control framework design. This article provides a novel node-selection approach for blockchain-enabled edge IoT that offers quick and dependable node selection. Fuzzy logic is used to manage numerical and linguistic data simultaneously, and the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS), a powerful tool for examining Multi-Criteria Decision-Making (MCDM) problems, is applied. The suggested fuzzy-based technique employs three input criteria to select the correct IoT node for a given mission in IoT-edge situations. The outcomes of the experiments indicate that the proposed framework enhances the parameters under consideration.
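The node-selection entry above applies TOPSIS to rank candidates. Below is a compact, generic TOPSIS ranking in NumPy; the three criteria, their weights, and the candidate-node scores are made up for illustration, and the paper's fuzzy preprocessing of inputs is not reproduced.

```python
# Compact TOPSIS ranking over hypothetical candidate IoT nodes.
import numpy as np

# Rows: candidate nodes; columns: e.g. residual energy, trust, proximity.
X = np.array([[0.7, 0.9, 0.4],
              [0.5, 0.6, 0.8],
              [0.9, 0.4, 0.6]], dtype=float)
weights = np.array([0.5, 0.3, 0.2])       # assumed criterion weights
benefit = np.array([True, True, True])    # all criteria: higher is better

R = X / np.linalg.norm(X, axis=0)         # vector-normalize each criterion
V = R * weights                           # weighted normalized matrix

ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))   # ideal solution
anti = np.where(benefit, V.min(axis=0), V.max(axis=0))    # anti-ideal solution

d_best = np.linalg.norm(V - ideal, axis=1)
d_worst = np.linalg.norm(V - anti, axis=1)
closeness = d_worst / (d_best + d_worst)  # 1.0 = coincides with the ideal

print("ranking (best first):", np.argsort(-closeness))
```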
Article (Citation - WoS: 81; Citation - Scopus: 94)
The Applications of Nature-Inspired Algorithms in Internet of Things-Based Healthcare Service: A Systematic Literature Review (Wiley, 2024)
Amiri, Zahra; Heidari, Arash; Zavvar, Mohammad; Navimipour, Nima Jafari; Esmaeilpour, Mansour
This review revolves around the intersection of nature-inspired algorithms and the IoT within the healthcare domain, addressing the emerging trends and potential synergies between nature-inspired computational approaches and IoT technologies for advancing healthcare services. Our research aims to fill gaps in addressing algorithmic integration challenges, real-world implementation issues, and the efficacy of nature-inspired algorithms in IoT-based healthcare. We provide insights into the practical aspects and limitations of such applications through a systematic literature review. Specifically, we address the need for a comprehensive understanding of the applications of nature-inspired algorithms in IoT-based healthcare, identifying gaps such as the lack of standardized evaluation metrics and of studies on integration challenges and security considerations. By bridging these gaps, our paper offers insights and directions for future research in this domain, exploring the diverse landscape of nature-inspired algorithms in healthcare. Our chosen methodology is a Systematic Literature Review (SLR) to investigate related papers rigorously. Categorizing these algorithms into groups such as genetic algorithms, particle swarm optimization, cuckoo algorithms, ant colony optimization, other approaches, and hybrid methods, we employ meticulous classification based on critical criteria. MATLAB emerges as the predominant programming language, constituting 37.9% of cases, showcasing a prevalent choice among researchers. Our evaluation emphasizes adaptability as the paramount parameter, accounting for 18.4% of considerations. By shedding light on attributes, limitations, and potential directions for future research and development, this review aims to contribute to a comprehensive understanding of nature-inspired algorithms in the dynamic landscape of IoT-based healthcare services. Highlights:
- Providing a complete overview of the current issues associated with nature-inspired algorithms in IoT-based healthcare services;
- Providing a thorough overview of present methodologies for IoT-based healthcare services in research studies;
- Evaluating each area that tailored nature-inspired algorithms from many perspectives, such as advantages, restrictions, datasets, security involvement, and simulation settings;
- Outlining the critical aspects that motivate the cited approaches to enhance future research;
- Illustrating descriptions of certain IoT-based healthcare services used in various studies.
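Particle swarm optimization is one of the algorithm families the review above categorizes. As a generic illustration of that family, here is a bare-bones PSO loop; the sphere objective, swarm size, and coefficients are textbook defaults, not drawn from any surveyed paper.

```python
# Bare-bones particle swarm optimization on a toy objective (illustrative).
import numpy as np

rng = np.random.default_rng(0)

def objective(x):                  # toy fitness: minimize the sphere function
    return np.sum(x**2, axis=-1)

n, dim, w, c1, c2 = 20, 3, 0.7, 1.5, 1.5
pos = rng.uniform(-5, 5, (n, dim))
vel = np.zeros((n, dim))
pbest = pos.copy()
gbest = pos[np.argmin(objective(pos))]

for _ in range(200):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    # velocity = inertia + cognitive pull (pbest) + social pull (gbest)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos += vel
    better = objective(pbest) > objective(pos)   # update personal bests
    pbest[better] = pos[better]
    gbest = pbest[np.argmin(objective(pbest))]   # update the global best

print("best solution:", gbest, "fitness:", objective(gbest))
```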
Article (Citation - WoS: 8; Citation - Scopus: 11)
A Nano-Scale Design of Vedic Multiplier for Electrocardiogram Signal Processing Based on a Quantum Technology (AIP Publishing, 2025)
Wang, Yuyao; Darbandi, Mehdi; Ahmadpour, Seyed-Sajad; Navimipour, Nima Jafari; Navin, Ahmad Habibizad; Heidari, Arash; Anbar, Mohammad
An electrocardiogram (ECG) measures the electric signals from the heartbeat to diagnose various heart issues; nevertheless, it is susceptible to noise. ECG signal noise must be removed because it significantly affects ECG signal characteristics. In addition, speed and occupied area play a fundamental role in ECG structures. The Vedic multiplier is an essential part of signal processing and is necessary for various applications, such as ECG, clusters, and finite impulse response filter architectures. Every ECG has a Vedic multiplier circuit unit that is necessary for signal processing; this circuit performs the multiplication and accumulation steps needed to execute continuous and complex operations in signal-processing programs. In the Vedic multiplier framework, however, circuit speed and occupied area are the main limitations, and fixing these significant defects can drastically improve the performance of this crucial circuit. Quantum technologies are among the most popular solutions to shortcomings such as high occupied area and limited speed; in particular, quantum-dot cellular automata (QCA) can overcome them. Thus, based on this quantum technology, this paper proposes a multiplier for ECG using carry-skip adder, half-adder, and XOR circuits. All suggested frameworks utilize a single-layer design without rotated cells to increase their operability in complex architectures, and all designs use a coplanar configuration, which affects the circuits' durability and stability. All proposed architectures have been designed and validated with the QCADesigner 2.0.3 tool. All designed circuits show a simple structure with minimal quantum cells, area, and delay with respect to state-of-the-art structures.

Publication (Citation - WoS: 46; Citation - Scopus: 53)
Everything You Wanted To Know About ChatGPT: Components, Capabilities, Applications, and Opportunities (John Wiley & Sons Ltd, 2024)
Heidari, Arash; Navimipour, Nima Jafari; Zeadally, Sherali; Chamola, Vinay
Conversational Artificial Intelligence (AI) and Natural Language Processing have advanced significantly with the creation of the Generative Pre-trained Transformer (ChatGPT) by OpenAI. ChatGPT uses deep learning techniques, such as the transformer architecture and self-attention mechanisms, to replicate human speech and provide coherent, contextually appropriate replies. The model depends mainly on the patterns discovered in its training data, which can result in incorrect or illogical conclusions. In the context of open-domain chats, we investigate the components, capabilities, constraints, and potential applications of ChatGPT, along with future opportunities. We begin by describing the components of ChatGPT, followed by a definition of chatbots. We present a new taxonomy to classify them: rule-based chatbots, retrieval-based chatbots, generative chatbots, and hybrid chatbots. Next, we describe the capabilities and constraints of ChatGPT. Finally, we present potential applications of ChatGPT and future research opportunities. The results showed that ChatGPT, a transformer-based chatbot model, utilizes encoders to produce coherent responses.
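The ChatGPT survey above names self-attention as the transformer's core mechanism. A minimal scaled dot-product self-attention in NumPy follows; the dimensions and random toy input are arbitrary, and real models add multiple heads, masking, and projections learned at scale.

```python
# Minimal scaled dot-product self-attention (single head, illustrative).
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """X: (seq_len, d_model). Returns attention-weighted values."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])          # scaled similarity
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ V                               # mix values by attention

rng = np.random.default_rng(0)
d_model, seq_len = 8, 4
X = rng.normal(size=(seq_len, d_model))              # toy token embeddings
Wq, Wk, Wv = (rng.normal(size=(d_model, d_model)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)           # (4, 8)
```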
Article (Citation - WoS: 117; Citation - Scopus: 183)
Opportunities and Challenges of Artificial Intelligence and Distributed Systems To Improve the Quality of Healthcare Service (Elsevier, 2024)
Aminizadeh, Sarina; Heidari, Arash; Dehghan, Mahshid; Toumaj, Shiva; Rezaei, Mahsa; Navimipour, Nima Jafari; Unal, Mehmet
The healthcare sector, characterized by vast datasets and many diseases, is pivotal in shaping community health and overall quality of life. Traditional healthcare methods, often limited in disease prevention, predominantly react to illnesses after their onset rather than proactively averting them. The advent of Artificial Intelligence (AI) has ushered in a wave of transformative applications designed to enhance healthcare services, with Machine Learning (ML) as a noteworthy subset of AI. ML empowers computers to analyze extensive datasets, while Deep Learning (DL), a specific ML methodology, excels at extracting meaningful patterns from these data troves. Despite notable technological advancements in recent years, the full potential of these applications within medical contexts remains largely untapped, primarily due to the medical community's cautious stance toward novel technologies. The motivation of this paper lies in recognizing the pivotal role of the healthcare sector in community well-being and the necessity for a shift toward proactive healthcare approaches. To our knowledge, no comprehensive published review delves into ML, DL, and distributed systems together with the aim of elevating the Quality of Service (QoS) in healthcare. This study seeks to bridge this gap by presenting a systematic and organized review of prevailing ML, DL, and distributed-system algorithms as applied in healthcare settings. Within our work, we outline key challenges that both current and future developers may encounter, with a particular focus on approach, data utilization, strategy, and development processes. Our findings reveal that the Internet of Things (IoT) stands out as the most frequently utilized platform (44.3%), with disease diagnosis emerging as the predominant healthcare application (47.8%). Notably, discussions center significantly on the prevention and identification of cardiovascular diseases (29.2%). The studies under examination employ a diverse range of ML and DL methods, along with distributed systems, with Convolutional Neural Networks (CNNs) being the most commonly used (16.7%), followed by Long Short-Term Memory (LSTM) networks (14.6%) and shallow learning networks (12.5%). In evaluating QoS, the predominant emphasis is on the accuracy parameter (80%). This study highlights how ML, DL, and distributed systems reshape healthcare; it contributes to advancing healthcare quality, bridging the gap between technology and medical adoption, and benefiting practitioners and patients.

Article (Citation - WoS: 37; Citation - Scopus: 45)
Applications of Deep Learning in Alzheimer's Disease: A Systematic Literature Review of Current Trends, Methodologies, Challenges, Innovations, and Future Directions (Springer, 2024)
Toumaj, Shiva; Heidari, Arash; Shahhosseini, Reza; Navimipour, Nima Jafari
Alzheimer's Disease (AD) constitutes a significant global health issue; over the next 40 years, it is expected to affect 106 million people. Although more and more people are developing AD, there are still no effective drugs to treat it, which underscores how important it is to detect and treat AD quickly. Recently, Deep Learning (DL) techniques have been used increasingly to diagnose AD, with claims of better accuracy in drug repurposing, medication recognition, and labeling. This review meticulously examines the works that apply DL to Alzheimer's disease, covering methods such as Natural Language Processing (NLP), drug repurposing, classification, and identification. For these methods, we examine their pros and cons, paying special attention to how easily they can be explained, how safe they are, and how they can be used in medical situations. One important finding is that Convolutional Neural Networks (CNNs) are most often used in AD research, and Python is the language most often used for DL work. In our view, security problems such as data protection and model stability receive too little attention in present research. This study thoroughly examines present methods and also points out areas that need more work, like better data integration and explainable AI systems. The findings should help guide further research and speed up the creation of DL-based AD identification tools in the future.
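The Alzheimer's review above reports CNNs as the most common model choice. Purely as an illustration of that general kind of classifier, here is a minimal Keras CNN skeleton; the input shape, two-class output, and layer sizes are placeholders, and no AD dataset or published architecture is reproduced.

```python
# Minimal Keras CNN classifier skeleton (illustrative placeholder sizes).
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(64, 64, 1)),          # e.g. a grayscale image slice
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(2, activation="softmax"),    # e.g. two diagnostic classes
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
# Training would look like: model.fit(x_train, y_train, epochs=..., validation_data=...)
```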
Book Part (Citation - Scopus: 2)
Machine/Deep Learning Techniques for Multimedia Security (Inst Engineering Tech-IET, 2023)
Heidari, Arash; Navimipour, Nima Jafari; Azad, Poupak
Multimedia security based on Machine Learning (ML) / Deep Learning (DL) is a field of study that focuses on using ML/DL techniques to protect multimedia data such as images, videos, and audio from unauthorized access, manipulation, or theft. The main subject of this field is developing and implementing algorithms and systems that use ML/DL techniques to detect and prevent security breaches in multimedia data. These systems use techniques like watermarking, encryption, and digital-signature verification to protect multimedia data. The advantages of using ML/DL in multimedia security include improved accuracy, scalability, and automation. ML/DL algorithms can improve the accuracy of detecting security threats and help identify vulnerabilities in multimedia data. Additionally, ML models can be scaled up to handle large amounts of multimedia data, making them helpful in protecting big datasets. Finally, ML/DL algorithms can automate the process of multimedia security, making it easier and more efficient to protect multimedia data. The disadvantages include data availability, complexity, and black-box models. ML and DL algorithms require large amounts of data to train the models, which can be challenging to obtain. Developing and implementing ML algorithms can also be complex, requiring specialized skills and knowledge. Finally, ML/DL models are often black boxes, meaning it can be difficult to understand how they make their decisions, which is a challenge when explaining decisions to stakeholders or auditors. Overall, multimedia security based on ML/DL is a promising area of research with many potential benefits, but it also presents challenges that must be addressed to ensure the security and privacy of multimedia data.
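Watermarking is one of the protection techniques the chapter above lists. As a toy illustration of the basic idea, here is a least-significant-bit (LSB) embed/extract in NumPy; the array sizes and one-bit-per-pixel scheme are simplifications, and production watermarking uses far more robust transforms.

```python
# Toy least-significant-bit (LSB) watermark embed/extract (illustrative).
import numpy as np

def embed(image, bits):
    """Write watermark bits into the LSB of the first len(bits) pixels."""
    flat = image.flatten()                         # flatten() returns a copy
    n = len(bits)
    flat[:n] = flat[:n] - (flat[:n] % 2) + bits    # clear LSB, then set it
    return flat.reshape(image.shape)

def extract(image, n):
    """Read the n watermark bits back out of the LSBs."""
    return image.flatten()[:n] % 2

rng = np.random.default_rng(0)
cover = rng.integers(0, 256, size=(8, 8), dtype=np.uint8)   # toy cover image
mark = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)   # toy watermark

stego = embed(cover, mark)
assert np.array_equal(extract(stego, len(mark)), mark)
print("recovered watermark:", extract(stego, len(mark)))
```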
Review (Citation - WoS: 119; Citation - Scopus: 152)
Machine Learning Applications in Internet-of-Drones: Systematic Review, Recent Deployments, and Open Issues (Assoc Computing Machinery, 2023)
Heidari, Arash; Navimipour, Nima Jafari; Unal, Mehmet; Zhang, Guodao
Deep Learning (DL) and Machine Learning (ML) are effectively utilized in various complicated challenges in healthcare, industry, and academia. The Internet of Drones (IoD) has lately cropped up due to its high adjustability to a broad range of unpredictable circumstances. In addition, Unmanned Aerial Vehicles (UAVs) can be utilized efficiently in a multitude of scenarios, including search and rescue missions, farming, mission-critical services, and surveillance systems, owing to technical and practical benefits such as low movement, the capacity to lengthen wireless coverage zones, and the ability to reach places unreachable by human beings. In many studies, IoD and UAV are used interchangeably. Besides, drones enhance the efficiency aspects of various network topologies, including delay, throughput, interconnectivity, and dependability. Nonetheless, the deployment of drone systems raises various challenges relating to the inherent unpredictability of the wireless medium, the high mobility degrees, and the battery life that can result in rapid topological changes. In this paper, the IoD is first explained in terms of potential applications and comparative operational scenarios. Then, we classify ML in the IoD-UAV world according to its applications, including resource management, surveillance and monitoring, object detection, power control, energy management, mobility management, and security management. This research aims to supply readers with a better understanding of (1) the fundamentals of IoD/UAV, (2) the most recent developments and breakthroughs in this field, (3) the benefits and drawbacks of existing methods, and (4) areas that need further investigation and consideration. The results suggest that the Convolutional Neural Network (CNN) is the most often employed ML method in the publications. According to the research, most papers are on resource and mobility management, and most articles have focused on enhancing only one parameter, with the accuracy parameter receiving the most attention. Python is the most commonly used language, appearing in 90% of the papers, and 2021 saw the most papers published.

Review (Citation - WoS: 157; Citation - Scopus: 222)
Applications of ML/DL in the Management of Smart Cities and Societies Based on New Trends in Information Technologies: A Systematic Literature Review (Elsevier, 2022)
Heidari, Arash; Navimipour, Nima Jafari; Unal, Mehmet
The goal of managing smart cities and societies is to maximize the efficient use of finite resources while enhancing the quality of life. To establish a sustainable urban existence, smart cities use new technologies such as the Internet of Things (IoT), Internet of Drones (IoD), and Internet of Vehicles (IoV). The data created by these technologies are submitted to analytics to obtain new information for increasing the efficiency and effectiveness of smart societies and cities. Smart traffic management, smart power and energy management, city surveillance, smart buildings, and patient healthcare monitoring are the most common applications in smart cities. Artificial Intelligence (AI), Machine Learning (ML), and Deep Learning (DL) approaches all hold much promise for managing automated activities in smart cities. Therefore, we discuss different research issues and possible research paths in which the aforementioned techniques might help materialize the smart city notion. The goal of this research is to offer a better understanding of (1) the fundamentals of smart city and society management, (2) the most recent developments and breakthroughs in this field, (3) the benefits and drawbacks of existing methods, and (4) areas that require further investigation and consideration. IoT, cloud computing, edge computing, fog computing, IoD, IoV, and hybrid models are the seven key emerging developments in information technology used in this paper to categorize the state-of-the-art techniques. The results indicate that the Convolutional Neural Network (CNN) and Long Short-Term Memory (LSTM) are the most commonly used ML methods in the publications. According to the research, the majority of papers are about smart cities' power and energy management. Furthermore, most papers have concentrated on improving only one parameter, with the accuracy parameter receiving the most attention. In addition, Python is the most frequently used language, appearing in 69.8% of the papers.
Article (Citation - WoS: 49; Citation - Scopus: 54)
Deep Q-Learning Technique for Offloading Offline/Online Computation in Blockchain-Enabled Green IoT-Edge Scenarios (MDPI, 2022)
Heidari, Arash; Jamali, Mohammad Ali Jabraeil; Navimipour, Nima Jafari; Akbarpour, Shahin
The number of Internet of Things (IoT)-related innovations has recently increased exponentially, with numerous IoT objects being invented one after the other. Computation offloading concerns where, and how many, resources can be transferred to carry out tasks or applications. In the IoT environment, the strategy is to transfer resource-intensive computational tasks to a different external device in the network, such as a cloud, fog, or edge platform. Offloading is thus one of the key technological enablers of the IoT, as it helps overcome the resource limitations of individual objects. One of the major shortcomings of previous research is the lack of an integrated offloading framework that can operate in an offline/online environment while preserving security. This paper offers a new deep Q-learning approach to the blockchain-enabled IoT-edge offloading problem using the Markov Decision Process (MDP). There is a substantial gap in secure online/offline offloading systems, and little work has been published in this arena thus far. The proposed system can be used online and offline while maintaining privacy and security; it employs the Post-Decision State (PDS) mechanism in online mode. Additionally, we integrate edge/cloud platforms into IoT blockchain-enabled networks to encourage the computational potential of IoT devices. By employing blockchain, the system enables safe and secure cloud/edge/IoT offloading; the master controller, offloading decision, block size, and processing nodes may be dynamically chosen and changed to reduce device energy consumption and cost. Simulation results from TensorFlow and Cooja demonstrated that the method can dramatically boost system efficiency relative to existing schemes: it beats four benchmarks in terms of cost by 6.6%, computational overhead by 7.1%, energy use by 7.9%, task failure rate by 6.2%, and latency by 5.5% on average.

Article (Citation - WoS: 6; Citation - Scopus: 4)
Leveraging Explainable Artificial Intelligence for Transparent and Trustworthy Cancer Detection Systems (Elsevier, 2025)
Toumaj, Shiva; Heidari, Arash; Navimipour, Nima Jafari
Timely detection of cancer is essential for enhancing patient outcomes. Artificial Intelligence (AI), especially Deep Learning (DL), demonstrates significant potential in cancer diagnostics; however, its opaque nature presents notable concerns. Explainable AI (XAI) mitigates these issues by improving transparency and interpretability. This study provides a systematic review of recent applications of XAI in cancer detection, categorizing the techniques according to cancer type, including breast, skin, lung, colorectal, brain, and others. It emphasizes interpretability methods, dataset utilization, simulation environments, and security considerations. The results indicate that Convolutional Neural Networks (CNNs) account for 31% of model usage, SHAP is the predominant interpretability framework at 44.4%, and Python is the leading programming language at 32.1%. Only 7.4% of studies address security issues. This study identifies significant challenges and gaps, guiding future research in trustworthy and interpretable AI within oncology.
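The XAI review above finds SHAP to be the dominant interpretability framework. The sketch below shows a minimal SHAP explanation for a tree model on synthetic data; the "tumor feature" dataset is invented, and a real cancer dataset would be substituted to reproduce the kind of studies surveyed.

```python
# Minimal SHAP explanation for a tree ensemble on synthetic data (illustrative).
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))                  # 4 hypothetical tumor features
y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)  # synthetic benign/malignant label

model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)          # fast, exact for tree ensembles
shap_values = explainer.shap_values(X[:5])     # per-feature contributions
print(np.shape(shap_values))                   # contributions for 5 samples
```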
Article (Citation - WoS: 12; Citation - Scopus: 11)
A New Flow-Based Approach for Enhancing Botnet Detection Using Convolutional Neural Network and Long Short-Term Memory (Springer London Ltd, 2025)
Asadi, Mehdi; Heidari, Arash; Navimipour, Nima Jafari
Despite the growing research and development of botnet-detection tools, an ever-increasing spread of botnets and their victims is being witnessed. Because botnets frequently adapt to the evolving responses offered by host-based and network-based detection mechanisms, traditional methods lack an adequate defense against botnet threats. In this regard, flow-based detection methods and behavioral analysis of network traffic are suggested. To enhance the performance of these approaches, this paper proposes a hybrid deep learning method that combines Convolutional Neural Network (CNN) and Long Short-Term Memory (LSTM) methods. The CNN efficiently extracts spatial features from network traffic, such as patterns in flow characteristics, while the LSTM captures temporal dependencies critical to detecting sequential patterns in botnet behavior. Experimental results reveal the effectiveness of the proposed CNN-LSTM method in classifying botnet traffic. Compared with the results obtained by the leading method on the identical dataset, the proposed approach showed noteworthy enhancements: a 0.61% increase in precision, a 0.03% increase in accuracy, a 0.42% improvement in recall, a 0.51% improvement in F1-score, and a 0.10% reduction in the false-positive rate. Moreover, the CNN-LSTM framework exhibited robust overall performance and notable speed in botnet traffic identification. Additionally, we evaluated the impact of three widely recognized adversarial attacks on the Information Security Centre of Excellence dataset and the Information Security and Object Technology dataset. The findings underscored the proposed method's promising performance in the face of these adversarial challenges.

Article (Citation - WoS: 36; Citation - Scopus: 42)
An Efficient Design of Multiplier for Using in Nano-Scale IoT Systems Using Atomic Silicon (IEEE-Inst Electrical Electronics Engineers Inc, 2023)
Ahmadpour, Seyed-Sajad; Heidari, Arash; Navimipour, Nima Jafari; Asadi, Mohammad-Ali; Yalcin, Senay
Because of recent technological developments, such as Internet of Things (IoT) devices, power consumption has become a major issue. Atomic silicon quantum dots (ASiQD) are one of the most impressive technologies for developing low-power processing circuits, which are critical for efficient transmission and power management in micro-IoT devices. Multipliers, in turn, are essential computational circuits used in a wide range of digital circuits, so a multiplier design with a low occupied area and low energy consumption is the most critical goal in designing any micro-IoT circuit. This article introduces a low-power atomic-silicon-based multiplier circuit for effective power management in the micro-IoT. Based on this design, a 4x4-bit multiplier array with low power consumption and size is presented. The suggested circuit is designed and validated using the SiQAD simulation tool. The proposed ASiQD-based circuit significantly reduces the energy consumption and occupied area in the micro-IoT compared to the most recent designs.
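The botnet-detection entry above ("A New Flow-Based Approach...") combines a CNN for spatial flow features with an LSTM for temporal dependencies. A skeletal Keras model in that architecture family follows; the sequence length, feature count, and layer widths are placeholders, not the paper's configuration.

```python
# Skeletal CNN-LSTM for sequences of flow records (illustrative sizes).
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(50, 10)),             # 50 flow records x 10 features
    layers.Conv1D(32, 3, activation="relu"),  # spatial patterns across features
    layers.MaxPooling1D(2),
    layers.LSTM(64),                          # temporal dependencies over flows
    layers.Dense(1, activation="sigmoid"),    # botnet vs. benign
])
model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=["accuracy",
                       tf.keras.metrics.Precision(),
                       tf.keras.metrics.Recall()])
model.summary()
```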
Review (Citation - WoS: 49; Citation - Scopus: 64)
Resilient and Dependability Management in Distributed Environments: A Systematic and Comprehensive Literature Review (Springer, 2023)
Amiri, Zahra; Heidari, Arash; Navimipour, Nima Jafari; Unal, Mehmet
With the galloping progress of the Internet of Things (IoT) and related technologies in multiple facets of science, distributed environments, namely cloud, edge, fog, Internet of Drones (IoD), and Internet of Vehicles (IoV), deserve special attention because they provide a resilient infrastructure in which users can be sure of a secure connection among smart devices in the network. Considering the particular parameters that shape resiliency in distributed environments, we found several gaps in the existing review papers, which did not comprehensively cover these closely related topics. We therefore put forward an evaluation based on resilient and dependable management approaches, and present a well-organized classification of distributed systems as a novel taxonomy of distributed environments. In the final stage of the research process, we selected 37 papers, classified them into seven divisions, and separately investigated each one's main ideas, advantages, challenges, and strategies, whether it involves security issues, its simulation environments, and its datasets, to draw a cohesive qualitative taxonomy of reliable methods in distributed computing environments. This comparison enables us to evaluate all the papers comprehensively and analyze their advantages and drawbacks. The SLR indicated that security, latency, and fault tolerance are the most frequent parameters utilized in the studied papers, showing that they play pivotal roles in the resiliency management of distributed environments. Most of the articles reviewed were published in 2020 and 2021. Besides, we propose several future works based on existing deficiencies that can be considered in further studies.
Article (Citation - WoS: 82; Citation - Scopus: 112)
A new lung cancer detection method based on the chest CT images using Federated Learning and blockchain systems (Elsevier, 2023)
Heidari, Arash; Javaheri, Danial; Toumaj, Shiva; Navimipour, Nima Jafari; Rezaei, Mahsa; Unal, Mehmet
With an estimated five million fatal cases each year, lung cancer is one of the significant causes of death worldwide. Lung diseases can be diagnosed with a Computed Tomography (CT) scan. The scarcity and limited reliability of expert human readers are the fundamental issues in diagnosing lung cancer patients. The main goal of this study is to detect malignant lung nodules in a CT scan of the lungs and to categorize lung cancer according to severity. In this work, cutting-edge Deep Learning (DL) algorithms were used to detect the location of cancerous nodules. A real-life issue is sharing data among hospitals around the world while respecting the organizations' privacy concerns; the main problems in training a global DL model are creating a collaborative model and maintaining privacy. This study presents an approach that takes a modest amount of data from multiple hospitals and uses blockchain-based Federated Learning (FL) to train a global DL model. The data were authenticated using blockchain technology, and FL trained the model internationally while maintaining each organization's anonymity. First, we present a data-normalization approach that addresses the variability of data obtained from various institutions using various CT scanners. Then, using a CapsNets method, we classify lung cancer patients in local mode. Finally, we devise a way to train a global model cooperatively, utilizing blockchain technology and FL while maintaining anonymity. We also gathered data from real-life lung cancer patients for testing purposes. The suggested method was trained and tested on the Cancer Imaging Archive (CIA) dataset, the Kaggle Data Science Bowl (KDSB), LUNA 16, and a local dataset. Finally, we performed extensive experiments with Python and its well-known libraries, such as Scikit-Learn and TensorFlow, to evaluate the suggested method. The findings showed that the method effectively detects lung cancer patients, delivering 99.69% accuracy with the smallest possible categorization error.

Review (Citation - WoS: 24; Citation - Scopus: 23)
The History of Computing in Iran (Persia) - Since the Achaemenid Empire (MDPI, 2022)
Heidari, Arash; Navimipour, Nima Jafari; Unal, Mehmet
Persia was the early name for the territory that is currently recognized as Iran. Iran's proud history starts with the Achaemenid Empire, which began in the 6th century BCE (c. 550). From the empire's early days, the Iranians provided numerous innovative ideas, breakthroughs, and technologies that are often taken for granted today or whose origins are mostly unknown. To trace the history of computing systems in Iran, we must pay attention to everything that can perform computing. Because of Iran's historical position in the ancient ages, studying the history of computing in this country is an exciting subject. The history of computing in Iran started very far from the digital systems of the 20th century: the Achaemenid Empire provides the first recorded sign of computing systems in Persia, beginning with the invention of mathematical theories and methods for performing simple calculations. This paper attempts to shed light on the elements of Persia's computing heritage, dating back to 550 BC. We look at both the ancient and the modern periods of computing. In the ancient section, we go through the history of computing in the Achaemenid Empire, followed by a description of the tools used for calculations. In the modern section, we discuss the transition to the Internet era, the formation of a computer-related educational system, the evolution of data networks, the growth of the software and hardware industries, cloud computing, and the Internet of Things (IoT). We highlight the findings in each period that involve vital sparks of computing evolution, tracing the gradual growth of computing in Persia from its early stages to the present. The findings indicate that the development of computing and related technologies has been accelerating rapidly in recent years.
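The lung-cancer entry above ("A new lung cancer detection method...") trains a global model across hospitals with federated learning. The sketch below shows one bare federated-averaging (FedAvg) round in NumPy; the linear model, three toy "hospitals", and plain weighted averaging stand in for the paper's CapsNet and blockchain authentication, which are not reproduced here.

```python
# Bare federated-averaging (FedAvg) rounds on a toy linear model (illustrative).
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])

def local_update(w, X, y, lr=0.1, epochs=20):
    """Each hospital refines the global weights on its private data only."""
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)   # least-squares gradient
        w = w - lr * grad
    return w

# Three hospitals with private datasets that never leave the site.
hospitals = []
for _ in range(3):
    X = rng.normal(size=(40, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=40)
    hospitals.append((X, y))

w_global = np.zeros(2)
for _ in range(10):
    local_ws = [local_update(w_global, X, y) for X, y in hospitals]
    sizes = np.array([len(y) for _, y in hospitals], dtype=float)
    # Server aggregates models weighted by local dataset size (FedAvg).
    w_global = np.average(local_ws, axis=0, weights=sizes)

print("recovered weights:", w_global)   # approaches [2, -1] without pooling data
```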

