Browsing by Author "Unal, Mehmet"
Review (Citation Count: 24)
Adventures in data analysis: a systematic review of Deep Learning techniques for pattern recognition in cyber-physical-social systems (Springer, 2023)
Amiri, Zahra; Heidari, Arash; Navimipour, Nima Jafari; Unal, Mehmet; Mousavi, Ali
Machine Learning (ML) and Deep Learning (DL) have achieved high success in many textual, auditory, medical imaging, and visual pattern recognition tasks. Given the importance and high accuracy of ML/DL in recognizing patterns, many researchers have proposed solutions for improving pattern recognition performance using ML/DL methods. Due to the importance of intelligent machine pattern recognition in image processing and the outstanding role of big data in generating state-of-the-art modern and classical approaches to pattern recognition, we conducted a thorough Systematic Literature Review (SLR) of DL approaches for big data pattern recognition. We discuss different research issues and possible paths in which the abovementioned techniques might help materialize the pattern recognition notion. We classified 60 of the most cutting-edge articles addressing pattern recognition issues into ten categories based on the DL/ML method used: Convolutional Neural Network (CNN), Recurrent Neural Network (RNN), Generative Adversarial Network (GAN), Autoencoder (AE), Ensemble Learning (EL), Reinforcement Learning (RL), Random Forest (RF), Multilayer Perceptron (MLP), Long Short-Term Memory (LSTM), and hybrid methods. The SLR method was used to investigate each category in terms of influential properties such as the main idea, advantages, disadvantages, strategies, simulation environment, datasets, and security issues. The results indicate that most of the articles were published in 2021. Moreover, important parameters such as accuracy, adaptability, fault tolerance, security, scalability, and flexibility were involved in these investigations.

Review (Citation Count: 80)
Applications of ML/DL in the management of smart cities and societies based on new trends in information technologies: A systematic literature review (Elsevier, 2022)
Heidari, Arash; Navimipour, Nima Jafari; Unal, Mehmet
The goal of managing smart cities and societies is to maximize the efficient use of finite resources while enhancing the quality of life. To establish a sustainable urban existence, smart cities use new technologies such as the Internet of Things (IoT), Internet of Drones (IoD), and Internet of Vehicles (IoV). The data created by these technologies are analyzed to obtain new information for increasing the efficiency and effectiveness of smart societies and cities. Smart traffic management, smart power and energy management, city surveillance, smart buildings, and patient healthcare monitoring are the most common applications in smart cities. Artificial Intelligence (AI), Machine Learning (ML), and Deep Learning (DL) approaches all hold considerable promise for managing automated activities in smart cities. Therefore, we discuss different research issues and possible research paths in which the aforementioned techniques might help materialize the smart city notion. The goal of this research is to offer a better understanding of (1) the fundamentals of smart city and society management, (2) the most recent developments and breakthroughs in this field, (3) the benefits and drawbacks of existing methods, and (4) areas that require further investigation and consideration. IoT, cloud computing, edge computing, fog computing, IoD, IoV, and hybrid models are the seven key emerging developments in information technology used in this paper to categorize the state-of-the-art techniques. The results indicate that the Convolutional Neural Network (CNN) and Long Short-Term Memory (LSTM) are the most commonly used ML methods in the publications. The majority of papers address smart cities' power and energy management. Furthermore, most papers have concentrated on improving only one parameter, with accuracy receiving the most attention. In addition, Python is the most frequently used language, appearing in 69.8% of the papers.

Article (Citation Count: 1)
A cloud service composition method using a fuzzy-based particle swarm optimization algorithm (Springer, 2023)
Nazif, Habibeh; Nassr, Mohammad; Al-Khafaji, Hamza Mohammed Ridha; Navimipour, Nima Jafari; Unal, Mehmet
In today's dynamic business landscape, organizations heavily rely on cloud computing to leverage the power of virtualization and resource sharing. Service composition plays a vital role in cloud computing, combining multiple cloud services to fulfill complex user requests. Service composition in cloud computing presents several challenges, including service heterogeneity, dynamic service availability, Quality of Service (QoS) constraints, and scalability issues. Traditional approaches often struggle to handle these challenges efficiently, leading to suboptimal resource utilization and poor service performance. This work presents a fuzzy-based strategy for composing cloud services to overcome these obstacles. The fact that service composition is NP-hard has prompted the use of a range of metaheuristic algorithms in numerous papers; therefore, Particle Swarm Optimization (PSO) is applied in this paper to solve the problem. Implementing a fuzzy-based PSO for service composition requires defining the fuzzy membership functions and rules based on the specific service domain. Once the fuzzy logic components are established, they can be integrated into the PSO algorithm. The simulation results show the high efficiency of the proposed method in decreasing latency, cost, and response time.
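As a rough illustration of how fuzzy scoring can be folded into a PSO fitness function for service composition, the sketch below rounds particle positions to candidate-service indices and rates the aggregated latency, cost, and response time with simple triangular memberships. The QoS table, membership bounds, weights, and swarm settings are all invented for demonstration; this is a minimal sketch, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical QoS table: qos[task, candidate] = (latency, cost, response_time)
n_tasks, n_cands = 5, 4
qos = rng.uniform(1.0, 10.0, size=(n_tasks, n_cands, 3))

def tri_membership(x, low, high):
    """Triangular 'good QoS' membership: 1 at low, falling to 0 at high."""
    return float(np.clip((high - x) / (high - low), 0.0, 1.0))

def fitness(position):
    """Higher is better: fuzzy satisfaction of aggregated latency/cost/response time."""
    idx = np.clip(position.astype(int), 0, n_cands - 1)
    lat, cost, rt = qos[np.arange(n_tasks), idx].sum(axis=0)
    # Fuzzy rules collapsed here into a weighted aggregation of three memberships.
    lo, hi = n_tasks * 1.0, n_tasks * 10.0
    return 0.4 * tri_membership(lat, lo, hi) + 0.3 * tri_membership(cost, lo, hi) + 0.3 * tri_membership(rt, lo, hi)

# Plain PSO over candidate indices (continuous positions rounded to services).
n_particles, iters, w, c1, c2 = 20, 50, 0.7, 1.5, 1.5
pos = rng.uniform(0, n_cands, size=(n_particles, n_tasks))
vel = np.zeros_like(pos)
pbest, pbest_fit = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_fit.argmax()].copy()

for _ in range(iters):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0, n_cands - 1e-9)
    fit = np.array([fitness(p) for p in pos])
    improved = fit > pbest_fit
    pbest[improved], pbest_fit[improved] = pos[improved], fit[improved]
    gbest = pbest[pbest_fit.argmax()].copy()

print("best composition:", gbest.astype(int), "fitness:", round(float(pbest_fit.max()), 3))
```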
Review (Citation Count: 5)
Deepfake detection using deep learning methods: A systematic and comprehensive review (Wiley Periodicals, Inc., 2024)
Dağ, Hasan; Navimipour, Nima Jafari; Unal, Mehmet
Deep Learning (DL) has been effectively utilized in various complicated challenges in healthcare, industry, and academia for various purposes, including thyroid diagnosis, lung nodule recognition, computer vision, large data analytics, and human-level control. Nevertheless, developments in digital technology have also been used to produce software that poses a threat to democracy, national security, and confidentiality. Deepfake is one such DL-powered application that has lately surfaced. Deepfake systems can create fake content, primarily by replacing scenes in images, videos, and sound recordings, that humans cannot tell apart from the real thing. Various technologies have put the capacity to produce synthetic speech, images, or video at our fingertips. Furthermore, video and image frauds are now so convincing that it is hard to distinguish between false and authentic content with the naked eye. This can result in various issues, ranging from deceiving public opinion to using doctored evidence in court. For these reasons, it is critical to have technologies that can assist us in discerning reality. This study gives a complete assessment of the literature on deepfake detection strategies using DL-based algorithms. We categorize deepfake detection methods in this work based on their applications, which include video detection, image detection, audio detection, and hybrid multimedia detection. The objective of this paper is to give the reader a better knowledge of (1) how deepfakes are generated and identified, (2) the latest developments and breakthroughs in this realm, (3) weaknesses of existing security methods, and (4) areas requiring more investigation and consideration. The results suggest that the Convolutional Neural Network (CNN) methodology is the most often employed DL method in publications. The majority of the articles are on the subject of video deepfake detection, and most focused on enhancing only one parameter, with accuracy receiving the most attention. This article is categorized under: Technologies > Machine Learning; Algorithmic Development > Multimedia; Application Areas > Science and Technology.
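Since the review finds CNNs to be the dominant detection approach, here is a minimal, hypothetical frame-level real/fake classifier in Keras. The architecture, input size, and commented-out training call are illustrative assumptions only and do not correspond to any specific detector surveyed in the paper.

```python
import tensorflow as tf

# Minimal illustrative real/fake frame classifier, not a published detector.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(128, 128, 3)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # P(frame is fake)
])
model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=["accuracy", tf.keras.metrics.AUC()])
model.summary()

# Training would use face crops extracted from real and manipulated videos, e.g.:
# model.fit(train_frames, train_labels, validation_data=(val_frames, val_labels), epochs=10)
```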
Article (Citation Count: 5)
A Fire Evacuation and Control System in Smart Buildings Based on the Internet of Things and a Hybrid Intelligent Algorithm (MDPI, 2023)
Jafari Navimipour, Nima; Fakhruldeen, Hassan Falah; Meqdad, Maytham N.; Ibrahim, Banar Fareed; Unal, Mehmet
Concerns about fire risk reduction and rescue tactics have been raised in light of recent incidents involving flammable cladding systems and fast fire spread in high-rise buildings worldwide. Thus, governments, engineers, and building designers should prioritize fire safety. During a fire event, an emergency evacuation system that guides evacuees to exit gates as fast as possible along dynamic and safe routes is indispensable in large buildings. Evacuation plans should evaluate whether paths inside the structures are appropriate for evacuation, considering the building's electric power, electric controls, energy usage, and fire/smoke protection. Meanwhile, the Internet of Things (IoT) is emerging as a catalyst for creating and optimizing the supply and consumption of intelligent services to achieve an efficient system. Smart buildings use IoT sensors for monitoring indoor environmental parameters such as temperature, humidity, luminosity, and air quality. This research proposes a new IoT-based fire evacuation and control system for smart buildings that efficiently directs individuals along an evacuation route during fire incidents. It utilizes a hybrid nature-inspired optimization approach combining the Emperor Penguin Colony and Particle Swarm Optimization algorithms (EPC-PSO). The EPC algorithm is regulated by the penguins' body heat radiation and spiral-like movement inside their colony; this behavior improves the PSO algorithm for faster convergence, while the particle concept of PSO is used to update the penguins' positions. Experimental results showed that the proposed method handles cost, energy consumption, and execution-time-related challenges accurately and effectively to ensure minimum casualties and resource losses. The method decreased execution time and cost by 10.41% and 25%, respectively, compared to other algorithms. Moreover, to achieve a sustainable system, the proposed method decreased energy consumption by 11.90% compared to other algorithms.
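To make the hybrid idea more concrete, the sketch below adds a decaying, spiral-like pull toward the best-known solution (loosely inspired by the penguins' heat-driven spiral movement) on top of a standard PSO velocity update, minimizing a toy route-cost function. The objective, coefficients, and spiral term are assumptions for illustration and are not the paper's EPC-PSO formulation.

```python
import numpy as np

rng = np.random.default_rng(1)

def route_cost(x):
    """Toy surrogate for evacuation cost (distance/congestion); not the paper's objective."""
    return np.sum((x - 3.0) ** 2) + 2.0 * np.abs(np.sin(x)).sum()

dim, n, iters = 6, 25, 100
pos = rng.uniform(-10, 10, (n, dim))
vel = np.zeros_like(pos)
pbest, pbest_cost = pos.copy(), np.array([route_cost(p) for p in pos])
gbest = pbest[pbest_cost.argmin()].copy()

for t in range(iters):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    # Standard PSO pull toward personal and global bests ...
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    # ... plus a spiral drift toward the "warm" colony centre (gbest),
    # damped over time like fading body-heat radiation.
    theta = rng.uniform(0, 2 * np.pi, (n, dim))
    spiral = np.exp(-3.0 * t / iters) * np.cos(theta) * (gbest - pos)
    pos = pos + vel + spiral
    cost = np.array([route_cost(p) for p in pos])
    better = cost < pbest_cost
    pbest[better], pbest_cost[better] = pos[better], cost[better]
    gbest = pbest[pbest_cost.argmin()].copy()

print("best route cost:", round(float(pbest_cost.min()), 4))
```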
Article (Citation Count: 15)
A Fuzzy-Based Method for Objects Selection in Blockchain-Enabled Edge-IoT Platforms Using a Hybrid Multi-Criteria Decision-Making Model (MDPI, 2022)
Gardas, Bhaskar B.; Heidari, Arash; Navimipour, Nima Jafari; Unal, Mehmet
The broad availability of connected and intelligent devices has increased the demand for Internet of Things (IoT) applications that require more intensive data storage and processing. However, cloud-based IoT systems are typically located far from end-users and face several issues, including high cloud server load, slow response times, and a lack of global mobility. Some of these flaws can be addressed with edge computing. In addition, node selection helps avoid common IoT difficulties related to network lifespan, resource allocation, and trust in the acquired data by selecting the correct nodes at a suitable time. On the other hand, the interconnection of IoT, edge, and blockchain technologies gives a fresh perspective on access control framework design. This article provides a novel node selection approach for blockchain-enabled edge IoT that offers quick and dependable node selection. Fuzzy logic is used as an approximation logic to manage numerical and linguistic data simultaneously. In addition, the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS), a powerful tool for examining Multi-Criteria Decision-Making (MCDM) problems, is used. The suggested fuzzy-based technique employs three input criteria to select the correct IoT node for a given mission in IoT-edge situations. The outcomes of the experiments indicate that the proposed framework enhances the parameters under consideration.
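For readers unfamiliar with TOPSIS, the following sketch ranks four candidate nodes by their closeness to an ideal solution. The three criteria, their weights, and the (already defuzzified) scores are invented for demonstration; the fuzzy variant used in the paper would carry fuzzy numbers through these steps instead of crisp values.

```python
import numpy as np

# Hypothetical crisp scores for four candidate IoT nodes on three criteria
# (the paper's exact criteria and weights are not reproduced here).
# Columns: residual energy (benefit), trust score (benefit), latency in ms (cost)
scores = np.array([
    [0.80, 0.70, 12.0],
    [0.60, 0.90,  8.0],
    [0.95, 0.50, 20.0],
    [0.70, 0.80, 10.0],
])
weights = np.array([0.40, 0.35, 0.25])
benefit = np.array([True, True, False])

# 1) Vector-normalize each criterion, 2) apply the weights.
norm = scores / np.linalg.norm(scores, axis=0)
weighted = norm * weights

# 3) Ideal and anti-ideal solutions, respecting benefit/cost direction.
ideal = np.where(benefit, weighted.max(axis=0), weighted.min(axis=0))
anti = np.where(benefit, weighted.min(axis=0), weighted.max(axis=0))

# 4) Closeness coefficient: distance to anti-ideal over total distance.
d_plus = np.linalg.norm(weighted - ideal, axis=1)
d_minus = np.linalg.norm(weighted - anti, axis=1)
closeness = d_minus / (d_plus + d_minus)

print("closeness:", closeness.round(3), "-> selected node:", int(closeness.argmax()))
```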
Review (Citation Count: 10)
The History of Computing in Iran (Persia) - Since the Achaemenid Empire (MDPI, 2022)
Heidari, Arash; Navimipour, Nima Jafari; Unal, Mehmet
Persia was the early name for the territory that is currently recognized as Iran. Iran's proud history starts with the Achaemenid Empire, which began in the 6th century BCE (c. 550 BCE). From the Achaemenid Empire's early days, Iranians provided numerous innovative ideas, breakthroughs, and technologies that are often taken for granted today or whose origins are mostly unknown. To trace the history of computing systems in Iran, we must pay attention to everything that can perform computing. Because of Iran's historical position in the ancient world, studying the history of computing in this country is an exciting subject. The history of computing in Iran started very far from the digital systems of the 20th century; the Achaemenid Empire can be cited as the first recorded sign of computing systems in Persia, beginning with the invention of mathematical theories and methods for performing simple calculations. This paper also attempts to shed light on elements of Persia's computing heritage, dating back to 550 BC. We look at both the ancient and modern periods of computing. In the ancient section, we go through the history of computing in the Achaemenid Empire, followed by a description of the tools used for calculations. Additionally, the transition to the Internet era, the formation of a computer-related educational system, the evolution of data networks, the growth of the software and hardware industry, cloud computing, and the Internet of Things (IoT) are all discussed in the modern section. We highlight the findings in each period that mark vital sparks of computing evolution, such as the gradual growth of computing in Persia from its early stages to the present. The findings indicate that the development of computing and related technologies has been accelerating rapidly in recent years.

Article (Citation Count: 13)
Implementation of a Product-Recommender System in an IoT-Based Smart Shopping Using Fuzzy Logic and Apriori Algorithm (IEEE, 2022)
Yan, Shu-Rong; Pirooznia, Sina; Heidari, Arash; Navimipour, Nima Jafari; Unal, Mehmet
The Internet of Things (IoT) has recently become important in accelerating various functions, from manufacturing and business to healthcare and retail. A recommender system can handle the problem of information and data buildup in IoT-based smart commerce systems. These technologies are designed to determine users' preferences and filter out irrelevant information. Identifying items and services that customers might be interested in, and then convincing them to buy, is one of the essential parts of effective IoT-based smart shopping systems. Due to the relevance of product-recommender systems from both the consumer and shop perspectives, this article presents a new IoT-based smart product-recommender system based on the apriori algorithm and fuzzy logic. The suggested technique employs association rules to display the interdependencies and linkages among many data objects. The most common use of association rule discovery is shopping cart analysis, in which customers' buying habits and behavior are studied based on the goods they place in their shopping carts. The association rules are generated using a fuzzy system, and the apriori algorithm then selects the product based on the provided fuzzy association rules. The results revealed that the suggested technique achieved acceptable results in terms of mean absolute error, root-mean-square error, precision, recall, diversity, novelty, and catalog coverage when compared to cutting-edge methods. Finally, the method helps increase recommender systems' diversity in IoT-based smart shopping.
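The apriori step can be illustrated with a few toy shopping carts: frequent itemsets are grown level by level, then turned into association rules that clear a confidence threshold. The carts, thresholds, and the crisp (non-fuzzy) support counting below are simplifications for demonstration, not the paper's fuzzy rule generator.

```python
from itertools import combinations

# Toy shopping carts; a real system would mine thousands of IoT-logged baskets.
carts = [
    {"milk", "bread", "butter"},
    {"milk", "bread"},
    {"bread", "butter"},
    {"milk", "butter"},
    {"milk", "bread", "butter", "jam"},
]
min_support, min_conf = 0.4, 0.6

def support(itemset):
    """Fraction of carts containing every item of the itemset."""
    return sum(itemset <= cart for cart in carts) / len(carts)

# Apriori: grow candidate itemsets level by level, keeping only frequent ones.
items = sorted({i for cart in carts for i in cart})
frequent = {}
level = [frozenset([i]) for i in items]
while level:
    kept = {s: support(s) for s in level if support(s) >= min_support}
    frequent.update(kept)
    # Join step: combine frequent k-itemsets into (k+1)-item candidates.
    level = list({a | b for a in kept for b in kept if len(a | b) == len(a) + 1})

# Derive rules A -> B with enough confidence (a fuzzy layer could instead grade
# purchase quantities before rule generation, as the paper suggests).
for s in frequent:
    if len(s) < 2:
        continue
    for r in range(1, len(s)):
        for antecedent in map(frozenset, combinations(s, r)):
            conf = frequent[s] / frequent[antecedent]
            if conf >= min_conf:
                print(set(antecedent), "->", set(s - antecedent), f"conf={conf:.2f}")
```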
Review (Citation Count: 48)
Machine learning applications for COVID-19 outbreak management (Springer London Ltd, 2022)
Heidari, Arash; Navimipour, Nima Jafari; Unal, Mehmet; Toumaj, Shiva
Recently, the COVID-19 epidemic has resulted in millions of deaths and has impacted practically every area of human life. Several machine learning (ML) approaches are employed in the medical field in many applications, including detecting and monitoring patients, notably in COVID-19 management. Different medical imaging systems, such as computed tomography (CT) and X-ray, offer ML an excellent platform for combating the pandemic. Because of this need, a significant quantity of research has been carried out; thus, in this work, we employed a systematic literature review (SLR) to cover all aspects of outcomes from related papers. Imaging methods, survival analysis, forecasting, economic and geographical issues, monitoring methods, medication development, and hybrid applications are the seven key categories of applications employed in the COVID-19 pandemic. Convolutional neural networks (CNNs), long short-term memory networks (LSTMs), recurrent neural networks (RNNs), generative adversarial networks (GANs), autoencoders, random forests, and other ML techniques are frequently used in such scenarios. Next, cutting-edge applications related to ML techniques for pandemic medical issues are discussed, and various problems and challenges linked with ML applications for this pandemic are reviewed. It is expected that additional research will be conducted in the upcoming years to limit the spread of the virus and improve catastrophe management. According to the data, most papers are evaluated mainly on characteristics such as flexibility and accuracy, while other factors such as safety are overlooked. Also, Keras was the most frequently used library in the studies reviewed, appearing in 24.4 percent of them. Furthermore, medical imaging systems are employed for diagnostic purposes in 20.4 percent of applications.

Review (Citation Count: 35)
Machine Learning Applications in Internet-of-Drones: Systematic Review, Recent Deployments, and Open Issues (Association for Computing Machinery, 2023)
Heidari, Arash; Navimipour, Nima Jafari; Unal, Mehmet; Zhang, Guodao
Deep Learning (DL) and Machine Learning (ML) are effectively utilized in various complicated challenges in healthcare, industry, and academia. The Internet of Drones (IoD) has lately emerged due to its high adjustability to a broad range of unpredictable circumstances. In addition, Unmanned Aerial Vehicles (UAVs) can be utilized efficiently in a multitude of scenarios, including search and rescue missions, farming, mission-critical services, and surveillance systems, owing to technical and practical benefits such as low movement, the capacity to lengthen wireless coverage zones, and the ability to reach places unreachable to human beings. In many studies, IoD and UAV are used interchangeably. Besides, drones enhance the efficiency aspects of various network topologies, including delay, throughput, interconnectivity, and dependability. Nonetheless, the deployment of drone systems raises various challenges relating to the inherent unpredictability of the wireless medium, the high degree of mobility, and limited battery life, which can result in rapid topological changes. In this paper, the IoD is first explained in terms of potential applications and comparative operational scenarios. Then, we classify ML in the IoD-UAV world according to its applications, including resource management, surveillance and monitoring, object detection, power control, energy management, mobility management, and security management. This research aims to supply readers with a better understanding of (1) the fundamentals of IoD/UAV, (2) the most recent developments and breakthroughs in this field, (3) the benefits and drawbacks of existing methods, and (4) areas that need further investigation and consideration. The results suggest that the Convolutional Neural Network (CNN) method is the most often employed ML method in publications. Most papers address resource and mobility management, and most articles have focused on enhancing only one parameter, with accuracy receiving the most attention. Also, Python is the most commonly used language, appearing in 90% of the papers, and 2021 saw the most papers published.

Article (Citation Count: 29)
A new lung cancer detection method based on the chest CT images using Federated Learning and blockchain systems (Elsevier, 2023)
Heidari, Arash; Javaheri, Danial; Toumaj, Shiva; Navimipour, Nima Jafari; Rezaei, Mahsa; Unal, Mehmet
With an estimated five million fatal cases each year, lung cancer is one of the significant causes of death worldwide. Lung diseases can be diagnosed with a Computed Tomography (CT) scan. The scarcity of trustworthy expert review is the fundamental issue in diagnosing lung cancer patients. The main goal of this study is to detect malignant lung nodules in a CT scan of the lungs and categorize lung cancer according to severity. In this work, cutting-edge Deep Learning (DL) algorithms were used to detect the location of cancerous nodules. A further real-life issue is sharing data with hospitals around the world while respecting the organizations' privacy concerns, and the main problems for training a global DL model are creating a collaborative model and maintaining privacy. This study presents an approach that takes a modest amount of data from multiple hospitals and uses blockchain-based Federated Learning (FL) to train a global DL model. The data were authenticated using blockchain technology, and FL trained the model internationally while maintaining the organizations' anonymity. First, we presented a data normalization approach that addresses the variability of data obtained from various institutions using various CT scanners. Furthermore, using a CapsNets method, we classified lung cancer patients in local mode. Finally, we devised a way to train a global model cooperatively utilizing blockchain technology and FL while maintaining anonymity. We also gathered data from real-life lung cancer patients for testing purposes. The suggested method was trained and tested on the Cancer Imaging Archive (CIA) dataset, the Kaggle Data Science Bowl (KDSB), LUNA 16, and a local dataset. Finally, we performed extensive experiments with Python and its well-known libraries, such as Scikit-Learn and TensorFlow, to evaluate the suggested method. The findings showed that the method effectively detects lung cancer patients. The technique delivered 99.69% accuracy with the smallest possible categorization error.
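The federated part of such a pipeline can be reduced to a few lines: each site trains on its own data and only model weights are averaged centrally. The sketch below uses a toy logistic-regression learner and synthetic "hospital" data in place of the paper's CapsNet and blockchain layer, so every name and number here is purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

def local_update(w, X, y, lr=0.1, epochs=20):
    """A few epochs of logistic-regression gradient descent, standing in for each
    hospital's local deep-model training."""
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)
    return w

# Three simulated hospitals with differently distributed (non-IID) local data.
hospitals = []
for shift in (0.0, 0.5, 1.0):
    X = rng.normal(shift, 1.0, (200, 5))
    y = (X.sum(axis=1) + rng.normal(0, 0.5, 200) > 2.5 * shift).astype(float)
    hospitals.append((X, y))

global_w = np.zeros(5)
for _ in range(10):  # communication rounds; a ledger could additionally log each round
    local_ws, sizes = [], []
    for X, y in hospitals:
        local_ws.append(local_update(global_w.copy(), X, y))
        sizes.append(len(y))
    # FedAvg: weight each hospital's model by its sample count; raw data never leaves the site.
    global_w = np.average(local_ws, axis=0, weights=sizes)

print("aggregated global weights:", global_w.round(3))
```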
Article (Citation Count: 3)
A Novel Blockchain-Based Deepfake Detection Method Using Federated and Deep Learning Models (Springer, 2024)
Dağ, Hasan; Navimipour, Nima Jafari; Talebi, Samira; Unal, Mehmet
In recent years, the proliferation of deep learning (DL) techniques has given rise to a significant challenge in the form of deepfake videos, posing a grave threat to the authenticity of media content. With the rapid advancement of DL technology, the creation of convincingly realistic deepfake videos has become increasingly prevalent, raising serious concerns about the potential misuse of such content. Deepfakes have the potential to undermine trust in visual media, with implications for fields as diverse as journalism, entertainment, and security. This study presents an innovative solution by harnessing blockchain-based federated learning (FL) to address this issue, focusing on preserving data source anonymity. The approach combines the strengths of SegCaps and convolutional neural network (CNN) methods for improved image feature extraction, followed by capsule network (CN) training to enhance generalization. A novel data normalization technique is introduced to tackle data heterogeneity stemming from diverse global data sources. Moreover, transfer learning (TL) and preprocessing methods are deployed to elevate DL performance. These efforts culminate in collaborative global model training facilitated by blockchain and FL while maintaining the utmost confidentiality of data sources. The effectiveness of the methodology is rigorously tested and validated through extensive experiments, which reveal a substantial improvement in accuracy, with an average increase of 6.6% compared to six benchmark models, and a 5.1% enhancement in the area under the curve (AUC) metric, underscoring its ability to outperform existing detection methods. These results substantiate the effectiveness of the proposed solution in countering the proliferation of deepfake content. By leveraging existing data resources and the power of FL and blockchain technology, this work addresses a critical need for media authenticity and security; as the threat of deepfake videos continues to grow, it provides an effective means to protect the integrity and trustworthiness of visual media, with far-reaching implications for both industry and society, and stands as a significant step toward countering the deepfake menace and preserving the authenticity of visual content in a rapidly evolving digital landscape.
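One simple reading of cross-source normalization is to standardize each contributing site's data with its own statistics before federated training, so that differences in cameras and codecs do not dominate learning. The sketch below applies per-source standardization to synthetic frame batches; the sources, shapes, and statistics are invented, and this is not the paper's specific normalization technique.

```python
import numpy as np

rng = np.random.default_rng(3)

# Frames from two hypothetical sources with very different intensity statistics
# (different cameras/codecs), standing in for the cross-source heterogeneity the paper targets.
source_a = rng.normal(120, 40, size=(100, 64, 64)).clip(0, 255)
source_b = rng.normal(60, 15, size=(100, 64, 64)).clip(0, 255)

def per_source_standardize(frames):
    """Normalize each source with its own mean/std so raw pixel distributions never leave the site."""
    mean, std = frames.mean(), frames.std()
    return (frames - mean) / (std + 1e-8)

norm_a, norm_b = per_source_standardize(source_a), per_source_standardize(source_b)
for name, arr in (("A", norm_a), ("B", norm_b)):
    print(f"source {name}: mean={arr.mean():+.3f}, std={arr.std():.3f}")
```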
Article (Citation Count: 4)
Opportunities and challenges of artificial intelligence and distributed systems to improve the quality of healthcare service (Elsevier, 2024)
Aminizadeh, Sarina; Heidari, Arash; Dehghan, Mahshid; Toumaj, Shiva; Rezaei, Mahsa; Navimipour, Nima Jafari; Unal, Mehmet
The healthcare sector, characterized by vast datasets and many diseases, is pivotal in shaping community health and overall quality of life. Traditional healthcare methods, often characterized by limitations in disease prevention, predominantly react to illnesses after their onset rather than proactively averting them. The advent of Artificial Intelligence (AI) has ushered in a wave of transformative applications designed to enhance healthcare services, with Machine Learning (ML) as a noteworthy subset of AI. ML empowers computers to analyze extensive datasets, while Deep Learning (DL), a specific ML methodology, excels at extracting meaningful patterns from these data troves. Despite notable technological advancements in recent years, the full potential of these applications within medical contexts remains largely untapped, primarily due to the medical community's cautious stance toward novel technologies. The motivation of this paper lies in recognizing the pivotal role of the healthcare sector in community well-being and the necessity for a shift toward proactive healthcare approaches. To our knowledge, there is a notable absence of a comprehensive published review that delves into ML, DL, and distributed systems, all aimed at elevating the Quality of Service (QoS) in healthcare. This study seeks to bridge this gap by presenting a systematic and organized review of prevailing ML, DL, and distributed system algorithms as applied in healthcare settings. Within our work, we outline key challenges that both current and future developers may encounter, with a particular focus on aspects such as approach, data utilization, strategy, and development processes. Our study findings reveal that the Internet of Things (IoT) stands out as the most frequently utilized platform (44.3%), with disease diagnosis emerging as the predominant healthcare application (47.8%). Notably, discussions center significantly on the prevention and identification of cardiovascular diseases (29.2%). The studies under examination employ a diverse range of ML and DL methods, along with distributed systems, with Convolutional Neural Networks (CNNs) being the most commonly used (16.7%), followed by Long Short-Term Memory (LSTM) networks (14.6%) and shallow learning networks (12.5%). In evaluating QoS, the predominant emphasis revolves around the accuracy parameter (80%). This study highlights how ML, DL, and distributed systems reshape healthcare; it contributes to advancing healthcare quality, bridging the gap between technology and medical adoption, and benefiting practitioners and patients.

Article (Citation Count: 39)
A privacy-aware method for COVID-19 detection in chest CT images using lightweight deep conventional neural network and blockchain (Pergamon-Elsevier Science Ltd, 2022)
Heidari, Arash; Toumaj, Shiva; Navimipour, Nima Jafari; Unal, Mehmet
With the global spread of the COVID-19 epidemic, a reliable method is required for identifying COVID-19 patients. The biggest issue in detecting the virus is a lack of testing kits that are both reliable and affordable. Due to the virus's rapid dissemination, medical professionals have trouble finding positive patients. A further real-life issue is sharing data with hospitals around the world while considering the organizations' privacy concerns: the primary concerns for training a global Deep Learning (DL) model are creating a collaborative platform and preserving personal confidentiality, and another challenge is exchanging data with healthcare institutions while protecting the organizations' confidentiality. This paper provides a model that receives a small quantity of data from various sources, such as organizations or sections of hospitals, and trains a global DL model utilizing blockchain-based Convolutional Neural Networks (CNNs). In addition, we use the Transfer Learning (TL) technique to initialize layers rather than initializing them randomly, and we determine which layers should be removed before selection. The blockchain system verifies the data, and the DL method trains the model globally while keeping the institutions' confidentiality. Furthermore, we gathered data from actual, newly diagnosed COVID-19 patients. Finally, we ran extensive experiments utilizing Python and its libraries, such as Scikit-Learn and TensorFlow, to assess the proposed method. We evaluated the work using five different datasets, including the Boukan Dr. Shahid Gholipour hospital, Tabriz Emam Reza hospital, Mahabad Emam Khomeini hospital, Maragheh Dr. Beheshti hospital, and Miandoab Abbasi hospital datasets, and our technique outperforms state-of-the-art methods on average in terms of precision (by 2.7%), recall (3.1%), F1 (2.9%), and accuracy (2.8%).
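As a rough illustration of the transfer-learning idea (initializing from pretrained weights instead of random values and freezing early layers), here is a minimal Keras sketch. The backbone choice (DenseNet121), input size, classification head, and the commented-out training call are assumptions for demonstration; the paper's actual lightweight architecture and layer-pruning step are not reproduced.

```python
import tensorflow as tf

# Start from ImageNet weights rather than random initialization and keep the
# pretrained feature extractor frozen at first.
base = tf.keras.applications.DenseNet121(
    weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base.trainable = False

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(2, activation="softmax"),  # COVID-19 vs. non-COVID CT slice
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()

# Fine-tuning on locally held CT slices would then look like:
# model.fit(ct_train, labels_train, validation_data=(ct_val, labels_val), epochs=5)
```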
Review (Citation Count: 23)
Resilient and dependability management in distributed environments: a systematic and comprehensive literature review (Springer, 2023)
Amiri, Zahra; Heidari, Arash; Navimipour, Nima Jafari; Unal, Mehmet
With the galloping progress of the Internet of Things (IoT) and related technologies in multiple facets of science, distributed environments, namely cloud, edge, fog, the Internet of Drones (IoD), and the Internet of Vehicles (IoV), attract special attention because they provide a resilient infrastructure in which users can be sure of a secure connection among the smart devices in the network. By considering the particular parameters that shape resiliency in distributed environments, we found several gaps in existing review papers, which do not cover closely related topics as comprehensively as we do. Based on resilient and dependable management approaches, we therefore put forward an evaluation in this regard. As a novel taxonomy of distributed environments, we present a well-organized classification of distributed systems. In the final stage of the research process, we selected 37 papers. We classified them into seven divisions and separately investigated each one in terms of its main ideas, advantages, challenges, strategies, security issues, simulation environments, and datasets to draw a cohesive qualitative taxonomy of reliable methods in distributed computing environments. This comparison enables us to evaluate all papers comprehensively and analyze their advantages and drawbacks. The SLR indicated that security, latency, and fault tolerance are the most frequently studied parameters in the reviewed papers, showing that they play pivotal roles in the resiliency management of distributed environments. Most of the articles reviewed were published in 2020 and 2021. Finally, we propose several future research directions based on existing deficiencies that can be considered for further studies.

Article (Citation Count: 36)
A Secure Intrusion Detection Platform Using Blockchain and Radial Basis Function Neural Networks for Internet of Drones (IEEE, 2023)
Heidari, Arash; Navimipour, Nima Jafari; Unal, Mehmet
The Internet of Drones (IoD) is built on the Internet of Things (IoT) by replacing Things with Drones while retaining incomparable features. Because of its vital applications, IoD technologies have attracted much attention in recent years. Nevertheless, gaining the necessary degree of public acceptability of the IoD without demonstrating safety and security for human life is exceedingly difficult. In addition, intrusion detection systems (IDSs) in the IoD confront several obstacles because of the dynamic network architecture, particularly in balancing detection accuracy and efficiency. To increase the performance of the IoD network, we propose a blockchain-based radial basis function neural network (RBFNN) model in this article. The proposed method can improve data integrity and storage for smart decision-making across different IoDs. We discuss the usage of blockchain to create decentralized predictive analytics and a model for effectively applying and sharing deep learning (DL) methods in a decentralized fashion. We also assessed the model using a variety of datasets to demonstrate the viability and efficacy of implementing the blockchain-based DL technique in IoD contexts. The findings showed that the suggested model is an excellent option for developing classifiers while adhering to the constraints imposed by network intrusion detection. Furthermore, the proposed model outperforms cutting-edge methods in terms of specificity, F1-score, recall, precision, and accuracy.
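A radial basis function network itself is compact enough to sketch: Gaussian hidden units centered on sampled training points feed a linear readout fitted in closed form. The synthetic "traffic" data, number of centers, and hyperparameters below are invented for illustration and say nothing about the paper's blockchain integration or its evaluation datasets.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic stand-in for labelled IoD traffic records (8 features per flow); 1 = intrusion.
X = rng.normal(0, 1, (300, 8))
y = (np.linalg.norm(X[:, :3], axis=1) > 1.8).astype(float)

# RBF network: a hidden layer of Gaussian units and a linear readout.
n_centers, gamma, ridge = 25, 0.5, 1e-3
centers = X[rng.choice(len(X), n_centers, replace=False)]  # simple center picking (k-means is also common)

def rbf_features(data):
    """Gaussian activation of each sample against every center."""
    d2 = ((data[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-gamma * d2)

# Closed-form ridge regression for the output weights.
Phi = rbf_features(X)
w = np.linalg.solve(Phi.T @ Phi + ridge * np.eye(n_centers), Phi.T @ y)

pred = (rbf_features(X) @ w > 0.5).astype(float)
print("training accuracy:", round(float((pred == y).mean()), 3))
```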