Browsing by Author "Navimipour, Nima Jafari"
Now showing 1 - 20 of 28
Review | Citation - WoS: 5 | Citation - Scopus: 5
Blockchain Systems in Embedded Internet of Things: Systematic Literature Review, Challenges Analysis, and Future Direction Suggestions (MDPI, 2022) Darbandi, Mehdi; Al-Khafaji, Hamza Mohammed Ridha; Nasab, Seyed Hamid Hosseini; AlHamad, Ahmad Qasim Mohammad; Ergashevich, Beknazarov Zafarjon; Navimipour, Nima Jafari
Internet of Things (IoT) environments can extensively use embedded devices. Without the participation of consumers, tiny IoT devices will function and interact with one another, but their operations must be reliable and secure from various threats. The introduction of cutting-edge data analytics methods for linked IoT devices, including blockchain, may lower costs and boost the use of cloud platforms. In a peer-to-peer network such as blockchain, no one has to be trusted because each peer is in charge of its own task, and there is no central server. Because blockchain is tamper-proof, it is connected to the IoT to increase security. However, the technology is still developing and faces many challenges, such as power consumption and execution time. This article discusses blockchain technology and embedded devices in distant areas where IoT devices may encounter network shortages and possible cyber threats. This study aims to examine existing research while also outlining prospective areas for future work to use blockchains in smart settings. Finally, the efficiency of the blockchain is evaluated through performance parameters such as latency, throughput, storage, and bandwidth.
The obtained results showed that blockchain technology provides security and privacy for the IoT.

Review | Citation - WoS: 41 | Citation - Scopus: 57
A Comprehensive and Systematic Literature Review on the Big Data Management Techniques in the Internet of Things (Springer, 2023) Naghib, Arezou; Navimipour, Nima Jafari; Hosseinzadeh, Mehdi; Sharifi, Arash
The Internet of Things (IoT) is a communication paradigm and a collection of heterogeneous interconnected devices. It produces large-scale, distributed, and diverse data called big data. Big Data Management (BDM) in IoT is used for knowledge discovery and intelligent decision-making and is one of the most significant research challenges today. There are several mechanisms and technologies for BDM in IoT. This paper aims to study the important mechanisms in this area systematically. It covers articles published between 2016 and August 2022. Initially, 751 articles were identified, but a paper selection process reduced the number to 110 significant studies. The BDM mechanisms in IoT are studied in four categories: BDM processes, BDM architectures/frameworks, quality attributes, and big data analytics types. The paper also presents a detailed comparison of the mechanisms in each category. Finally, the development challenges and open issues of BDM in IoT are discussed. As a result, predictive analysis and classification methods are used in many articles. On the other hand, some quality attributes, such as confidentiality, accessibility, and sustainability, are less considered. Also, none of the articles use key-value databases for data storage.
This study can help researchers develop more effective BDM methods for IoT in complex environments.

Article | Citation - WoS: 3 | Citation - Scopus: 6
An Energy-Aware Resource Management Strategy Based on Spark and YARN in Heterogeneous Environments (IEEE-Inst Electrical Electronics Engineers Inc, 2024) Shabestari, Fatemeh; Navimipour, Nima Jafari
Apache Spark is a popular framework for processing big data. Running Spark on Hadoop YARN allows it to schedule Spark workloads alongside other data-processing frameworks on Hadoop. When an application is deployed in a YARN cluster, its resources are allocated without considering energy efficiency. Furthermore, there is no way to enforce any user-specified deadline constraints. To address these issues, we propose a new deadline-aware resource management system and a scheduling algorithm to minimize the total energy consumption of Spark on YARN in heterogeneous clusters. First, a deadline-aware, energy-efficient model for the considered problem is proposed. Then, using a locality-aware method, executors are assigned to applications. This algorithm sorts the nodes based on the performance-per-watt (PPW) metric, the number of application data blocks on each node, and rack locality. It also offers three ways to choose executors from different machines: greedy, random, and Pareto-based. Finally, the proposed heuristic task scheduler schedules tasks on executors to minimize total energy and tardiness. We evaluated the performance of the suggested algorithm regarding energy efficiency and satisfaction of the Service Level Agreement (SLA).
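The PPW- and locality-aware executor selection described in the Spark/YARN entry above might be sketched as follows. The node data, field names, and tie-breaking rule are illustrative assumptions, not the paper's exact algorithm:

```python
# Sketch of PPW-aware executor placement: rank nodes by performance per watt,
# breaking ties in favor of nodes that already hold the application's data
# blocks (locality), then greedily take free executors until the request is met.
# All node data below is invented for illustration.

def rank_nodes(nodes):
    # Higher PPW first; among equal PPW, more local data blocks first.
    return sorted(nodes, key=lambda n: (n["ppw"], n["local_blocks"]), reverse=True)

def pick_executors(nodes, needed):
    chosen = []
    for node in rank_nodes(nodes):
        take = min(node["free_executors"], needed - len(chosen))
        chosen += [node["name"]] * take
        if len(chosen) == needed:
            break
    return chosen

nodes = [
    {"name": "n1", "ppw": 0.8, "local_blocks": 2, "free_executors": 2},
    {"name": "n2", "ppw": 1.5, "local_blocks": 1, "free_executors": 1},
    {"name": "n3", "ppw": 1.5, "local_blocks": 3, "free_executors": 2},
]
print(pick_executors(nodes, 3))  # n3 wins the PPW tie via locality
```

This corresponds to the paper's "greedy" selection mode; the random and Pareto-based modes would replace only the selection loop.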
The results showed that the method outperforms popular algorithms regarding energy consumption and meeting deadlines.

Article | Citation - WoS: 14 | Citation - Scopus: 16
Evaluating the Effect of Human Factors on Big Data Analytics and Cloud of Things Adoption in the Manufacturing Micro, Small, and Medium Enterprises (IEEE Computer Soc, 2022) Kavre, Mahesh S.; Gardas, Bhaskar B.; Narwane, Vaibhav S.; Navimipour, Nima Jafari; Yalcin, Senay
The purpose of the study is to explore and analyze human factors that influence big data analytics and cloud of things adoption across Indian micro, small, and medium enterprises (MSMEs). The human factors were identified through a literature survey and experts' opinions. To develop a hierarchical structural model of the identified human factors indicating their mutual relationships, and to classify the factors into cause-and-effect groups, a hybrid ISM-DEMATEL approach was employed. The results of the study indicated that lack of training and development programs (HF11), lack of vision of top management and ineffective corporate governance (HF13), and communication barriers between management and workforce (HF4) are the most significant factors. The study's findings would help human resource managers and decision-makers of the firm understand the human-related factors responsible for technology adoption. Further, the results can be validated through investigation in other emerging economies.

Letter | Citation - WoS: 49 | Citation - Scopus: 55
Everything You Wanted To Know About ChatGPT: Components, Capabilities, Applications, and Opportunities (John Wiley & Sons Ltd, 2024) Heidari, Arash; Navimipour, Nima Jafari; Zeadally, Sherali; Chamola, Vinay
Conversational Artificial Intelligence (AI) and Natural Language Processing have advanced significantly with the creation of a Generative Pre-trained Transformer (ChatGPT) by OpenAI.
ChatGPT uses deep learning techniques, such as the transformer architecture and self-attention mechanisms, to replicate human speech and provide coherent, contextually appropriate replies. The model mainly depends on the patterns discovered in the training data, which might result in incorrect or illogical conclusions. In the context of open-domain chats, we investigate the components, capabilities, constraints, and potential applications of ChatGPT, along with future opportunities. We begin by describing the components of ChatGPT, followed by a definition of chatbots. We present a new taxonomy to classify them. Our taxonomy includes rule-based chatbots, retrieval-based chatbots, generative chatbots, and hybrid chatbots. Next, we describe the capabilities and constraints of ChatGPT. Finally, we present potential applications of ChatGPT and future research opportunities. The results showed that ChatGPT, a transformer-based chatbot model, utilizes encoders to produce coherent responses.

Review | Citation - WoS: 15 | Citation - Scopus: 29
Fault-Tolerant Load Balancing in Cloud Computing: A Systematic Literature Review (IEEE-Inst Electrical Electronics Engineers Inc, 2022) Mohammadian, Vahid; Navimipour, Nima Jafari; Hosseinzadeh, Mehdi; Darwesh, Aso
Nowadays, cloud computing is growing daily and has been developed as an effective and flexible paradigm for solving large-scale problems. It is known as an Internet-based computing model in which computing and virtual resources, such as services, applications, storage, servers, and networks, are shared among numerous cloud users. Since the number of cloud users and their requests is increasing rapidly, cloud nodes may become underloaded or overloaded. These situations cause various problems, such as high response time and power consumption. Load balancing methods have a significant impact in handling these problems and improving the performance of cloud servers.
Generally, a load balancing method aims to identify under-loaded and overloaded nodes and balance the load among them. In the recent decade, this problem has attracted a lot of interest among researchers, and several solutions have been proposed. Despite the important role of fault tolerance in load balancing algorithms, the field still lacks an organized, in-depth study. This gap prompted the current study, which collects and reviews the available papers on fault-tolerant load balancing methods in cloud computing. The existing algorithms are divided into two categories, namely centralized and distributed, and reviewed based on vital qualitative parameters, such as scalability, makespan, reliability, resource utilization, throughput, and overhead. In this regard, other criteria, such as the types of detected faults and the adopted simulation tools, are also taken into account.

Article | Citation - WoS: 53 | Citation - Scopus: 58
A hybrid approach for latency and battery lifetime optimization in IoT devices through offloading and CNN learning (Elsevier, 2023) Heidari, Arash; Navimipour, Nima Jafari; Jamali, Mohammad Ali Jabraeil; Akbarpour, Shahin
Offloading assists in overcoming the resource constraints of specific elements, making it one of the primary technical enablers of the Internet of Things (IoT). IoT devices with low battery capacities can use the edge to offload some of their operations, which can significantly reduce latency and lengthen battery lifetime. Because of their restricted battery capacity, deep learning (DL) techniques are energy-intensive to run on IoT devices. Many studies have relied on energy-harvester modules, which are not available to IoT devices in real-world circumstances. Using the Markov Decision Process (MDP), we describe the offloading problem in this study.
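A tabular Q-learning sketch conveys the MDP flavor of this offloading decision. The states, rewards, and transition model below are illustrative stand-ins, not the paper's formulation:

```python
import random

# Minimal tabular Q-learning sketch for a binary offloading decision
# (0 = run locally, 1 = offload to the edge). States, rewards, and the
# toy random transition model are invented for illustration.
random.seed(0)

states = ["low_battery", "high_battery"]
actions = [0, 1]
Q = {(s, a): 0.0 for s in states for a in actions}
alpha, gamma, eps = 0.1, 0.9, 0.1

def reward(state, action):
    # Offloading pays off most when the battery is low.
    if action == 1:
        return 1.0 if state == "low_battery" else 0.3
    return 0.2 if state == "low_battery" else 0.8

for _ in range(5000):
    s = random.choice(states)
    # Epsilon-greedy action selection.
    a = random.choice(actions) if random.random() < eps else max(actions, key=lambda x: Q[(s, x)])
    r = reward(s, a)
    s2 = random.choice(states)  # toy transition: next state is random
    Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in actions) - Q[(s, a)])

policy = {s: max(actions, key=lambda a: Q[(s, a)]) for s in states}
print(policy)  # expect: offload when battery is low, run locally otherwise
```

The paper's DRL method replaces this Q-table with a neural approximator and, per the abstract, initializes the table via transfer learning; the update rule is the same in spirit.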
Next, to facilitate partial offloading in IoT devices, we develop a Deep Reinforcement Learning (DRL) method that can efficiently learn the policy by adjusting to network dynamics. A Convolutional Neural Network (CNN) is then implemented on Mobile Edge Computing (MEC) devices to expedite learning. These two techniques operate together to offer the proper offloading approach throughout the system's operation. Moreover, transfer learning was employed to initialize the Q-table values, which increased the system's effectiveness. The simulation in this article, which employed Cooja and TensorFlow, revealed that the strategy outperformed five benchmarks in terms of latency by 4.1%, IoT device efficiency by 2.9%, energy utilization by 3.6%, and job failure rate by 2.6% on average.

Article | Citation - WoS: 33 | Citation - Scopus: 38
Implementation of a Product-Recommender System in an IoT-Based Smart Shopping Using Fuzzy Logic and Apriori Algorithm (IEEE-Inst Electrical Electronics Engineers Inc, 2022) Yan, Shu-Rong; Pirooznia, Sina; Heidari, Arash; Navimipour, Nima Jafari; Unal, Mehmet
The Internet of Things (IoT) has recently become important in accelerating various functions, from manufacturing and business to healthcare and retail. A recommender system can handle the problem of information and data buildup in IoT-based smart commerce systems. These technologies are designed to determine users' preferences and filter out irrelevant information. Identifying items and services that customers might be interested in, and then convincing them to buy, is one of the essential parts of effective IoT-based smart shopping systems. Given the relevance of product-recommender systems from both the consumer and shop perspectives, this article presents a new IoT-based smart product-recommender system based on the apriori algorithm and fuzzy logic. The suggested technique employs association rules to capture the interdependencies and linkages among many data objects.
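The apriori-style frequent-itemset mining that such association rules build on can be sketched as follows; the baskets and the support threshold are invented for illustration:

```python
# Minimal apriori sketch: find all itemsets appearing in at least
# `min_support` of the transactions, growing candidates level by level.
# Baskets below are invented for illustration.

def apriori(transactions, min_support):
    n = len(transactions)
    items = sorted({i for t in transactions for i in t})
    frequent, size = {}, 1
    candidates = [frozenset([i]) for i in items]
    while candidates:
        counts = {c: sum(1 for t in transactions if c <= t) for c in candidates}
        level = {c: cnt / n for c, cnt in counts.items() if cnt / n >= min_support}
        frequent.update(level)
        size += 1
        # Candidate generation: unions of frequent sets one item larger.
        candidates = list({a | b for a in level for b in level if len(a | b) == size})
    return frequent  # itemset -> support

baskets = [frozenset(t) for t in
           [("milk", "bread"), ("milk", "bread", "eggs"),
            ("bread", "eggs"), ("milk", "eggs")]]
freq = apriori(baskets, min_support=0.5)
print(sorted(tuple(sorted(s)) for s in freq))
```

In the paper's pipeline, a fuzzy system first generates graded association rules; this sketch shows only the crisp frequent-itemset step that apriori contributes.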
The most common use of association rule discovery is shopping cart analysis. Customers' buying habits and behavior are studied based on the various goods they place in their shopping carts. Accordingly, the association rules are generated using a fuzzy system. The apriori algorithm then selects the product based on the provided fuzzy association rules. The results revealed that the suggested technique achieved acceptable results in terms of mean absolute error, root-mean-square error, precision, recall, diversity, novelty, and catalog coverage when compared to cutting-edge methods. Finally, the method helps increase recommender systems' diversity in IoT-based smart shopping.

Doctoral Thesis
Nano-Scale Arithmetic and Logic Unit Design for Energy-Efficient Electronic Devices Based on Quantum Dots (2025) Zohaib, Muhammad; Navimipour, Nima Jafari; Aydemir, Mehmet Timur
Electronics is the fundamental building block of modern technologies, enabling the transmission of electrical information with the help of simple components such as transistors, diodes, capacitors, and sensors. By controlling current, these components perform essential signal-processing functions such as amplification, switching, and modulation. Today's high-performance signal-processing applications are made possible by recent advances in materials science and nanotechnology, which have made these systems faster, smaller, and less power-hungry. Signal processing has had a major impact on the development of many aspects of modern life, including telecommunications, education, healthcare, industry, and security. The semiconductor industry is the primary driver of signal-processing innovation, producing increasingly sophisticated electronic devices and circuits in response to global demand. Furthermore, the central processing unit (CPU) is described as the 'brain' of computers, of all electronic devices, and of signal processing. The CPU is a critical electronic device containing vital components such as memory, a multiplier, and an adder.
One of the fundamental components of the CPU is the arithmetic and logic unit (ALU), which performs the arithmetic and logical operations within all CPU operations, such as addition, multiplication, and subtraction. However, delay, occupied area, and energy consumption are critical parameters in ALU circuits. Because existing ALU designs suffer from problems such as high delay, large occupied area, and high energy consumption, implementing electronic circuits based on new technologies can significantly boost the performance of all signal-processing devices, including microcontrollers, microprocessors, and printed devices, with high speed and low area usage. Quantum-dot cellular automata (QCA) is an effective technology for all electronic circuits and signal-processing applications to remedy these shortcomings. It is being investigated as an alternative to established technologies such as CMOS and VLSI, offering advantages such as ultra-low power consumption, high device density, THz-level operating speed, and reduced circuit complexity. This research proposes an innovative ALU design that improves electronic devices such as microcontrollers by applying advanced QCA nanotechnology. The main goal is to present an original ALU architecture that fully exploits the potential of QCA nanotechnology. With a new and efficient approach, the fundamental logic gates are skillfully employed in a coplanar layout based on single, non-rotated cells. In addition, this work presents improved 1-bit and 2-bit arithmetic and logic units in quantum-dot cellular automata technology. The proposed design includes logic operations, arithmetic operations, a full adder (FA) design, and multiplexers. All proposed designs were evaluated and verified using the powerful simulation tool QCADesigner.
The simulation results show that the proposed ALU achieves improvements of 42.48% and 64.28% in cell count and total occupied area, respectively, compared to the best previous single-layer and multi-layer designs.

Master Thesis
Image Steganography Based on Quantum Technology (2025) Salahov, Huseyn; Navimipour, Nima Jafari
Steganography is a data-hiding technique in which information is concealed within a cover medium so that it remains undetected. An important criterion for evaluating the performance of such techniques is security, that is, resistance to detection of the hidden message. One secure steganography technique is image masking. In this method, an image is first encrypted with a random key to obtain a cipher image. This cipher image is then encrypted again using the original image, producing a mask that takes the place of the key. This process keeps the key secret and increases the security of the method. These algorithms are applied to color images by processing the red, green, and blue (RGB) channels separately, yielding three cipher channels and three mask channels. Traditionally, steganography is implemented using complementary metal-oxide-semiconductor (CMOS) transistors and very-large-scale integration (VLSI) hardware. However, because of VLSI's inherent problems, such as overheating caused by chip density, quantum technologies are regarded as next-generation technologies that could replace VLSI in steganography. As an alternative, quantum-dot cellular automata (QCA) offer the high speed, integrity, and low power consumption that are critical for protecting steganographic systems against power-analysis attacks. In this work, we propose a QCA-based image-masking nano-design whose fundamental building block is the XOR gate, used for both encryption and mask generation. The design was developed using QCADesigner 2.0.3, and the encryption logic was written in Python.
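The per-channel XOR round-trip underlying this image-masking scheme can be sketched in plain Python. This is a simplified illustration with invented pixel values; note that with a pure bytewise XOR, the generated mask coincides with the key stream:

```python
import secrets

# Simplified sketch of per-channel XOR image masking. The pixels are
# encrypted with a random key stream; XOR-ing the cipher with the original
# yields the "mask" (equal to the key stream under pure XOR), so the
# receiver can decrypt from cipher + mask without the key being sent.
# Pixel values are invented; real designs process full RGB images.

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

channel = bytes([12, 200, 45, 99])          # one color channel's pixels
key = secrets.token_bytes(len(channel))     # random key stream
cipher = xor_bytes(channel, key)            # encryption
mask = xor_bytes(cipher, channel)           # mask generation
recovered = xor_bytes(cipher, mask)         # decryption using cipher + mask
print(recovered == channel)  # True
```

Repeating this for the R, G, and B channels yields the three cipher-mask pairs the thesis describes; the QCA contribution is implementing the XOR gate itself in hardware, not this software flow.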
The design uses a single-layer structure with non-rotated cells. The Structural Similarity Index (SSIM) and Structural Dissimilarity Index (DSSIM) were used to evaluate image quality. Our results showed improvements of 57.3% in cell count and 40.7% in area compared to previous QCA-based designs. Security analyses revealed increased resistance to various attacks, except differential attacks. Keywords: Steganography, Image Masking, QCA, XOR Gate, Image Encryption, RGB, Cryptography.

Article | Citation - WoS: 13 | Citation - Scopus: 12
Leveraging Explainable Artificial Intelligence for Transparent and Trustworthy Cancer Detection Systems (Elsevier, 2025) Toumaj, Shiva; Heidari, Arash; Navimipour, Nima Jafari
Timely detection of cancer is essential for enhancing patient outcomes. Artificial Intelligence (AI), especially Deep Learning (DL), demonstrates significant potential in cancer diagnostics; however, its opaque nature presents notable concerns. Explainable AI (XAI) mitigates these issues by improving transparency and interpretability. This study provides a systematic review of recent applications of XAI in cancer detection, categorizing the techniques according to cancer type, including breast, skin, lung, colorectal, brain, and others. It emphasizes interpretability methods, dataset utilization, simulation environments, and security considerations. The results indicate that Convolutional Neural Networks (CNNs) account for 31% of model usage, SHAP is the predominant interpretability framework at 44.4%, and Python is the leading programming language at 32.1%. Only 7.4% of studies address security issues.
This study identifies significant challenges and gaps, guiding future research in trustworthy and interpretable AI within oncology.

Book Part | Citation - Scopus: 2
Machine/Deep Learning Techniques for Multimedia Security (Inst Engineering Tech-IET, 2023) Heidari, Arash; Navimipour, Nima Jafari; Azad, Poupak
Multimedia security based on Machine Learning (ML)/Deep Learning (DL) is a field of study that focuses on using ML/DL techniques to protect multimedia data, such as images, videos, and audio, from unauthorized access, manipulation, or theft. Developing and implementing algorithms and systems that use ML/DL techniques to detect and prevent security breaches in multimedia data is the main subject of this field. These systems use techniques such as watermarking, encryption, and digital signature verification to protect multimedia data. The advantages of using ML/DL in multimedia security include improved accuracy, scalability, and automation. ML/DL algorithms can improve the accuracy of detecting security threats and help identify vulnerabilities in multimedia data. Additionally, ML models can be scaled up to handle large amounts of multimedia data, making them helpful in protecting big datasets. Finally, ML/DL algorithms can automate the process of multimedia security, making it easier and more efficient to protect multimedia data. The disadvantages of using ML/DL in multimedia security include data availability, complexity, and black-box models. ML and DL algorithms require large amounts of data to train the models, which can sometimes be challenging to obtain. Developing and implementing ML algorithms can also be complex, requiring specialized skills and knowledge. Finally, ML/DL models are often black-box models, which means it can be difficult to understand how they make their decisions. This can be a challenge when explaining the decisions to stakeholders or auditors.
Overall, multimedia security based on ML/DL is a promising area of research with many potential benefits. However, it also presents challenges that must be addressed to ensure the security and privacy of multimedia data.

Article | Citation - WoS: 2 | Citation - Scopus: 7
Multimedia big data computing mechanisms: a bibliometric analysis (Springer, 2023) Rivai, Faradillah Amalia; Navimipour, Nima Jafari; Yalcin, Senay
Massive multimedia data are being created due to the growth of the Internet and user-generated content, low-cost commodity devices with cameras (such as cellphones and surveillance systems), and the proliferation of social networks, forming a unique type of big data. Several studies have been conducted in this research area using survey and event-analysis approaches; however, none has investigated the state of knowledge, its features, its evolution, and the emerging trends of multimedia big data. Therefore, in this paper, a bibliometric study using VOSviewer software is carried out on 1,865 documents from 2008 to 2020. The results show that 2013 was the first year in which total publications exceeded 100 articles; the leading countries, most productive organizations, and top authors are also investigated. The most cited journals, popular publication venues, and hot research topics are included in the investigation as well. Our analysis uncovered useful information, such as annual publishing patterns, the hottest research topics, the top 10 most important authors and articles, and the most helpful funding organizations and venues.

Article | Citation - WoS: 1 | Citation - Scopus: 1
A Nano-Design of Image Masking and Steganography Structure Based on Quantum Technology (Elsevier, 2025) Salahov, Huseyn; Ahmadpour, Seyed-Sajad; Navimipour, Nima Jafari; Das, Jadav Chandra; Rasmi, Hadi
Secure image storage and transmission require sound encryption methods that resist key exposure while maintaining high image quality.
Various encryption approaches have been developed to protect image content and its transmission from unauthorized access. One such method is image masking, where a special mask is generated to conceal information within the original image. Instead of hiding the image visually, the mask creates an intermediate layer that obfuscates the encryption key, eliminating the need to transmit it directly. However, implementing such masking techniques efficiently at the hardware level poses particular challenges. Traditional Complementary Metal-Oxide-Semiconductor (CMOS)-based Very-Large-Scale-Integration (VLSI) systems face scalability issues, excessive heat, and high power consumption. To overcome these challenges, this study utilizes a nano-scale image masking architecture based on Quantum-dot Cellular Automata (QCA), offering reduced area, lower power dissipation, and faster processing. The core operations utilize a three-input XOR gate, designed as a single-layer QCA structure without rotated cells. While QCA-based approaches improve hardware efficiency, most existing implementations focus only on grayscale images, leaving a gap in color image encryption. To address this, the work presents a QCA-based encryption and masking architecture for color images. The method encrypts an image using a random key to generate a cipher image, which is then XORed with the original image to produce a mask. This process, applied independently to each RGB channel, produces three cipher-mask pairs, embedding a steganographic property by concealing key information within the image. The keys are generated using a true random number generator (TRNG) based on cross-coupled loops and cross-oriented structures, ensuring high entropy. The design was modeled in QCADesigner 2.0.3, with the encryption/decryption algorithms implemented in Python. Experimental results demonstrated a meaningful reduction in cell count and consumed area compared to prior designs.
Image quality and security analysis confirmed visual fidelity and improved robustness.

Article | Citation - WoS: 10 | Citation - Scopus: 13
Nano-Design of Ultra-Efficient Reversible Block Based on Quantum-Dot Cellular Automata (Zhejiang Univ Press, 2023) Ahmadpour, Seyed Sajad; Navimipour, Nima Jafari; Mosleh, Mohammad; Yalcin, Senay
Reversible logic has recently gained significant interest due to its inherent ability to reduce energy dissipation, which is the primary need of low-power digital circuits. It has applications in many areas, including nanotechnology, DNA computing, quantum computing, fault tolerance, and low-power complementary metal-oxide-semiconductor (CMOS) design. An electrical circuit is classified as reversible if it has an equal number of inputs and outputs and a one-to-one mapping between them. A reversible circuit is conservative if the EXOR of the inputs and the EXOR of the outputs are equivalent. In addition, quantum-dot cellular automata (QCA) is one of the state-of-the-art approaches that can be used as an alternative to traditional technologies. Hence, we propose an efficient conservative gate with low power demand and high speed in this paper. First, we present a reversible gate called ANG (Ahmadpour Navimipour Gate). Then, two QCA ANG structures, a non-fault-tolerant one and a reversible fault-tolerant one, are implemented in QCA technology. The suggested reversible gate is realized through the Miller algorithm. Subsequently, the reversible fault-tolerant ANG is implemented with the 2DW clocking scheme. Furthermore, the power consumption of the suggested ANG is assessed under different energy levels (0.5Ek, 1.0Ek, and 1.5Ek). Simulations of the structures and analysis of their power consumption are performed using QCADesigner 2.0.3 and QCAPro software.
The proposed gate shows great improvements compared to recent designs.

Article | Citation - WoS: 19 | Citation - Scopus: 20
A Nano-Scale Design of Arithmetic and Logic Unit for Energy-Efficient Signal Processing Devices Based on a Quantum-Based Technology (Springer, 2025) Zohaib, Muhammad; Navimipour, Nima Jafari; Aydemir, Mehmet Timur; Ahmadpour, Seyed-Sajad
Signal processing has had a significant impact on the development of many elements of modern life, including telecommunications, education, healthcare, industry, and security. The semiconductor industry is the primary driver of signal processing innovation, producing ever-more sophisticated electronic devices and circuits in response to global demand. In addition, the central processing unit (CPU) is described as the "brain" of a computer, of all electronic devices, and of signal processing. The CPU is a critical electronic device that includes vital components such as memory, a multiplier, and an adder. One of the essential components of the CPU is the arithmetic and logic unit (ALU), which executes the arithmetic and logical operations within all types of CPU operations, such as addition, multiplication, and subtraction. However, delay, occupied area, and energy consumption are essential parameters of ALU circuits. Since recent ALU designs have experienced problems such as high delay, large occupied area, and high energy consumption, implementing electronic circuits based on new technology can significantly boost the performance of entire signal processing devices, including microcontrollers, microprocessors, and printed devices, with high speed and low occupied space. Quantum-dot cellular automata (QCA) is an effective technology for implementing all electronic circuits and signal processing applications to solve these shortcomings.
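Since the three-input majority gate is QCA's primitive logic element, full adders in this literature are commonly expressed with majority gates and inverters. The Boolean identities (not any specific physical cell layout from these papers) can be checked in software:

```python
# Logic-level sketch of the classic majority-gate QCA full adder:
#   Cout = Maj(A, B, Cin)
#   Sum  = Maj(not Cout, Cin, Maj(A, B, not Cin))
# This verifies the Boolean identities only, not a physical cell layout.

def maj(a, b, c):
    # Three-input majority: true when at least two inputs are true.
    return (a & b) | (b & c) | (a & c)

def full_adder(a, b, cin):
    cout = maj(a, b, cin)
    s = maj(1 - cout, cin, maj(a, b, 1 - cin))
    return s, cout

# Exhaustive check against ordinary binary addition.
for a in (0, 1):
    for b in (0, 1):
        for cin in (0, 1):
            s, cout = full_adder(a, b, cin)
            assert 2 * cout + s == a + b + cin
print("majority-gate full adder verified")
```

A 1-bit ALU of the kind described above then multiplexes this adder's output with the plain logic operations (AND, OR, XOR) according to the opcode.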
It is a transistor-less nanotechnology being explored as a successor to established technologies such as CMOS and VLSI due to its ultra-low power dissipation, high device density, fast (THz-range) operating speed, and reduced circuit complexity. This research proposes a ground-breaking ALU that upgrades electrical devices such as microcontrollers by applying cutting-edge QCA nanotechnology. The primary goal is to offer a novel ALU architecture that fully utilizes the potential of QCA nanotechnology. Using a new and efficient approach, the fundamental gates are skillfully utilized in a coplanar layout based on single, non-rotated cells. Furthermore, this work presents enhanced 1-bit and 2-bit arithmetic and logic units in quantum-dot cellular automata. The recommended design includes logic operations, arithmetic operations, a full adder (FA) design, and multiplexers. All proposed designs are evaluated and verified using the powerful simulation tool QCADesigner. The simulation outcomes indicate that the suggested ALU achieves 42.48% and 64.28% improvements in cell count and total occupied area, respectively, in comparison with the best earlier single-layer and multi-layer designs.

Article | Citation - WoS: 11 | Citation - Scopus: 13
A Nano-Scale Design of Vedic Multiplier for Electrocardiogram Signal Processing Based on a Quantum Technology (AIP Publishing, 2025) Wang, Yuyao; Darbandi, Mehdi; Ahmadpour, Seyed-Sajad; Navimipour, Nima Jafari; Navin, Ahmad Habibizad; Heidari, Arash; Anbar, Mohammad
An electrocardiogram (ECG) measures the electrical signals of the heartbeat to diagnose various heart issues; nevertheless, it is susceptible to noise. ECG signal noise must be removed because it significantly affects ECG signal characteristics. In addition, speed and occupied area play a fundamental role in ECG structures. The Vedic multiplier is an essential part of signal processing and is necessary for various applications, such as ECG, clusters, and finite impulse response filter architectures.
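The crosswise-and-vertical (Urdhva-Tiryagbhyam) scheme behind such Vedic multipliers can be sketched at gate level. This 2x2 illustration uses only AND gates and half-adders and does not reproduce the paper's QCA circuits:

```python
# Gate-level sketch of a 2x2 Vedic (Urdhva-Tiryagbhyam) multiplier built
# from AND gates and XOR/AND half-adders, mirroring the half-adder + XOR
# decomposition such QCA designs use. Logic only, no cell layout.

def half_adder(x, y):
    return x ^ y, x & y          # (sum, carry)

def vedic_2x2(a, b):
    a0, a1 = a & 1, (a >> 1) & 1
    b0, b1 = b & 1, (b >> 1) & 1
    p0 = a0 & b0                           # vertical: the two LSBs
    s1, c1 = half_adder(a1 & b0, a0 & b1)  # crosswise partial products
    p2, p3 = half_adder(a1 & b1, c1)       # vertical MSBs plus carry
    return (p3 << 3) | (p2 << 2) | (s1 << 1) | p0

# Exhaustive check of all 2-bit operand pairs.
for a in range(4):
    for b in range(4):
        assert vedic_2x2(a, b) == a * b
print("2x2 Vedic multiplier verified")
```

Larger Vedic multipliers compose such 2x2 blocks with adders (e.g., the carry-skip adder the paper mentions) to combine the partial products.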
Every ECG has a Vedic multiplier circuit unit that is necessary for signal processing. The Vedic multiplier circuit performs multiplication and accumulation steps to execute continuous and complex operations in signal processing programs. However, in the Vedic multiplier framework, circuit speed and occupied area are the main limitations. Fixing these significant defects can drastically improve the performance of this crucial circuit. The use of quantum technologies is one of the most popular solutions for overcoming these shortcomings, such as the large occupied area and limited speed. In other words, a quantum technology such as quantum-dot cellular automata (QCA) can readily overcome them. Thus, based on quantum technology, this paper proposes a multiplier for ECG using carry-skip adder, half-adder, and XOR circuits. All suggested frameworks utilize a single-layer design without rotated cells to increase their operability in complex architectures. All designs have been proposed with a coplanar configuration in view, which has an impact on the circuits' durability and stability. All proposed architectures have been designed and validated with the tool QCADesigner 2.0.3. All designed circuits show a simple structure with a minimum number of quantum cells, minimum area, and minimum delay with respect to state-of-the-art structures.

Article | Citation - WoS: 20 | Citation - Scopus: 16
A New Flow-Based Approach for Enhancing Botnet Detection Using Convolutional Neural Network and Long Short-Term Memory (Springer London Ltd, 2025) Asadi, Mehdi; Heidari, Arash; Navimipour, Nima Jafari
Despite the growing research and development of botnet detection tools, an ever-increasing spread of botnets and their victims is being witnessed. Because botnets frequently adapt to the evolving responses offered by host-based and network-based detection mechanisms, traditional methods lack adequate defense against botnet threats.
In this regard, flow-based detection methods and behavioral analysis of network traffic are suggested. To enhance the performance of these approaches, this paper proposes a hybrid deep learning method that combines a convolutional neural network (CNN) with long short-term memory (LSTM). The CNN efficiently extracts spatial features from network traffic, such as patterns in flow characteristics, while the LSTM captures the temporal dependencies critical to detecting sequential patterns in botnet behavior. Experimental results show the effectiveness of the proposed CNN-LSTM method in classifying botnet traffic. Compared with the results of the leading method on the identical dataset, the proposed approach achieved a 0.61% increase in precision, a 0.03% increase in accuracy, a 0.42% improvement in recall, a 0.51% improvement in F1-score, and a 0.10% reduction in the false-positive rate. Moreover, the CNN-LSTM framework exhibited robust overall performance and fast botnet traffic identification. Additionally, we evaluated the impact of three widely recognized adversarial attacks on the Information Security Centre of Excellence dataset and the Information Security and Object Technology dataset.
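The figures quoted above (precision, recall, F1-score, accuracy, false-positive rate) are standard confusion-matrix metrics for binary traffic classification. A minimal sketch of how they are computed; the counts are made up for illustration and are not from the paper's experiments:

```python
def flow_metrics(tp, fp, tn, fn):
    """Binary-classification metrics used to score botnet-traffic detectors,
    computed from true/false positive and negative counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)                 # a.k.a. detection rate
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    f1 = 2 * precision * recall / (precision + recall)
    fpr = fp / (fp + tn)                    # false-positive rate
    return {"precision": precision, "recall": recall,
            "accuracy": accuracy, "f1": f1, "fpr": fpr}

# Illustrative counts only:
m = flow_metrics(tp=950, fp=10, tn=990, fn=50)
```

Reporting all five together matters because a detector can trade recall against the false-positive rate, which is exactly the balance the comparison above measures.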
The findings show that the proposed method delivers promising performance in the face of these adversarial challenges.
Article Citation - WoS: 3 Citation - Scopus: 3
A New Energy-Aware Method for Gas Lift Allocation in IoT-Based Industries Using a Chemical Reaction-Based Optimization Algorithm (Mdpi, 2022) Zanbouri, Kouros; Bastak, Mostafa Razoughi; Alizadeh, Seyed Mehdi; Navimipour, Nima Jafari; Yalcin, Senay
The Internet of Things (IoT) has recently created opportunities for various industries, including the petrochemical industry, enabling intelligent manufacturing with real-time management and analysis of the produced big data. In oil production, extraction reduces reservoir pressure, eventually causing the supply of oil to fall below the economically viable level. Gas lift is a popular artificial lift method that is both efficient and cost-effective. If gas supplies were unlimited, enough gas could be injected into each well to reach the highest feasible production rate. Because the supply of gas is limited, it is essential to use this limited resource sustainably and to manage the injection rate into each well so as to enhance oil output while reducing gas injection. This study describes a novel IoT-based chemical reaction optimization (CRO) technique to solve the gas lift allocation problem. The CRO algorithm is inspired by the way interacting molecules move from an unstable state toward the lowest possible free energy. It is highly flexible, allowing various operators to modify solutions, and offers a favorable trade-off between intensification and diversification; its reasonably fast convergence rate makes it an attractive solution.
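The CRO idea described above, candidate solutions treated as molecules whose potential energy falls as the objective improves, can be sketched minimally for gas lift allocation. This toy version uses only the on-wall ineffective collision operator, a made-up well-response model, and a fixed gas budget; the paper's actual operators, well models, and parameters are not reproduced:

```python
import random

def production(alloc):
    """Toy well-response model with diminishing returns per well
    (hypothetical (gain, decline) coefficients, not the paper's model)."""
    coeffs = [(10.0, 0.5), (8.0, 0.4), (12.0, 0.6)]
    return sum(g * q / (1.0 + d * q) for (g, d), q in zip(coeffs, alloc))

def normalize(alloc, budget):
    """Scale an allocation so total injected gas equals the available budget."""
    total = sum(alloc)
    return [budget * q / total for q in alloc]

def cro_gas_lift(budget=30.0, iters=2000, seed=0):
    """Minimal CRO-style search: one 'molecule' undergoing on-wall ineffective
    collisions; its kinetic energy lets it occasionally accept worse solutions,
    trading intensification against diversification."""
    rng = random.Random(seed)
    mol = normalize([rng.random() + 0.1 for _ in range(3)], budget)
    best, best_pe = mol[:], -production(mol)
    pe, ke = best_pe, 5.0                      # potential / kinetic energy
    for _ in range(iters):
        cand = [max(1e-6, q + rng.gauss(0, 1.0)) for q in mol]
        cand = normalize(cand, budget)
        cand_pe = -production(cand)            # lower PE == higher oil output
        if cand_pe <= pe + ke:                 # collision succeeds if energy permits
            ke = (pe + ke - cand_pe) * 0.9     # part of the energy is lost to the wall
            mol, pe = cand, cand_pe
            if pe < best_pe:
                best, best_pe = mol[:], pe
    return best, -best_pe
```

The budget constraint is enforced by renormalizing every candidate, so the search only moves gas between wells rather than injecting more of it.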
Extensive simulation and computational studies show that the proposed IoT-based CRO method significantly improves the overall oil production rate and reduces gas injection, energy consumption, and cost compared with traditional algorithms, providing a more efficient system for the petroleum production industry.
Article Citation - WoS: 89 Citation - Scopus: 120
A new lung cancer detection method based on the chest CT images using Federated Learning and blockchain systems (Elsevier, 2023) Heidari, Arash; Javaheri, Danial; Toumaj, Shiva; Navimipour, Nima Jafari; Rezaei, Mahsa; Unal, Mehmet
With an estimated five million fatal cases each year, lung cancer is one of the leading causes of death worldwide. Lung diseases can be diagnosed with a Computed Tomography (CT) scan, but the scarcity of expert readers and the limited reliability of human visual inspection are fundamental obstacles to diagnosing lung cancer patients. The main goal of this study is to detect malignant lung nodules in chest CT scans and to categorize lung cancer by severity. Cutting-edge Deep Learning (DL) algorithms were used to locate cancerous nodules. A further real-world challenge is sharing data among hospitals worldwide while respecting each organization's privacy; the main problems in training a global DL model are building a collaborative model and preserving privacy. This study presents an approach that takes a modest amount of data from multiple hospitals and uses blockchain-based Federated Learning (FL) to train a global DL model. The data were authenticated using blockchain technology, and FL trained the model internationally while maintaining the organizations' anonymity. First, we present a data normalization approach that addresses the variability of data obtained from different institutions using different CT scanners. Furthermore, using a Capsule Network (CapsNet) method, we classify lung cancer patients in local mode.
Finally, we devised a way to train a global model cooperatively using blockchain technology and FL while maintaining anonymity, and we gathered data from real-life lung cancer patients for testing purposes. The suggested method was trained and tested on the Cancer Imaging Archive (CIA) dataset, the Kaggle Data Science Bowl (KDSB), LUNA 16, and the local dataset. We performed extensive experiments with Python and its well-known libraries, such as Scikit-Learn and TensorFlow, to evaluate the suggested method. The findings show that the method effectively detects lung cancer patients, delivering 99.69% accuracy with the smallest possible categorization error.
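The cooperative global-training step described above is commonly realized with federated averaging (FedAvg): each site trains locally and only model weights, not patient data, are aggregated. A minimal sketch with made-up weight vectors from three hypothetical hospitals; the blockchain authentication layer the paper adds is not modeled here:

```python
def federated_average(client_weights, client_sizes):
    """FedAvg-style aggregation: average each model parameter across clients,
    weighted by local dataset size, so raw data never leaves a hospital."""
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    return [sum(w[i] * s for w, s in zip(client_weights, client_sizes)) / total
            for i in range(n_params)]

# Illustrative weight vectors and dataset sizes (hypothetical):
global_w = federated_average(
    client_weights=[[0.2, 0.5], [0.4, 0.1], [0.6, 0.3]],
    client_sizes=[100, 300, 600],
)
```

Weighting by dataset size keeps the global model from being dominated by hospitals that contribute only a modest amount of data, which matches the setting described above.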

