Browsing by Author "Erdem, Zeki"
Now showing 1 - 3 of 3
Article [Citation - Scopus: 4]
A Comparative Study on Denoising from Facial Images Using Convolutional Autoencoder (Gazi Univ, 2023)
Darici, Muazzez Buket; Erdem, Zeki

Denoising is one of the most important preprocessing steps in image processing. Noise in images can prevent the extraction of important information stored in them, so before tasks such as image classification or segmentation, denoising is necessary to obtain good results. The purpose of this study is to compare deep learning techniques with traditional techniques for denoising facial images under two types of noise (Gaussian and salt-and-pepper). Gaussian, median, and mean filters serve as the traditional methods. For the deep learning methods, deep convolutional denoising autoencoders (CDAE) built on three different optimizers are proposed. Both accuracy metrics and computation times are considered when evaluating the denoising performance of the proposed autoencoders and the traditional methods. The standard evaluation metrics used are the peak signal-to-noise ratio (PSNR) and the structural similarity index measure (SSIM). Overall, while the traditional methods produced results in shorter computation times, the autoencoders performed better on the evaluation metrics. The CDAE based on the Adam optimizer showed the best results in terms of PSNR and SSIM for removing both types of noise.

Conference Object [Citation - Scopus: 1]
Towards Better Energy Efficiency Through Coil-Based Electricity Consumption Forecasting in Steel Manufacturing (IEEE, 2024)
Koca, Asli; Erdem, Zeki; Dag, Hasan

Forecasting electricity consumption with the highest possible accuracy is crucial for cost optimization, operational efficiency, competitiveness, contract negotiation, and achieving the global goals of sustainable development in steel manufacturing. This study focuses on identifying the most appropriate prediction algorithm for coil-based electricity consumption and the most effective implementation purposes in a steel company. Random Forest, Gradient-Boosted Trees, and Deep Neural Networks were chosen because they suit the given problem and are widely used for forecasting. The performance of the prediction models is evaluated using the root mean squared error (RMSE) and the coefficient of determination (R-squared). Experiments show that the Random Forest model outperforms the Gradient-Boosted Trees and Deep Neural Network models. The results offer benefits for several purposes. First, during contract negotiations, they provide a competitive advantage when purchasing electricity in the day-ahead market. Second, in the production scheduling phase, the coils with the highest electricity consumption can be produced during the hours of lowest demand and most affordable prices. Finally, when prioritizing sales orders, existing capacity can be allocated to orders with lower energy intensity or a higher profit margin.

Master Thesis
Adapting Large Language Models via Multi-Task Learning for Mental Disorder, Emotion, and Sentiment Detection (2025)
Bhat, Amir Rafiq; Dehkharghani, Rahim; Erdem, Zeki

Detection of people's mental health problems from text has gained increasing attention recently. Many studies have attempted to solve this problem using deep neural networks, transformer-based models, and large language models. Adapting Large Language Models (LLMs) has achieved the best performance compared with rival methods. This thesis investigates sequential multi-task learning (MTL) using Parameter-Efficient Fine-Tuning (PEFT), specifically Low-Rank Adaptation (LoRA) with 4-bit quantization, on the meta-llama/Llama-3.1-8B-Instruct model. We target Mental Health Problem Detection as the primary task, with Multi-Label Emotion Detection and Sentiment Analysis as secondary and tertiary tasks. We then change their order and fine-tune the Llama LLM with different primary and secondary tasks. We observed that the second fine-tuning improved performance on the primary task most of the time. We used the extended SWMH dataset, which includes 4,243 posts written by social media users. Model performance was evaluated after each stage of two sequential orders: (1) Mental Health Problem → Emotion → Sentiment (MHP-first) and (2) Emotion → Mental Health Problem → Sentiment (Emo-first). Sequential PEFT approaches improved significantly over the base LLM but revealed critical trade-offs dependent on task order. The MHP-first sequence achieved a 0.7624 micro F1 score for MHP Detection. While initial training on Mental Health Problem detection gave a strong boost to the auxiliary tasks, subsequent fine-tuning stages caused emotion detection performance to degrade (0.4385 F1). In contrast, the Emo-first sequence produced superior performance for Emotion (0.6004 F1) and Sentiment (0.9500 F1) but a lower score for the primary MHP task (0.6250 F1). The results demonstrate that the optimal training order is task dependent. This research offers empirical insights into sequential MTL with PEFT for LLMs in mental health problem detection, showing efficient adaptation potential for clinical tasks while highlighting the critical influence of task ordering, interference, and data imbalance.

Keywords: Large Language Models (LLMs), Multi-Task Learning (MTL), Mental Health Problem Detection, Emotion Detection, Sentiment Analysis
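The denoising study above evaluates methods by PSNR under Gaussian and salt-and-pepper noise. A minimal NumPy sketch of those pieces, with a synthetic image standing in for the facial data (the CDAE itself and SSIM are omitted; noise levels are illustrative assumptions, not the study's settings):

```python
import numpy as np

def add_gaussian_noise(img, sigma=0.1, rng=None):
    """Add zero-mean Gaussian noise to an image scaled to [0, 1]."""
    rng = rng if rng is not None else np.random.default_rng(0)
    return np.clip(img + rng.normal(0.0, sigma, img.shape), 0.0, 1.0)

def add_salt_pepper_noise(img, amount=0.05, rng=None):
    """Flip a fraction `amount` of pixels to 0 (pepper) or 1 (salt)."""
    rng = rng if rng is not None else np.random.default_rng(0)
    noisy = img.copy()
    mask = rng.random(img.shape)
    noisy[mask < amount / 2] = 0.0       # pepper
    noisy[mask > 1 - amount / 2] = 1.0   # salt
    return noisy

def psnr(clean, denoised, max_val=1.0):
    """Peak signal-to-noise ratio in dB; higher means closer to the clean image."""
    mse = np.mean((clean - denoised) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(max_val**2 / mse)

# Synthetic 64x64 grayscale stand-in for a facial image
rng = np.random.default_rng(42)
clean = rng.random((64, 64))
noisy = add_gaussian_noise(clean, sigma=0.1, rng=rng)
print(f"PSNR(noisy vs clean): {psnr(clean, noisy):.2f} dB")
```

A denoiser's output would be scored the same way: `psnr(clean, model(noisy))`, with a higher value than the noisy input indicating successful noise removal.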
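The forecasting comparison in the second entry rests on RMSE and R-squared. A minimal sketch of those two metrics on hypothetical per-coil consumption figures (the actual models and the company's coil data are not reproduced here; all numbers are made up for illustration):

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root mean squared error: penalizes large forecast misses."""
    return np.sqrt(np.mean((y_true - y_pred) ** 2))

def r_squared(y_true, y_pred):
    """Coefficient of determination: 1.0 is a perfect fit,
    0.0 is no better than predicting the mean."""
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return 1.0 - ss_res / ss_tot

# Hypothetical per-coil electricity consumption (MWh) and two forecasts
actual     = np.array([2.1, 2.4, 1.9, 2.8, 2.2])
forecast_a = np.array([2.0, 2.5, 2.0, 2.7, 2.3])  # stand-in for a stronger model
forecast_b = np.array([2.4, 2.1, 1.6, 3.1, 2.0])  # stand-in for a weaker model
print(f"A: RMSE={rmse(actual, forecast_a):.3f}, R2={r_squared(actual, forecast_a):.3f}")
print(f"B: RMSE={rmse(actual, forecast_b):.3f}, R2={r_squared(actual, forecast_b):.3f}")
```

A lower RMSE and a higher R-squared mark the better forecaster, which is the basis on which the paper ranks Random Forest above the other two models.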
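The thesis entry reports micro F1 after each fine-tuning stage. A minimal NumPy sketch of micro-averaged F1 for a multi-label task such as emotion detection (the label matrices below are invented for illustration, not drawn from the SWMH dataset):

```python
import numpy as np

def micro_f1(y_true, y_pred):
    """Micro-averaged F1 over a 0/1 multi-label matrix (samples x labels).

    Micro averaging pools true positives, false positives, and false
    negatives across all labels before computing precision and recall,
    so frequent labels carry more weight than rare ones.
    """
    tp = np.sum((y_true == 1) & (y_pred == 1))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Hypothetical 4 posts x 3 emotion labels (1 = label present)
y_true = np.array([[1, 0, 1], [0, 1, 0], [1, 1, 0], [0, 0, 1]])
y_pred = np.array([[1, 0, 0], [0, 1, 0], [1, 0, 0], [0, 0, 1]])
print(round(micro_f1(y_true, y_pred), 4))  # → 0.8
```

Scores like the thesis's 0.7624 (MHP-first) versus 0.6250 (Emo-first) for the primary task would come from exactly this kind of pooled comparison, computed once per fine-tuning stage.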

