Adapting Large Language Models via Multi-Task Learning for Mental Disorder, Emotion, and Sentiment Detection

Date
2025
Abstract
Detection of people's mental health problems from text has gained increasing attention recently. Many studies have attempted to solve this problem using deep neural networks, transformer-based models, and large language models (LLMs), and adapting LLMs has achieved the best performance among rival methods. This paper investigates sequential multi-task learning (MTL) using Parameter-Efficient Fine-Tuning (PEFT), specifically Low-Rank Adaptation (LoRA) with 4-bit quantization, on the meta-llama/Llama-3.1-8B-Instruct model. We target Mental Health Problem (MHP) Detection as the primary task, with Multi-Label Emotion Detection and Sentiment Analysis as secondary and tertiary tasks; we then change their order and fine-tune the Llama LLM with different primary and secondary tasks. We observed that the second fine-tuning stage improved performance on the primary task most of the time. We used the extended SWMH dataset, including 4,243 posts written by social media users. Model performance was evaluated after each stage of two sequential orders: (1) Mental Health Problem → Emotion → Sentiment (MHP-first) and (2) Emotion → Mental Health Problem → Sentiment (Emo-first). Sequential PEFT significantly improved over the base LLM but revealed critical trade-offs dependent on task order. The MHP-first sequence achieved a 0.7624 Micro F1 score for MHP Detection. While initial training on MHP detection provided a strong boost to the auxiliary tasks, subsequent fine-tuning stages caused emotion detection performance to degrade (0.4385 F1). In contrast, the Emo-first sequence yielded superior performance for Emotion (0.6004 F1) and Sentiment (0.9500 F1) but a lower score for the primary MHP task (0.6250 F1). The results demonstrate that the optimal training order is task dependent.
This research offers empirical insights into sequential MTL with PEFT for LLMs in mental health problem detection, showing efficient adaptation potential for clinical tasks while highlighting the critical influence of task ordering, interference, and data imbalance.

Keywords: Large Language Models (LLMs), Multi-Task Learning (MTL), Mental Health Problem Detection, Emotion Detection, Sentiment Analysis
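The training setup the abstract describes (LoRA adapters on a 4-bit-quantized Llama-3.1-8B-Instruct, fine-tuned sequentially task by task) can be expressed as a short configuration sketch with the Hugging Face `transformers` and `peft` libraries. The model name comes from the abstract; every hyperparameter below (rank, alpha, dropout, target modules) is an illustrative assumption, not a value reported by the thesis.

```python
# Minimal QLoRA configuration sketch, assuming the transformers + peft stack.
# Hyperparameters are illustrative placeholders, not the thesis's settings.
import torch
from transformers import BitsAndBytesConfig
from peft import LoraConfig

# 4-bit quantization of the frozen base model (NF4 is the usual QLoRA choice).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# Low-rank adapters trained on top of the quantized, frozen weights.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)

# Sequential MTL: one fine-tuning stage per task, in the chosen order
# (MHP-first shown here; the Emo-first order swaps the first two entries).
# Training would load the base model with
#   AutoModelForCausalLM.from_pretrained(
#       "meta-llama/Llama-3.1-8B-Instruct", quantization_config=bnb_config)
# wrap it with get_peft_model(model, lora_config), then run one training
# pass per task, evaluating all three tasks after each stage.
task_order = ["mental_health_problem", "emotion", "sentiment"]
```

Evaluating all three tasks after every stage is what exposes the order-dependent interference the abstract reports: a later stage can erode the performance a task gained in an earlier stage.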
Keywords
Computer Engineering and Computer Science and Control; Computer Engineering Sciences - Computer and Control
End Page
92
