Özer, Demet

Name Variants
O., Demet
ÖZER, DEMET
Demet Ozer
Özer, DEMET
Özer, D.
Özer, Demet
ÖZER, Demet
Demet ÖZER
D. Ozer
Ozer, D.
Ozer,D.
Demet Özer
DEMET ÖZER
O.,Demet
Ozer, Demet
Özer,D.
Ö., Demet
Ozer,Demet
D. Özer
Demet, Ozer
Job Title
Dr. Öğr. Üyesi (Assistant Professor)
Email Address
demet.ozer@khas.edu.tr
Scopus Author ID
Turkish CoHE Profile ID
Google Scholar ID
WoS Researcher ID
Scholarly Output: 8
Articles: 6
Citation Count: 6
Supervised Theses: 1

Scholarly Output Search Results

Now showing 1 - 5 of 5
  • Article
    Citation Count: 0
    Grammatical Complexity and Gesture Production of Younger and Older Adults
    (Dilbilim Derneği, 2023) Özer, Demet; Göksun, T.
    Age-related effects are observed in both speech and gesture production. Older adults produce grammatically less complex sentences and use fewer iconic gestures than younger adults. This study investigated whether gesture use, especially iconic gesture production, was associated with syntactic complexity within and across younger and older age groups. We elicited language samples from these groups using a picture description task (N=60). Results suggested shorter and less complex speech for older than younger adults. Although the two age groups were similar in overall gesture frequency, older adults produced fewer iconic gestures. Overall gesture frequency, along with participants' ages, negatively predicted grammatical complexity. However, iconic gesture frequency was not a significant predictor of complex syntax. We conclude that each gesture might carry a function in a coordinated multimodal system, which might, in turn, influence speech quality. Focusing on individual differences, rather than age groups, might unravel the nature of multimodal communication. © 2023 Dilbilim Derneği, Ankara.
  • Article
    Citation Count: 1
    Gestures Cued by Demonstratives in Speech Guide Listeners' Visual Attention During Spatial Language Comprehension
    (Amer Psychological Assoc, 2023) Özer, Demet; Karadoller, Dilay Z.; Ozyurek, Asli; Goksun, Tilbe
    Gestures help speakers and listeners during communication and thinking, particularly for visual-spatial information. Speakers tend to use gestures to complement the accompanying spoken deictic constructions, such as demonstratives, when communicating spatial information (e.g., saying The candle is here and gesturing to the right side to express that the candle is on the speaker's right). Visual information conveyed by gestures enhances listeners' comprehension. Whether and how listeners allocate overt visual attention to gestures in different speech contexts is mostly unknown. We asked if (a) listeners gazed at gestures more when they complement demonstratives in speech (here) compared to when they express redundant information to speech (e.g., right) and (b) gazing at gestures related to listeners' information uptake from those gestures. We demonstrated that listeners fixated gestures more when they expressed complementary than redundant information in the accompanying speech. Moreover, overt visual attention to gestures did not predict listeners' comprehension. These results suggest that the heightened communicative value of gestures as signaled by external cues, such as demonstratives, guides listeners' visual attention to gestures. However, overt visual attention does not seem to be necessary to extract the cued information from the multimodal message.
  • Article
    Citation Count: 4
    Gesture use in L1-Turkish and L2-English: Evidence from emotional narrative retellings
    (Sage Publications Ltd, 2023) Özer, Demet; Goksun, Tilbe
    Bilinguals tend to produce more co-speech hand gestures to compensate for reduced communicative proficiency when speaking in their L2. We here investigated L1-Turkish and L2-English speakers' gesture use in an emotional context. We specifically asked whether and how (1) speakers gestured differently while retelling L1 versus L2 and positive versus negative narratives and (2) gesture production during retellings was associated with speakers' later subjective emotional intensity ratings of those narratives. We asked 22 participants to read and then retell eight emotion-laden narratives (half positive, half negative; half Turkish, half English). We analysed gesture frequency during the entire retelling and during emotional speech only (i.e., gestures that co-occur with emotional phrases such as happy). Our results showed that participants produced more representational gestures in L2 than in L1; however, they used more representational gestures during emotional content in L1 than in L2. Participants also produced more co-emotional speech gestures when retelling negative than positive narratives, regardless of language, and more beat gestures co-occurring with emotional speech in negative narratives in L1. Furthermore, using more gestures when retelling a narrative was associated with increased emotional intensity ratings for narratives. Overall, these findings suggest that (1) bilinguals might use representational gestures to compensate for reduced linguistic proficiency in their L2, (2) speakers use more gestures to express negative emotional information, particularly during emotional speech, and (3) gesture production may enhance the encoding of emotional information, which subsequently leads to the intensification of emotion perception.
  • Article
    Citation Count: 0
    Multimodal language in child-directed versus adult-directed speech
    (Sage Publications Ltd, 2023) Özer, Demet; Aktan-Erciyes, Asli
    Speakers design their multimodal communication according to the needs and knowledge of their interlocutors, a phenomenon known as audience design. We use more sophisticated language (e.g., longer sentences with complex grammatical forms) when communicating with adults than with children. This study investigates how speech and co-speech gestures change in adult-directed speech (ADS) versus child-directed speech (CDS) across three different tasks. Overall, 66 adult participants (mean age = 21.05; 60 female) completed three tasks (story-reading, storytelling, and address description) and were instructed to pretend to communicate with a child (CDS) or an adult (ADS). We hypothesised that participants would use more complex language, more beat gestures, and fewer iconic gestures in ADS compared with CDS. Results showed that, for CDS, participants used more iconic gestures in the story-reading and storytelling tasks compared with ADS. However, participants used more beat gestures in the storytelling task for ADS than CDS. In addition, language complexity did not differ across conditions. Our findings indicate how speakers employ different types of gestures (iconic vs beat) according to the addressee's needs and across different tasks. Speakers might prefer to use more iconic gestures with children than with adults. Results are discussed in terms of audience design theory.
  • Conference Object
    Citation Count: 1
    Studying Children's Object Interaction in Virtual Reality: A Manipulative Gesture Taxonomy for VR Hand Tracking
    (Association for Computing Machinery, 2023) Özer, Demet; Leylekoğlu, A.; Arslan, S.
    In this paper, we propose a taxonomy for the classification of children's gestural input elicited from spatial puzzle play in VR hand tracking. The taxonomy builds on the existing manipulative gesture taxonomy in human-computer interaction and offers two main analytical categories, Goal-directed actions and Hand kinematics, as complementary dimensions for analysing gestural input. Based on our study with eight children (aged 7 to 14), we report qualitative results describing the categories for analysis and quantitative results for their frequency of occurrence in children's interaction with objects during the spatial task. This taxonomy is an initial step towards capturing the complexity of manipulative gestures in relation to mental rotation actions, and helps designers and developers understand and study children's gestures both as an input for object interaction and as an indicator of spatial thinking strategies in VR hand tracking systems. © 2023 Owner/Author.