Gestures Cued by Demonstratives in Speech Guide Listeners' Visual Attention During Spatial Language Comprehension
Abstract
Gestures help speakers and listeners during communication and thinking, particularly for visual-spatial information. Speakers tend to use gestures to complement accompanying spoken deictic constructions, such as demonstratives, when communicating spatial information (e.g., saying The candle is here and gesturing to the right side to express that the candle is on the speaker's right). Visual information conveyed by gestures enhances listeners' comprehension. Whether and how listeners allocate overt visual attention to gestures in different speech contexts is mostly unknown. We asked (a) whether listeners gazed at gestures more when gestures complemented demonstratives in speech (here) than when they conveyed information redundant with speech (e.g., right), and (b) whether gazing at gestures related to listeners' information uptake from those gestures. We demonstrated that listeners fixated gestures more when the gestures expressed information complementary to, rather than redundant with, the accompanying speech. Moreover, overt visual attention to gestures did not predict listeners' comprehension. These results suggest that the heightened communicative value of gestures, as signaled by external cues such as demonstratives, guides listeners' visual attention to gestures. However, overt visual attention does not seem to be necessary to extract the cued information from the multimodal message.
Related items
- Gesture use in L1-Turkish and L2-English: Evidence from emotional narrative retellings. Ozder, Levent Emir; Ozer, Demet; Goksun, Tilbe (Sage Publications Ltd, 2023). Bilinguals tend to produce more co-speech hand gestures to compensate for reduced communicative proficiency when speaking in their L2. We here investigated L1-Turkish and L2-English speakers' gesture use in an emotional ...
- Studying Children's Object Interaction in Virtual Reality: A Manipulative Gesture Taxonomy for VR Hand Tracking. Baykal, G.E.; Leylekoğlu, A.; Arslan, S.; Özer, D. (Association for Computing Machinery, 2023). In this paper, we propose a taxonomy for the classification of children's gestural input elicited from spatial puzzle play in VR hand tracking. The taxonomy builds on the existing manipulative gesture taxonomy in human-computer ...
- Motion event representation in L1-Turkish versus L2-English speech and gesture: Relations to eye movements for event components. Aktan-Erciyes, Asli; Akbuga, Emir; Kizildere, Erim; Goksun, Tilbe (Sage Publications Ltd, 2023). Purpose: We investigated interrelations among speech, co-speech gestures, and visual attention in first language (L1)-Turkish second language (L2)-English speakers' descriptions of motion events. We asked whether young ...