Audio-visual processing
Listeners process intonational and segmental contrasts differently when they can see the speaker. Specifically, head movements and facial cues around the eyes may provide crucial information about intonational distinctions, while articulatory gestures around the mouth are more informative for segmental contrasts. This project examines how visual processing interacts with auditory input, focusing on how listeners allocate visual attention in an audio-visual match-to-sample task. We investigate how monolingual adult listeners’ eye-gaze patterns vary as they process different linguistic cues in their native language and in a foreign language.
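As a concrete illustration of how such gaze allocation could be quantified, here is a minimal Python sketch. Everything in it is an assumption for illustration: a hypothetical data layout in which each fixation carries screen coordinates and a duration, and made-up bounding boxes for the eye and mouth regions. In practice, areas of interest (AOIs) would be defined from face landmarks or manual annotation of the stimulus videos.

```python
# Minimal sketch (hypothetical data layout): classify eye-tracking fixations
# into areas of interest (AOIs) and compute dwell-time proportions per trial.
from dataclasses import dataclass

@dataclass
class Fixation:
    x: float            # horizontal gaze position (screen pixels)
    y: float            # vertical gaze position (screen pixels)
    duration_ms: float  # fixation duration

# Hypothetical AOI bounding boxes (left, top, right, bottom) for the
# speaker's eye and mouth regions; real coordinates would come from
# face landmarks or manual annotation of the stimulus video.
AOIS = {
    "eyes":  (300, 150, 700, 280),
    "mouth": (380, 420, 620, 540),
}

def classify(fix: Fixation) -> str:
    """Return the AOI containing the fixation, or 'other'."""
    for name, (left, top, right, bottom) in AOIS.items():
        if left <= fix.x <= right and top <= fix.y <= bottom:
            return name
    return "other"

def dwell_proportions(fixations: list[Fixation]) -> dict[str, float]:
    """Proportion of total fixation time spent in each AOI."""
    totals = {name: 0.0 for name in (*AOIS, "other")}
    for fix in fixations:
        totals[classify(fix)] += fix.duration_ms
    grand = sum(totals.values()) or 1.0  # guard against empty trials
    return {name: t / grand for name, t in totals.items()}

# Example: a trial dominated by mouth-region looking.
trial = [Fixation(500, 480, 350), Fixation(510, 200, 120), Fixation(450, 470, 400)]
print(dwell_proportions(trial))
```

Dwell-time proportions of this kind are one common dependent measure for comparing visual attention across conditions, for example segmental versus prosodic trials.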
The project consists of three studies:
- Study A explores how native English listeners’ visual scanning of different facial regions predicts their accuracy in identifying segmental versus prosodic information, and how language familiarity modulates this relationship.
- Study B examines native English listeners’ gaze patterns when processing prosodic contrasts in English at different linguistic levels, including clausal, phrasal, and lexical prosody.
- Study C investigates how native Mandarin listeners visually decode Mandarin sentences that contrast in segmental features, lexical tones, and sentential focus.