Poster Session A: Tuesday, August 12, 1:30 – 4:30 pm, de Brug & E‑Hall
Prior Scene Context Modulates the Dynamic Interplay Between Bottom-Up and Top-Down Neural Processes in Face Detection
Sule Tasliyurt Celebi¹, Daniel Kaiser, Katharina Dobs¹; ¹Justus Liebig Universität Gießen
Presenter: Sule Tasliyurt Celebi
Within a fraction of a second, we detect faces in our environment. How is this remarkably fast process implemented in the brain, and is it modulated by top-down mechanisms? Here, we used electroencephalography (EEG) to probe how prior scene context shapes the temporal dynamics of neural face representations in natural settings. Participants viewed images of natural scenes containing a single face (on the left or right) that followed either a faceless preview (preview condition) or a gray screen (no-preview condition), while performing a face detection task (~10% foils). Using multivariate pattern analysis (MVPA), we decoded the face location (left vs. right) shortly after target onset. Critically, decoding accuracy for face location was initially higher in the preview condition, whereas the no-preview condition showed higher accuracy at later processing stages. Moreover, time-frequency analyses revealed enhanced decodability of face location in the preview condition in the alpha band (8–13 Hz), consistent with enhanced spatial orienting. Our findings suggest that prior scene context modulates face detection via distinct neural mechanisms affecting both bottom-up sensory integration and top-down spatial attention, highlighting the dynamic interplay between contextual cues and neural processing.

Keywords: face perception; time-frequency analysis; top-down processing; MVPA; EEG
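The time-resolved MVPA described above can be sketched as training one cross-validated classifier per EEG time point to discriminate face location (left vs. right) from channel patterns. Below is a minimal illustration on synthetic data using scikit-learn; the classifier, channel count, trial numbers, and cross-validation scheme are all illustrative assumptions, not the authors' actual pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Synthetic EEG: trials x channels x time points (purely illustrative data)
n_trials, n_channels, n_times = 200, 32, 50
X = rng.standard_normal((n_trials, n_channels, n_times))
y = rng.integers(0, 2, n_trials)  # 0 = face on the left, 1 = face on the right

# Inject a lateralized signal after a mock "target onset" (time point 20 onward)
X[y == 0, :16, 20:] += 0.5
X[y == 1, 16:, 20:] += 0.5

def decode_timecourse(X, y, cv=5):
    """Time-resolved decoding: cross-validated accuracy at each time point."""
    clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    scores = np.empty(X.shape[2])
    for t in range(X.shape[2]):
        scores[t] = cross_val_score(clf, X[:, :, t], y, cv=cv).mean()
    return scores

scores = decode_timecourse(X, y)
print(f"pre-onset accuracy (expected near chance): {scores[:20].mean():.2f}")
print(f"post-onset accuracy (expected above chance): {scores[20:].mean():.2f}")
```

Condition comparisons like preview vs. no-preview would then amount to computing such accuracy time courses per condition and contrasting them across time.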
Topic Area: Object Recognition & Visual Attention
Extended Abstract: Full Text PDF