The human environment is multimodal: at any moment, multiple sensory systems receive input simultaneously. Still, most research on perception, attention, and learning has focused on unisensory, mostly visual, information processing. Early research on crossmodal interactions emphasized that visual information is dominant, yet more recent work has identified many instances in which coinciding information from other sensory modalities alters visual processing. In addition, information processing is now increasingly tested with multisensory objects or by asking participants to respond to stimuli in different sensory modalities simultaneously. In this symposium, we provide an overview of recent research on crossmodal interactions and integration. To this end, five invited speakers from different German laboratories will address fundamental psychological processes, namely perception, attention, and learning, in multisensory contexts.
First, Hauke Meyerhoff (Leibniz-Institut für Wissensmedien) presents evidence on how coinciding crossmodal information impacts visual perception. The following two presentations address the influence of crossmodal stimuli on visual attention: Malte Möller (University of Passau) addresses the impact of crossmodal spatial compatibility on attention (i.e., the crossmodal Simon effect), and Edina Fintor (RWTH Aachen University) reports on the role of crossmodal congruency in task switching. Then, Ann-Katrin Wesslein (University of Tübingen) presents evidence from negative priming indicating that processing a stimulus may cause an amodal representation to be formed, potentially affecting subsequent processing of the same stimulus in another sensory modality. Finally, Julia Föcker (Ludwig Maximilians University, Munich) shows how the processing of human voices is altered in congenitally blind individuals (i.e., by differences in learning history). Taken together, the symposium aims to bring together researchers with a mutual interest in crossmodal interactions and to inspire future research as well as collaborations.
14:00
Effects of Distractor Duration and Target Modality on the Time Course of the Accessory Simon Effect
Dr. Malte Möller | Universität Passau
Authors:
Dr. Malte Möller | Universität Passau
Prof. Dr. Susanne Mayr | Universität Passau
Prof. Dr. Axel Buchner | Heinrich-Heine-Universität Düsseldorf
Lateralized responses to central targets are facilitated when a distractor is presented ipsilaterally (congruent trials) as compared with contralaterally (incongruent trials) to the response. This so-called accessory Simon effect is explained by assuming that the distractor activates a spatial code which conforms to or conflicts with the side of the response. In vision, the Simon effect typically decreases when the time between distractor and target increases. In contrast, a non-decreasing Simon effect is found when the irrelevant information is conveyed by an auditory stimulus. However, it is unclear whether the distinct time courses of the accessory Simon effect in vision and audition reflect differences in the way the cognitive system responds to irrelevant spatial events in the two modalities. The present study tested whether (1) the duration of the distractor and (2) the modality of the target impact the time course of the accessory Simon effect. In four experiments, a lateralized white-noise distractor occurred either prior to or simultaneously with the target. Distractor duration (short vs. persistent until response) and the modality of the target (visual vs. auditory) were systematically varied between experiments. A Simon effect was obtained in all experiments when distractors and targets were presented simultaneously or in close temporal proximity. For visual targets, the Simon effect did not dissipate over time when distractor sounds persisted until the response, but decreased when distractors were presented only briefly. For auditory targets, decreasing Simon effects were found for both short and persisting distractor durations. Moreover, a reversed Simon effect (i.e., impaired performance in congruent as compared with incongruent trials) was found in the short-distractor condition. Together, the results show that (1) longer distractor durations lead to a persisting Simon effect, but only for visual targets, and (2) distractor-related activation is presumably inhibited only, or more strongly, when distractor and target are presented in the auditory modality.
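The time-course analysis described above boils down to comparing mean reaction times in congruent and incongruent trials at each distractor-target interval. The following sketch is purely illustrative: the trial tuples, SOA levels, values, and function names are hypothetical rather than the study's data. A negative value at some SOA would correspond to the reversed Simon effect reported for short auditory distractors.

```python
import statistics

# Each trial: (soa_ms, congruent?, reaction_time_ms) -- illustrative values only.
trials = [
    (0, True, 412), (0, False, 448),
    (150, True, 405), (150, False, 431),
    (300, True, 410), (300, False, 415),
]

def simon_effect(trials, soa):
    """Simon effect at a given SOA: mean incongruent RT minus mean congruent RT."""
    congruent = [rt for s, c, rt in trials if s == soa and c]
    incongruent = [rt for s, c, rt in trials if s == soa and not c]
    return statistics.mean(incongruent) - statistics.mean(congruent)

for soa in (0, 150, 300):
    print(f"SOA {soa:>3} ms: Simon effect = {simon_effect(trials, soa):+.1f} ms")
```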
14:15
Beep, be-, or -ep: The impact of auditory transients on perceived bouncing/streaming
Dr. Hauke Meyerhoff | Leibniz-Institut für Wissensmedien
Authors:
Dr. Hauke Meyerhoff | Leibniz-Institut für Wissensmedien
Prof. Dr. Satoru Suzuki | Northwestern University
Establishing object correspondence over time (“Which object went where?”) is of central importance for a meaningful interpretation of the surrounding environment. Here, we study auditory contributions to this process using the bouncing/streaming paradigm wherein two discs move toward each other, superimpose, and then move apart. Critically, this event is ambiguous with regard to object correspondence as it is consistent with the interpretation of two discs streaming past each other as well as two discs bouncing off each other. When presented in silence, human observers tend to perceive streaming discs; however, a brief beep that coincides with the moment of visual overlap biases this impression toward bouncing. In four experiments, we tested the hypothesis that this crossmodal interaction is primarily mediated by low-level magnitude-based rather than high-level semantic-based processing. To do so, we orthogonally manipulated the number and semantic category of auditory transients. Specifically, different combinations of onsets and offsets generate qualitatively different events with distinct meanings; a single auditory transient can be a tone onset or a tone offset, and a pair of transients can be a brief tone (onset+offset) or a brief gap (offset+onset). The proportion of bouncing percepts increased with an increasing number of auditory transients (0 vs. 1 vs. 2) regardless of the sound’s semantic category. For example, a tone onset and a tone offset were equally effective (relative to no transients), and a brief tone (onset+offset) and a brief gap (offset+onset) were equivalently more effective. We identified a critical window of ±200 ms around the visual overlap; a longer tone whose offset occurred outside the window was only as effective as a single onset. These results suggest that a simple additive integration of auditory transients within the critical time window primarily mediates the auditory biasing of visual bouncing percepts.
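The additive-integration account lends itself to a compact formalization: every auditory transient, onset or offset alike, that falls within the ±200 ms critical window around the visual overlap contributes equally to the bounce bias. The sketch below is a hypothetical illustration of that counting rule; only the window size is taken from the abstract, and all names and times are made up.

```python
# Count auditory transients inside the critical window around visual overlap.
# Only the +/-200 ms window size comes from the abstract; the rest is an
# illustrative assumption.
CRITICAL_WINDOW_MS = 200

def effective_transients(transient_times_ms, overlap_time_ms):
    """Number of transients (onsets or offsets) within the critical window."""
    return sum(
        abs(t - overlap_time_ms) <= CRITICAL_WINDOW_MS
        for t in transient_times_ms
    )

overlap = 1000  # hypothetical time of visual overlap (ms)
print(effective_transients([990, 1040], overlap))  # brief tone or gap -> 2
print(effective_transients([990, 1600], overlap))  # long tone, late offset -> 1
print(effective_transients([], overlap))           # silence -> 0
```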
14:30
Modality-specific Crosstalk in Task Switching
Edina Fintor | Universität Aachen
Authors:
Edina Fintor | Universität Aachen
Prof. Dr. Iring Koch | Universität Aachen
Dr. Denise N. Stephan | Universität Aachen
In our multimodal daily life, it is a relevant question how humans process different sensory modalities and how they select a specific response modality. Our research investigates modality compatibility effects using a task-switching paradigm. We define modality compatibility as the similarity between the stimulus modality and the modality of response-related sensory consequences. Previous studies in task switching found larger switch costs when participants switched between modality incompatible tasks (auditory-manual and visual-vocal) than when they switched between modality compatible tasks (auditory-vocal and visual-manual). In the first part of my talk, I focus on our theoretical account, derived from ideomotor learning, which explains modality compatibility effects in task switching. In the second part, I review recent evidence for modality compatibility effects in task switching from different approaches. First, looking into structural issues, I show how modality mappings are represented and that modality compatibility biases responding in the “free choice” of a response modality. Second, I turn to the issue of flexibility and show that modality compatibility effects are independent of preparation, suggesting different underlying mechanisms. Finally, I report findings suggesting that short-term induction of modality incompatible tasks can reduce modality compatibility effects.
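As a reading aid, the compatibility definition and the switch-cost comparison can be captured in a few lines. This is a minimal sketch under the task mappings named in the abstract; the reaction times and function names are hypothetical, not the study's data.

```python
# Modality compatible task mappings as defined in the abstract.
COMPATIBLE = {("auditory", "vocal"), ("visual", "manual")}

def is_compatible(stimulus_modality, response_modality):
    """Stimulus modality matches the sensory consequences of the response
    (hearing one's own voice, seeing one's own hand move)."""
    return (stimulus_modality, response_modality) in COMPATIBLE

def switch_cost(rt_switch_ms, rt_repeat_ms):
    """Switch cost: reaction-time difference between switch and repeat trials."""
    return rt_switch_ms - rt_repeat_ms

# Reported pattern (illustrative numbers only): larger switch costs when
# switching between modality incompatible tasks.
print(is_compatible("auditory", "vocal"))               # True
print(switch_cost(720, 650), "ms, compatible tasks")    # smaller cost
print(switch_cost(790, 655), "ms, incompatible tasks")  # larger cost
```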
14:45
Negative Priming: Is Ignoring Amodal or Modality-Specific?
Dr. Ann-Katrin Wesslein | Universität Tübingen
Authors:
Dr. Ann-Katrin Wesslein | Universität Tübingen
Prof. Dr. Christian Frings | Universität Trier
Negative priming (NP) describes the finding that responding to a target stimulus is impaired on the second of two subsequent displays (i.e., the probe) when this target comprises the same stimulus that served as a distractor on the first display (i.e., the prime). NP is well established in vision, audition, and touch. This indicates that performance is generally hindered when it involves responding to previously ignored stimulus features or using a response that was previously associated with these features (see Frings, Schneider, & Fox, 2015, for a review of possible explanations of NP). We investigate the nature of these representations, namely whether they are modality-specific or exist on an amodal level. In order to measure crossmodal NP, we use rhythms (i.e., temporal patterns) as stimuli, enabling us to present the same stimulus information to different sensory systems. We report a series of experiments testing whether NP can be observed across sensory modalities under different conditions. We will discuss explanations for the findings observed and implications for theories of NP.
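For readers unfamiliar with the measure, the crossmodal NP logic reduces to a simple contrast, sketched below with hypothetical numbers and names: the probe target either repeats the prime distractor (ignored repetition) or does not (control), and prime and probe may address the same or different sensory modalities.

```python
def np_effect(rt_ignored_repetition_ms, rt_control_ms):
    """Negative priming: slower probe responses when the probe target
    repeats the prime distractor, relative to an unrelated control."""
    return rt_ignored_repetition_ms - rt_control_ms

# Within-modality condition (e.g., an ignored auditory rhythm probed
# again in audition) -- illustrative values only.
print(np_effect(842, 810), "ms within-modality NP")
# Crossmodal condition (e.g., an ignored auditory rhythm probed in touch):
# a reliable positive value would point to an amodal representation.
print(np_effect(828, 812), "ms crossmodal NP")
```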
15:00
Unisensory and Multisensory Person Perception
Dr. Julia Föcker | Ludwig-Maximilians-Universität München
Author:
Dr. Julia Föcker | Ludwig-Maximilians-Universität München
The human voice is one of the most important carriers of socially relevant information on which blind individuals rely, as it conveys characteristics such as the gender, age, and affective state of a person. Based on the assumption that use-dependent mechanisms of plasticity modify both the structure and the functions of the brain, we asked whether human voice processing is faster and more efficient in early blindness than in late blind and sighted adults. Moreover, we were interested in the accompanying neural plastic changes, which we investigated using EEG and fMRI.
In order to understand the nature of multisensory integration of human faces and voices, another group of sighted individuals was tested.
The first part of this talk explores the effects of visual deprivation from birth or from adulthood on vocal person-identity processing. Congenitally blind, late blind, and sighted controls were trained to discriminate a set of voices. After a specific learning criterion was met, behavioral, EEG, and fMRI experiments were conducted. Results showed that congenitally blind and late blind individuals had superior voice-learning skills compared to sighted controls. Congenitally blind, but not late blind, individuals revealed earlier priming effects than sighted controls; these effects were distributed over posterior clusters in both blind groups. Moreover, brain imaging data revealed enhanced activation in the right anterior fusiform gyrus in congenitally and late blind individuals compared to sighted controls. The second part of this talk addresses multisensory interactions in person identification. We asked when human faces modulate the processing of human voices by extending the paradigm described in Part 1. Early face-voice interactions were observed in the time range of the N1 and at later processing stages (>270 ms). Corresponding brain imaging data indicate that the right angular gyrus and the right posterior superior temporal sulcus play a crucial role in crossmodal interactions between human faces and voices. The general discussion integrates the results in the broader context of person identification.