A computational model of crossmodal processing for conflict resolution

Link:
Author:
Publisher/Corporate body:
IEEE Press
Year of publication:
2017
Media type:
Text
Keywords:
  • Perception
  • Sound
  • Multisensory processing
  • Attention
  • Brain
  • Learning
Description:
  • The brain integrates information from multiple sensory modalities to form a coherent and robust perceptual experience in complex environments. This ability is progressively acquired and fine-tuned during developmental stages in a multisensory environment. A rich set of neural mechanisms supports the integration and segregation of multimodal stimuli, providing the means to efficiently resolve conflicts across modalities. This motivates the development of efficient mechanisms for robotic platforms that process multisensory signals and trigger robust sensory-driven motor behavior. In this paper, we implement a computational model of crossmodal integration in a sound source localization task that also accounts for audiovisual conflict resolution. Our model consists of two layers of reciprocally connected visual and auditory neurons and a layer of crossmodal neurons that learns to integrate (or segregate) audiovisual stimuli on the basis of spatial disparity. To validate our architecture, we propose a spatial localization task in which 30 subjects had to determine the location of the sound source in a virtual scenario with four animated avatars. We measured their accuracy and reaction time under different conditions for congruent and incongruent audiovisual stimuli. We used this study as a baseline to model human-like behavioral responses with a neural network architecture exposed to the same experimental conditions.
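  The abstract describes a crossmodal layer that fuses auditory and visual population responses when their spatial disparity is small and segregates them otherwise. The following Python sketch illustrates that idea only in simplified form; the Gaussian tuning, the fixed disparity threshold, the weights, and all function names are assumptions for illustration and do not reproduce the learned network reported in the paper.

  import numpy as np

  def gaussian_population(center_deg, positions_deg, sigma=10.0):
      # Population-coded activity bump centered on a stimulus azimuth (assumed tuning width).
      return np.exp(-((positions_deg - center_deg) ** 2) / (2.0 * sigma ** 2))

  def crossmodal_response(aud_deg, vis_deg, positions_deg,
                          disparity_threshold=15.0, w_aud=0.5, w_vis=0.5):
      # Hypothetical rule: integrate when audiovisual disparity is small,
      # otherwise segregate and keep the auditory estimate.
      aud = gaussian_population(aud_deg, positions_deg)
      vis = gaussian_population(vis_deg, positions_deg)
      if abs(aud_deg - vis_deg) <= disparity_threshold:
          crossmodal = w_aud * aud + w_vis * vis   # congruent: fuse the two maps
      else:
          crossmodal = aud                         # incongruent: suppress vision
      estimate = positions_deg[np.argmax(crossmodal)]
      return estimate, crossmodal

  if __name__ == "__main__":
      positions = np.linspace(-90.0, 90.0, 181)            # azimuth grid in degrees
      print(crossmodal_response(20.0, 20.0, positions)[0])  # congruent condition
      print(crossmodal_response(20.0, -40.0, positions)[0]) # incongruent condition

  In the congruent case the fused map peaks at the shared location; in the incongruent case the localization estimate follows the auditory input, mirroring the integration-versus-segregation behavior the description attributes to the crossmodal layer.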
License:
  • info:eu-repo/semantics/closedAccess
Source system:
Research information system of the UHH (Universität Hamburg)

Internal metadata
Source record
oai:www.edit.fis.uni-hamburg.de:publications/6e551bc5-510c-4d9f-afda-32d120902f02