Causal Inference in Multisensory Perception


September 26, 2007 | Konrad P. Körding¹, Ulrik Beierholm², Wei Ji Ma³, Steven Quartz²,⁴, Joshua B. Tenenbaum⁵, Ladan Shams⁶
This paper presents a causal inference model of multisensory perception that infers whether two sensory cues originate from the same source and estimates the location(s) of the underlying cause or causes. The model accurately predicts the nonlinear integration of cues by human subjects in two auditory-visual localization tasks, showing that humans efficiently infer the causal structure of their sensory input as well as the location of its causes. By combining insights from the study of causal inference with the ideal-observer approach to sensory cue combination, the authors show that the capacity to infer causal structure is not limited to conscious, high-level cognition; it is also performed continually and effortlessly in perception.

Perceptual cues are seldom ecologically relevant in themselves; they acquire significance through what they convey about their causes. The nervous system therefore constantly combines uncertain information from different sensory modalities into an integrated estimate of the causes of sensory stimulation. The authors' ideal-observer model estimates the positions of cues and whether they share a common cause, using two pieces of information: the likelihoods of the sensed visual and auditory positions, which are corrupted by noise, and prior knowledge about the spatial layout of objects.

The model accounts for the behavioral data very well (R² = 0.97). It predicts the circumstances under which subjects should perceive a common cause or independent causes, whether the individual cues should be fused or processed separately, and how the cues are combined when they are. It also predicts the influence of vision on the perceived position of an auditory stimulus and, conversely, the influence of audition on the perceived position of a visual stimulus. The inferred probability of a common cause decreases with increasing spatial disparity between the cues, and the model explains the observed patterns of partial combination, the biases in auditory localization, and the counterintuitive negative biases that arise when distinct causes are perceived.

Tested against alternative models, including those that use interaction priors, the causal inference model fits the data better. Because it rests on an explicitly Bayesian generative model, it makes direct predictions about the perceived causal structure of sensory input that were impossible with previous models; interaction-prior models turn out to be a special case of causal inference, which explains why they have been successful at modeling human performance. The formalism derives from a strong normative idea, yields better fits to human performance in the auditory-visual localization task, and offers a partial answer to the question of how and when sights and sounds are paired into a unified conscious percept.
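To make the ideal observer concrete, the sketch below implements the two-step computation described above: it compares the marginal likelihoods of the sensed positions under a common cause versus independent causes, then model-averages the fused and segregated location estimates. This is a minimal illustration assuming Gaussian likelihoods, a zero-centered Gaussian spatial prior, and a model-averaging decision rule; the parameter values and function names are illustrative, not the paper's fitted values.

```python
import numpy as np

# Illustrative parameters (not the paper's fitted values).
SIGMA_V = 2.0    # std of visual noise (deg)
SIGMA_A = 10.0   # std of auditory noise (deg)
SIGMA_P = 20.0   # std of the Gaussian spatial prior over source positions (deg)
P_COMMON = 0.5   # prior probability that both cues share one cause

def _gauss(x, mu, var):
    """Gaussian density N(x; mu, var)."""
    return np.exp(-(x - mu) ** 2 / (2.0 * var)) / np.sqrt(2.0 * np.pi * var)

def posterior_common_cause(x_v, x_a):
    """p(C=1 | x_v, x_a): posterior probability that the sensed visual
    and auditory positions arose from one common source."""
    vv, va, vp = SIGMA_V ** 2, SIGMA_A ** 2, SIGMA_P ** 2
    # Likelihood under C=1, marginalizing over the shared source position s.
    denom = vv * va + vv * vp + va * vp
    like_common = np.exp(
        -((x_v - x_a) ** 2 * vp + x_v ** 2 * va + x_a ** 2 * vv) / (2.0 * denom)
    ) / (2.0 * np.pi * np.sqrt(denom))
    # Likelihood under C=2: two independent sources, each drawn from the prior.
    like_indep = _gauss(x_v, 0.0, vv + vp) * _gauss(x_a, 0.0, va + vp)
    num = like_common * P_COMMON
    return num / (num + like_indep * (1.0 - P_COMMON))

def estimate_auditory_position(x_v, x_a):
    """Model-averaged auditory estimate: the fused (common-cause) and
    segregated (independent-cause) estimates, weighted by p(C | x_v, x_a)."""
    vv, va, vp = SIGMA_V ** 2, SIGMA_A ** 2, SIGMA_P ** 2
    # C=1: reliability-weighted fusion of both cues and the prior (mean 0).
    s_fused = (x_v / vv + x_a / va) / (1.0 / vv + 1.0 / va + 1.0 / vp)
    # C=2: the auditory cue combined with the prior alone.
    s_segregated = (x_a / va) / (1.0 / va + 1.0 / vp)
    p_c1 = posterior_common_cause(x_v, x_a)
    return p_c1 * s_fused + (1.0 - p_c1) * s_segregated

if __name__ == "__main__":
    # Small disparity: a common cause is likely, audition is pulled toward vision.
    print(posterior_common_cause(0.0, 5.0), estimate_auditory_position(0.0, 5.0))
    # Large disparity: a common cause is unlikely, the cues stay segregated.
    print(posterior_common_cause(0.0, 25.0), estimate_auditory_position(0.0, 25.0))
```

Running the sketch with a small disparity (x_v = 0, x_a = 5) yields a high common-cause probability and an auditory estimate pulled strongly toward the visual cue, whereas a large disparity (x_a = 25) yields a low common-cause probability and a largely unbiased auditory estimate, mirroring the disparity dependence described above.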