Action understanding as inverse planning

2009 | Baker, Chris L., Rebecca Saxe, and Joshua B. Tenenbaum
This article presents a computational framework for modeling human action understanding based on Bayesian inverse planning. The framework represents an intuitive theory of intentional agents' behavior based on the principle of rationality: the expectation that agents will plan approximately rationally to achieve their goals, given their beliefs about the world. The mental states that caused an agent's behavior are inferred by inverting this model of rational planning using Bayesian inference, integrating the likelihood of the observed actions with the prior over mental states. This approach formalizes in precise probabilistic terms the essence of previous qualitative approaches to action understanding based on an "intentional stance" or a "teleological stance". In three psychophysical experiments using animated stimuli of agents moving in simple mazes, the authors assess how well different inverse planning models based on different goal priors can predict human goal inferences. The results provide quantitative evidence for an approximately rational inference mechanism in human goal inference within the simplified stimulus paradigm, and for the flexible nature of goal representations that human observers can adopt. The authors discuss the implications of their experimental results for human action understanding in real-world contexts, and suggest how their framework might be extended to capture other kinds of mental state inferences, such as inferences about beliefs, or inferring whether an entity is an intentional agent.
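To illustrate the core idea of inverting a rational planning model, below is a minimal sketch of Bayesian goal inference in a toy one-dimensional world. It is not the authors' implementation (the paper uses Markov decision process planning over 2-D maze stimuli and several goal priors); the world size, goal set, softmax rationality parameter, and discount factor here are illustrative assumptions chosen only to show the structure P(goal | actions) ∝ P(actions | goal) P(goal).

```python
import numpy as np

# Minimal sketch of Bayesian inverse planning in a 1-D "maze".
# States are positions 0..N-1, the agent moves left/right/stays, and each
# candidate goal g gives a step cost of -1 until g is reached.
# All names and parameter values below are illustrative assumptions.

N = 7                     # number of positions in the toy world (assumed)
GOALS = [0, 6]            # candidate goal locations (assumed)
BETA = 2.0                # softmax "rationality" parameter (assumed)
GAMMA = 0.95              # discount factor (assumed)
ACTIONS = [-1, 0, +1]     # move left, stay, move right

def value_iteration(goal, n_iters=100):
    """Approximately rational planning: value of each state under a goal."""
    V = np.zeros(N)
    for _ in range(n_iters):
        for s in range(N):
            if s == goal:
                V[s] = 0.0
                continue
            V[s] = max(-1.0 + GAMMA * V[np.clip(s + a, 0, N - 1)] for a in ACTIONS)
    return V

def action_likelihood(s, a, goal, V):
    """Softmax (noisily rational) probability of action a in state s given the goal."""
    q = np.array([-1.0 + GAMMA * V[np.clip(s + b, 0, N - 1)] for b in ACTIONS])
    p = np.exp(BETA * q)
    p /= p.sum()
    return p[ACTIONS.index(a)]

def goal_posterior(trajectory, prior=None):
    """Invert the planning model: P(goal | actions) ∝ P(actions | goal) P(goal)."""
    prior = prior if prior is not None else np.ones(len(GOALS)) / len(GOALS)
    values = {g: value_iteration(g) for g in GOALS}
    post = prior.copy()
    for s, a in trajectory:  # observed (state, action) pairs
        post *= [action_likelihood(s, a, g, values[g]) for g in GOALS]
    return post / post.sum()

# An agent starting at position 3 and moving right twice is judged far more
# likely to be heading for goal 6 than goal 0.
print(goal_posterior([(3, +1), (4, +1)]))
```

In this sketch the "forward" model (planning toward each candidate goal) supplies the likelihood of each observed action, and Bayes' rule combines those likelihoods with the prior over goals, which is the inversion step the abstract describes.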