Action understanding as inverse planning

2009 | Chris L. Baker, Joshua B. Tenenbaum & Rebecca R. Saxe
The article "Action Understanding as Inverse Planning" by Chris L. Baker, Joshua B. Tenenbaum, and Rebecca R. Saxe proposes a computational framework for human action understanding based on Bayesian inverse planning. The framework models the behavior of intentional agents using the principle of rationality, which assumes that agents plan to achieve their goals efficiently given their beliefs about the world. The mental states that cause an agent's behavior are inferred by inverting this model of rational planning via Bayesian inference, integrating the likelihood of observed actions with prior knowledge about mental states.

The authors test this approach in three psychophysical experiments using animated stimuli of agents moving in simple mazes, assessing how well inverse planning models built on different goal priors predict human goal inferences. The framework is compared with previous models of action understanding, including logical and probabilistic approaches, as well as a simple heuristic alternative based on low-level motion cues. Together, the experiments test whether human action understanding can be explained by inverse planning and probe the space of goal representations used in action understanding.

The results provide quantitative evidence, within the simplified stimulus paradigm, for an approximately rational inference mechanism in human goal inference and for the flexibility of the goal representations humans can adopt. The findings suggest that inverse planning models, particularly one that allows for goal changes and subgoals, capture human judgments better than simpler models.
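The core computation the summary describes, inverting a rational planning model with Bayes' rule, can be sketched in a few lines. The example below is not the authors' model (they use richer planning formulations over maze grids); it is a minimal illustration, with hypothetical names (`GOALS`, `BETA`, `infer_goal`) and a one-dimensional corridor world, of how a softmax-rational action likelihood combined with a goal prior yields a posterior over goals:

```python
import math

# Hypothetical 1-D corridor world: positions 0..10, two candidate goals.
# All names and parameter values here are illustrative, not from the paper.
GOALS = {"A": 0, "B": 10}   # goal label -> goal position
BETA = 2.0                  # rationality parameter: higher = more efficient agent

def action_likelihood(pos, action, goal_pos, beta=BETA):
    """P(action | state, goal) under a softmax-rational policy.

    The value of an action is minus the distance to the goal after taking it,
    so an approximately rational agent prefers actions that move it closer.
    """
    actions = [-1, +1]  # move left or move right

    def value(a):
        return -abs((pos + a) - goal_pos)

    z = sum(math.exp(beta * value(a)) for a in actions)
    return math.exp(beta * value(action)) / z

def infer_goal(trajectory, goals=GOALS):
    """Bayesian inverse planning: P(goal | actions) ∝ P(actions | goal) P(goal)."""
    posterior = {g: 1.0 / len(goals) for g in goals}  # uniform goal prior
    for pos, action in trajectory:
        for g, gpos in goals.items():
            posterior[g] *= action_likelihood(pos, action, gpos)
        total = sum(posterior.values())
        posterior = {g: p / total for g, p in posterior.items()}  # renormalize
    return posterior

# Agent starts at position 5 and moves right three times: evidence for goal B.
traj = [(5, +1), (6, +1), (7, +1)]
post = infer_goal(traj)
```

Running `infer_goal` on the rightward trajectory concentrates the posterior on goal B. The models compared in the paper generalize this idea by varying the goal prior, for example allowing goals to change mid-trajectory or to decompose into subgoals.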