March 1, 1996 | P. Read Montague, Peter Dayan, and Terrence J. Sejnowski
This paper presents a theoretical framework for understanding how mesencephalic dopamine systems encode information about future expectations. The authors propose that dopamine neurons can represent predictive relationships between sensory cues and rewards, and that fluctuations in dopamine release can act as prediction errors that guide learning and decision-making. The framework is based on a model of predictive Hebbian learning, where dopamine neurons adjust their activity based on the difference between expected and actual rewards.
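The core quantity of the framework can be sketched in a few lines. Below is a minimal, illustrative Python rendering of the temporal-difference (TD) prediction error the model builds on; the function name and example values are ours, not the paper's:

```python
# Hedged sketch of a temporal-difference (TD) prediction error, the quantity
# the framework proposes dopamine fluctuations encode. Illustrative only.

def td_error(reward, v_current, v_next, gamma=1.0):
    """delta(t) = r(t) + gamma * V(t+1) - V(t): actual minus predicted value."""
    return reward + gamma * v_next - v_current

print(td_error(reward=1.0, v_current=1.0, v_next=0.0))   # fully predicted reward: 0.0
print(td_error(reward=1.0, v_current=0.0, v_next=0.0))   # unexpected reward: 1.0
print(td_error(reward=0.0, v_current=1.0, v_next=0.0))   # omitted but predicted reward: -1.0
```

A positive error (unexpected reward) would correspond to a phasic burst of dopamine activity, a negative error (omitted reward) to a pause, and a fully predicted reward to no change at all.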
The authors show that dopamine neurons respond both to sensory stimuli and to reward delivery, and that this activity is sensitive to the timing of those events. In the model, the system learns the expected time of reward delivery, and dopaminergic activity reflects how accurate that temporal prediction turns out to be: with training, responses transfer from the reward itself to the earliest reliable predictive cue. The model also shows how dopamine neurons can carry both sensory and reward-related information in a single signal, and how that signal can be used to guide behavior.
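The timing behavior can be illustrated with a small simulation in the spirit of the paper's temporal stimulus representation, with one adjustable weight per post-cue time step. All parameters below are illustrative, not the paper's: early in training the prediction error arrives with the reward; after training it appears at cue onset instead, and the now-predicted reward produces no error.

```python
# Toy simulation: TD error migrates from reward time to cue onset over training.
# One weight per post-cue time step; parameters are illustrative assumptions.

T = 10                # time steps from cue onset to reward
ALPHA = 0.3           # learning rate
w = [0.0] * (T + 1)   # w[t] ~ value prediction V(t); w[T] stays 0 (trial end)

def run_trial(learn=True):
    """One trial: cue at t=0, reward of 1 at t=T-1. Returns per-step TD errors."""
    deltas = []
    for t in range(T):
        r = 1.0 if t == T - 1 else 0.0
        delta = r + w[t + 1] - w[t]       # TD error at step t
        if learn:
            w[t] += ALPHA * delta         # error-driven weight update
        deltas.append(delta)
    return deltas

first = run_trial(learn=False)       # naive system: error arrives with the reward
for _ in range(500):
    run_trial()                      # train on repeated cue -> reward pairings
trained = run_trial(learn=False)     # trained system: predicted reward, no error
cue_onset_response = w[0]            # error at cue onset = V(0) - 0 (cue is unpredicted)
print(first[-1], trained[-1], round(cue_onset_response, 3))
```

Before learning, the only nonzero error falls at reward delivery; after learning, the reward-time error vanishes and a positive response of the same size appears at the unpredicted cue, matching the response transfer described above.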
The authors test their model against physiological data from monkeys, showing that dopamine neurons respond to sensory cues and reward delivery in a way that is consistent with the model. They also show that the model can predict human decision-making behavior in a card choice task, where participants choose between two decks of cards based on the expected reward from each.
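A toy version of such a two-deck task is easy to simulate. The sketch below is not the paper's actual task or parameters: the deck payoffs, the softmax choice rule, and all constants are illustrative assumptions.

```python
# Illustrative two-deck choice task: value estimates are updated by prediction
# errors, and choices follow a softmax over the current estimates.
import math
import random

random.seed(0)
values = [0.0, 0.0]      # learned value estimate for deck A and deck B
payoff = [0.3, 0.7]      # assumed true mean payoff of each deck (illustrative)
ALPHA, BETA = 0.05, 3.0  # learning rate, softmax inverse temperature

def choose():
    """Softmax choice: the higher-valued deck is favored, but not deterministically."""
    p_a = 1.0 / (1.0 + math.exp(-BETA * (values[0] - values[1])))
    return 0 if random.random() < p_a else 1

picks = [0, 0]
for _ in range(2000):
    deck = choose()
    reward = 1.0 if random.random() < payoff[deck] else 0.0
    values[deck] += ALPHA * (reward - values[deck])   # prediction-error update
    picks[deck] += 1

print(picks, [round(v, 2) for v in values])   # the richer deck comes to dominate
```

The same prediction-error signal that drives learning also biases choice, which is the link between the physiological and behavioral data the authors emphasize.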
At the model's core is a prediction-error mechanism: fluctuations in dopamine activity signal the difference between expected and actual reward, the same error signal used in temporal-difference reinforcement learning and related methods from optimal control. The authors further argue that the model accounts for dopamine's documented effects on synaptic plasticity and on signal-to-noise ratios, connecting the cellular actions of dopamine to its proposed role in learning and decision-making.
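The plasticity claim amounts to a three-factor update: a synapse changes in proportion to its presynaptic activity times the globally broadcast prediction-error signal. A minimal sketch, with names and learning rate chosen for illustration rather than taken from the paper:

```python
# Sketch of a delta-gated Hebbian weight update: the global error signal
# (dopamine) gates plasticity, so only active inputs change. Illustrative names.

def update_weights(w, x, delta, alpha=0.1):
    """w_i <- w_i + alpha * x_i * delta: presynaptic activity x_i times the
    broadcast prediction error delta, scaled by learning rate alpha."""
    return [wi + alpha * xi * delta for wi, xi in zip(w, x)]

w = [0.0, 0.0, 0.0]
x = [1.0, 0.0, 1.0]                   # which cue inputs were active
w = update_weights(w, x, delta=1.0)   # positive surprise strengthens active inputs
print(w)   # [0.1, 0.0, 0.1]
```

With delta = 0 (a fully predicted reward) no weight changes, which is how the mechanism stops learning once predictions are accurate.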
The authors conclude that their model provides a general framework for understanding how dopamine systems contribute to learning and decision-making. On this view, dopamine neurons broadcast a prediction error between expected and actual rewards, and that single signal serves two roles: it trains predictions by modulating synaptic plasticity, and it guides ongoing behavioral choices. A mechanism of this kind unifies the sensory, reward-related, and behavioral functions of dopamine described above.