The authors evaluated four sequential sampling models for two-choice decisions—Wiener diffusion, Ornstein–Uhlenbeck (OU) diffusion, accumulator, and Poisson counter—by fitting them to response time (RT) and accuracy data from three experiments. Each model was augmented with assumptions about variability in evidence accumulation rates, response criteria, and base RT across trials. While there was some model mimicry, empirical conditions were identified where models made discriminably different predictions. The best accounts of the data were provided by the Wiener diffusion model, the OU model with small-to-moderate decay, and the accumulator model with long-tailed (exponential) distributions of criteria, although the last could not produce error RTs shorter than correct RTs. The relationship between these models and three recent, neurally inspired models was also examined.
A common feature of many tasks studied by experimental psychologists is a simple decision about a stimulus feature expressed as a choice between two responses. Sequential sampling models provide a way to understand both speed and accuracy within a common theoretical framework. These models assume that stimulus representations are inherently variable or noisy, and decisions are made by accumulating evidence until a criterion is reached. The criterion determines the response, and the time taken determines RT.
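The accumulate-to-criterion mechanism can be sketched as a minimal simulation of the Wiener diffusion model, one of the four models evaluated. All parameter values below (drift, boundary, step size) are illustrative, not fitted values from the paper.

```python
import random

def diffusion_trial(drift=0.3, boundary=1.0, dt=0.005, rng=random):
    """One two-choice trial: noisy evidence accumulates from 0 until it
    crosses +boundary (correct response) or -boundary (error).
    Euler discretization; per-step noise scales with sqrt(dt).
    Returns (is_correct, decision_time)."""
    x, t = 0.0, 0.0
    step_sd = dt ** 0.5  # unit diffusion coefficient
    while abs(x) < boundary:
        x += drift * dt + rng.gauss(0.0, step_sd)
        t += dt
    return x >= boundary, t

# With positive drift, the correct boundary is reached more often than
# chance, and the crossing time gives the decision component of RT.
rng = random.Random(1)
trials = [diffusion_trial(rng=rng) for _ in range(1000)]
accuracy = sum(c for c, _ in trials) / len(trials)
mean_rt = sum(t for _, t in trials) / len(trials)
```

The same loop with decay added to the increment (`x += (drift - decay * x) * dt + ...`) would give the OU variant mentioned above.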
Sequential sampling models explain the speed–accuracy trade-off by varying the amount of evidence needed for a response, represented by changes in decision criteria. They also provide precise quantitative predictions of the relationships between mean RTs, accuracy, and RT distribution shapes. Because different models predict different relationships, it is possible to determine which models account for experimental data well.
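The trade-off can be demonstrated by varying only the decision criterion in a toy diffusion simulation (again with illustrative parameters, not the authors' fits): a wider boundary demands more evidence, yielding slower but more accurate responses.

```python
import random

def simulate(boundary, drift=0.3, dt=0.005, n=1500, seed=0):
    """Return (accuracy, mean decision time) for a diffusion process with
    symmetric boundaries at +/-boundary. Only the criterion varies here."""
    rng = random.Random(seed)
    step_sd = dt ** 0.5
    n_correct, total_t = 0, 0.0
    for _ in range(n):
        x, t = 0.0, 0.0
        while abs(x) < boundary:
            x += drift * dt + rng.gauss(0.0, step_sd)
            t += dt
        n_correct += x >= boundary
        total_t += t
    return n_correct / n, total_t / n

fast_acc, fast_rt = simulate(boundary=0.5)  # liberal criterion: fast, error-prone
slow_acc, slow_rt = simulate(boundary=1.5)  # strict criterion: slow, accurate
```

Because the stimulus parameters are identical in both runs, any accuracy difference is due purely to the criterion, which is how these models separate speed–accuracy strategy from stimulus quality.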
The authors evaluated four sequential sampling models for two-choice decisions. They conducted a detailed qualitative investigation of RT and accuracy properties and performed comparative fits of the models to three sets of experimental data. They then compared the best-fitting models to a recent model by Usher and McClelland (2001) and two new models closely related to Usher and McClelland's model. These newer models combine features of various sequential sampling models and are argued to be more compatible with neurally inspired theoretical frameworks.
The authors also examined variability in processing and criteria across trials, which is a key factor in model performance. They found that models with variability in accumulation rates and starting points could better fit experimental data. The Wiener diffusion model, OU model with small-to-moderate decay, and accumulator model with long-tailed (exponential) distributions of criteria provided the best fits to the data. The accumulator model, however, could not produce error RTs shorter than correct RTs.
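One such mechanism can be sketched directly: adding trial-to-trial variability in the starting point of a diffusion process lets the model produce errors that are faster, on average, than correct responses, because errors arise disproportionately from trials that start near the error boundary. Parameter values are illustrative.

```python
import random

def trial(rng, drift=0.4, boundary=1.0, start_range=0.8, dt=0.005):
    """One diffusion trial whose starting point varies uniformly across
    trials (across-trial starting-point variability)."""
    x = rng.uniform(-start_range, start_range)
    t = 0.0
    step_sd = dt ** 0.5
    while abs(x) < boundary:
        x += drift * dt + rng.gauss(0.0, step_sd)
        t += dt
    return x >= boundary, t

rng = random.Random(2)
results = [trial(rng) for _ in range(3000)]
correct_rts = [t for ok, t in results if ok]
error_rts = [t for ok, t in results if not ok]
mean_correct = sum(correct_rts) / len(correct_rts)
mean_error = sum(error_rts) / len(error_rts)
# Errors come mostly from starts near the error boundary, so the mean
# error decision time is shorter than the mean correct decision time.
```

Variability in drift rate across trials has the opposite signature (slow errors), which is why the mixture of the two variability sources matters for fitting the error-RT patterns discussed above.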
The authors also examined quantile probability functions (QPFs) and found that their shape is determined by a small number of model parameters. The vertical location of a QPF is set by the nondecision component of RT, $ T_{er} $. The shape of a QPF also allows the relative speeds of correct and error responses to be judged by visual inspection. The authors concluded that the Wiener diffusion model, the OU model with small-to-moderate decay, and the accumulator model with long-tailed (exponential) distributions of criteria provided the best accounts of the data, with the accumulator model limited by its inability to produce error RTs shorter than correct RTs.
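The ingredients of a QPF can be sketched in a few lines: for each experimental condition, one plots RT quantiles (typically the .1, .3, .5, .7, and .9 quantiles) for correct and error responses against the corresponding response probabilities, with $ T_{er} $ entering as a constant vertical shift. The simulation and parameter values below are illustrative, not the authors' fits.

```python
import random

def quantile(xs, q):
    """Simple empirical quantile: value at index q*(n-1) of the sorted data."""
    xs = sorted(xs)
    return xs[round(q * (len(xs) - 1))]

def simulate_condition(drift, n=2000, boundary=1.0, dt=0.005, t_er=0.3, seed=3):
    """Simulate one condition of a diffusion process and return the QPF
    ingredients: response probability and five RT quantiles per response."""
    rng = random.Random(seed)
    step_sd = dt ** 0.5
    correct, error = [], []
    for _ in range(n):
        x, t = 0.0, 0.0
        while abs(x) < boundary:
            x += drift * dt + rng.gauss(0.0, step_sd)
            t += dt
        # t_er shifts every RT by the same amount: the QPF's vertical location
        (correct if x >= boundary else error).append(t + t_er)
    p_correct = len(correct) / n
    qs = (0.1, 0.3, 0.5, 0.7, 0.9)
    return p_correct, [quantile(correct, q) for q in qs], [quantile(error, q) for q in qs]

p, correct_qs, error_qs = simulate_condition(drift=0.3)
```

Plotting `correct_qs` above `p` and `error_qs` above `1 - p`, across several drift values, traces out the QPF; comparing the heights of the correct and error quantile stacks at matched probabilities is the visual-inspection check of relative error speed mentioned above.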