2009 | Klaas Enno Stephan, Will D. Penny, Jean Daunizeau, Rosalyn J. Moran, Karl J. Friston
In 2009, Stephan, Penny, Daunizeau, Moran, and Friston introduced a novel Bayesian model selection (BMS) method for group studies. The approach addresses the limitations of fixed-effects procedures such as the group Bayes factor (GBF), which assume that all subjects' data are generated by the same model and are therefore vulnerable to outliers. Instead, the model is treated as a random effect: a hierarchical Bayesian scheme, inverted with variational Bayes (VB), estimates the parameters of a Dirichlet distribution over the probabilities of all models in the comparison set. Given the log-evidence of each model for each subject, the method yields the conditional density of the model probabilities, supporting inference from both classical and Bayesian perspectives and proving more informative and robust than the GBF or frequentist tests of log-evidences. It applies to a wide range of group studies, including comparisons of dynamic causal models (DCMs) for fMRI or electrophysiological data, comparisons of source reconstruction methods for EEG/MEG, and selection among competing computational models of learning and decision-making. Validation with empirical and synthetic data showed that the method accurately reflects the structure of the population sampled from, remains robust in the presence of outliers, and is sensitive to both the distribution and the magnitude of log-evidence differences. Finally, the agglomerative property of the hierarchical model allows different model subspaces to be compared, enabling assessment of the importance of specific model attributes.
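The core of the scheme is an iterative VB update of the Dirichlet parameters from the subjects' log-evidences. A minimal sketch of this kind of update is given below; the function and variable names are illustrative (not taken from any published toolbox), and the convergence settings are assumptions:

```python
import numpy as np
from scipy.special import digamma

def vb_dirichlet(log_evidence, alpha0=1.0, max_iter=100, tol=1e-6):
    """Sketch of a VB estimate of Dirichlet parameters over model probabilities.

    log_evidence : (n_subjects, n_models) array of per-subject log-evidences.
    alpha0       : symmetric Dirichlet prior count per model (assumed value).
    Returns the posterior Dirichlet parameters alpha.
    """
    n_subjects, n_models = log_evidence.shape
    alpha = np.full(n_models, alpha0)
    for _ in range(max_iter):
        alpha_prev = alpha.copy()
        # Expected log model probabilities under the current Dirichlet.
        log_u = log_evidence + digamma(alpha) - digamma(alpha.sum())
        # Normalise per subject for numerical stability, then exponentiate
        # to obtain each subject's posterior model assignment probabilities.
        log_u -= log_u.max(axis=1, keepdims=True)
        u = np.exp(log_u)
        g = u / u.sum(axis=1, keepdims=True)
        # Update: prior counts plus expected assignments across subjects.
        alpha = alpha0 + g.sum(axis=0)
        if np.abs(alpha - alpha_prev).max() < tol:
            break
    return alpha
```

Each element of the returned `alpha` can be read as a prior count plus the expected number of subjects best explained by that model; larger values indicate models that are more prevalent in the population.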
The method also yields exceedance probabilities: the probability that a particular model is more likely than any other model considered. These are often more intuitive and interpretable than conventional posterior probabilities, because they are sensitive to the confidence in the posterior estimate. By integrating out uncertainty over every aspect of model structure other than the one of interest, the approach provides a powerful tool for group-level model selection and for assessing the importance of changes along any specific dimension of model space.
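Given the estimated Dirichlet parameters, exceedance probabilities can be approximated by Monte Carlo sampling: draw model-probability vectors from the Dirichlet and count how often each model has the largest probability. The sketch below illustrates this; the function name and sample size are assumptions:

```python
import numpy as np

def exceedance_probabilities(alpha, n_samples=100_000, seed=0):
    """Monte Carlo estimate of exceedance probabilities for a Dirichlet
    density with parameters alpha (one entry per model)."""
    rng = np.random.default_rng(seed)
    # Each row is one sampled vector of model probabilities.
    r = rng.dirichlet(alpha, size=n_samples)
    # For each draw, record which model has the highest probability.
    winners = r.argmax(axis=1)
    # Fraction of draws in which each model dominates all others.
    return np.bincount(winners, minlength=len(alpha)) / n_samples
```

For a two-model comparison with, say, `alpha = [10, 2]`, the first model wins the large majority of draws, so its exceedance probability is close to one; the entries always sum to one across models.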