Circular analysis in systems neuroscience: the dangers of double dipping

May 2009 | Nikolaus Kriegeskorte, W Kyle Simmons, Patrick S F Bellgowan & Chris I Baker
Neuroscientific experiments generate large datasets, but only a small fraction is analyzed in detail. Selecting among noisy measurements can lead to circular analysis, invalidating the results. The authors argue that systems neuroscience should adjust its practices to avoid circularity, particularly 'double dipping': using the same dataset for selection and for the selective analysis. Double dipping distorts descriptive statistics and invalidates statistical inference whenever the results statistics are not independent of the selection criteria under the null hypothesis.

Selection, such as defining regions of interest (ROIs) or restricting analysis to certain neurons, can introduce bias. In neuroimaging, ROIs are often defined by statistical maps; in electrophysiology, analysis is often restricted to a subset of the recorded neurons. Because the measurements are noisy, the selected data overfit the selection criteria, so subsequent results appear more consistent with those criteria than the true effects warrant. The authors demonstrate that circular analysis can produce spurious effects in both univariate activation analyses and multivariate pattern-information analyses.

To avoid circularity, they propose a policy ensuring that selection and results statistics are independent: either use independent datasets for selection and for the final analysis, or use selection and results statistics that are inherently independent (for example, orthogonal contrasts).

Surveying 134 fMRI studies published in five prestigious journals, the authors found that 42% contained at least one non-independent selective analysis. They note that such results are not necessarily entirely incorrect, but the distortions can be large enough to affect the validity of the conclusions, so each case requires careful consideration, supported by reanalyses and replications. They conclude by emphasizing the importance of sound statistical methodology in neuroimaging and the need for analysis practices that guarantee independence between selection and results.
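The selection bias described above can be illustrated with a small simulation (a sketch of the general idea, not an analysis from the paper): even when the data are pure noise, selecting the "most active" voxels and then measuring activation in the same data yields a spurious effect, whereas measuring it in an independent dataset does not.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate pure-noise "measurements": 1000 voxels x 40 trials,
# with no true activation anywhere (the null hypothesis holds).
n_voxels, n_trials = 1000, 40
data_select = rng.normal(size=(n_voxels, n_trials))       # used for selection
data_independent = rng.normal(size=(n_voxels, n_trials))  # held-out data

# Circular analysis: pick the 10 "most active" voxels, then quantify
# activation in the SAME data used to pick them.
top = np.argsort(data_select.mean(axis=1))[-10:]
circular_effect = data_select[top].mean()

# Independent analysis: same selection, but the effect is quantified
# in data that played no role in the selection.
independent_effect = data_independent[top].mean()

print(f"circular:    {circular_effect:.3f}")    # clearly above 0 despite no effect
print(f"independent: {independent_effect:.3f}")  # close to 0, as it should be
```

The circular estimate is inflated because the selection step favors voxels whose noise happened to point upward; in the independent data that noise averages out, which is exactly the independence the authors' policy is meant to guarantee.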