March 13, 2015 | Megan L. Head, Luke Holman, Rob Lanfear, Andrew T. Kahn, Michael D. Jennions
P-hacking, the practice of manipulating data or analyses to achieve statistically significant results, is widespread in scientific research. A study using text-mining found that p-values from a broad range of scientific disciplines show strong evidence of "evidential value," indicating that researchers are primarily studying phenomena with nonzero effect sizes. However, there was also strong evidence of p-hacking, with an overabundance of p-values just below 0.05. This suggests that while p-hacking may be common, its effect is relatively weak compared to the true effect sizes being measured.

The study also analyzed meta-analyses and found that p-hacking is more prevalent in certain disciplines, but the overall effect on meta-analyses is limited. The study highlights the importance of using p-curves to detect p-hacking and assess the reliability of published research. Despite the prevalence of p-hacking, the study concludes that meta-analyses are generally robust to its effects, as they combine data from multiple studies and account for variability.

The authors recommend improving research practices, such as better education of researchers, clearer guidelines for data reporting, and the use of prespecification and open data access to reduce p-hacking. Overall, the study underscores the need for greater transparency and rigor in scientific research to minimize the impact of p-hacking on the reliability of published findings.
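The core of the p-curve approach described above is simple to sketch: in the absence of p-hacking, significant p-values from studies of real effects should pile up at small values, not just under the 0.05 threshold, so an excess of p-values in the bin just below 0.05 relative to an adjacent lower bin is a red flag. Below is a minimal illustration of that binning idea, assuming the bin boundaries (0.04, 0.045, 0.05) and a one-sided binomial test; the function names and example data are hypothetical, not taken from the study's actual code.

```python
import math

def binomial_sf(k, n, p=0.5):
    # Survival function P(X >= k) for X ~ Binomial(n, p), computed exactly.
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def pcurve_bin_test(p_values):
    """Hedged sketch of a p-curve check for p-hacking.

    Counts significant p-values in two adjacent bins, (0.04, 0.045) and
    (0.045, 0.05). Under no p-hacking, the upper bin should not hold the
    majority; a small one-sided binomial p-value (excess in the upper bin)
    is taken as evidence consistent with p-hacking.
    Returns None if neither bin contains any p-values.
    """
    upper = sum(1 for p in p_values if 0.045 < p < 0.05)
    lower = sum(1 for p in p_values if 0.04 < p < 0.045)
    n = upper + lower
    if n == 0:
        return None
    return binomial_sf(upper, n)  # small value => suspicious excess near 0.05

# Hypothetical example: 9 of 10 borderline p-values sit just under 0.05.
sample = [0.046, 0.047, 0.048, 0.049, 0.0495, 0.046, 0.0485, 0.047, 0.049, 0.041]
print(pcurve_bin_test(sample))
```

With the example data, 9 of the 10 borderline p-values fall in the upper bin, giving a binomial p-value of about 0.011, the kind of left-skew near the threshold that the text-mining study flagged as evidence of p-hacking.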