John P. A. Ioannidis argues that most published research findings are likely false. The probability that a research claim is true depends on factors such as study power, bias, the number of studies addressing the same question, and the ratio of true to no relationships among those probed in a field. Simulations show that for most study designs and settings, a research claim is more likely to be false than true, and many claimed findings may simply be accurate measures of the prevailing bias in a field.
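In the paper's notation, with pre-study odds R (true to null relationships among those probed), Type I error rate \(\alpha\), and Type II error rate \(\beta\), a positive finding is more likely true than false exactly when expected true positives outnumber expected false positives:

\[
\mathrm{PPV} \;=\; \frac{(1-\beta)\,R}{R - \beta R + \alpha} \;>\; \frac{1}{2}
\quad\Longleftrightarrow\quad
(1-\beta)\,R \;>\; \alpha .
\]

At the conventional \(\alpha = 0.05\), even a perfectly powered study clears this bar only if more than about one in twenty-one probed relationships is true.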
Ioannidis formalizes this with a 2x2 table relating true relationships to claimed findings: the post-study probability that a claimed finding is true (the positive predictive value, PPV) is determined by the pre-study odds of the relationship being true, the statistical power of the study, and the chosen significance level. He then extends the framework to show that bias and independent testing by multiple teams each drive the PPV down further.
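A minimal sketch of that algebra may make it concrete. The function names below are my own, but the three formulas follow the paper's derivations: R is the pre-study odds, alpha the Type I error rate, beta the Type II error rate (power = 1 - beta), u the fraction of otherwise-negative analyses reported as positive because of bias, and n the number of independent teams.

```python
def ppv(R, alpha=0.05, beta=0.20):
    """Post-study probability that a claimed relationship is true.

    R     -- pre-study odds (true : null relationships among those tested)
    alpha -- Type I error rate (significance level)
    beta  -- Type II error rate (power = 1 - beta)
    """
    return (1 - beta) * R / (R - beta * R + alpha)


def ppv_with_bias(R, u, alpha=0.05, beta=0.20):
    """PPV when a fraction u of analyses that would not otherwise have
    produced a positive finding are nevertheless reported as positive
    (flawed designs, selective reporting, post hoc analysis)."""
    numerator = (1 - beta) * R + u * beta * R
    denominator = R + alpha - beta * R + u - u * alpha + u * beta * R
    return numerator / denominator


def ppv_many_teams(R, n, alpha=0.05, beta=0.20):
    """PPV when n independent teams probe the same question and any
    single team's positive result counts as the claimed discovery."""
    numerator = (1 - beta ** n) * R
    denominator = R + 1 - (1 - alpha) ** n - R * beta ** n
    return numerator / denominator
```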
From this framework he derives several corollaries: smaller studies, smaller effect sizes, a greater number (and lesser preselection) of tested relationships, and greater flexibility in designs, definitions, outcomes, and analytical approaches all make a finding less likely to be true. So do financial and other interests or prejudices, and the involvement of many competing teams in a hot field.
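Plugging numbers into the sketch above reproduces the flavor of the paper's worked scenarios; the parameter choices below loosely follow two of its tabulated examples and are otherwise illustrative.

```python
# Adequately powered RCT, 1:1 pre-study odds, modest bias (u = 0.10):
print(ppv_with_bias(R=1.0, u=0.10))               # ~0.85, likely true

# Underpowered early-phase trial: power 0.20, odds 1:5, bias u = 0.20:
print(ppv_with_bias(R=0.2, u=0.20, beta=0.80))    # ~0.23, likely false

# Hot field: ten teams, each underpowered (power 0.20), odds 1:10,
# with any one team's positive result claimed as a discovery:
print(ppv_many_teams(R=0.1, n=10, beta=0.80))     # ~0.18, likely false
```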
Ioannidis also notes that research findings in fields with very low pre-study odds are often inaccurate measures of true relationships and may simply reflect the prevailing bias. He suggests that improving research standards, reducing bias, and focusing on large-scale studies can help improve the reliability of research findings. However, he argues that it is often difficult to know the true probability of a finding being true, and that statistical significance testing in a single study provides only a partial picture. He concludes that most new discoveries will come from hypothesis-generating research with low pre-study odds, and that researchers should be cautious about interpreting statistically significant findings without considering the broader context of the field.
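The paper's own discovery-oriented example makes the low-odds point vivid: in a whole-genome association study testing 100,000 polymorphisms of which perhaps ten truly modulate disease risk, the pre-study odds are about 1:10,000, so even a reasonably powered scan leaves almost every claimed hit false. Using the ppv helper sketched earlier:

```python
# ~10 true associations among 100,000 tested polymorphisms (R = 1e-4),
# power 0.60: only about 1 claimed finding in 800 is actually true.
print(ppv(R=1e-4, beta=0.40))   # ~0.0012
```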