2018 | Daniël Lakens, Anne M. Scheel, and Peder M. Isager
Lakens, Scheel, and Isager (2018) present a tutorial on equivalence testing in psychological research. The article explains how to test for the absence of an effect, using the two one-sided tests (TOST) procedure to determine whether an observed effect is surprisingly small, assuming that a meaningful effect exists. They discuss various approaches to determining the smallest effect size of interest (SESOI) and provide detailed examples of how to perform and report equivalence tests. Equivalence tests are an important extension of current statistical tools, enabling researchers to falsify predictions about the presence or absence of meaningful effects.
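As a rough illustration of the TOST logic described above: the paper's own examples use R (the TOSTER package), but an analogous two one-sided tests routine exists in Python's statsmodels. The data and the ±0.5 raw equivalence bound below are illustrative assumptions, not values taken from the article.

```python
# Minimal sketch of the TOST equivalence-testing procedure, assuming
# simulated two-group data and an illustrative raw SESOI of +/- 0.5.
import numpy as np
from statsmodels.stats.weightstats import ttost_ind

rng = np.random.default_rng(42)
group_a = rng.normal(loc=0.0, scale=1.0, size=100)
group_b = rng.normal(loc=0.05, scale=1.0, size=100)

# ttost_ind runs both one-sided tests against the lower and upper bounds;
# the overall p-value is the larger of the two one-sided p-values.
pvalue, lower_test, upper_test = ttost_ind(group_a, group_b,
                                           low=-0.5, upp=0.5)

# If pvalue < alpha, both one-sided tests reject: the observed effect is
# statistically smaller than the smallest effect size of interest.
print(f"TOST p-value: {pvalue:.4f}")
```

If the TOST p-value falls below alpha, one concludes statistical equivalence within the chosen bounds; a non-significant result means the data are inconclusive, not that equivalence is demonstrated.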
The article emphasizes the importance of justifying the SESOI based on theoretical predictions, cost-benefit analyses, or previous studies. It also discusses the difference between raw and standardized equivalence bounds and provides examples of how to perform equivalence tests in R. The authors argue that equivalence tests allow researchers to distinguish between statistical significance and practical significance, improving the falsifiability of psychological research.
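The raw-versus-standardized distinction mentioned above can be sketched with a one-line conversion: a standardized bound expressed as Cohen's d maps to a raw-scale bound by multiplying by the (pooled) standard deviation. The SD value below is a hypothetical example, not one from the article.

```python
# Converting a standardized equivalence bound (Cohen's d) into a bound
# on the raw measurement scale; the pooled SD here is illustrative.
def d_to_raw_bound(d: float, pooled_sd: float) -> float:
    """Raw-scale equivalence bound corresponding to a standardized bound d."""
    return d * pooled_sd

# e.g. a SESOI of d = 0.5 on a scale with a pooled SD of 10 points
print(d_to_raw_bound(0.5, 10.0))  # → 5.0
```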
The authors also highlight the importance of preregistering equivalence bounds before data collection and emphasize that equivalence tests should be used in conjunction with null-hypothesis significance tests. They conclude that incorporating equivalence tests into the statistical toolbox will help researchers contribute to better, more falsifiable theories in psychology. The article is published in Advances in Methods and Practices in Psychological Science, and the authors have made their code and data publicly available on the Open Science Framework.