This article introduces equivalence testing as a practical approach to supporting claims about the absence of a meaningful effect in psychological research. Traditional significance tests often lead to incorrect conclusions that an effect is absent when p-values are nonsignificant. Equivalence testing, in particular the two one-sided tests (TOST) procedure, allows researchers to statistically reject the presence of effects large enough to be considered worthwhile. The article explains the TOST procedure in detail, including the calculation of t-values and power analysis, and provides a spreadsheet and an R package to facilitate its implementation. It emphasizes the importance of setting equivalence bounds based on standardized effect sizes or theoretical predictions, and discusses the benefits of equivalence tests, such as improved statistical and theoretical inferences. The article also addresses common challenges and limitations, such as the need for large sample sizes and the lack of clear benchmarks for effect sizes. Overall, the article advocates the adoption of equivalence testing to enhance the rigor and transparency of psychological research.
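
The article supplies a spreadsheet and an R package for running the TOST procedure; as a rough illustration of the underlying calculation for two independent groups, the sketch below implements the two one-sided t-tests in Python against raw equivalence bounds. The function name and interface are hypothetical, the test assumes equal variances, and the code is not taken from the article's materials.

```python
import numpy as np
from scipy import stats

def tost_two_sample(m1, sd1, n1, m2, sd2, n2, low, high, alpha=0.05):
    """Two one-sided tests (TOST) for two independent groups (equal-variance
    Student's t). `low` and `high` are raw equivalence bounds on the mean
    difference (m1 - m2); the interface is illustrative only."""
    # Pooled standard deviation and standard error of the mean difference
    df = n1 + n2 - 2
    sd_pooled = np.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / df)
    se = sd_pooled * np.sqrt(1 / n1 + 1 / n2)
    diff = m1 - m2

    # One-sided test against the lower bound: H0 is diff <= low
    t_low = (diff - low) / se
    p_low = stats.t.sf(t_low, df)        # P(T >= t_low)

    # One-sided test against the upper bound: H0 is diff >= high
    t_high = (diff - high) / se
    p_high = stats.t.cdf(t_high, df)     # P(T <= t_high)

    # Equivalence is concluded only if both one-sided tests reject,
    # i.e. the larger of the two p-values is below alpha
    p_tost = max(p_low, p_high)
    return t_low, p_low, t_high, p_high, p_tost < alpha

# Example: two groups of 100, observed mean difference 0.1, SDs of 1,
# and equivalence bounds of -0.5 and +0.5 raw units
print(tost_two_sample(0.1, 1.0, 100, 0.0, 1.0, 100, low=-0.5, high=0.5))
```

Equivalently, equivalence at level alpha can be concluded when the (1 - 2*alpha) confidence interval around the mean difference falls entirely within the equivalence bounds, which is how the result is often reported alongside the two t-values.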