Loftus, G. R., & Masson, M. E. J. (1994). Using confidence intervals in within-subject designs. Psychonomic Bulletin & Review, 1(4), 476-490.
The article by Geoffrey R. Loftus and Michael E. J. Masson discusses the use of confidence intervals in within-subject designs, a topic that is often overlooked in social science statistics textbooks. They argue that plotting sample statistics with associated confidence intervals can be a useful supplement to, or even a replacement for, standard hypothesis-testing procedures. The authors focus on the construction of a confidence interval for within-subject designs, which is based on the variability due to the subject × condition interaction rather than on between-subject variance. This interval is derived from the same error term as the analysis of variance (ANOVA) and has two key properties: it leads to conclusions comparable to those of the ANOVA, and the corresponding confidence interval for the difference between two sample means is wider than the interval around a single mean by a factor of \(\sqrt{2}\).
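In notation reconstructed from this description (not quoted from the article), with \(J\) conditions, \(n\) subjects, and \(MS_{S\times C}\) the subject × condition interaction mean square from the ANOVA, the interval plotted around each condition mean \(\bar{X}_j\) is

\[
\bar{X}_j \pm t_{(n-1)(J-1),\,\alpha/2}\,\sqrt{\frac{MS_{S\times C}}{n}},
\qquad
\text{CI}_{\text{diff}} = \sqrt{2}\times\text{CI},
\]

where the second expression states the \(\sqrt{2}\) relationship between the interval around a single mean and the interval for the difference between two means.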
The article begins by explaining the historical roots of hypothesis-testing procedures, including Bayesian techniques, Fisher's significance testing, and Neyman and Pearson's approach to hypothesis testing. It then introduces the concept of confidence intervals, emphasizing their utility in understanding the correspondence between observed sample means and underlying population means. The authors provide a detailed explanation of how to compute a within-subject confidence interval, which involves normalizing the data to remove between-subject variability (subtracting each subject's overall mean from that subject's scores and adding back the grand mean) and using the subject × condition interaction variance as the error term.
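As a concrete illustration, the following is a minimal sketch (not the authors' code; the function and variable names are my own) of this computation, assuming a complete subjects × conditions data matrix. The normalization step described above leaves only the interaction variability, so the sketch obtains the same error term directly from the standard ANOVA decomposition.

```python
import numpy as np
from scipy import stats

def loftus_masson_ci(data, confidence=0.95):
    """data: 2-D array with rows = subjects, columns = conditions.
    Returns the half-width of the within-subject confidence interval."""
    data = np.asarray(data, dtype=float)
    n, k = data.shape                      # n subjects, k conditions
    grand_mean = data.mean()

    # ANOVA decomposition for a subjects x conditions design (no replication).
    ss_total = ((data - grand_mean) ** 2).sum()
    ss_subjects = k * ((data.mean(axis=1) - grand_mean) ** 2).sum()
    ss_conditions = n * ((data.mean(axis=0) - grand_mean) ** 2).sum()
    ss_interaction = ss_total - ss_subjects - ss_conditions

    df_interaction = (n - 1) * (k - 1)
    ms_interaction = ss_interaction / df_interaction   # the error term

    # Half-width t_crit * sqrt(MS_interaction / n), plotted around each mean.
    t_crit = stats.t.ppf((1 + confidence) / 2, df_interaction)
    return t_crit * np.sqrt(ms_interaction / n)

# Example: 5 subjects x 3 conditions of hypothetical response times (ms).
rt = np.array([[510, 480, 470],
               [620, 600, 590],
               [550, 540, 520],
               [480, 470, 460],
               [600, 590, 570]])
print(rt.mean(axis=0), "+/-", loftus_masson_ci(rt))
```

The same half-width is attached to every condition mean, which is what makes the resulting figure directly interpretable in terms of the pattern of means.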
The article also addresses the implications of violating assumptions such as sphericity in repeated measures ANOVA, suggesting corrections and alternative methods like multivariate analysis of variance (MANOVA). It discusses the use of confidence intervals in multifactor designs, mixed designs, and data reduction techniques. The authors conclude by illustrating these concepts with examples, including a hypothetical priming experiment, to demonstrate how confidence intervals can provide valuable insights into the patterns of means and the reliability of effects.
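As a companion to the sphericity point, the sketch below estimates the Greenhouse-Geisser epsilon from the sample covariance matrix of the conditions. This is a standard diagnostic offered here as an illustration under my own naming, not code from the article; values well below 1 suggest that a single pooled interval may be misleading and that contrast-specific intervals or MANOVA are safer choices.

```python
import numpy as np

def greenhouse_geisser_epsilon(data):
    """data: 2-D array, rows = subjects, columns = conditions.
    Returns the Greenhouse-Geisser epsilon estimate (between 1/(k-1) and 1)."""
    data = np.asarray(data, dtype=float)
    k = data.shape[1]
    sigma = np.cov(data, rowvar=False)           # covariance of the k measures
    centering = np.eye(k) - np.ones((k, k)) / k  # double-centering matrix
    s = centering @ sigma @ centering
    # Epsilon = (trace S)^2 / ((k - 1) * trace(S S)); equals 1 under sphericity.
    return np.trace(s) ** 2 / ((k - 1) * np.trace(s @ s))

# For the corrected F test, multiply both ANOVA degrees of freedom by epsilon.
```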