Interaction revisited: the difference between two estimates

25 JANUARY 2003 | Douglas G Altman, J Martin Bland
This article discusses the statistical concept of interaction, focusing on comparing two estimates of the same quantity from separate analyses. The method applies to two estimates, such as means or proportions, each with its standard error. The difference between the two estimates is calculated, and its standard error is the square root of the sum of the squares of the separate standard errors. A z score is then calculated to test the null hypothesis that the difference is zero, and a 95% confidence interval for the difference follows from the same standard error.

The article illustrates this method using relative risks and odds ratios, which are analyzed on the log scale. In an example from a meta-analysis of non-vertebral fractures in hormone replacement therapy trials, the relative risks from two subgroups were compared: the logs of the relative risks and of their confidence limits were calculated, and the difference in log relative risks was tested for significance. The results showed no significant interaction between the subgroups. Comparing odds ratios follows a similar approach, while comparing means or regression coefficients is simpler because no log transformation is required.

The two estimates must be independent, so the method should not be used to compare a subset with the whole group, or two estimates from the same patients. The article emphasizes that tests for interaction have limited power, even in a meta-analysis. A significant result in one subgroup alongside a non-significant result in another does not necessarily indicate an interaction, and overlapping confidence intervals do not necessarily mean the estimates are not significantly different.
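The comparison described above can be sketched in Python. This is a minimal illustration of the general recipe (difference, standard error of the difference, z score, 95% confidence interval); the numbers in the usage note are made up for illustration and do not come from the article.

```python
import math

def compare_estimates(est1, se1, est2, se2):
    """Test whether two independent estimates differ (null hypothesis: difference = 0)."""
    diff = est1 - est2
    # SE of the difference: square root of the sum of the squared SEs
    se_diff = math.sqrt(se1 ** 2 + se2 ** 2)
    z = diff / se_diff
    # 95% confidence interval for the difference
    ci = (diff - 1.96 * se_diff, diff + 1.96 * se_diff)
    # two-sided P value from the standard normal distribution
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return diff, se_diff, z, ci, p
```

For example, with hypothetical means 10.0 (SE 2.0) and 7.0 (SE 1.5), the difference is 3.0 with standard error 2.5, giving z = 1.2 and a two-sided P value of about 0.23: no evidence of a difference.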
Statistical analysis should be targeted on the question in hand, not based on comparing P values from separate analyses.
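The log-scale comparison of two relative risks can be sketched the same way. A common trick, consistent with the approach described here, is to recover the standard error of a log relative risk from its published 95% confidence interval (the width of the log interval divided by 2 × 1.96). The relative risks and intervals below are hypothetical, not the figures from the hormone replacement therapy meta-analysis.

```python
import math

def se_from_ci(lower, upper):
    """SE of a log relative risk, recovered from its 95% CI on the ratio scale."""
    return (math.log(upper) - math.log(lower)) / (2 * 1.96)

def compare_relative_risks(rr1, ci1, rr2, ci2):
    """Interaction test: difference of two independent log relative risks."""
    se1 = se_from_ci(*ci1)
    se2 = se_from_ci(*ci2)
    diff = math.log(rr1) - math.log(rr2)
    se_diff = math.sqrt(se1 ** 2 + se2 ** 2)
    z = diff / se_diff
    # back-transform to get the ratio of relative risks with its 95% CI
    ratio = math.exp(diff)
    ci = (math.exp(diff - 1.96 * se_diff), math.exp(diff + 1.96 * se_diff))
    return ratio, ci, z
```

With hypothetical subgroup estimates RR = 0.70 (95% CI 0.50 to 0.98) and RR = 0.90 (0.70 to 1.16), the ratio of relative risks is 0.78 with a confidence interval spanning 1, and |z| < 1.96: no significant interaction, mirroring the conclusion of the worked example in the article.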