Confirmation, Disconfirmation, and Information in Hypothesis Testing

Psychological Review, 1987, Vol. 94, No. 2, 211–228 | Joshua Klayman and Young-Won Ha
The article by Joshua Klayman and Young-Won Ha examines the strategies people use in hypothesis testing, focusing on confirmation and disconfirmation. The authors argue that many phenomena commonly labeled "confirmation bias" are better understood as a general *positive test strategy*: people tend to test cases they expect to have the property of interest rather than cases they expect to lack it. This strategy is not confirmation bias in the traditional sense and can be a useful heuristic under realistic conditions, but it can also produce systematic errors or inefficiencies, especially in probabilistic environments.

The authors analyze the rule discovery task, a classic hypothesis-testing paradigm in which subjects must discover the rule that defines a target set by proposing instances generated from their own hypothesized rule. People tend to use positive hypothesis tests (+Htests) far more often than negative hypothesis tests (−Htests), which can leave them overconfident in their hypotheses. The authors also lay out the logic of ambiguous versus conclusive events, emphasizing that conclusive falsification can occur only with −Htests in certain scenarios (for example, when the hypothesized rule is a special case of the true rule, as in Wason's 2-4-6 task), while conclusive verification can occur only with +Htests.
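To make the logic of ambiguous versus conclusive events concrete, here is a minimal Python sketch of a Wason-style situation in which the hypothesized rule is a special case of the true rule. The rules, triples, and function names below are illustrative assumptions, not taken from the article; the point is that every +Htest receives a "yes" and is therefore ambiguous, while a −Htest can conclusively falsify the hypothesis.

```python
# Illustrative sketch (not from the article): a 2-4-6-style task where the
# hypothesized rule H is a proper subset of the true rule T.
#   T: any strictly ascending triple
#   H: ascending triple with equal steps of 2

def in_target(triple):
    """True rule T: any strictly ascending triple (assumed for illustration)."""
    a, b, c = triple
    return a < b < c

def in_hypothesis(triple):
    """Hypothesized rule H: each number is 2 more than the last (assumed)."""
    a, b, c = triple
    return b - a == 2 and c - b == 2

def classify(triple):
    """A +Htest is an instance expected to fit H; a -Htest one expected not to.
    The outcome falsifies H when it contradicts H's prediction about the instance."""
    test_type = "+Htest" if in_hypothesis(triple) else "-Htest"
    got_yes = in_target(triple)
    predicted_yes = in_hypothesis(triple)
    verdict = "consistent with H" if got_yes == predicted_yes else "falsifies H"
    return test_type, got_yes, verdict

for triple in [(8, 10, 12),   # +Htest: fits H, gets "yes"  -> ambiguous support
               (1, 2, 3),     # -Htest: violates H, gets "yes" -> conclusive falsification
               (6, 4, 2)]:    # -Htest: violates H, gets "no"  -> consistent with H
    print(triple, *classify(triple))
```

Because H is embedded in T here, no +Htest can ever return a falsifying "no"; only −Htests can expose the hypothesis as too narrow.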
The article further analyzes the conditions under which +Htests or −Htests are more likely to yield conclusive falsification, based on the relationship between the base rates of the target and hypothesized sets. The authors conclude that under common conditions, such as the study of minority phenomena, +Htests are more likely to produce falsification. They also examine positive and negative target tests (+Ttests and −Ttests) and show that under certain conditions these tests, too, can be more informative for obtaining conclusive falsification.

Finally, the authors consider probabilistic environments, in which even the best possible hypothesis makes some false-positive and false-negative predictions. They argue that the basic conclusions still hold, with +tests favored under certain conditions, but the analysis becomes more complex because of this irreducible error.
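The base-rate argument can be illustrated with a small worked example. The numbers below are invented for illustration and do not come from the article: the target property is rare (a minority phenomenon), the hypothesized set is about the same size as the target set, and the two overlap only partially. Under these assumptions, a +Htest is far more likely than a −Htest to turn up a falsifying case.

```python
# Made-up sizes for a universe of instances, the target set T, the
# hypothesized set H, and their overlap (illustrative assumptions only).
universe = 1000      # all possible instances
target = 100         # |T|: instances that truly have the property
hypothesis = 100     # |H|: instances the hypothesis predicts to have it
overlap = 50         # |H ∩ T|: instances in both sets

# P(falsify | +Htest): chance an instance drawn from H turns out NOT to be in T
p_falsify_pos = (hypothesis - overlap) / hypothesis
# P(falsify | -Htest): chance an instance drawn from outside H turns out to be in T
p_falsify_neg = (target - overlap) / (universe - hypothesis)

print(f"P(falsification | +Htest) = {p_falsify_pos:.3f}")   # 0.500
print(f"P(falsification | -Htest) = {p_falsify_neg:.3f}")   # ~0.056
```

With these numbers, half of all +Htests land on a falsifying case, while fewer than 6% of −Htests do, which is the sense in which a positive test strategy can be an efficient route to disconfirmation when the phenomenon of interest is rare.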