16 Apr 2010 | John J. Horton, David G. Rand, Richard J. Zeckhauser
The paper "The Online Laboratory: Conducting Experiments in a Real Labor Market" by John J. Horton, David G. Rand, and Richard J. Zeckhauser explores the potential of online labor markets as platforms for conducting experiments. The authors argue that online experiments can be as valid as traditional laboratory and field experiments, offering significant advantages in terms of cost, time, and subject diversity. They replicate three classic experiments—framing, social preferences, and priming—and confirm their results using Amazon's Mechanical Turk (MTurk) platform. Additionally, they conduct a labor supply field experiment, confirming that workers have upward sloping labor supply curves.
The authors discuss the unique challenges and threats to validity in online experiments, such as the risk of multiple accounts and selective attrition, and propose methods to address these issues. They emphasize the importance of proper treatment assignment, the stable unit treatment value assumption (SUTVA), and strategies to minimize attrition. Despite these challenges, the authors conclude that online experiments can provide valuable insights into human behavior at a lower cost and with greater efficiency than traditional methods.
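The paper does not prescribe a particular implementation, but the assignment concern can be illustrated with a minimal sketch: hashing a salted worker ID to a treatment arm keeps each worker's condition fixed even if they return to the task or view it repeatedly, one common guard against repeat participation. The function name, the salt, and the example worker IDs below are hypothetical, not taken from the paper.

```python
import hashlib

def assign_treatment(worker_id: str, experiment_salt: str, n_arms: int = 2) -> int:
    """Deterministically map a worker ID to a treatment arm.

    Hashing the salted worker ID means a worker who returns in a later
    session always lands in the same condition, which keeps assignment
    stable across repeated exposures to the task.
    """
    digest = hashlib.sha256(f"{experiment_salt}:{worker_id}".encode()).hexdigest()
    return int(digest, 16) % n_arms

# Example: two hypothetical MTurk worker IDs assigned to arm 0 (control) or 1 (treatment)
for wid in ["A1B2C3EXAMPLE", "Z9Y8X7EXAMPLE"]:
    print(wid, assign_treatment(wid, experiment_salt="framing-study-v1"))
```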
The paper also highlights the external validity of online experimental results, suggesting that, depending on the research question, they can be as valid as or more valid than those from traditional methods. The authors recommend software development priorities and best practices for conducting online experiments, and emphasize the potential role of online laboratories in advancing the social sciences.