Evaluating Amazon's Mechanical Turk as a Tool for Experimental Behavioral Research

March 13, 2013 | Matthew J. C. Crump, John V. McDonnell, Todd M. Gureckis
This study evaluates Amazon Mechanical Turk (AMT) as a tool for experimental behavioral research. The authors replicated a diverse set of cognitive tasks with participants recruited via AMT: four reaction-time experiments (Stroop, task switching, Flanker, Simon), three experiments involving rapid stimulus presentation (Posner visual cuing, attentional blink, subliminal priming), and a category learning task. Most replications were qualitatively successful, validating the approach of collecting data anonymously online: data gathered via AMT closely resembled data collected in the lab under controlled conditions, and classic effects such as the Stroop effect, task-switching costs, the Flanker effect, the Simon effect, visual cuing, and the attentional blink were reproduced with reasonable fidelity. For certain other experiments, however, laboratory and online results diverged. The authors draw out lessons for researchers considering online data collection, noting that while AMT offers advantages such as faster data collection and access to a diverse participant pool, it also presents challenges, including variability in participants' computer systems and limited control over the testing environment.
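The replication logic for the reaction-time tasks comes down to comparing mean response times across conditions. The following Python sketch (illustrative only, with made-up numbers, not data from the paper) shows the kind of per-condition comparison used to test whether an effect like the Stroop effect reproduces online:

```python
# Illustrative sketch: computing a Stroop effect from trial-level
# reaction-time data. The trial values below are hypothetical.

from statistics import mean

# Hypothetical trials: (condition, reaction time in ms)
trials = [
    ("congruent", 520), ("congruent", 540), ("congruent", 510),
    ("incongruent", 610), ("incongruent", 650), ("incongruent", 630),
]

def condition_mean(trials, condition):
    """Mean reaction time for one condition."""
    return mean(rt for cond, rt in trials if cond == condition)

# Stroop effect = mean incongruent RT minus mean congruent RT;
# a reliably positive difference indicates the effect replicated.
stroop_effect = condition_mean(trials, "incongruent") - condition_mean(trials, "congruent")
print(f"Stroop effect: {stroop_effect:.1f} ms")
```

The same subtraction-of-condition-means pattern applies to the other effects the study examined (task-switching costs, Flanker, Simon), each with its own pair of conditions.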
The study concludes that AMT is a viable tool for conducting multi-trial designs in cognitive behavioral research, although researchers should be aware of potential limitations and take steps to ensure the reliability of their data.