Accurately Interpreting Clickthrough Data as Implicit Feedback

August 15–19, 2005 | Thorsten Joachims, Laura Granka, Bing Pan, Helene Hembrooke & Geri Gay
This paper examines the reliability of implicit feedback generated from clickthrough data in WWW search. Using eyetracking to analyze users' decision processes and comparing implicit feedback against manual relevance judgments, the authors conclude that while clicks are informative, they are also biased: clicks reflect relative preferences rather than absolute relevance judgments. The study identifies two main biases: a "trust bias," where users trust the search engine's ranking, and a "quality bias," where users' clicking decisions are influenced by the overall quality of the retrieval function.

Despite these biases, the paper demonstrates that relative preferences derived from clicks are reasonably accurate on average. The authors propose several strategies for generating feedback from clicks, including "Click > Skip Above," "Last Click > Skip Above," and "Click > No-Click Next," which show good agreement with explicit relevance judgments. The study also compares the accuracy of implicit feedback against explicit judgments of the pages, finding reasonable consistency. The findings suggest that while implicit feedback from clicks may not be as reliable as explicit feedback, it is still valuable for adaptive retrieval systems when properly interpreted, for example via machine learning methods trained on pairwise preferences.
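To make the "Click > Skip Above" strategy concrete, the sketch below generates pairwise preferences from a single result list and its clicks: each clicked result is preferred over every higher-ranked result the user skipped. This is a minimal illustration of the strategy as described in the paper; the function name, the 0-indexed positions, and the document labels are assumptions for the example, not from the paper.

```python
def click_skip_above(ranking, clicks):
    """Generate pairwise preferences via "Click > Skip Above":
    a clicked result is preferred over every result ranked above it
    that was not clicked (and was therefore presumably skipped).

    ranking: list of result identifiers, in rank order (position 0 = top).
    clicks:  positions (0-indexed) the user clicked.
    Returns a list of (preferred, less_preferred) pairs.
    """
    clicked = set(clicks)
    prefs = []
    for pos in clicks:
        for above in range(pos):
            if above not in clicked:
                # Clicked result at `pos` beats the skipped result above it.
                prefs.append((ranking[pos], ranking[above]))
    return prefs

# Hypothetical example: five results d0..d4, user clicked positions 2 and 4.
ranking = ["d0", "d1", "d2", "d3", "d4"]
prefs = click_skip_above(ranking, clicks=[2, 4])
# → [("d2", "d0"), ("d2", "d1"), ("d4", "d0"), ("d4", "d1"), ("d4", "d3")]
```

Pairs like these are exactly the relative judgments the paper argues are reliable on average, and they can feed pairwise learning-to-rank methods directly; "Last Click > Skip Above" is the same idea restricted to the final click in the session.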