2006 | Eric Nowak, Frédéric Jurie, and Bill Triggs
This paper explores the effectiveness of different sampling strategies for bag-of-features (BoF) image classification. The authors focus on the impact of patch sampling methods, comparing random sampling with multi-scale interest operators like Harris-Laplace and Laplacian of Gaussian (LoG). They find that random sampling, especially with a large number of patches, outperforms sophisticated interest operators in terms of classification accuracy. The study also examines the influence of other factors such as codebook size, codebook construction method, histogram normalization, and minimum scale for feature extraction. The results highlight the importance of the number of sampled patches and suggest that dense random sampling is more effective than sampling driven by keypoint detectors. The paper concludes by discussing the implications for future research and the need to control for the number of samples when comparing different methods.
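To make the pipeline concrete, below is a minimal sketch of a BoF pipeline with dense random patch sampling, not the authors' implementation: it uses raw grayscale pixel patches as stand-in descriptors (the paper uses SIFT-like descriptors), and the patch size, sample count, and codebook size are illustrative placeholders rather than the paper's settings.

```python
# Sketch only: random patch sampling -> k-means codebook -> normalized BoF histogram.
# Assumes grayscale images as 2-D NumPy arrays; raw pixels stand in for real descriptors.
import numpy as np
from sklearn.cluster import KMeans

def sample_random_patches(image, n_patches=1000, patch_size=16, rng=None):
    """Sample square patches uniformly at random over image positions."""
    rng = rng or np.random.default_rng()
    h, w = image.shape
    patches = []
    for _ in range(n_patches):
        y = rng.integers(0, h - patch_size)
        x = rng.integers(0, w - patch_size)
        patches.append(image[y:y + patch_size, x:x + patch_size].ravel())
    return np.asarray(patches, dtype=np.float32)

def build_codebook(training_patches, n_words=1000):
    """Cluster patch descriptors into a visual-word codebook with k-means."""
    return KMeans(n_clusters=n_words, n_init=4).fit(training_patches)

def bof_histogram(patches, codebook):
    """Quantize patches against the codebook and return an L1-normalized histogram."""
    words = codebook.predict(patches)
    hist = np.bincount(words, minlength=codebook.n_clusters).astype(np.float32)
    return hist / max(hist.sum(), 1.0)
```

The resulting per-image histograms would then feed a classifier (the paper uses an SVM); the key variable the paper isolates is how the patches are chosen and how many are used, which in this sketch corresponds to swapping the random sampler for a keypoint detector and varying n_patches.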