A Survey on Learning to Hash

VOL. 13, NO. 9, MAY 2015 | Jingdong Wang, Ting Zhang, Jingkuan Song, Nicu Sebe, and Heng Tao Shen
This paper provides a comprehensive survey of learning-to-hash algorithms, categorizing them into four main classes according to how they preserve similarities: pairwise similarity preserving, multiwise similarity preserving, implicit similarity preserving, and quantization. It discusses evaluation protocols, performance analysis, and emerging topics in the field, highlighting that quantization algorithms outperform the other approaches in search accuracy, time cost, and space cost. The survey also covers the background of nearest neighbor search and hashing, including the exact and approximate nearest neighbor search problems and the two search strategies: hash table lookup and hash code ranking.

The main methodology of learning to hash, similarity preservation, is then explained, and existing algorithms are categorized into the four classes above. The paper gives detailed reviews of representative algorithms within each class, including spectral hashing, LDA hashing, minimal loss hashing, semi-supervised hashing, topology preserving hashing, binary reconstructive embedding, supervised hashing with kernels, and normalized similarity-similarity divergence minimization.
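To make the hash code ranking strategy mentioned above concrete, the sketch below ranks database items by Hamming distance between packed binary codes. This is a minimal illustration, not code from the survey; the bit-packing scheme and function names are our own assumptions.

```python
import numpy as np

def hamming_distances(query, codes):
    """Hamming distance between one packed code and many packed codes."""
    xor = np.bitwise_xor(codes, query)            # differing bits, byte by byte
    # unpackbits expands each byte into 8 bits; summing counts the set bits
    return np.unpackbits(xor, axis=1).sum(axis=1)

def rank_by_hash_code(query, codes, k=5):
    """Return indices of the k database codes nearest to the query code."""
    d = hamming_distances(query, codes)
    return np.argsort(d, kind="stable")[:k]

# Toy database of 1000 random 64-bit codes (8 bytes each)
rng = np.random.default_rng(0)
codes = rng.integers(0, 256, size=(1000, 8), dtype=np.uint8)
query = codes[42].copy()                          # plant an exact match
top = rank_by_hash_code(query, codes, k=3)
```

Because Hamming distance on packed codes reduces to XOR plus a popcount, ranking is cheap even for large databases, which is the practical appeal of compact binary codes.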
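To illustrate the similarity-preservation idea behind these methods, here is a minimal random-hyperplane (SimHash-style) baseline in which Hamming distance between codes tracks angular similarity. The learning-to-hash algorithms surveyed in the paper learn the projection matrix from data rather than sampling it randomly; all names and parameters below are illustrative.

```python
import numpy as np

def hash_codes(X, W):
    """Binary codes: the sign pattern of projections onto the rows of W."""
    return (X @ W.T > 0).astype(np.uint8)

rng = np.random.default_rng(1)
W = rng.standard_normal((32, 16))            # 32-bit codes for 16-d vectors

x = rng.standard_normal(16)
near = x + 0.05 * rng.standard_normal(16)    # a slightly perturbed neighbour
far = rng.standard_normal(16)                # an unrelated point

cx, cn, cf = (hash_codes(v[None, :], W)[0] for v in (x, near, far))
d_near = int((cx != cn).sum())               # Hamming distance to the neighbour
d_far = int((cx != cf).sum())                # Hamming distance to the stranger
```

A nearby point crosses few of the random hyperplanes, so its code differs in few bits, while an unrelated point differs in roughly half of them; learned hash functions aim to sharpen exactly this gap.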