MAY 2015 | Jingdong Wang, Ting Zhang, Jingkuan Song, Nicu Sebe, and Heng Tao Shen
This paper presents a comprehensive survey of learning to hash algorithms, categorizing them by how they preserve similarity: pairwise similarity preserving, multiwise similarity preserving, implicit similarity preserving, and quantization. Learning to hash aims to learn hash functions that map data items to compact codes such that similarities between codes in the hash space approximate similarities between the items in the original space. The survey examines the relationships among the categories, the loss functions and hash function designs they employ, and the optimization techniques they rely on, including how to handle the nondifferentiable sign function and how to keep training computationally tractable. It also reviews evaluation protocols and empirical performance, highlighting that quantization-based algorithms excel in search accuracy, search time, and space cost. The paper concludes with emerging topics, current trends, and future directions in learning to hash.
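To make the similarity-preservation objective concrete, below is a minimal sketch, not taken from the survey, of binary hashing for approximate nearest neighbor search: a hypothetical linear hash function of the form sign(Wx), with a random Gaussian projection W standing in for a learned one, and Hamming-distance ranking in the code space. All variable names and parameter values are illustrative assumptions.

# Minimal sketch (assumption, not the paper's method): linear hashing
# h(x) = sign(W^T x) with a random projection W, plus Hamming-distance search.
import numpy as np

def hash_codes(X, W):
    # Map real-valued vectors (n x d) to b-bit binary codes (n x b).
    return (X @ W > 0).astype(np.uint8)

def hamming_distances(codes, query_code):
    # Count differing bits between each database code and the query code.
    return np.count_nonzero(codes != query_code, axis=1)

rng = np.random.default_rng(0)
d, b, n = 64, 16, 1000            # input dimension, code length, database size
W = rng.standard_normal((d, b))   # a learned method would optimize W so that
                                  # Hamming distances track original similarities
X = rng.standard_normal((n, d))   # database items
q = rng.standard_normal(d)        # query item

db_codes = hash_codes(X, W)
q_code = hash_codes(q[None, :], W)[0]

# Approximate nearest neighbors: rank database items by Hamming distance.
nearest = np.argsort(hamming_distances(db_codes, q_code))[:5]
print("top-5 candidates:", nearest)

In an actual learning to hash method, the projection would not be random; it would be fit by minimizing a similarity-preserving loss of the kind the survey categorizes (pairwise, multiwise, implicit, or quantization based), typically after relaxing or otherwise working around the sign function during optimization.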