Discretization is a key technique in data mining and knowledge discovery: it quantizes continuous attributes into a finite number of discrete intervals. Discrete values are more concise, easier to use, and closer to a knowledge-level representation than continuous values. Discretization can improve predictive accuracy, yield simpler rules, and speed up induction algorithms, and many machine learning algorithms accept only discrete features, so discretization is often required before or during data mining.

This paper presents a systematic study of discretization methods: their history, their impact on classification, and the trade-off between discretization speed and resulting accuracy. It surveys existing methods under a unified vocabulary, defines a general discretization process, and proposes a hierarchical framework for categorizing methods. Representative methods are analyzed and compared experimentally on benchmark data, guidelines are offered for choosing a discretization method under different circumstances, and unresolved issues and future research directions are identified. Throughout, the paper emphasizes the advantages of discrete data (better accuracy, speed, and interpretability) and the role of discretization in enabling a wide range of classification learning algorithms.
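As a concrete illustration of what discretization does, the sketch below shows unsupervised equal-width binning, one of the simplest schemes of the kind such surveys cover; the function name and example data are hypothetical and not taken from the paper.

    # Minimal sketch of equal-width discretization (illustrative only;
    # the paper surveys many more sophisticated methods).
    from typing import List

    def equal_width_bins(values: List[float], k: int) -> List[int]:
        """Map each continuous value to one of k equal-width intervals (0..k-1)."""
        lo, hi = min(values), max(values)
        width = (hi - lo) / k or 1.0      # guard against a constant attribute
        # Values at the upper boundary fall into the last interval.
        return [min(int((v - lo) / width), k - 1) for v in values]

    if __name__ == "__main__":
        ages = [3.0, 17.5, 22.0, 35.0, 41.2, 60.0, 78.9]
        print(equal_width_bins(ages, 3))  # -> [0, 0, 0, 1, 1, 2, 2]

Supervised methods differ mainly in how they place the cut points, for example by using class labels and an entropy or statistical criterion rather than fixed-width intervals, which is one axis of the hierarchical framework the paper proposes.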