This paper provides a comprehensive review of hyper-parameter optimization (HPO) techniques for machine learning (ML) algorithms. It discusses the importance of tuning hyper-parameters to achieve optimal model performance and introduces various state-of-the-art optimization techniques, including decision-theoretic approaches, Bayesian optimization, multi-fidelity optimization, and metaheuristic algorithms. The paper also covers common ML models such as linear models, KNN, SVM, Naive Bayes, tree-based models, ensemble learning algorithms, and deep learning models, detailing their key hyper-parameters. Additionally, it reviews popular HPO libraries and frameworks, presents experimental results on benchmark datasets, and discusses open challenges and future research directions in the field of HPO. The paper aims to help industrial users, data analysts, and researchers effectively tune hyper-parameters and develop high-performance ML models.