This article provides a comprehensive overview of variable selection methods in high-dimensional statistical modeling, emphasizing the role of penalized likelihood estimation. The authors discuss the challenges posed by high-dimensional data, such as collinearity and noise accumulation, and highlight the importance of variable selection for improving statistical accuracy, interpretability, and computational efficiency. They review classical model selection criteria such as AIC and BIC, and introduce penalized likelihood estimation, including its theoretical foundations and practical implementation. The article also covers folded-concave penalty functions such as SCAD and MCP, and discusses algorithms for solving penalized likelihood problems, including local quadratic approximation (LQA) and coordinate descent. Additionally, it explores ultrahigh-dimensional variable selection techniques, including sure independence screening (SIS) and iterative sure independence screening (ISIS), which are crucial when the number of features far exceeds the sample size. The authors conclude by emphasizing the need for further research on the computational and statistical challenges of high-dimensional inference.
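For concreteness, the penalized likelihood framework and the two folded-concave penalties named above can be written out. The display below uses the standard formulations from the literature, with $a$ and $\gamma$ denoting the usual extra tuning parameters of SCAD and MCP (not notation taken from this article).

```latex
% Penalized likelihood: maximize the (scaled) log-likelihood minus a
% penalty on each coefficient's magnitude.
\[
  \max_{\beta} \; \frac{1}{n}\,\ell(\beta) \;-\; \sum_{j=1}^{p} p_{\lambda}(|\beta_j|)
\]
% SCAD (Fan & Li, 2001), defined through its derivative for t > 0,
% with a > 2 (a = 3.7 is a common default):
\[
  p'_{\lambda}(t) \;=\; \lambda \left\{ \mathbf{1}(t \le \lambda)
    + \frac{(a\lambda - t)_{+}}{(a-1)\lambda}\,\mathbf{1}(t > \lambda) \right\}
\]
% MCP (Zhang, 2010), also given via its derivative, with gamma > 1:
\[
  p'_{\lambda}(t) \;=\; \Bigl(\lambda - \frac{t}{\gamma}\Bigr)_{+}, \qquad t > 0.
\]
```

Both penalties apply the same shrinkage as the lasso near zero but taper it off for large coefficients, which is what yields nearly unbiased estimates of large effects.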
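To illustrate the coordinate descent idea in its simplest setting, here is a minimal sketch for the lasso penalty (whose one-dimensional update is soft-thresholding; SCAD and MCP replace it with their own thresholding rules). The function names, the 1/(2n) loss scaling, and the fixed sweep count are illustrative choices, not the article's notation.

```python
import numpy as np

def soft_threshold(z, t):
    """Soft-thresholding operator: the closed-form solution of the
    one-dimensional lasso problem."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_coordinate_descent(X, y, lam, n_sweeps=100):
    """Cyclic coordinate descent for
        (1/2n) * ||y - X beta||^2 + lam * ||beta||_1.
    A didactic sketch: no convergence check, warm starts, or
    active-set tricks."""
    n, p = X.shape
    beta = np.zeros(p)
    resid = y - X @ beta                 # current residual
    col_sq = (X ** 2).sum(axis=0) / n    # X_j' X_j / n for each column
    for _ in range(n_sweeps):
        for j in range(p):
            resid += X[:, j] * beta[j]   # partial residual: drop j's term
            z = X[:, j] @ resid / n      # marginal fit of coordinate j
            beta[j] = soft_threshold(z, lam) / col_sq[j]
            resid -= X[:, j] * beta[j]   # restore residual with new beta_j
    return beta
```

In practice one would rely on a mature implementation such as glmnet or scikit-learn's Lasso; the point of the sketch is only that each coordinate update is a cheap closed-form operation, which is why coordinate descent scales to large p.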
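The SIS step itself is little more than a ranking of marginal utilities. A minimal sketch for the linear model follows; the function name and the n/log(n) screening size are illustrative defaults drawn from the SIS literature, not a prescription from this article.

```python
import numpy as np

def sis_screen(X, y, d=None):
    """Sure independence screening for a linear model: rank features
    by absolute marginal correlation with the response and keep the
    top d. ISIS would alternate this step with a penalized fit on the
    retained set, re-screening the excluded features against residuals."""
    n, p = X.shape
    if d is None:
        d = int(n / np.log(n))          # a common default screening size
    # Standardize so that X'y reduces to marginal correlations.
    Xs = (X - X.mean(axis=0)) / X.std(axis=0)
    ys = (y - y.mean()) / y.std()
    omega = np.abs(Xs.T @ ys) / n       # componentwise correlation magnitudes
    return np.argsort(omega)[::-1][:d]  # indices of the d retained features
```

Screening first reduces the dimension from p to a moderate d, after which a penalized method such as SCAD can be applied to the survivors at far lower computational and statistical cost.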