This study investigates the impact of hyperparameter optimization on the classification accuracy of fine-tuned convolutional neural network (CNN) models. It compares four hyperparameter optimization methods: grid search, random search, Bayesian optimization, and the Asynchronous Successive Halving Algorithm (ASHA), evaluated on three publicly available datasets: CIFAR-100, Stanford Dogs, and MIO-TCD. The results show that hyperparameter optimization significantly improves CNN classification accuracy, with ASHA achieving a 6% improvement over grid search on CIFAR-100. The study also examines the feasibility of running hyperparameter optimization on a subset of the training data, finding that balancing the class distribution within the subset is crucial for obtaining good hyperparameters. The findings indicate that the benefit of hyperparameter optimization depends strongly on the specific task and dataset, and that methods such as ASHA and Bayesian optimization are more efficient and effective than grid and random search. The study concludes that hyperparameter optimization is essential for improving CNN performance, and that using class-balanced datasets and tuning key hyperparameters such as the learning rate and input image size can significantly enhance classification accuracy. The research highlights the importance of hyperparameter optimization in deep learning and provides practical guidance for optimizing CNN models.
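To make two of the ideas above concrete, the following is a minimal sketch (not the study's actual implementation) of (a) drawing a class-balanced subset of the training data for hyperparameter search, and (b) a plain random search over the two hyperparameters the abstract highlights, learning rate and input image size. The function names, the search-space values, and the toy objective are all illustrative assumptions, not taken from the paper.

```python
import random
from collections import defaultdict

def stratified_subset(labels, fraction, seed=0):
    """Return indices of a subset that preserves the per-class proportions.

    `labels` is the full list of class labels; `fraction` is the share of
    each class to keep (at least one example per class is retained).
    """
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for idx, label in enumerate(labels):
        by_class[label].append(idx)
    subset = []
    for indices in by_class.values():
        rng.shuffle(indices)
        k = max(1, round(len(indices) * fraction))
        subset.extend(indices[:k])
    return sorted(subset)

def random_search(objective, space, n_trials=30, seed=0):
    """Sample `n_trials` configurations uniformly from `space` and
    return the best-scoring (config, score) pair."""
    rng = random.Random(seed)
    best_config, best_score = None, float("-inf")
    for _ in range(n_trials):
        config = {name: rng.choice(values) for name, values in space.items()}
        score = objective(config)
        if score > best_score:
            best_config, best_score = config, score
    return best_config, best_score

# Illustrative usage: an imbalanced two-class toy dataset and a toy
# objective standing in for validation accuracy on the HPO subset.
labels = [0] * 90 + [1] * 10
subset = stratified_subset(labels, fraction=0.2)   # keeps the 9:1 ratio

space = {
    "learning_rate": [1e-4, 1e-3, 1e-2],  # assumed candidate values
    "image_size": [160, 224, 288],
}
best, score = random_search(lambda c: -abs(c["image_size"] - 224), space)
```

Methods such as ASHA and Bayesian optimization improve on this loop by stopping unpromising trials early or by modeling the objective, respectively, which is consistent with the efficiency gains the study reports.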