Explainable Benchmarking for Iterative Optimization Heuristics

23 Feb 2024 | Niki van Stein, Diederick Vermetten, Anna V. Kononova, Thomas Bäck (LIACS, Leiden University, Netherlands)
The paper introduces a novel approach called "Explainable Benchmarking" to enhance the understanding of heuristic optimization algorithms. It presents the IOHxplainer software framework, which uses explainable AI techniques to analyze and interpret the performance of optimization algorithms, their components, and their hyperparameters. The framework is demonstrated on two modular optimization frameworks, modCMA and modDE. By collecting performance data for a large number of algorithm configurations, IOHxplainer provides insights into the impact of different algorithmic components and hyperparameters across diverse scenarios. The framework enables systematic evaluation and interpretation of iterative optimization heuristics, improving both benchmarking practice and algorithm design. The paper also discusses the limitations of traditional benchmarking methods and highlights the importance of explainable AI for understanding the behavior of complex algorithms.
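To make the core idea concrete, the sketch below shows one minimal, model-free way to attribute performance differences to individual configuration choices: for each hyperparameter value, compute the mean deviation of its runs from the overall mean performance. This is an illustrative simplification, not the paper's actual method (IOHxplainer applies dedicated explainable-AI techniques); the data, hyperparameter names (`mutation`, `elitism`), and scores are hypothetical.

```python
from statistics import mean

# Hypothetical benchmark records: each entry pairs an algorithm configuration
# (a dict of module/hyperparameter choices) with a measured performance score.
# All names and numbers here are made up for illustration.
runs = [
    ({"mutation": "rand/1", "elitism": True},  0.82),
    ({"mutation": "rand/1", "elitism": False}, 0.74),
    ({"mutation": "best/1", "elitism": True},  0.91),
    ({"mutation": "best/1", "elitism": False}, 0.79),
]

def marginal_effects(runs):
    """For each (hyperparameter, value) pair, report the mean performance
    deviation from the global mean -- a crude, model-free attribution."""
    overall = mean(score for _, score in runs)
    effects = {}
    for key in runs[0][0]:                       # iterate over hyperparameters
        by_value = {}
        for cfg, score in runs:                  # group scores by chosen value
            by_value.setdefault(cfg[key], []).append(score)
        effects[key] = {v: round(mean(s) - overall, 3)
                        for v, s in by_value.items()}
    return effects

print(marginal_effects(runs))
```

In this toy data, `best/1` mutation and enabled elitism both show positive mean deviations, i.e. they are associated with better performance. Real explainable benchmarking replaces this averaging with richer attribution (e.g. model-based feature-importance methods) over far larger configuration spaces.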