Vanilla Bayesian Optimization Performs Great in High Dimensions

2024 | Carl Hvarfner, Erik O. Hellsten, Luigi Nardi
Vanilla Bayesian Optimization (BO) performs well in high-dimensional settings when its prior assumptions are adjusted to account for dimensionality. The paper identifies the assumed complexity of the objective function as the main issue with vanilla BO in high dimensions: under conventional priors, the model's assumed complexity grows with the number of dimensions, and performance degrades through the curse of dimensionality. By scaling the Gaussian process (GP) lengthscale prior with the problem dimensionality, the authors reduce the assumed complexity without imposing structural restrictions on the objective. This modification allows standard BO to outperform existing high-dimensional BO algorithms on multiple real-world tasks.

The paper relates problem complexity to the maximal information gain (MIG), which measures the information that can be obtained from a fixed number of observations. Under a fixed lengthscale prior, the MIG grows with dimensionality, making the objective function increasingly difficult to model from limited data. The authors demonstrate that vanilla BO, when equipped with a dimensionality-scaled lengthscale prior, performs far better than previously thought in high dimensions, achieving competitive or superior results compared to state-of-the-art methods.

The paper also discusses the boundary issue in high-dimensional BO, where the acquisition function (e.g., Expected Improvement) tends to query high-variance regions along the boundary of the search space. The authors show that this behavior is a symptom of an uncalibrated, overly complex model rather than an inherent flaw of the acquisition function: preserving correlation between the incumbent and the next query is crucial for effective optimization. The proposed modification keeps the model's complexity calibrated, allowing meaningful inference and a balanced exploration-exploitation trade-off. The authors evaluate their method on a range of high-dimensional tasks, including synthetic test functions, mid-dimensional tasks, and real-world optimization problems.
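The effect of the dimensionality-scaled prior can be sketched numerically: shifting a lognormal lengthscale prior's location by log(sqrt(D)) keeps the RBF-kernel correlation between two random points in [0,1]^D roughly constant as D grows, whereas a fixed unit lengthscale drives it toward zero. A minimal numpy sketch, with illustrative constants (`mu0`, `sigma0`, and the unit fixed lengthscale are assumptions for illustration, not the paper's exact prior parameters):

```python
import numpy as np

def lengthscale_prior_params(dim, mu0=0.0, sigma0=1.0):
    """Dimensionality-scaled lognormal prior (sketch): shift the location
    by log(sqrt(dim)) so the typical lengthscale grows as sqrt(D).
    mu0 and sigma0 are illustrative, not the paper's exact constants."""
    return mu0 + 0.5 * np.log(dim), sigma0

def rbf_correlation(x, y, lengthscale):
    """Correlation k(x, y) / k(x, x) under an isotropic RBF kernel."""
    sq_dist = np.sum((x - y) ** 2)
    return np.exp(-0.5 * sq_dist / lengthscale ** 2)

rng = np.random.default_rng(0)
for dim in (2, 20, 200):
    x, y = rng.uniform(size=dim), rng.uniform(size=dim)
    fixed = rbf_correlation(x, y, lengthscale=1.0)
    mu, _ = lengthscale_prior_params(dim)
    scaled = rbf_correlation(x, y, lengthscale=np.exp(mu))  # prior median
    print(f"D={dim:4d}  fixed-prior corr={fixed:.3f}  scaled-prior corr={scaled:.3f}")
```

Since the expected squared distance between two uniform points in [0,1]^D is D/6, the fixed-lengthscale correlation decays roughly like exp(-D/12) and collapses in high dimensions, while the sqrt(D)-scaled lengthscale keeps it near exp(-1/12), so the incumbent remains informative about nearby candidates.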
The results show that the modified vanilla BO performs well, often outperforming dedicated high-dimensional BO methods. The method is simple to implement and requires no structural assumptions on the objective function, making it a viable alternative to existing high-dimensional BO algorithms. The paper concludes that vanilla BO, when appropriately modified, is highly effective for high-dimensional optimization tasks.
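The link between assumed complexity and information gain can likewise be illustrated: for a GP posterior, the information gain of a batch of observations X is 0.5 * log det(I + sigma^-2 K). With a fixed lengthscale, a fixed batch of random points yields ever more information as D grows (the points become nearly independent under the prior), while a sqrt(D)-scaled lengthscale keeps the gain moderate. A rough numpy sketch (the batch size `n`, the noise level, and the isotropic RBF kernel are illustrative assumptions):

```python
import numpy as np

def rbf_kernel(X, lengthscale):
    """Isotropic RBF Gram matrix for the rows of X."""
    sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    return np.exp(-0.5 * sq / lengthscale ** 2)

def information_gain(X, lengthscale, noise=1e-2):
    """GP information gain 0.5 * log det(I + noise^-1 K) of observing X."""
    K = rbf_kernel(X, lengthscale)
    _, logdet = np.linalg.slogdet(np.eye(len(X)) + K / noise)
    return 0.5 * logdet

rng = np.random.default_rng(1)
n = 64  # illustrative batch size
for dim in (2, 20, 200):
    X = rng.uniform(size=(n, dim))
    fixed = information_gain(X, lengthscale=1.0)
    scaled = information_gain(X, lengthscale=np.sqrt(dim))
    print(f"D={dim:4d}  fixed-prior gain={fixed:8.2f}  scaled-prior gain={scaled:8.2f}")
```

At a fixed unit lengthscale the Gram matrix approaches the identity as D grows, so the gain approaches its maximum of 0.5 * n * log(1 + 1/noise): the prior treats every observation as independent, which is exactly the uncalibrated-complexity regime the paper associates with boundary over-exploration.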