This paper investigates the predictability of large language model (LLM) performance across five orders of magnitude of compute scaling, using eleven recent model architectures. It shows that average benchmark performance, aggregated over the many individual tasks and evaluations in the widely used BIG-Bench suite, is reasonably predictable as a function of training compute. Specifically, when extrapolating BIG-Bench Hard performance across one order of magnitude of compute, we observe average absolute errors of 6 percentage points (pp). By contrast, extrapolation for individual BIG-Bench tasks across an order of magnitude of compute yields higher average errors of 18pp. Nonetheless, individual task performance remains significantly more predictable than chance. Overall, our work suggests that compute scaling provides a promising basis for forecasting AI capabilities on diverse aggregate benchmarks, though predicting performance on specific tasks poses challenges.
The study uses a two-step procedure to predict benchmark performance: (i) estimate a model's loss from compute scaling laws; and (ii) fit a curve mapping the estimated loss (on a log scale) to benchmark performance. The results show that aggregate benchmark performance is reasonably predictable from model and data scaling: prediction error is 5–10pp when compute is doubled. However, individual task performance is less predictable, though still more predictable than chance or a simple baseline.
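As a rough illustration of this two-step procedure, the sketch below fits a power-law scaling curve for loss as a function of training compute, fits a sigmoidal curve mapping loss to benchmark accuracy, and then extrapolates one order of magnitude beyond the largest observed run. This is a minimal sketch under stated assumptions: the specific functional forms, the compute normalization, the chance-level floor of 0.25, and all data points are illustrative choices, not the paper's exact pipeline.

```python
import numpy as np
from scipy.optimize import curve_fit

# Step (i): power-law scaling of loss with training compute.
# Compute is normalized (here by 1e20 FLOP) purely for numerical stability.
def loss_scaling_law(c_norm, a, b, irreducible):
    """Loss = a * C^(-b) + irreducible, with C in units of 1e20 FLOP."""
    return a * c_norm ** (-b) + irreducible

# Step (ii): sigmoidal map from loss to benchmark accuracy, bounded
# below by an assumed random-guess baseline (`chance`) and above by 1.
def performance_from_loss(loss, k, loss_mid, chance=0.25):
    return chance + (1.0 - chance) / (1.0 + np.exp(k * (loss - loss_mid)))

# Illustrative synthetic observations (compute in FLOP, loss, accuracy).
compute = np.array([1e20, 3e20, 1e21, 3e21, 1e22])
loss_obs = np.array([2.90, 2.70, 2.50, 2.35, 2.20])
acc_obs = np.array([0.27, 0.31, 0.40, 0.52, 0.63])

c_norm = compute / 1e20

# Fit step (i), then step (ii); the chance floor is held fixed at 0.25.
popt_loss, _ = curve_fit(loss_scaling_law, c_norm, loss_obs, p0=[1.0, 0.3, 2.0])
popt_perf, _ = curve_fit(lambda l, k, m: performance_from_loss(l, k, m),
                         loss_obs, acc_obs, p0=[5.0, 2.5])

# Extrapolate one order of magnitude beyond the largest observed run.
c_new = 1e23
loss_pred = loss_scaling_law(c_new / 1e20, *popt_loss)
acc_pred = performance_from_loss(loss_pred, *popt_perf)
print(f"Predicted loss at {c_new:.0e} FLOP: {loss_pred:.3f}")
print(f"Predicted benchmark accuracy: {acc_pred:.3f}")
```

Splitting the prediction into a loss fit followed by a loss-to-performance fit, rather than regressing accuracy directly on compute, is the design choice the two-step procedure describes: the first stage captures compute scaling, while the second captures how benchmark scores saturate between chance and ceiling.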
The study also finds that performance on individual benchmark tasks varies substantially with scale, and some tasks show a sharp emergence of capabilities that makes prediction difficult. The results suggest that much of the observed difference in performance between LLM architectures is well predicted by scale alone, with remaining discrepancies often attributable to differences in training data and to the functional forms used for fitting.
The study concludes that while aggregate benchmark performance is fairly predictable as a function of scale, individual task performance is less so. The findings suggest that scaling laws provide a useful basis for forecasting AI capabilities, though predicting performance in specific tasks remains challenging. The study also highlights the importance of benchmark design and the need for more challenging benchmarks to better assess AI capabilities.