23 Jan 2020 | Jared Kaplan, Sam McCandlish, Tom Henighan, Tom B. Brown, Benjamin Chess, Rewon Child, Scott Gray, Alec Radford, Jeffrey Wu, Dario Amodei
This paper presents empirical scaling laws for language model performance on the cross-entropy loss. The loss scales as a power law with model size, dataset size, and the amount of compute used for training, with some trends spanning more than seven orders of magnitude. Other architectural details such as network width or depth have minimal effects within a wide range. Simple equations govern the dependence of overfitting on model/dataset size and the dependence of training speed on model size. These relationships allow us to determine the optimal allocation of a fixed compute budget. Larger models are significantly more sample-efficient, such that optimally compute-efficient training involves training very large models on a relatively modest amount of data and stopping significantly before convergence.
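As a rough sketch of the form these laws take, with exponents approximately as fitted in the paper (here N counts non-embedding parameters, D is dataset size in tokens, and C_min is the compute budget when training at the critical batch size):

L(N) ≈ (N_c / N)^α_N,          α_N ≈ 0.076
L(D) ≈ (D_c / D)^α_D,          α_D ≈ 0.095
L(C_min) ≈ (C_c / C_min)^α_C,  α_C ≈ 0.050

Each law applies only when the other two factors are not the bottleneck, and the constants N_c, D_c, and C_c depend on the vocabulary and tokenization rather than carrying any fundamental meaning.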
The key findings include: performance depends strongly on scale and only weakly on model shape; performance has a power-law relationship with each of the three scale factors N (model size), D (dataset size), and C (training compute) when not bottlenecked by the other two; universality of overfitting; universality of training; transfer improves with test performance; sample efficiency; convergence is inefficient; optimal batch size; and optimal allocation of the compute budget.
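The "universality of overfitting" finding is captured by a single fitted equation combining model and dataset size; a sketch of the form reported in the paper, using the same symbols as above and exponents close to those values, is:

L(N, D) ≈ [ (N_c / N)^(α_N / α_D) + D_c / D ]^α_D

This reduces to the pure power laws when either N or D is effectively unlimited, and it implies that the dataset only needs to grow sublinearly with model size to avoid an overfitting penalty, roughly as D ∝ N^0.74.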
The results show that language modeling performance improves smoothly and predictably as model size, data, and compute are scaled up appropriately, and they suggest that larger language models will perform better and be more sample-efficient than current models. The paper also provides theoretical motivation for the scaling laws, an analysis of learning-curve fits, a breakdown of results per token, and brief comparisons to LSTMs and recurrent Transformers.
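A sketch of how the compute-optimal allocation works out under the paper's fits (exponents approximate): as the compute budget C grows, model size should grow roughly as N ∝ C^0.73, batch size as B ∝ C^0.24, and the number of serial training steps only as S ∝ C^0.03, so data requirements grow only about as D ∝ C^0.27. This is why compute-efficient training favors very large models trained on comparatively little data and stopped well short of convergence.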