An Analysis of Linear Time Series Forecasting Models

25 Mar 2024 | William Toner, Luke Darlow
This paper analyzes the functional equivalence of several linear time series forecasting models. Despite their complexity, deep learning models often perform only marginally better than simple linear baselines. The authors show that several popular linear forecasting models are functionally equivalent to standard linear regression: each can be reinterpreted as unconstrained linear regression over a suitably augmented feature set, which admits a closed-form solution under a mean-squared loss.

Experimental evidence shows that these models learn nearly identical solutions, and that the simpler closed-form solutions are superior forecasters in 72% of test settings. The paper also characterizes the model classes of these linear variants as affine linear functions, and shows that feature normalization techniques such as instance normalization do not significantly alter the model class. Because least-squares linear regression is convex, all of these models are solvable in closed form. The paper concludes that simple linear models are often on par with, or better than, complex or deep models for time series forecasting.
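The closed-form solution referred to above can be sketched as ordinary least squares over a feature set augmented with a constant column (for the bias term). This is an illustrative sketch under assumed shapes and names, not the authors' code:

```python
import numpy as np

def fit_linear_forecaster(X, Y):
    """Fit Y ~ X_aug @ W in closed form via least squares.

    X: (n_samples, context_length) past values
    Y: (n_samples, horizon) future values
    Returns W of shape (context_length + 1, horizon); the last row is
    the bias, arising from the appended constant feature.
    """
    X_aug = np.hstack([X, np.ones((X.shape[0], 1))])  # augment with bias feature
    W, *_ = np.linalg.lstsq(X_aug, Y, rcond=None)     # closed-form MSE minimizer
    return W

def predict(W, X):
    """Forecast with the fitted weights on new context windows."""
    X_aug = np.hstack([X, np.ones((X.shape[0], 1))])
    return X_aug @ W
```

Because the mean-squared objective is convex in the weights, this single `lstsq` call recovers the global optimum; no gradient descent is needed.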
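The claim that instance normalization does not change the model class can be illustrated as follows: when the normalization is reversed after the linear map, the per-instance scale cancels, and subtracting and re-adding the mean is itself an affine operation, so the composite map stays linear. The function below is a minimal sketch of this mechanism (assumed shapes, not the authors' code):

```python
import numpy as np

def revin_linear(W, x):
    """Linear forecast wrapped in reversible instance normalization.

    x: (context_length,) input window (assumed non-constant, so std > 0)
    W: (horizon, context_length) linear forecasting weights
    """
    mu, sigma = x.mean(), x.std()
    z = (x - mu) / sigma        # instance-normalize the input window
    y = W @ z                   # linear model in normalized space
    return sigma * y + mu       # reverse the normalization
```

Expanding the composition gives `W @ (x - mu) + mu`: the scale `sigma` cancels entirely, leaving a map that is linear in `x` (since `mu` is itself a linear function of `x`).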