An Analysis of Linear Time Series Forecasting Models


25 Mar 2024 | William Toner, Luke Darlow
This paper analyzes the performance and functional equivalence of several popular linear time series forecasting models. Despite their simplicity, linear models often outperform deeper, more complex architectures at this task. The authors examine variants that incorporate feature normalization, including instance normalization (IN), reversible instance normalization (RevIN), and Now-Normalization (NN), and show that, despite their architectural differences, these variants are functionally equivalent to standard linear regression over suitably augmented feature sets. Because least-squares linear regression is convex, all of these models converge to the same optimal solution given a suitable optimization procedure, and the experiments confirm that the trained models do tend toward the same optima. Moreover, the closed-form least-squares solution is the superior forecaster in 72% of test settings, outperforming the same models trained with stochastic gradient descent. The paper concludes that linear models are simple, effective forecasters that may be more efficient and interpretable than their deeper counterparts.
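To make the equivalence claim concrete, here is a minimal numerical sketch (not the paper's code; the mean-subtraction normalization variant, shapes, and names are assumptions). It shows that a linear forecaster wrapped in instance normalization is itself just another linear map on the raw input window, which is the sense in which the normalized variants collapse to plain linear regression:

```python
# Sketch: an instance-normalised linear forecaster is a plain linear map.
import numpy as np

rng = np.random.default_rng(0)
L, H = 16, 4                 # context length and forecast horizon (assumed)
W = rng.normal(size=(H, L))  # arbitrary linear forecasting weights

def in_forecast(x):
    """Instance-normalised forecast: remove the window mean, apply W,
    then add the mean back onto the horizon."""
    mu = x.mean()
    return W @ (x - mu) + mu

# The same map written as a single matrix acting directly on x:
#   y = W x - (1/L) (W 1)(1^T x) + (1/L) 1_H (1^T x)
ones_L = np.ones((L, 1))
ones_H = np.ones((H, 1))
M = W - (W @ ones_L) @ ones_L.T / L + ones_H @ ones_L.T / L

x = rng.normal(size=L)
assert np.allclose(in_forecast(x), M @ x)  # functionally identical
```

The same algebra goes through for the other normalization schemes: each one folds into the weight matrix (plus a bias over augmented features), so the model class does not change.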
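The closed-form result is equally easy to sketch. The following is an illustrative setup, not the paper's experimental code (the synthetic series, window sizes, and variable names are assumptions): a linear forecaster fit by ordinary least squares in one step, rather than by SGD.

```python
# Sketch: fitting a linear forecaster in closed form with least squares.
import numpy as np

rng = np.random.default_rng(1)
series = np.sin(np.linspace(0, 60, 2000)) + 0.1 * rng.normal(size=2000)

L, H = 96, 24  # context length and horizon (common benchmark values)

# Stack (context window, horizon) training pairs from the series.
n = len(series) - L - H + 1
X = np.stack([series[i:i + L] for i in range(n)])
Y = np.stack([series[i + L:i + L + H] for i in range(n)])

# Augment with a constant column so the bias is part of the feature set.
X_aug = np.hstack([X, np.ones((n, 1))])

# Closed-form least-squares solution; lstsq handles rank deficiency.
W, *_ = np.linalg.lstsq(X_aug, Y, rcond=None)

forecast = X_aug[-1] @ W  # H-step forecast from the most recent window
print(forecast.shape)     # (24,)
```

Since the objective is convex, this solve lands exactly at the optimum that SGD only approaches, which is consistent with the paper's finding that the closed-form fit wins in most test settings.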