20 Feb 2020 | Boris N. Oreshkin, Dmitry Carpov, Nicolas Chapados, Yoshua Bengio
The paper introduces N-BEATS, a novel deep neural architecture for univariate time series forecasting. The architecture is designed to be interpretable, flexible, and applicable to a wide range of target domains without requiring domain-specific knowledge. N-BEATS is a deep stack of fully connected blocks, each predicting basis expansion coefficients for both a backcast (backward) and a forecast (forward). The blocks are wired according to a hierarchical doubly residual stacking principle: each block removes its backcast from the running input and contributes a partial forecast to the final output, which facilitates gradient flow and lets the model decompose forecasts into interpretable components. The paper demonstrates that N-BEATS achieves state-of-the-art performance on three challenging datasets (M4, M3, and TOURISM) with two configurations: a generic configuration that does not rely on time-series-specific components, and an interpretable configuration that incorporates inductive biases by constraining the basis functions, making the forecasts more interpretable. The interpretable configuration decomposes forecasts into trend and seasonality components, making it easier for practitioners to understand the factors contributing to the forecasts. The paper also discusses the potential of N-BEATS for meta-learning and ensemble methods, suggesting that the architecture can be trained on multiple time series in a multi-task fashion, transferring and sharing what is learned across different datasets.
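To make the block structure and the doubly residual stacking concrete, here is a minimal PyTorch sketch of the generic configuration. The names used here (GenericBlock, NBeatsSketch, backcast_len, forecast_len, hidden, n_blocks) are illustrative assumptions, not the authors' reference implementation; only the residual wiring follows the description above.

```python
# Minimal sketch of the N-BEATS doubly residual stacking idea (generic config).
# Class and parameter names are hypothetical, chosen for readability.
import torch
import torch.nn as nn


class GenericBlock(nn.Module):
    """One block: an MLP whose outputs play the role of backward and forward
    expansion coefficients; with a fully learned (generic) basis this reduces
    to two linear heads producing a backcast and a forecast."""

    def __init__(self, backcast_len, forecast_len, hidden=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(backcast_len, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.backcast_head = nn.Linear(hidden, backcast_len)
        self.forecast_head = nn.Linear(hidden, forecast_len)

    def forward(self, x):
        h = self.mlp(x)
        return self.backcast_head(h), self.forecast_head(h)


class NBeatsSketch(nn.Module):
    """Doubly residual stacking: each block subtracts its backcast from the
    running input (backward residual branch) and adds its partial forecast
    to the running output (forward branch)."""

    def __init__(self, backcast_len, forecast_len, n_blocks=4):
        super().__init__()
        self.blocks = nn.ModuleList(
            [GenericBlock(backcast_len, forecast_len) for _ in range(n_blocks)]
        )
        self.forecast_len = forecast_len

    def forward(self, x):
        residual = x
        forecast = torch.zeros(x.size(0), self.forecast_len, device=x.device)
        for block in self.blocks:
            backcast, block_forecast = block(residual)
            residual = residual - backcast        # remove what this block explained
            forecast = forecast + block_forecast  # partial forecasts sum to the output
        return forecast


# Usage: forecast the next 6 points from a lookback window of 24 points.
model = NBeatsSketch(backcast_len=24, forecast_len=6)
y_hat = model(torch.randn(8, 24))  # shape: (batch, forecast_len)
```

In the interpretable configuration described in the paper, the learned linear heads are replaced by fixed bases, low-order polynomials for the trend stack and Fourier terms for the seasonality stack, so each stack's summed partial forecast can be read off directly as a trend or seasonality component.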