N-BEATS: NEURAL BASIS EXPANSION ANALYSIS FOR INTERPRETABLE TIME SERIES FORECASTING

20 Feb 2020 | Boris N. Oreshkin, Dmitry Carpov, Nicolas Chapados, Yoshua Bengio
N-BEATS is a deep neural architecture designed for interpretable time series forecasting. It uses backward and forward residual links and a deep stack of fully-connected layers, offering interpretability, versatility, and fast training. The model outperforms statistical benchmarks and previous competition winners on the M3, M4, and TOURISM datasets, achieving an 11% improvement over statistical methods and a 3% improvement over the M4 competition winner. The architecture can be made interpretable without significant accuracy loss, decomposing forecasts into trend and seasonality components. It is trained with ensembling techniques and performs well across multiple time series forecasting tasks. The results demonstrate that deep learning can achieve high accuracy without domain-specific knowledge, and that interpretable outputs can be generated through structured basis functions. The architecture also generalizes well across different time series types and can be extended to support meta-learning, enhancing its adaptability and performance.
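To make the block-and-residual structure concrete, below is a minimal sketch of the generic N-BEATS configuration, written in PyTorch; it is an illustration under stated assumptions, not the authors' reference implementation. Each block is a fully-connected stack that emits a backcast and a partial forecast; the backcast is subtracted from the block's input (backward residual link) and the partial forecasts are summed (forward link). The layer widths, depth, and block count used here are illustrative placeholders.

```python
# Minimal sketch of a generic N-BEATS block and doubly residual stacking.
# Hyperparameters (hidden width, number of layers, number of blocks) are
# illustrative, not the values used in the paper.
import torch
import torch.nn as nn


class NBeatsBlock(nn.Module):
    """Fully-connected stack that emits a backcast and a partial forecast."""

    def __init__(self, backcast_length, forecast_length, hidden=256, layers=4):
        super().__init__()
        fc, in_dim = [], backcast_length
        for _ in range(layers):
            fc += [nn.Linear(in_dim, hidden), nn.ReLU()]
            in_dim = hidden
        self.fc = nn.Sequential(*fc)
        # Generic configuration: the basis is learned, so the heads project
        # the hidden state directly to backcast and forecast values.
        self.backcast_head = nn.Linear(hidden, backcast_length)
        self.forecast_head = nn.Linear(hidden, forecast_length)

    def forward(self, x):
        h = self.fc(x)
        return self.backcast_head(h), self.forecast_head(h)


class NBeats(nn.Module):
    """Chain of blocks linked by backward residuals and summed forecasts."""

    def __init__(self, backcast_length, forecast_length, n_blocks=6):
        super().__init__()
        self.blocks = nn.ModuleList(
            NBeatsBlock(backcast_length, forecast_length) for _ in range(n_blocks)
        )

    def forward(self, x):
        forecast = 0.0
        for block in self.blocks:
            backcast, block_forecast = block(x)
            x = x - backcast                      # backward residual link
            forecast = forecast + block_forecast  # forward link: sum of partial forecasts
        return forecast


# Usage: forecast the next 6 points from a lookback window of 24 points.
model = NBeats(backcast_length=24, forecast_length=6)
window = torch.randn(32, 24)   # batch of 32 synthetic lookback windows
print(model(window).shape)     # torch.Size([32, 6])
```

In the interpretable configuration described by the paper, the learned heads above are replaced by projections onto fixed polynomial (trend) and harmonic (seasonality) basis functions, which is what allows the stacked partial forecasts to be read as a trend/seasonality decomposition.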