The paper "Fredformer: Frequency Debiased Transformer for Time Series Forecasting" addresses the issue of frequency bias in Transformer models for time series forecasting. The authors, Xihao Piao, Zheng Chen, Taichi Murayama, Yasuko Matsubara, and Yasushi Sakurai of Osaka University, Osaka, Japan, show that Transformers tend to focus on low-frequency features while neglecting high-frequency ones, which can lead to inaccurate forecasts. They conduct empirical analyses of this bias and propose Fredformer, a Transformer-based framework designed to mitigate it by learning features equally across different frequency bands.
Key contributions of the paper include:
1. **Problem Definition**: The authors define the key frequency components of a time series and formalize frequency bias as the relative forecasting error on those components.
2. **Algorithmic Design**: Fredformer consists of four main components: DFT-to-IDFT backbone, frequency domain refinement, local frequency independent learning, and global semantic frequency summarization. These components work together to ensure balanced attention to all key frequency components.
3. **Applicability**: Fredformer introduces a lightweight variant using Nyström approximation to reduce computational complexity while maintaining competitive performance.
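The DFT-to-IDFT backbone described above can be illustrated with a minimal sketch: transform the input series to the frequency domain, refine the spectrum, and map the result back to the time domain. Note this is an illustrative skeleton, not the paper's implementation; the `refine` callable is a hypothetical placeholder standing in for Fredformer's learned frequency-refinement and attention stages.

```python
import numpy as np

def dft_to_idft_backbone(x, refine):
    """Sketch of a DFT-to-IDFT pipeline (assumption: simplified stand-in
    for Fredformer's backbone). `refine` is a hypothetical callable that
    operates on the complex spectrum, where the paper applies its learned
    frequency-domain refinement."""
    spectrum = np.fft.rfft(x)                # time -> frequency (real-input DFT)
    refined = refine(spectrum)               # frequency-domain processing (placeholder)
    return np.fft.irfft(refined, n=len(x))  # frequency -> time (inverse DFT)

# With an identity refinement, the round trip recovers the input
# (up to floating-point error), confirming the backbone is lossless.
x = np.sin(np.linspace(0, 8 * np.pi, 128))
y = dft_to_idft_backbone(x, lambda s: s)
assert np.allclose(x, y)
```

In this framing, a learned `refine` step can reweight individual frequency bands, which is where a debiasing objective would counteract the tendency to favor low frequencies.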
Experiments on eight real-world datasets show that Fredformer outperforms other baselines in forecasting accuracy. The paper also includes a detailed analysis of frequency bias and its mitigation, providing insights into the effectiveness of the proposed approach.