2024 | Mononito Goswami, Konrad Szafer, Arjun Choudhry, Yifu Cai, Shuo Li, Artur Dubrawski
MOMENT is a family of open-source foundation models for general-purpose time series analysis. The paper addresses three challenges in pre-training large time series models: the absence of a large, cohesive public time series repository, the diversity of time series characteristics, and the lack of established experimental benchmarks. To overcome the first, the authors compile the Time Series Pile, a large and diverse collection of public time series; to address evaluation, they design a benchmark covering diverse tasks and datasets in limited-supervision settings. Experiments on this benchmark show that the pre-trained models are effective with minimal data and task-specific fine-tuning, and the authors report several empirical observations about large pre-trained time series models, including that MOMENT can solve multiple time series modeling tasks under limited supervision and can be used for cross-modal transfer learning. The pre-trained models (AutonLab/MOMENT-1-large) and the Time Series Pile (AutonLab/Timeseries-PILE) are available on Hugging Face. The paper also covers related work, methodology, experimental setup and results, future work, the environmental impact of pre-training MOMENT, and ethical considerations, concluding that the work contributes to the development of time series foundation models and to open-source research.
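Since the summary names the released Hugging Face artifacts, here is a minimal sketch of how one might load them. The `momentfm` package, its `MOMENTPipeline.from_pretrained` entry point, and the `task_name` keyword are assumptions drawn from the project's public repository rather than from this summary, and may differ across versions.

```python
# Minimal sketch: loading the released MOMENT artifacts from Hugging Face.
# Assumes `pip install momentfm datasets`; the momentfm API shown here
# (MOMENTPipeline.from_pretrained, model_kwargs, init) is an assumption
# based on the project's README.
from momentfm import MOMENTPipeline
from datasets import load_dataset

# Load the pre-trained model, configured here for reconstruction
# (the mode used for tasks such as imputation and anomaly detection).
model = MOMENTPipeline.from_pretrained(
    "AutonLab/MOMENT-1-large",
    model_kwargs={"task_name": "reconstruction"},
)
model.init()  # initialize the task-specific (non-pre-trained) head

# Load the Time Series Pile via the standard datasets API; whether this
# generic call works depends on how the dataset repository is structured.
pile = load_dataset("AutonLab/Timeseries-PILE")
print(pile)
```

The `task_name` value is one knob the repository exposes for switching between the model's task heads; for forecasting or classification one would pass a different task name and, where applicable, fine-tune on the target data.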