UniTS: A Unified Multi-Task Time Series Model

29 May 2024 | Shanghua Gao, Teddy Koker, Owen Queen, Thomas Hartvigsen, Theodoros Tsiligkaridis, Marinka Zitnik
UniTS is a unified multi-task time series model that addresses the challenge of handling diverse time series tasks within a single framework. The model uses task tokenization to represent both predictive and generative tasks, enabling a unified approach to time series analysis. UniTS leverages a modified transformer block to capture universal time series representations, allowing it to transfer knowledge from a heterogeneous, multi-domain pre-training dataset to various downstream tasks.

The model is evaluated across 38 datasets spanning human activity, healthcare, engineering, and finance domains, where it outperforms 12 forecasting models, 20 classification models, 18 anomaly detection models, and 16 imputation models. UniTS performs strongly in few-shot and prompt learning scenarios and excels in multi-task settings, handling all 38 tasks with a single shared model. Its architecture supports both generative and predictive tasks through a unified network design, achieving superior performance in forecasting, classification, anomaly detection, and imputation.

UniTS is trained on time series data alone, eliminating the need for pre-trained large language models. Its ability to adapt to new tasks through prompt learning and zero-shot inference makes it a versatile solution for time series analysis. The source code and datasets are available at https://github.com/mims-harvard/UniTS.
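To make the task-tokenization idea concrete, here is a minimal PyTorch sketch of how a single network can serve both generative tasks (forecasting, imputation) and predictive tasks (classification) by appending different learnable task tokens to a patchified series. All names (`TaskTokenizedModel`, `gen_token`, `cls_token`, the plain `TransformerEncoder` stand-in for the paper's modified blocks) are illustrative assumptions, not the actual UniTS API.

```python
import torch
import torch.nn as nn

class TaskTokenizedModel(nn.Module):
    """Minimal sketch of a task-tokenized time series model.

    A univariate series is split into patches and embedded; learnable
    prompt tokens identify the task/domain, a GEN token marks positions
    to generate (forecasting/imputation), and a CLS token pools features
    for classification. Names and sizes here are illustrative only.
    """

    def __init__(self, patch_len=16, d_model=64, n_prompt=8, depth=3):
        super().__init__()
        self.patch_len = patch_len
        self.embed = nn.Linear(patch_len, d_model)                   # patch embedding
        self.prompt = nn.Parameter(torch.randn(n_prompt, d_model))   # task/domain prompt tokens
        self.gen_token = nn.Parameter(torch.randn(1, d_model))       # generative task token
        self.cls_token = nn.Parameter(torch.randn(1, d_model))       # predictive task token
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, depth)           # stand-in for UniTS blocks
        self.unembed = nn.Linear(d_model, patch_len)                 # token -> patch values

    def forward(self, x, task="forecast", horizon_patches=2):
        # x: (batch, length) univariate series; patchify into tokens
        b = x.size(0)
        patches = x.unfold(1, self.patch_len, self.patch_len)        # (b, n_patch, patch_len)
        tokens = self.embed(patches)
        prompt = self.prompt.expand(b, -1, -1)
        if task == "forecast":
            # append GEN tokens for the horizon; decode them back to values
            gen = self.gen_token.expand(b, horizon_patches, -1)
            seq = torch.cat([prompt, tokens, gen], dim=1)
            h = self.encoder(seq)
            return self.unembed(h[:, -horizon_patches:]).flatten(1)  # (b, horizon)
        else:
            # classification: read out the appended CLS token
            cls = self.cls_token.expand(b, 1, -1)
            seq = torch.cat([prompt, tokens, cls], dim=1)
            return self.encoder(seq)[:, -1]                          # (b, d_model) embedding
```

In this design, switching tasks changes only which tokens are appended and which outputs are read; the shared weights stay identical, which is what lets one model handle all 38 tasks.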
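The prompt-learning adaptation mentioned above can be sketched similarly: freeze the shared backbone and optimize only the lightweight task tokens on a new dataset. This is a hedged toy loop building on the hypothetical class above, not the training code from the UniTS repository.

```python
# Prompt learning sketch: adapt to a new task by tuning only the task
# tokens while the shared backbone stays frozen (illustrative only).
model = TaskTokenizedModel()
for p in model.parameters():
    p.requires_grad = False
for tok in (model.prompt, model.gen_token):
    tok.requires_grad = True  # only the lightweight task tokens are trained

optimizer = torch.optim.Adam([model.prompt, model.gen_token], lr=1e-3)

x = torch.randn(4, 96)                         # toy batch: 4 series of length 96
target = torch.randn(4, 2 * model.patch_len)   # next 2 patches to forecast
pred = model(x, task="forecast", horizon_patches=2)
loss = nn.functional.mse_loss(pred, target)
loss.backward()
optimizer.step()
```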