{ "url": "http://arxiv.org/abs/2404.16563v1", "title": "Evaluating Large Language Models on Time Series Feature Understanding: A Comprehensive Taxonomy and Benchmark", "abstract": "Large Language Models (LLMs) offer the potential for automatic time series\nanalysis and reporting, which is a critical task across many domains, spanning\nhealthcare, finance, climate, energy, and many more. In this paper, we propose\na framework for rigorously evaluating the capabilities of LLMs on time series\nunderstanding, encompassing both univariate and multivariate forms. We\nintroduce a comprehensive taxonomy of time series features, a critical\nframework that delineates various characteristics inherent in time series data.\nLeveraging this taxonomy, we have systematically designed and synthesized a\ndiverse dataset of time series, embodying the different outlined features. This\ndataset acts as a solid foundation for assessing the proficiency of LLMs in\ncomprehending time series. Our experiments shed light on the strengths and\nlimitations of state-of-the-art LLMs in time series understanding, revealing\nwhich features these models readily comprehend effectively and where they\nfalter. In addition, we uncover the sensitivity of LLMs to factors including\nthe formatting of the data, the position of points queried within a series and\nthe overall time series length.", "authors": "Elizabeth Fons, Rachneet Kaur, Soham Palande, Zhen Zeng, Svitlana Vyetrenko, Tucker Balch", "published": "2024-04-25", "updated": "2024-04-25", "primary_cat": "cs.CL", "cats": [ "cs.CL" ], "label": "Original Paper", "paper_cat": "LLM Fairness", "gt": "Time series analysis and reporting play a crucial role in many areas like healthcare, finance, climate, etc. With the recent advances in Large Language Models (LLMs), integrating them in time series analysis and reporting processes presents a huge po- tential for automation. 
Recent works have adapted general-purpose LLMs for time series understanding in various specific domains, such as seizure localization in EEG time series (Chen et al., 2024), cardiovascular disease diagnosis in ECG time series (Qiu et al., 2023), weather and climate data understanding (Chen et al., 2023), and explainable financial time series forecasting (Yu et al., 2023). Despite these advancements in domain-specific LLMs for time series understanding, it is crucial to conduct a systematic evaluation of general-purpose LLMs\u2019 inherent capabilities in generic time series understanding, without domain-specific fine-tuning. This paper aims to uncover the pre-existing strengths and weaknesses of general-purpose LLMs regarding time series understanding, so that practitioners can be well informed of the areas where general-purpose LLMs are readily applicable, and can focus targeted fine-tuning efforts on the areas needing improvement. To systematically evaluate the performance of general-purpose LLMs on generic time series understanding, we propose a taxonomy of time series features for both univariate and multivariate time series. This taxonomy provides a structured categorization of core characteristics of time series across domains. Building upon this taxonomy, we have synthesized a diverse dataset of time series covering the different features in the taxonomy. This dataset is pivotal to our evaluation framework, as it provides a robust basis for assessing LLMs\u2019 ability to interpret and analyze time series data accurately. Specifically, we examine state-of-the-art LLMs\u2019 performance across a range of tasks on our dataset, including time series feature detection and classification, data retrieval, as well as arithmetic reasoning.
Our contributions are three-fold:
\u2022 Taxonomy - we introduce a taxonomy that provides a systematic categorization of important time series features, an essential tool for standardizing the evaluation of LLMs in time series understanding.
\u2022 Diverse Time Series Dataset - we synthesize a comprehensive time series dataset, ensuring a broad representation of the various types of time series, encompassing the spectrum of features identified in our taxonomy.
\u2022 Evaluations of LLMs - our evaluations provide insights into what LLMs do well when it comes to understanding time series and where they struggle, including how they deal with the format of the data, where the query data points are located in the series, and how long the time series is.", "main_content": "2.1 Large Language Models Large Language Models (LLMs) are characterized as pre-trained, Transformer-based models endowed with an immense number of parameters, spanning from tens to hundreds of billions, and crafted through extensive training on vast text datasets (Zhang et al., 2024; Zhao et al., 2023). Notable examples of LLMs include Llama2 (Touvron et al., 2023), PaLM (Chowdhery et al., 2023), GPT3 (Brown et al., 2020), GPT4 (Achiam et al., 2023), and Vicuna-13B (Chiang et al., 2023). These models have surpassed expectations in numerous language-related tasks and extended their utility to areas beyond traditional natural language processing. For instance, Wang et al. (2024) have leveraged LLMs for the prediction and modeling of human mobility, Yu et al. (2023) for explainable financial time series forecasting, and Chen et al. (2024) for seizure localization. This expansive application of LLMs across diverse domains sets the stage for their potential utility in the analysis of time series data, a domain traditionally governed by statistical and machine learning models.
2.2 Language models for time series Recent progress in time series forecasting has capitalized on the versatile and comprehensive abilities of LLMs, merging their language expertise with time series data analysis. This combination marks a significant methodological shift, underscoring the capacity of LLMs to transform conventional predictive methods with their advanced information processing skills. In the survey literature, comprehensive overviews provided by Zhang et al. (2024) and Jiang et al. (2024) offer valuable insights into the integration of LLMs in time series analysis, highlighting key methodologies, challenges, and future directions. Notably, Gruver et al. (2023) have set benchmarks for pretrained LLMs such as GPT-3 and Llama2 by assessing their capabilities for zero-shot forecasting. Similarly, Xue and Salim (2023) introduced PromptCast, which adopts a novel approach by treating forecasting as a question-answering activity, utilizing strategic prompts. Further, Yu et al. (2023) delved into the potential of LLMs for generating explainable forecasts in financial time series, tackling inherent issues like cross-sequence reasoning, integration of multi-modal data, and interpretation of results, which pose challenges for conventional methodologies. Additionally, Zhou et al. (2023) demonstrated that leveraging frozen pre-trained language models, initially trained on vast corpora, for time series analysis can achieve comparable or even state-of-the-art performance across various principal tasks in time series analysis, including imputation, classification, and forecasting. 2.3 LLMs for arithmetic tasks Despite their advanced capabilities, LLMs face challenges with basic arithmetic tasks, which are crucial for time series analysis involving quantitative data (Azerbayev et al., 2023; Liu and Low, 2023). Research has identified issues such as inconsistent tokenization and token frequency as major barriers (Nogueira et al., 2021; Kim et al., 2021).
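A common workaround for this tokenization issue is to preprocess numbers so that every digit becomes its own token, mirroring what Llama2's tokenizer does natively. A minimal sketch of such a preprocessing step (the helper name and regex are our illustration, not code from the paper):

```python
import re

def split_digits(text: str) -> str:
    # Insert a space between adjacent digits so BPE-style tokenizers
    # emit one token per digit instead of arbitrary multi-digit chunks.
    return re.sub(r"(\d)(?=\d)", r"\1 ", text)

# "1234" becomes "1 2 3 4"; non-numeric text is left untouched.
```

Gruver et al. (2023) apply this kind of digit spacing when feeding numerical series to GPT-style models whose tokenizers would otherwise chunk digits inconsistently.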
Innovative solutions, such as Llama2\u2019s approach to digit tokenization (Yuan et al., 2023), highlight ongoing efforts to refine LLMs\u2019 arithmetic abilities, enhancing their applicability in time series analysis. 3 Time Series Data 3.1 Taxonomy of Time Series Features Our study introduces a comprehensive taxonomy for evaluating the analytical capabilities of Large Language Models (LLMs) in the context of time series data. This taxonomy categorizes the intrinsic characteristics of time series, providing a structured basis for assessing the proficiency of LLMs in identifying and extracting these features. Furthermore, we design a series of datasets following the proposed taxonomy, and we outline an evaluation framework incorporating specific metrics to quantify model performance accurately across various tasks. The proposed taxonomy encompasses critical aspects of time series data that are frequently analyzed in different applications. Table 1 shows the selected features in increasing complexity, along with their sub-features. We evaluate the LLM on this taxonomy in a two-step process: first, we evaluate whether the LLM can detect the feature; in a second step, we evaluate whether the LLM can identify the sub-category of the feature. A detailed description of the process is given in Sec. 6.1.2.
Table 1: Taxonomy of time series characteristics.
Univariate
- Trend: directional movements over time. Sub-categories: up, down.
- Seasonality and Cyclical Patterns: patterns that repeat over a fixed or irregular period. Sub-categories: fixed-period with constant amplitude, fixed-period with varying amplitude, shifting period, multiple seasonality.
- Volatility: degree of dispersion of a series over time. Sub-categories: constant, increasing, clustered, leverage effect.
- Anomalies: significant deviations from typical patterns. Sub-categories: spikes, step-spikes, level shifts, temporal disruptions.
- Structural Breaks: fundamental shifts in the series data, such as regime changes or parameter shifts. Sub-categories: regime changes, parameter shifts.
- Statistical Properties: characteristics like fat tails, and stationarity versus non-stationarity. Sub-categories: fat tails, stationarity.
Multivariate
- Correlation: measures the linear relationship between series; useful for predicting one series from another if they are correlated. Sub-categories: positive, negative.
- Cross-Correlation: measures the relationship between two series at different time lags; useful for identifying lead or lag relationships. Sub-categories: positive direct, positive lagged, negative direct, negative lagged.
- Dynamic Conditional Correlation: assesses situations where correlations between series change over time. Sub-categories: correlated first half, correlated second half.
3.2 Synthetic Time Series Dataset Leveraging our taxonomy, we construct a diverse synthetic dataset of time series, covering the features outlined in the previous section. We generated in total 9 datasets with 200 time series samples each. Within each dataset, the time series length is randomly chosen between 30 and 150 to encompass a variety of both short and long time series. To make the time series more realistic, we add a time index, predominantly of daily frequency. Fig. 1 showcases examples of our generated univariate time series. Each univariate dataset showcases a unique single-dimensional pattern, whereas the multivariate datasets explore interrelations between series to reveal underlying patterns. Please see Table 4 in the appendix for examples of each univariate dataset, and Table 5 for visual examples of the multivariate cases. For a detailed description of the generation of each dataset, refer to Sec. A in the Appendix. 4 Time Series Benchmark Tasks Our evaluation framework is designed to assess the LLMs\u2019 capabilities in analyzing time series across the dimensions of our taxonomy (Sec. 3.1).
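The synthetic generation described in Sec. 3.2 can be sketched for one "Trend" sample as follows, assuming a simple linear-trend-plus-noise generator of our own design (the authors' exact generators are described in their Appendix A); only the stated constraints, a random length in [30, 150] and a daily time index, come from the paper:

```python
import numpy as np
import pandas as pd

def make_trend_series(rng: np.random.Generator, direction: str = "up") -> pd.Series:
    # Random length between 30 and 150, as in the dataset description.
    n = int(rng.integers(30, 151))
    # Illustrative slope range; flipped for a "down" trend.
    slope = rng.uniform(0.5, 2.0) * (1.0 if direction == "up" else -1.0)
    # Linear trend plus Gaussian noise, with a daily date index for realism.
    values = slope * np.arange(n) + rng.normal(0.0, 1.0, size=n)
    index = pd.date_range("2023-01-01", periods=n, freq="D")
    return pd.Series(values, index=index)

rng = np.random.default_rng(0)
series = make_trend_series(rng, "up")
```

Repeating this 200 times per feature, with per-feature generators for seasonality, volatility, anomalies, and the multivariate cases, yields datasets of the shape described above.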
The evaluation includes four primary tasks:
Feature Detection This task evaluates the LLMs\u2019 ability to identify the presence of specific features within a time series, such as trend, seasonality, or anomalies. Queries are structured as yes/no questions, such as \"Is a trend present in the time series?\" For instance, given a time series with an upward trend, the LLM is queried to determine whether a trend exists.
Feature Classification Once a feature is detected, this task assesses the LLMs\u2019 ability to classify it accurately. For example, if a trend is present, the LLM must determine whether it is upward, downward, or non-linear. This task involves a QA setup where LLMs are provided with definitions of sub-features within the prompt. Performance is evaluated on the correct identification of sub-features, using the F1 score to balance precision and recall. This task probes the models\u2019 depth of understanding and their ability to distinguish between similar but distinct phenomena.
Information Retrieval Evaluates the LLMs\u2019 accuracy in retrieving specific data points, such as values on a given date.
Figure 1: Example synthetically generated time series.
Arithmetic Reasoning Focuses on quantitative analysis tasks, such as identifying minimum or maximum values. Accuracy and Mean Absolute Percentage Error (MAPE) are used to measure performance, with MAPE offering a precise evaluation of the LLMs\u2019 numerical accuracy.
Additionally, to account for nuanced aspects of time series analysis, we propose in Sec. 5.2 to study the influence of multiple factors, including time series formatting, the location of the queried data point in the time series, and the time series length. 5 Performance Metrics and Factors 5.1 Performance Metrics We employ the following metrics to report the performance of LLMs on various tasks.
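One of these metrics, MAPE, admits a compact definition; the snippet below is the standard textbook formula, not the authors' evaluation code:

```python
def mape(y_true, y_pred):
    # Mean Absolute Percentage Error, reported in percent.
    # Undefined when a true value is zero; callers should filter such points.
    assert len(y_true) == len(y_pred) and len(y_true) > 0
    return 100.0 * sum(abs((t - p) / t) for t, p in zip(y_true, y_pred)) / len(y_true)
```

For example, answers of 110 and 180 against true values 100 and 200 give a MAPE of 10%.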
F1 Score Applied to feature detection and classification, reflecting the balance between precision and recall.
Accuracy Used for assessing the information retrieval and arithmetic reasoning tasks.
Mean Absolute Percentage Error (MAPE) Employed for numerical responses in the information retrieval and arithmetic reasoning tasks, providing a measure of precision in quantitative analysis.
5.2 Performance Factors We identified various factors that could affect the performance of LLMs on time series understanding; for each, we designed deep-dive experiments to reveal its impact.
Time Series Formatting Extracting useful information from raw sequential data, as in the case of numerical time series, is a challenging task for LLMs. Tokenization directly influences how patterns are encoded within tokenized sequences (Gruver et al., 2023): methods such as BPE can split a single number into tokens that do not align with its digits. In contrast, Llama2 splits each digit into an individual token, which ensures consistent tokenization of numbers (Liu and Low, 2023). We study different time series formatting approaches to determine whether they influence the LLMs\u2019 ability to capture the time series information. In total, we propose 9 formats, ranging from simple CSV to enriched formats with additional information.
Time Series Length We study the impact that the length of the time series has on the retrieval task. Transformer-based models use attention mechanisms to weigh the importance of different parts of the input sequence. Longer sequences can dilute the attention mechanism\u2019s effectiveness, potentially making it harder for the model to focus on the most relevant parts of the text (Vaswani et al., 2017).
Position Bias Given a retrieval question, the position at which the queried data point occurs in the time series might impact retrieval accuracy.
Studies have found a recency bias (Zhao et al., 2021) in few-shot classification, where the LLM tends to repeat the label of examples appearing near the end of the prompt. It is thus important to investigate whether LLMs exhibit a similar position bias in the task of time series understanding. 6 Experiments 6.1 Experimental setup 6.1.1 Models We evaluate the following LLMs on our proposed framework: 1) GPT4 (Achiam et al., 2023), 2) GPT3.5, 3) Llama2-13B (Touvron et al., 2023), and 4) Vicuna-13B (Chiang et al., 2023). We selected two open-source models, Llama2 and Vicuna, each with 13 billion parameters; the version of Vicuna is 1.5, which was trained by fine-tuning Llama2. Additionally, we selected GPT4 and GPT3.5, whose numbers of parameters are unknown. In the execution of our experiments, we used an Amazon Web Services (AWS) g5.12xlarge instance, equipped with four NVIDIA A10G Tensor Core GPUs, each featuring 24 GB of GPU RAM. This setup was essential for handling both extensive datasets and the computational demands of LLMs. 6.1.2 Prompts The design of prompts for interacting with LLMs is separated into two approaches: retrieval/arithmetic reasoning and detection/classification questioning. Time series characteristics To evaluate the LLM\u2019s reasoning over time series features, we use a two-step prompt with an adaptive approach, dynamically tailoring the interaction based on the LLM\u2019s responses. The first step involves detection, where the model is queried to identify relevant features within the data. If the LLM successfully detects a feature, we proceed with a follow-up prompt, designed to classify the identified feature among multiple sub-categories. For this purpose, we enrich the prompts with definitions of each sub-feature (e.g. up or down trend), ensuring a clearer understanding and more accurate identification process. An example of this two-turn prompt is shown in Fig. 2. The full list can be found in Sec. F of the supplementary.
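The adaptive two-step prompting just described can be sketched as follows, with `ask_llm` standing in for any prompt-to-answer callable (a model API wrapper, for instance) and with illustrative prompt wording of our own; the paper's actual prompts are listed in its Sec. F:

```python
def two_step_feature_query(ask_llm, series_text):
    # Step 1: detection, phrased as a yes/no question.
    detect = ask_llm(
        f"Time series: {series_text}\n"
        "Is a trend present in the time series? Answer yes or no."
    )
    if detect.strip().lower().startswith("no"):
        return {"detected": False, "sub_category": None}
    # Step 2: classification follow-up, issued only after a positive detection
    # and enriched with definitions of each sub-feature.
    classify = ask_llm(
        f"Time series: {series_text}\n"
        "An 'up' trend increases over time; a 'down' trend decreases over time.\n"
        "Is the trend up or down? Answer with one word."
    )
    return {"detected": True, "sub_category": classify.strip().lower()}
```

Making the classification prompt conditional on the detection answer keeps the two evaluation steps separable: a model is only scored on sub-category identification for features it has already claimed to detect.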
Information Retrieval/Arithmetic Reasoning We test the LLM\u2019s comprehension of numerical data represented as text by querying it for information retrieval and numerical reasoning, as exemplified in Fig. 3 and detailed in the supplementary Sec. F. Trend Prompts \"Input: