Large Language Models (LLMs) Revolutionize Time Series Forecasting

Submitted by admin on 2 May 2025

A new study proposes a methodology to adapt Large Language Models, a type of Foundational Model, to time series forecasting by leveraging their extensive knowledge and aligning the temporal dimension of numerical sequences with the textual data these models were trained on.

Foundational Models have become an emerging and widely used technique within Generative Artificial Intelligence. They stand out for their scalability and the ease with which they can be adapted to new tasks through Transfer Learning. Their development has been driven by the availability of high computational power and of large, diverse datasets; the huge, heterogeneous amount of data used in their initial training gives them a strong generalization capacity.

These features provide a solid foundation that can be adjusted to a wide range of tasks, broadening their applicability. In this context, a recent study has proposed the LLIAM methodology, a direct adaptation applied to Large Language Models (LLM) for the Time Series Prediction task.

LLIAM aligns the sequential structure of the numerical data with the textual format expected by LLMs through a prompting scheme that signals to the model that it is facing a forecasting task. In addition, Low-Rank Adaptations (LoRA) are employed during a fine-tuning phase to strengthen the model's predictive capabilities on different time series datasets, as sketched below.
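
To make these two ideas concrete, the following is a minimal sketch, not the authors' released code: it serializes a numeric window into a text prompt and attaches LoRA adapters with the Hugging Face `transformers` and `peft` libraries. The backbone model name, the prompt wording, and the LoRA hyperparameters are illustrative assumptions.

```python
# Minimal sketch of the two ingredients described above (illustrative,
# not the authors' exact implementation).
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

def series_to_prompt(history, horizon):
    """Render a numeric window as text so a causal LLM can consume it."""
    values = ", ".join(f"{v:.2f}" for v in history)
    return (f"Task: time series forecasting.\n"
            f"History: {values}\n"
            f"Predict the next {horizon} values:")

# Assumed LLaMA-family backbone; any causal LM hub id would do here.
base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")

# Low-Rank Adaptation: freeze the base weights and train only small
# rank-r update matrices injected into the attention projections.
lora_cfg = LoraConfig(
    r=8, lora_alpha=16, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_cfg)
model.print_trainable_parameters()  # a tiny fraction of the base weights
```

Because only the low-rank adapter matrices are updated, fine-tuning on a new time series dataset touches a very small fraction of the parameters, which is what keeps the adaptation cheap compared with full training.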

Initially, LLIAM was compared against Deep Learning algorithms commonly used in the literature, including Recurrent Neural Networks and Temporal Convolutional Networks, as well as an LLM-based method known as TimeLLM. Subsequently, a zero-shot study evaluated the generalization capacity of the proposed methodology on time series datasets from unknown domains, not used in its training (a sketch of such an evaluation loop follows).
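
The following is a hypothetical zero-shot evaluation loop, sketched under the assumption that the fine-tuned model, tokenizer, and `series_to_prompt` helper from the sketch above are available. The number-parsing step and the mean absolute error metric are illustrative choices, not necessarily the protocol or metrics reported in the paper.

```python
# Illustrative zero-shot evaluation on an unseen-domain dataset.
import re
import numpy as np

def forecast(model, tokenizer, history, horizon):
    """Prompt the model with a history window and parse its forecast."""
    prompt = series_to_prompt(history, horizon)
    inputs = tokenizer(prompt, return_tensors="pt")
    out = model.generate(**inputs, max_new_tokens=8 * horizon)
    # Decode only the newly generated tokens.
    text = tokenizer.decode(out[0][inputs["input_ids"].shape[1]:],
                            skip_special_tokens=True)
    # Recover the first `horizon` numbers emitted by the model.
    nums = [float(x) for x in re.findall(r"-?\d+\.?\d*", text)]
    return np.array(nums[:horizon])

def zero_shot_mae(model, tokenizer, windows, horizon):
    """Mean absolute error over (history, future) pairs from an unseen domain."""
    errors = []
    for history, future in windows:
        pred = forecast(model, tokenizer, history, horizon)
        if len(pred) == horizon:  # skip malformed generations
            errors.append(np.abs(pred - np.asarray(future)).mean())
    return float(np.mean(errors))
```

The key point of the zero-shot setting is that no adapter weights are updated on the new domain: the model forecasts series it has never seen, so the score measures generalization rather than memorization.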

The results of this research demonstrate the effectiveness of LLIAM, highlighting that this straightforward and generalist approach can achieve competent results without the need for complex modifications. This work also encourages the use of already available resources (such as pre-trained models) and efficient fine-tuning techniques to avoid unnecessary and costly training, bringing the goals of traditional Artificial Intelligence and Green Artificial Intelligence closer together.

In Press Article
M. Germán-Morales, A.J. Rivera-Rivas, M.J. del Jesus, C.J. Carmona
Transfer Learning with Foundational Models for Time Series Forecasting using Low-Rank Adaptations
Information Fusion