Call for Papers

We welcome researchers working in the field of time series analysis to submit their latest original research work to the NeurIPS 2025 workshop on Recent Advances in Time Series Foundation Models: Have We Reached the BERT Moment?

  • Submission link: OpenReview
  • Submission deadline: Aug 22, 2025, 11:59 pm AoE (to be confirmed soon)
  • Acceptance notification: Oct 1, 2025, 11:59 pm AoE (to be confirmed soon)
  • Camera-ready deadline: Nov 19, 2025, 11:59 pm AoE (to be confirmed soon)

Instructions

We invite submissions in the form of short papers (up to 4 pages). Additional pages containing references and appendices are allowed, but reviewers are not required to take the appendices into account during the review. Submissions must be formatted following this LaTeX style template and submitted as a single .pdf file on OpenReview. Submissions, along with any supplementary material, must be anonymized to ensure a proper double-blind review process. Papers that exceed the page limit or that are not properly anonymized will be desk-rejected without review. There will be no rebuttal phase, and final decisions will be based solely on the reviews. Rejected or withdrawn submissions will not be made public. Authors of accepted submissions will be able to present a poster, and four selected submissions will be invited to give 15-minute oral talks.

This workshop is non-archival; as such, we welcome original work as well as papers currently under review at another venue. If authors wish to submit a paper that has already appeared in a journal, conference, or workshop, it should be reasonably extended.

Main Topics

In line with the motivation of this workshop, we solicit contributions related, but not limited, to the following broad topics:

  1. Benchmarking Foundation Models in Time Series.
    • Proposals for new benchmarks and datasets on which there is a significant performance gap compared to simpler models.
    • Criteria and metrics for robust benchmarking of time series models, with a focus on OOD generalization.
  2. Scaling Laws and Efficiency in Time Series Models.
    • Investigating the scaling laws in TSFMs, especially for classification and multivariate forecasting.
    • Understanding the efficiency of TSFMs across different architectures, model sizes, and pre-training datasets.
  3. Evaluating Transferability and Adaptability of Foundation Models.
    • Techniques for assessing the adaptability of foundation models in new or evolving time series environments.
    • The role of data diversity and pretraining in improving model transferability across time series tasks.
  4. Leveraging Foundation Models of Other Modalities for Time Series.
    • FMs trained on other data modalities that are applied to time series.
    • Techniques used for the representation alignment of the different data modalities.
  5. Unsupervised Analysis and Performance Estimation of TSFMs.
    • Predicting the performance of TSFMs on new, previously unseen datasets.
    • Estimating the calibration and the uncertainty of TSFMs.
  6. Industrial Benchmarking of TSFMs.
    • Usability of TSFMs for complex real-world problems.
    • Domain-specific analysis of TSFMs.

Reviewing Guidelines

Reviewers should focus on the alignment between the submission and the main topics of this workshop and on the validity of the claims and evidence provided in the submission. We invite new reviewers to look at TMLR guidelines or this reviewing tutorial for inspiration. Reviews should include the following parts:

Summary of contributions: Brief description, in the reviewer’s words, of the contributions and new knowledge presented by the submission.
Strengths and weaknesses: List of the strong aspects of the submission as well as weaker elements (if any) that you think require attention from the authors.
Suggestions: Any suggestions to improve the paper for future versions.

Contact

If you have questions about this workshop or are not sure if your paper’s topic is suitable for submission, please feel free to contact thomas.moreau@inria.fr and ievgen.redko@gmail.com.