Speakers
List of invited speakers and panel members.
Felix Divo is a Ph.D. student at the AIML Lab, Computer Science Department, TU Darmstadt, Germany. His research is dedicated to time series forecasting we can trust. More specifically, his work revolves around answering the following questions: What structure hides in the data in the first place? What did the model pick up? Can we use natural and formal languages as a bridge to the opaque domain of time series? His work includes time series models such as xLSTM Mixer, the question-answering benchmark QuAnTS, and contributions to TSLearn, a library providing machine learning tools for time series.
Maurice Kraus is a Ph.D. student at the AIML Lab, Computer Science Department, TU Darmstadt, Germany. His research is dedicated to explaining deep models in the context of multimodal time series data. More specifically, he works on techniques that enable humans to understand the reasoning behind the predictions of machine learning models operating on complex and diverse time-varying data types, such as images, videos, audio, and text, collected over time. His work includes time series models such as xLSTM Mixer and novel benchmarks like QuAnTS for question answering on time series.
Chenghao Liu is a staff research scientist at Datadog AI. He received his Ph.D. and B.S. degrees from the School of Computer Science and Engineering at Zhejiang University in 2017 and 2011, respectively. His research interests span fundamental machine learning (e.g., deep learning, meta learning, online learning) and data mining applications (time series analysis, recommendation systems, web and social media analytics). He has served as a program committee member at leading machine learning and data mining conferences such as AAAI, IJCAI, and ICDM. He actively works on Time Series Foundation Models (TSFMs), including MOIRAI, Time-FFM, and MOIRAI-MoE, and has also contributed to the GIFT-Eval time series forecasting benchmark.
Danielle Maddix Robinson is a senior applied scientist at AWS AI Labs. She received her Ph.D. in Computational and Mathematical Engineering from Stanford University in 2018 under the supervision of Professor Margot Gerritsen, where she worked on finite volume average-based methods for nonlinear porous media flow. She graduated from the University of California, Berkeley, with the highest honors in Applied Mathematics in 2012 and received a Master of Science from Stanford University in 2015. Her research focuses on deep learning models for probabilistic time series forecasting (Chronos, GluonTS) and physics-constrained machine learning models (PreDiff) as part of the DeepEarth team.
Ameet Talwalkar is an Associate Professor at CMU and Chief Scientist at Datadog. He holds a Ph.D. in Computer Science from New York University, advised by Mehryar Mohri. His research lies at the intersection of systems and learning, including the application of foundation models in solving partial differential equations and establishing strong supervised baselines for time series foundation models. He co-founded Determined AI (acquired by HPE), helped create MLlib in Apache Spark, co-authored the textbook Foundations of Machine Learning, and spearheaded the creation of the MLSys conference. His team developed Toto, one of the top time series forecasting models on the GIFT-Eval leaderboard.
Qingsong Wen is the Head of AI & Chief Scientist at Squirrel AI Learning, a leading EdTech unicorn with 3,000+ learning centers. He leads AI teams in Seattle and Shanghai, focusing on advanced technologies such as LLMs, AI agents, and GNNs for education. Previously, he worked at Alibaba, Qualcomm, and Marvell, and he holds M.S. and Ph.D. degrees in Electrical and Computer Engineering from the Georgia Institute of Technology. He has published ~150 top-tier papers, received multiple awards (e.g., Most Influential Paper at IJCAI, IAAI Innovation Awards), and serves in leadership and editorial roles across major conferences and journals (e.g., NeurIPS, ICML, IEEE TPAMI). His research focuses on AI for time series and education, and his team has contributed to several well-known TSFMs, including Time-LLM, Time-FFM, and Time-MoE.