Speakers
List of invited speakers and panel members.

Chenghao Liu is a research scientist at Salesforce Research Asia. He received his Ph.D. and B.S. from the School of Computer Science and Engineering at Zhejiang University in 2017 and 2011, respectively. His research interests span fundamental machine learning (e.g., deep learning, meta-learning, online learning) and data mining applications (time series analysis, recommendation systems, web and social media analytics). He has served as a program committee member at leading machine learning and data mining conferences such as AAAI, IJCAI, and ICDM. He actively works on Time Series Foundation Models (TSFMs), including MOIRAI, Time-FFM, and MOIRAI-MoE, and has also contributed to the GIFT-Eval time series forecasting benchmark.

Mingsheng Long is a Tenured Associate Professor at Tsinghua University, where he received his Ph.D. in 2014. He was a visiting researcher at UC Berkeley from 2014 to 2015. His research is dedicated to transfer learning, foundation models, and scientific machine learning. He leads the Tsinghua Machine Learning Group, which works on machine learning for representation, perception, prediction, and generation of big data, seeking a good tradeoff among accuracy, efficiency, generalizability, and transferability. His team has contributed several well-known time series methods, such as iTransformer, TimeXer, and TimesNet, as well as the Sundial TSFM, which demonstrate top performance on the TSLib leaderboard.

Zoe Piran is currently a postdoctoral researcher at Genentech and Stanford, hosted by Aviv Regev and Jure Leskovec. She holds a Ph.D. from the Hebrew University of Jerusalem, where she worked on decoding cellular identities in single-cell data to explore axes of biological variation. Her research aims to develop tools that provide deeper biological understanding by merging advances in single-cell genomics and machine learning, integrating temporal evolution into the analysis, as with MOSCOT, a framework for mapping cells across time and space (see the corresponding Nature article).

Danielle Maddix Robinson is a senior applied scientist at AWS AI Labs. She received her Ph.D. in Computational and Mathematical Engineering from Stanford University in 2018 under the supervision of Professor Margot Gerritsen, where she worked on finite volume averaging-based methods for nonlinear porous media flow. She graduated from the University of California, Berkeley, with highest honors in Applied Mathematics in 2012 and received a Master of Science from Stanford University in 2015. Her research focuses on deep learning models for probabilistic time series forecasting (Chronos, GluonTS) and physics-constrained machine learning models (PreDiff) as part of the DeepEarth team.

Ameet Talwalkar is an Associate Professor at CMU and Chief Scientist at Datadog. He holds a Ph.D. in Computer Science from New York University, where he was advised by Mehryar Mohri. His research lies at the intersection of systems and machine learning, including the application of foundation models to solving partial differential equations and establishing strong supervised baselines for time series foundation models. He co-founded Determined AI (acquired by HPE), helped create MLlib in Apache Spark, co-authored the textbook Foundations of Machine Learning, and spearheaded the creation of the MLSys conference. His team developed Toto, one of the top time series forecasting models on the GIFT-Eval leaderboard.

Qingsong Wen is the Head of AI & Chief Scientist at Squirrel AI Learning, a leading EdTech unicorn with 3,000+ learning centers. He leads AI teams in Seattle and Shanghai focused on advanced technologies such as LLMs, AI agents, and GNNs for education. Previously, he worked at Alibaba, Qualcomm, and Marvell, and he holds M.S. and Ph.D. degrees in Electrical and Computer Engineering from the Georgia Institute of Technology. He has published around 150 top-tier papers, received multiple awards (e.g., the Most Influential Paper award at IJCAI and IAAI Innovation Awards), and serves in leadership and editorial roles at major conferences and journals (e.g., NeurIPS, ICML, IEEE TPAMI). His research focuses on AI for Time Series and Education, and his team has contributed to several well-known TSFMs, including Time-LLM, Time-FFM, and Time-MoE.