Time | Activity
8:30 - 9:00 | Welcome Coffee
9:00 - 9:40 | Maximilian Dax (ETH Zürich)
Accelerating Gravitational-Wave Astronomy with Probabilistic Machine Learning
Bio: Maximilian Dax is a postdoctoral researcher at ETH Zurich and the ELLIS Institute Tübingen and a member of the LIGO Scientific Collaboration. He pursued his PhD at the Max Planck Institute for Intelligent Systems in Tübingen under the supervision of Bernhard Schölkopf (2020-2024) and interned at Google Research (2023). His research focuses on probabilistic inference, generative modeling and density estimation, with a particular emphasis on scientific applications. Together with his collaborators, he developed DINGO, a leading machine learning approach for gravitational-wave data analysis. His research is published in top venues for science (e.g., Nature, Physical Review Letters) and machine learning (e.g., NeurIPS, ICLR).
Abstract: Gravitational-wave (GW) astronomy promises groundbreaking discoveries in the coming decades, but its progress is bottlenecked by the computational challenges of large-scale and real-time data analysis. I will present a machine learning (ML) approach for fast and accurate GW inference that addresses these challenges. This work combines simulation-based inference, generative modeling, equivariant ML, and classical sampling techniques. I will demonstrate how ML enables new scientific capabilities in GW astronomy and, conversely, how the demands of this domain drive fundamental innovations in ML, with applications beyond astrophysics.
9:40 - 10:20 | Tatjana Chavdarova (Politecnico di Milano)
Learning Dynamics in Multiplayer Games
Bio: Tatjana Chavdarova is a visiting professor in the Department of Electronics, Information, and Bioengineering (DEIB) at Politecnico di Milano (Polimi), where she collaborates with Nicola Gatti and Nicolò Cesa-Bianchi. Her research lies at the intersection of game theory and machine learning, with a particular emphasis on optimization and algorithmic innovation. She holds a Ph.D. in machine learning from EPFL and Idiap, where she was supervised by François Fleuret. During her doctoral studies, she completed internships at Mila, working with Yoshua Bengio and Simon Lacoste-Julien, and at DeepMind, under the mentorship of Irina Jurenka (formerly Higgins). Following her Ph.D., Tatjana served as a Postdoctoral Research Scientist at EPFL’s Machine Learning and Optimization (MLO) lab with Martin Jaggi, and later joined UC Berkeley’s Department of Electrical Engineering and Computer Science (EECS) as a Postdoctoral Researcher working with Michael I. Jordan. Her research has been supported by the Swiss National Science Foundation through the Early.Postdoc.Mobility and Postdoc.Mobility fellowships.
Abstract: Intelligence frequently evolves through interaction and competition. In a similar vein, advanced AI algorithms often depend on competing learning objectives. Whether through data sampling, environmental interactions, or self-play methods, agents continuously refine their strategies to reach an equilibrium—a state where competing objectives are balanced. This talk delves into the learning dynamics within multi-player games, where players adapt their strategies to achieve equilibrium. We will explore how these equilibrium-seeking dynamics differ from single-player optimization, tackling key challenges such as rotational dynamics, noise, and constraints. The discussion will draw on examples from machine learning, including robust objectives, generative adversarial networks, and multi-agent reinforcement learning, emphasizing the significance of learning dynamics in these areas.
10:20 - 11:00 | Julius von Kügelgen (ETH Zürich)
Causal Representation Learning for Bioinformatics
Bio: Julius von Kügelgen is a postdoc at the Seminar for Statistics at ETH Zürich. His research lies at the intersection of causal inference and machine learning. He obtained his PhD in Machine Learning from the University of Cambridge and the Max Planck Institute for Intelligent Systems. His work has been recognized with the Google PhD Fellowship, a Best Paper Award at the Conference on Causal Learning and Reasoning, and the Cambridge PhD Prize in Quantitative Research. Prior to his PhD, Julius studied Mathematics (BSc, MSci) at Imperial College London and Artificial Intelligence (MSc) at UPC Barcelona and TU Delft.
Abstract: Many scientific questions are fundamentally causal in nature. Yet, existing causal inference methods cannot easily handle complex, high-dimensional data. Causal representation learning (CRL) seeks to fill this gap by embedding causal models in the latent space of a machine learning model. In this talk, I will provide an overview of my prior work on the theoretical foundations of CRL. I will then present ongoing work on leveraging CRL methods for problems in bioinformatics, specifically for predicting the effects of unseen drug or gene perturbations from omics measurements. Since CRL requires rich experimental data, single-cell biology offers unique opportunities for gaining new scientific insights with such methods.
11:00 - 11:30 | Coffee Break
11:30 - 12:10 | Weiyang Liu (MPI-IS)
Towards Principled Adaptation of Foundation Models
Bio: Weiyang Liu is currently a postdoctoral researcher at Max Planck Institute for Intelligent Systems, hosted by Bernhard Schölkopf. He received his PhD in Machine Learning from University of Cambridge and Max Planck Institute for Intelligent Systems, jointly advised by Adrian Weller and Bernhard Schölkopf. His research focuses on the principled modeling of inductive bias for generalizable and reliable machine learning. He has received the Baidu Fellowship, Hitachi Fellowship, and was a Qualcomm Innovation Fellowship Finalist. His work has received the 2023 IEEE Signal Processing Society Best Paper Award, Best Demo Award at HCOMP 2022, and multiple oral/spotlight presentations at conferences including ICLR, NeurIPS, and CVPR. His work has been cited over 10,000 times according to Google Scholar.
Abstract: As foundation models become increasingly ubiquitous, the challenge of achieving efficient yet reliable adaptation to downstream tasks grows in importance. In this talk, I will introduce two families of principled approaches to foundation model adaptation. First, I will present orthogonal finetuning, a weight-based adaptation framework that achieves parameter-efficient adaptation while effectively preserving pretrained knowledge within foundation models. Second, I will introduce verbalized machine learning, an input-based adaptation framework that leverages foundation models' instruction-following capabilities to approximate functions through natural language prompt learning. Finally, I will discuss the challenges and opportunities that arise in foundation model adaptation.
12:10 - 12:50 | Chulin Xie (University of Illinois Urbana-Champaign)
Improving Trustworthiness in Foundation Models: Assessing and Mitigating ML Risks
Bio: Chulin Xie is a PhD candidate in Computer Science at the University of Illinois Urbana-Champaign, advised by Professor Bo Li. Her research focuses on the principles and practices of trustworthy machine learning, addressing the safety, privacy, and generalization challenges of foundation models, agents, and federated (distributed) learning. Her work was recognized with an Outstanding Paper Award at NeurIPS 2023 and as a Best Research Paper Finalist at VLDB 2024. She is a recipient of a Rising Star in Machine Learning award and an IBM PhD Fellowship. During her PhD, she gained industry experience through research internships at NVIDIA, Microsoft, and Google.
Abstract: As machine learning (ML) models continue to scale in size and capability, they expand the surface area for safety and privacy risks, raising concerns about model trustworthiness and responsible data use. My research uncovers and mitigates these risks. In this presentation, I will focus on the two cornerstones of trustworthy foundation models and agents: safety and privacy. For safety, I will introduce our evaluation platforms designed to assess the trustworthiness risks in Large Language Models (LLMs) and LLM-based code agents. For privacy, I will present a solution for protecting data privacy with a synthetic text generation algorithm under differential privacy guarantees. The algorithm requires only LLM inference API access, without model training, enabling efficient and safe text sharing. Finally, I will conclude with my future research plan for improving trustworthiness in foundation model-based systems.
12:50 - 13:50 | Lunch
18:00 - 18:40 | Xi Wang (ETH Zurich)
Learning to Interact by Learning to Predict
Bio: Xi Wang is an established researcher in the Computer Vision and Geometry Lab of Prof. Marc Pollefeys at ETH Zurich, while continuing to collaborate with Prof. Luc Van Gool at INSAIT. She is also a junior group leader at TU Munich and the Munich Center for Machine Learning, funded by the BMBF. She was an ETH Postdoc Fellow in the Advanced Interactive Technologies lab and completed her PhD in the Computer Graphics Group at TU Berlin. During her PhD, she visited MIT, working in the Computational Perception & Cognition Group, and later interned at Adobe Research. Her research interests lie at the intersection of computer vision & graphics and vision science. Her goal is to bring human common sense and behavior patterns into machine learning. Her current research focuses on vision-language multimodal learning, with an emphasis on understanding how humans' intent drives their actions and their interactions with their surroundings. She is excited to learn about human behavior patterns and to leverage this knowledge in computational models and applications.
Abstract: Research in artificial intelligence continues to advance quickly, outperforming humans in many tasks and making its way into our daily lives. However, beneath their superior performance, current technologies are limited in how they perceive, process, and understand our visual world, and they struggle to understand and interact with people. These issues raise the core question of my research: how do we build intelligent systems that can interact with people and offer assistance in a natural and seamless way? In this talk, I will present our work following the learn-to-predict paradigm through an egocentric perspective.