We are thrilled to announce the addition of two exceptional Principal Investigators (PIs) to our Institute. Their expertise and innovative approaches promise to enrich our research landscape and inspire future collaborations. Dr. T. Konstantin Rusch and Dr. Shiwei Liu will join the ELLIS Institute Tübingen as PIs and Hector Endowed Fellows in June and July 2025, respectively. They will also be co-affiliated with the Max Planck Institute for Intelligent Systems (MPI-IS) and the Tübingen AI Center as Independent Research Group Leaders. Their short biographies and research interests are provided below.
Dr. T. Konstantin Rusch
Research Group: Computational Applied Mathematics & Artificial Intelligence Lab (CAMAIL)
Starting Date: June 1st, 2025
Dr. T. Konstantin Rusch’s research aims to advance AI by addressing fundamental limitations, including the lack of rigorous safety guarantees and computational inefficiencies. He combines AI with computational applied mathematics to develop AI systems grounded in rigorous mathematical foundations, and he builds efficient AI frameworks that incorporate the structure of physical systems into model design, resulting in better inductive biases. His objective is to ensure that these advances are both theoretically robust and impactful in tangible, real-world settings. His research agenda thus centers on developing AI systems that are mathematically rigorous, computationally efficient, and practically applicable, merging foundational AI research with solutions tailored to real-world challenges.
Konstantin is an SNSF postdoctoral research fellow at the Massachusetts Institute of Technology (MIT), working in the Computer Science and Artificial Intelligence Laboratory (CSAIL) and advised by Professor Daniela Rus. Before this, he completed a PhD in applied mathematics and machine learning at ETH Zurich in 2023 under the supervision of Professor Siddhartha Mishra. During his doctoral studies, he held a second affiliation at UC Berkeley, advised by Professor Michael Mahoney, and he also held visiting research appointments at UC Berkeley and the University of Oxford.
Konstantin’s prior work has already demonstrated how leveraging structures from physical systems can lead to state-of-the-art AI models that overcome fundamental limitations within their respective model classes. He has also shown how AI can be integrated into computational mathematics, pushing its frontiers, and his models have been applied effectively across a range of applications, including by large industrial research groups.
Dr. Shiwei Liu
Research Group: WEI Lab (Wild, Efficient, and Innovative AI Lab)
Starting Date: July 15th, 2025; visiting the Institute from May 2025
Dr. Shiwei Liu’s research primarily aims to empirically understand the behavior of deep neural networks and to develop deep learning algorithms and architectures that learn better, faster, and cheaper. One central theme of his research is to leverage, understand, and expand the role of low-dimensionality in neural networks, whose impact spans many important topics, such as efficient training, inference, and scaling of large foundation models, robustness and trustworthiness, and generative AI.
Shiwei Liu is a Royal Society Newton International Fellow at the University of Oxford. He was previously a Postdoctoral Fellow at the University of Texas at Austin. He obtained his Ph.D. cum laude from Eindhoven University of Technology in 2022. Dr. Liu has received two Rising Star Awards, from KAUST and from the Conference on Parsimony and Learning (CPAL), and his Ph.D. thesis received the 2023 Best Dissertation Award from Informatics Europe.
In March 2024, Shiwei Liu gave a talk on sparsity in neural networks at the ELLIS Institute Scientific Symposium, held at MPI-IS. While existing research predominantly focuses on exploiting sparsity for model compression, such as deriving sparse neural networks from pre-trained dense ones, many other promising benefits, including scalability, robustness, and fairness, remain under-explored. His talk delved into these overlooked advantages. Specifically, he showed how sparsity can boost the scalability of neural networks by efficiently training sparse models from scratch, enabling a significant increase in model capacity without a proportional increase in computational or memory requirements. He also explored the future implications of sparsity for large language models, discussing its potential benefits for efficient LLM scaling, lossless LLM compression, and fostering trustworthy AI.
- Learn more about our current research groups here.
- More about the ELLIS Institute Tübingen gGmbH:
The ELLIS Institute Tübingen gGmbH is funded by a €100 million endowment from the Hector Foundation and €25 million from the state of Baden-Württemberg, and is located in the historic city of Tübingen, a beautiful university town in southwest Germany. The ELLIS Institute is set to become a world-renowned center for pioneering basic research in artificial intelligence. The Institute aims to attract the world's best machine learning talent and provide them with outstanding conditions to conduct research in a state-of-the-art facility in Tübingen. This vision is part of a broader initiative, the European Laboratory for Learning and Intelligent Systems (ELLIS), which aims to build a pan-European institution for machine learning research.