Biography:
Damien Ernst obtained his engineering degree and his PhD in Applied Sciences from the University of Liège, Belgium, in 1998 and 2003 respectively. He is currently a full professor at the University of Liège. His research interests include electrical power systems and reinforcement learning, a subfield of artificial intelligence. He is also the Chief Scientific Officer of Haulogy, a company developing intelligent software solutions for the energy sector. He is the co-author of over 300 research articles and two books. He has also won numerous awards for his research, including the prestigious 2018 Blondel Medal. He is regularly consulted by industry, government, international agencies, and the media for his profound understanding of the energy transition.
by Michal Valko | Founding Researcher | Isara Labs
Biography:
Michal is the Founding Researcher at Isara Labs, a tenured researcher at Inria, and a lecturer in the MVA master's program at ENS Paris-Saclay. Michal is primarily interested in designing algorithms that require as little human supervision as possible. That is why he works on methods and settings that can deal with minimal feedback, such as deep reinforcement learning, bandit algorithms, self-supervised learning, and self-play. Michal has recently worked on representation learning, world models, and deep (reinforcement) learning algorithms with theoretical underpinnings. In the past, he also worked on sequential algorithms with structured decisions, where exploiting the structure leads to provably faster learning. Michal is now working on large language models (LLMs), in particular providing algorithmic solutions for their scalable fine-tuning and alignment. He received his Ph.D. in 2011 from the University of Pittsburgh, obtained tenure at Inria in 2012, and started Google DeepMind Paris in 2018 with Rémi Munos. In 2024, he became the principal Llama engineer at Meta, building the online reinforcement learning stack and research for Llama 3.
Abstract:
How can machines learn meaningful representations from data without requiring expensive human annotations? This talk explores self-supervised learning—a paradigm shift that enables AI systems to learn from vast amounts of unlabeled data, much like humans learn through observation. We'll focus on BYOL (Bootstrap Your Own Latent), a method that trains neural networks to discover useful patterns by comparing different views of the same data point. Unlike traditional approaches that require carefully labeled examples, BYOL learns by predicting how the same entity appears under different transformations. Beyond images, we'll discuss how these ideas extend to diverse domains: analyzing social networks and community structures (BGRL), understanding neural recordings and biological signals (MYOW), and integrating multiple data modalities like text, images, and audio (BEAST). These techniques open possibilities for any field with abundant unlabeled data—from social media analysis and behavioral patterns to scientific discovery. The talk will be accessible to researchers across disciplines and will highlight opportunities for applying these methods to computational social science, network analysis, and other data-rich domains.
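To make the abstract's description concrete, here is a minimal NumPy sketch of two ingredients that BYOL-style training relies on: a loss that pulls together the representations of two augmented views of the same batch, and a target network updated as a slow exponential moving average of the online network. The single-layer "encoder", the noise augmentations, and all shapes are illustrative stand-ins, and the predictor head and gradient step of the actual method are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, W):
    """Toy encoder: one linear layer, standing in for a deep network."""
    return x @ W

def byol_loss(online_pred, target_proj):
    """Normalized MSE, i.e. 2 - 2 * cosine similarity, averaged over the batch."""
    p = online_pred / np.linalg.norm(online_pred, axis=1, keepdims=True)
    z = target_proj / np.linalg.norm(target_proj, axis=1, keepdims=True)
    return float(np.mean(2.0 - 2.0 * np.sum(p * z, axis=1)))

def ema_update(target_W, online_W, tau=0.99):
    """Target network slowly tracks the online network (no gradients flow here)."""
    return tau * target_W + (1.0 - tau) * online_W

# Two augmented "views" of the same batch (here: simple additive noise).
x = rng.normal(size=(8, 16))
view1 = x + 0.1 * rng.normal(size=x.shape)
view2 = x + 0.1 * rng.normal(size=x.shape)

W_online = rng.normal(size=(16, 4))
W_target = W_online.copy()  # target starts as a copy of the online weights

# Online network predicts the target's representation of the other view.
loss = byol_loss(encode(view1, W_online), encode(view2, W_target))
W_target = ema_update(W_target, W_online)
```

The loss is bounded in [0, 4] because it only compares directions of unit-normalized vectors; in full BYOL it is symmetrized over both view orderings and backpropagated through the online branch only.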
by Pierre Menard | Research Scientist | Meta
Biography:
Pierre Ménard is a Research Scientist at Meta, where he works on large language model agents. His research focuses on reinforcement learning, with particular interests in multi-agent systems, unsupervised learning, and reinforcement learning from human feedback. Before joining Meta, he was a postdoctoral researcher, most recently at ENS Lyon. He earned his PhD in machine learning from Université Toulouse III – Paul Sabatier, where he studied exploration-exploitation trade-offs in sequential decision-making problems.
Abstract:
We introduce Meta Agents Research Environments (ARE), a research platform for scalable creation of environments, integration of synthetic or real applications, and execution of agentic orchestrations. ARE provides simple abstractions to build complex and diverse environments, each with their own rules, tools, content, and verifiers, helping to bridge the gap between model development and real-world deployment. We also propose Gaia2, a benchmark built in ARE and designed to measure general agent capabilities. Beyond search and execution, Gaia2 requires agents to handle ambiguities and noise, adapt to dynamic environments, collaborate with other agents, and operate under temporal constraints. Unlike prior benchmarks, Gaia2 runs asynchronously, surfacing new failure modes that are invisible in static settings. Our experiments show that no system dominates across the intelligence spectrum: stronger reasoning often comes at the cost of efficiency, and budget scaling curves plateau, highlighting the need for new architectures and adaptive compute strategies. Perhaps more importantly, ARE abstractions enable continuous extension of Gaia2 to other environments, empowering the community to rapidly create new benchmarks tailored to their domains. In AI's second half, progress increasingly depends on defining meaningful tasks and robust evaluations to drive frontier capabilities forward.
Biography:
Ondřej Dušek is an Assistant Professor at Charles University in Prague, working on natural language generation and human-computer dialogue. His research focuses on generative language models, including large language models, mostly applied to data-to-text and dialogue response generation tasks. He is particularly interested in evaluating the quality of generated content, especially its semantic accuracy. After obtaining his PhD in Prague, Ondřej spent two years as a postdoc at Heriot-Watt University in Edinburgh. Back in Prague, he is currently the PI of an ERC Starting Grant that aims to produce fluent, accurate, and explainable natural language generation systems.
by Edoardo Ponti | Assistant Professor | University of Edinburgh
Biography:
I am an assistant professor in Natural Language Processing at the University of Edinburgh and an affiliated lecturer at the University of Cambridge. Over the past year, I was a visiting professor at NVIDIA. My research focuses primarily on efficient and modular architectures for foundation models, especially with respect to adaptive memory and end-to-end tokenization. Previously, I was a visiting postdoctoral scholar at Stanford University and a postdoctoral fellow at Mila Montreal and McGill University. In 2021, I obtained a PhD from the University of Cambridge. My research has been featured in The Economist and Scientific American, among others. I received a Google Research Faculty Award and several paper awards (Highlight Awards at ACL 2025 and Best Paper Awards at EMNLP 2021 and Repl4NLP 2019). I am a recipient of an ERC Starting Grant and a £2M ARIA grant. I am a Scholar of the European Lab for Learning and Intelligent Systems (ELLIS) and part of the TACL journal editorial team.
by Rebekka Burkholz | Group Leader | Relational Machine Learning Group at the Helmholtz Center CISPA
Biography:
Rebekka has led the Relational Machine Learning Group at the Helmholtz Center CISPA since 2021. Her research on sparse deep learning has been supported by an ERC Starting Grant since December 2023 and by Apple Research since August 2025. From 2019 to 2021, she was a postdoc at the Biostatistics Department of the Harvard T.H. Chan School of Public Health, working with John Quackenbush; from 2017 to 2018 at the Institute for Machine Learning at ETH Zurich with Joachim Buhmann; and from 2016 to 2017 at the Chair of Systems Design at ETH Zurich with Frank Schweitzer. From 2013 to 2016, her PhD research at the ETH Risk Center was supervised by Frank Schweitzer and co-supervised by Hans J. Herrmann. Her thesis on systemic risk won the Zurich Dissertation Prize, and her work on international maize trade received the CSF Best Contribution Award. She studied Mathematics and Physics at TU Darmstadt.
Abstract:
Deep learning continues to achieve impressive breakthroughs across disciplines but relies on increasingly large neural network models that are trained on massive datasets. Their development incurs costs that only a few labs can afford, which prevents global participation in the creation of related technologies. In this talk, we will ask whether it really has to be this way and discuss some of the major challenges that limit the success of deep learning at smaller scales. Last but not least, we will provide an overview of our most promising solution strategies.
by Marina Esteban-Medina | Postdoctoral Researcher | ETH Zürich AI Center, ETH Zürich’s Department of Biosystems Science and Engineering
Biography:
Dr. Marina Esteban-Medina is a postdoctoral researcher at the ETH Zürich AI Center and ETH Zürich’s Department of Biosystems Science and Engineering (Switzerland). She works at the intersection of AI and translational biomedicine, developing machine-learning and statistical models to understand tumor heterogeneity and treatment response, with a particular focus on pediatric cancer.
She earned a degree in Biotechnology and an MSc in Computational Biology from Universidad Carlos III de Madrid, followed by a PhD in Computer Science from the University of Córdoba. During her doctoral work in a Computational Medicine platform, she built predictive models that integrated clinical and genomic data from the Andalusian healthcare system with public resources to identify therapeutic targets in cancer and rare diseases, collaborating with the Francis Crick Institute (London) and Institut Curie (Paris).
Her research has been supported by competitive international funding, including an EMBO Scientific Exchange Grant and an ETH AI Center Postdoctoral Fellowship. She contributes to international initiatives such as the Disease Map, several ELIXIR communities, and the Immune Digital Twins. Her interests lie in bridging robust AI methodology with clinically actionable insights.
Abstract:
Predicting tumor drug response is a fundamental challenge in precision oncology, complicated by cellular heterogeneity, where minor resistant sub-populations drive clinical relapse. While single-cell transcriptomics offers a high-resolution view of these mixtures, a significant domain gap remains: most drug-response labels originate from bulk cell-line screens, whereas clinical deployment targets single-cell patient data.
In this talk, I will explore how to bridge this gap through biologically grounded domain adaptation. I will examine why standard alignment-focused deep transfer methods often fail when they overlook the complex biological realities of the transfer problem. Robust translation requires moving beyond predicting a single sensitivity score toward modeling response as a structured change in cellular state. In this direction, recent generative frameworks have opened promising paths toward a comprehensive modeling of these cellular responses. Finally, I will highlight the importance of uncertainty estimation as a guardrail, ensuring that predictions in new single-cell datasets are safely bounded by the limits of the model's training knowledge.
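As a toy illustration of the "uncertainty estimation as a guardrail" idea, the sketch below uses disagreement within a hypothetical ensemble of predictors to decide when to abstain rather than extrapolate. The random linear models, the threshold, and the input scales are invented for the example; they stand in for trained drug-response models and for cells near versus far from the training distribution.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical ensemble of K drug-response predictors (random linear models
# standing in for trained networks over a gene-expression vector).
K, n_genes = 5, 20
ensemble = [rng.normal(size=n_genes) for _ in range(K)]

def predict_with_guardrail(x, ensemble, max_std=0.5):
    """Return a mean prediction only when the ensemble members agree;
    otherwise abstain (None), flagging the input as out-of-distribution."""
    preds = np.array([w @ x for w in ensemble])
    if preds.std() > max_std:
        return None  # uncertainty too high: defer instead of guessing
    return float(preds.mean())

# A "cell" close to the scale the models agree on vs. a far-out one.
x_in = 0.01 * rng.normal(size=n_genes)
x_out = 10.0 * rng.normal(size=n_genes)

confident = predict_with_guardrail(x_in, ensemble)   # members agree: a number
deferred = predict_with_guardrail(x_out, ensemble)   # members disagree: None
```

Real systems would replace ensemble disagreement with whichever uncertainty estimate the model family supports, but the gating logic, predict only inside the region the training data covers, is the same.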
Biography:
Ivan Cimrák is a Professor at University of Žilina (Slovakia), where he works at the intersection of artificial intelligence and biomedical engineering. His recent research focuses on neural networks for medical imaging, with an emphasis on mammography—including analyses of mammography datasets for deep learning and methods that leverage longitudinal screening information to support earlier and more reliable detection. In collaboration with clinical partners, he also develops AI-assisted workflows for digital pathology, such as computer-supported scoring of proliferation markers (e.g., Ki67) from routine histology slides.
Alongside AI, he has a strong background in computational modelling of blood flow and cell mechanics, and he has contributed widely used open-source implementations for simulating deformable cells in fluid environments.
His work has been recognized by multiple awards, including the Slovak “Cena za vedu a techniku” (Science and Technology Award) 2024 in the category Personality of Science and Technology, and he was a finalist of the ESET Science Award 2024 (Outstanding Academic).