Upcoming Meeting in Spring 2026

Time         Program
14:30-14:45  Opening
14:45-15:30  Eni Musta (University of Amsterdam) - Can we detect treatment effect waning from time-to-event data?

Does a vaccine lose its protective power over time? This question matters for public health decisions, such as when to recommend booster doses, but answering it turns out to be harder than one might expect. In this talk, we show that standard methods for detecting treatment effect waning from clinical trial data can be misleading. This holds not only for the conventional hazard ratio, which is known to suffer from selection bias, but also for recently proposed causal alternatives, including the "challenge effect" framework based on hypothetical controlled exposure trials. It can happen, for example, that these estimands suggest waning of the treatment effect in each of two subpopulations but anti-waning in the combined population, and vice versa. This phenomenon, which resembles Simpson's paradox and leads to a decision-theoretic contradiction, occurs in this setting even when properly causal estimands are used. Moreover, without untestable modelling assumptions, the same observed data can be consistent with two opposite realities: one in which the treatment becomes more effective over time for all subpopulations, and another in which it wanes for all subpopulations. We discuss which assumptions might resolve this paradox and explore some alternative approaches.
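To make the aggregation behind such a reversal concrete, here is a minimal numerical sketch. It assumes an effect measure that pools linearly over subpopulations (e.g. a risk difference), with weights given by each subgroup's share of the at-risk population at that time; all numbers are invented for illustration and are not from the talk.

# Simpson-style reversal: the effect wanes within each subgroup,
# yet the pooled effect strengthens, because the at-risk
# composition shifts over time (illustrative numbers only).
effect = {                # treatment effect (negative = protective)
    "A": {"t1": -0.10, "t2": -0.08},   # wanes within A
    "B": {"t1": -0.30, "t2": -0.25},   # wanes within B
}
weight = {                # share of the at-risk population
    "t1": {"A": 0.8, "B": 0.2},
    "t2": {"A": 0.2, "B": 0.8},
}
for t in ("t1", "t2"):
    pooled = sum(weight[t][g] * effect[g][t] for g in ("A", "B"))
    print(t, round(pooled, 3))
# prints: t1 -0.14, t2 -0.216 -- pooled anti-waning despite
# waning in both subpopulations, since the high-benefit subgroup
# makes up a larger share of the at-risk population later.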
15:30-15:45  Break
15:45-16:30  Hidde Fokkema (University of Amsterdam) - Sample-efficient Learning of Concepts with Theoretical Guarantees: from Data to Concepts without Interventions

Concept Bottleneck Models (CBMs) address some of the challenges modern ML approaches face by learning interpretable concepts from high-dimensional data, e.g. images, which are then used to predict labels. In this talk, I will describe a new framework that provides theoretical guarantees on the correctness of the learned concepts and on the number of required labels, without requiring any interventions. Our framework leverages causal representation learning (CRL) methods to learn latent causal variables from high-dimensional observations in an unsupervised way, and then learns to align these variables with interpretable concepts using only a few concept labels. We propose a linear and a non-parametric estimator for this mapping, providing a finite-sample, high-probability guarantee in the linear case and an asymptotic consistency result for the non-parametric estimator. We evaluate our framework on synthetic and image benchmarks, showing that the learned concepts have fewer impurities and are often more accurate than those of other CBMs, even in settings with strong correlations between concepts.
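As a toy illustration of the two-stage idea, here is a minimal sketch with simulated data (not the authors' code or estimator): stage 1 is stubbed by latents that equal the true concepts up to an unknown linear mixing, standing in for what a CRL method would recover, and stage 2 fits the linear alignment from a small number of concept labels.

import numpy as np

rng = np.random.default_rng(0)

# Ground-truth interpretable concepts (unknown to the learner).
n, d = 5000, 4
concepts = rng.normal(size=(n, d))

# Stand-in for stage 1: CRL methods typically recover the latents
# only up to some invertible transformation; here, a linear mixing.
mixing = rng.normal(size=(d, d))
latents = concepts @ mixing.T

# Stage 2: learn the alignment from a small labelled subset
# (the sample-efficiency claim: few concept labels suffice to fit
# a linear map in d dimensions).
n_labelled = 50
Z, C = latents[:n_labelled], concepts[:n_labelled]
W, *_ = np.linalg.lstsq(Z, C, rcond=None)  # linear estimator

# Evaluate concept recovery on the remaining (unlabelled) data.
pred = latents[n_labelled:] @ W
err = np.mean((pred - concepts[n_labelled:]) ** 2)
print(f"mean squared concept error: {err:.2e}")

With an invertible mixing, the 50 labelled examples are enough for the least-squares map to recover the concepts almost exactly; the non-parametric estimator discussed in the talk would replace the linear fit when the latent-to-concept relationship is nonlinear.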
16:30-17:30  Drinks at Polder