Upcoming Meeting: Fall 2024
- Dec 4th, 2024, from 14:30 to 17:30 at UvA Science Park, Lab42 L3.36.
| Time | Program |
|---|---|
| 14:30-15:30 | **Riccardo Massidda (University of Pisa): Frameworks for Representing and Learning Abstract Causal Models.** By extending graphical probabilistic models, causal models are fundamental tools for decision-making and what-if reasoning under uncertainty. Despite growing research on recovering causal graphs from data, applying these methods to datasets with many variables remains a pressing issue. Causal Abstraction is a recently defined framework that enables concise representations of large systems through smaller causal models. In this talk, Riccardo will survey state-of-the-art definitions of Causal Abstraction and discuss their most important aspects and differences. Intuitively, we will see how abstract models can retain causal properties of the system by aggregating the higher-dimensional representation. The talk will then turn to the problem of learning abstract causal models, presenting existing approaches and discussing their common assumptions and limitations. |
| 15:30-16:30 | **Phillip Lippe (UvA): Causal Representation Learning across Multiple Environments.** Identifying causal variables and their relations from high-dimensional observations is of great interest in applications such as robotics and embodied AI. Since causal variables are not necessarily identifiable in the most general setting, recent research has focused on using observations from multiple, slightly perturbed environments (e.g., via interventions) to enable identifiability. This talk presents the current state and open challenges of causal representation learning in multi-environment settings. We will first review existing methods, including our work on CITRIS and BISCUIT, which leverage intervention-based data to learn causal representations. However, real-world scenarios often involve dynamic environments with varying causal structures and observation functions. To address this, we will discuss our ongoing research on learning causal representations that generalize to unseen environments. Specifically, we will explore under which settings causal representations can identify multiple environments from samples of a joint observation distribution. Furthermore, we will explore the use of object-centric encodings to enable zero-shot generalization to novel, compositional environments. |
| 16:30-17:30 | Drinks at Polder |