Conference Agenda

Overview and details of the sessions of this conference.

Session Overview
Session
Causal Inference Longitudinal
Time:
Wednesday, 01/Oct/2025:
9:00am - 10:30am

Session Chair: Lukas Junker
Location: Raum L 113

Presentations

Towards a Clearer Understanding of Causal Estimands: The Importance of Joint Effects in Longitudinal Designs with Time-Varying Treatments

Lukas Junker1, Ramona Schödel2, Florian Pargent1

1LMU, Germany; 2Charlotte Fresenius Universität München

Longitudinal study designs pose unique challenges for causal reasoning. Joint effects are central in the causal inference literature because they extend average treatment effects to repeated interventions, offering a practical measure of combined intervention effects over time.
Besides explaining the concept of joint effects, we discuss their applicability to psychological research. We focus on their interpretation and whether they can realistically be identified in longitudinal observational studies in psychology. In this context, addressing unmeasured confounding is a crucial aspect of causal inference, yet it is insufficiently discussed in the psychological literature. To bridge this gap, we propose a class of research designs for psychological studies where treatment assignment is driven by observable covariates so that joint effects can be identified under more reasonable assumptions.
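As a toy illustration of the joint-effect idea (not part of the talk; all numbers and variable names are invented), the g-formula for a two-wave design computes the mean outcome under a joint intervention on both treatments by standardizing over the covariate distribution, with the time-varying covariate drawn under the first intervened treatment:

```python
from itertools import product

# Hypothetical discrete data-generating model: L1 baseline covariate,
# A1 first treatment, L2 time-varying covariate affected by A1,
# A2 second treatment, Y final outcome. All coefficients are invented.
p_L1 = {0: 0.5, 1: 0.5}

def p_L2(l2, a1, l1):
    p = 0.2 + 0.3 * a1 + 0.3 * l1        # P(L2 = 1 | A1, L1)
    return p if l2 == 1 else 1 - p

def mean_Y(a1, a2, l1, l2):
    return 1.0 * a1 + 2.0 * a2 + l1 + l2  # E[Y | A1, A2, L1, L2]

# G-formula for the mean outcome under the joint intervention (a1, a2).
def g_formula(a1, a2):
    return sum(p_L1[l1] * p_L2(l2, a1, l1) * mean_Y(a1, a2, l1, l2)
               for l1, l2 in product((0, 1), (0, 1)))

# Joint effect of always treating versus never treating.
joint_effect = g_formula(1, 1) - g_formula(0, 0)
```

In this made-up model the joint effect combines both direct treatment effects with the indirect path of A1 through L2, which is exactly what distinguishes it from a single-wave average treatment effect.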



Estimating Causal Effects of Time-Varying Treatments in Latent State-Trait Models for Intensive Longitudinal Data

Fabian Felix Münch1, Jana Holtmann2, Tobias Koch1

1Friedrich-Schiller-Universität Jena; 2Universität Leipzig

With advancements in data collection methods such as experience sampling, new challenges arise in identifying causal effects in intensive longitudinal data. If allocation to a (possibly time-varying) treatment is not randomized, as is the case in most observational studies, causal effects may be confounded by (possibly time-varying) covariates. However, adjusting for time-varying covariates in the outcome model may lead to post-treatment bias. In such cases, g-methods, e.g., inverse probability of treatment weighting (IPTW; Robins et al., 2000), can be used to estimate causal effects. In this talk, we show how g-methods can be combined with latent state-trait (LST; Steyer et al., 1992; Steyer et al., 2015) models for intensive longitudinal data to estimate causal changes in trait stability and situational carry-over. Building on the recently introduced moderated nonlinear LST approach (MNLST; Oeltjen et al., 2023), we illustrate how time-varying treatment variables can be included to explain key parameters of LST models, such as mean trait level, trait variability, or autoregressive effects.
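The IPTW step the abstract refers to can be sketched in a minimal, single-time-point form (a toy simulation, not the talk's LST application; all data and numbers are invented): estimate the treatment model, weight each observation by the inverse probability of its observed treatment, and compare weighted outcome means.

```python
import random

random.seed(1)

# Hypothetical simulated data: L is a binary confounder, A a binary treatment
# whose assignment probability depends on L, Y a continuous outcome with a
# true treatment effect of 2.0.
n = 20000
data = []
for _ in range(n):
    L = 1 if random.random() < 0.5 else 0
    A = 1 if random.random() < (0.8 if L else 0.2) else 0
    Y = 2.0 * A + 3.0 * L + random.gauss(0, 1)
    data.append((L, A, Y))

# Step 1: estimate the treatment model P(A = 1 | L) from the data.
pA = {lvl: sum(a for (l, a, _) in data if l == lvl) /
           sum(1 for (l, a, _) in data if l == lvl)
      for lvl in (0, 1)}

# Step 2: inverse probability weights w = 1 / P(A = a | L).
def weight(l, a):
    return 1.0 / (pA[l] if a == 1 else 1.0 - pA[l])

# Step 3: weighted outcome means form a pseudo-population in which treatment
# is independent of L; their difference estimates the average treatment effect.
def weighted_mean(treated):
    num = sum(weight(l, a) * y for (l, a, y) in data if a == treated)
    den = sum(weight(l, a) for (l, a, _) in data if a == treated)
    return num / den

ate_hat = weighted_mean(1) - weighted_mean(0)
```

A naive unweighted comparison of treated and untreated means would be biased upward here, because treated units are more likely to have L = 1, which itself raises Y.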



Combining Factor Scores and G-estimation to Handle Unmeasured Confounding in Latent Mediation Analysis

Sofia Morelli, Roberto Faleh, Holger Brandt

Methods Center, University Tübingen, Germany

Modelling mediation processes in longitudinal intervention studies provides a valuable framework for understanding underlying causal mechanisms. However, most standard mediation analyses rely on the often unrealistic assumption of no unmeasured confounding between the mediator and the outcome, called sequential ignorability.
By using G-estimation in place of standard estimation techniques such as maximum likelihood or least squares, this assumption can be relaxed and replaced with more plausible conditions, such as rank preservation or no essential heterogeneity. To extend this approach to latent constructs, we develop a factor score-based version using a two-stage method of moments to correct for bias introduced by measurement error.
We evaluate the performance of the proposed method through simulation studies, comparing it to standard structural equation modeling (SEM), and demonstrate its advantages in settings with unmeasured confounding.
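The core g-estimation idea can be shown in its simplest form (a toy sketch with invented data, using a measured covariate rather than the latent, unmeasured-confounding setting the talk develops): under a structural model Y = Y(0) + psi * A, choose psi so that the "blipped-down" outcome H(psi) = Y - psi * A is uncorrelated with the treatment residual A - E[A | L]. With a linear estimating equation, psi has a closed form.

```python
import random

random.seed(7)

# Hypothetical data: L a measured covariate driving treatment assignment,
# true treatment effect psi = 1.5 under a rank-preserving structural model.
n = 20000
data = []
for _ in range(n):
    L = 1 if random.random() < 0.5 else 0
    A = 1 if random.random() < (0.7 if L else 0.3) else 0
    Y0 = 2.0 * L + random.gauss(0, 1)   # counterfactual outcome without treatment
    Y = Y0 + 1.5 * A
    data.append((L, A, Y))

# Treatment model E[A | L], estimated from the data.
pA = {lvl: sum(a for (l, a, _) in data if l == lvl) /
           sum(1 for (l, a, _) in data if l == lvl)
      for lvl in (0, 1)}

# G-estimation: solve sum (A - E[A|L]) * (Y - psi * A) = 0 for psi,
# i.e. make H(psi) = Y - psi*A orthogonal to the treatment residual.
num = sum((a - pA[l]) * y for (l, a, y) in data)
den = sum((a - pA[l]) * a for (l, a, _) in data)
psi_hat = num / den
```

The talk's contribution goes beyond this sketch by relaxing the estimating equation to allow unmeasured mediator-outcome confounding and by correcting factor scores for measurement error via a two-stage method of moments.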



A powerful arsenal of theory-building tools, or just hopelessly lost in 'modeling'? A look at structural equation models from a perspective of scientific logic.

Andreas Klein

Goethe University Frankfurt, Germany

Explanation, representation, and falsifiability are critical functions of a scientific model. Against the background of philosophy-of-science writings by Hempel, Carnap, and Suppes, we discuss the features of structural equation and related statistical models. What claims to scientific explanation can be made by applying such a model, and what claims are clearly not substantiated and hardly more than wishful thinking? Which aspects of their model structure are verifiable, which are not, and what are the consequences for scientific explanation? To what degree are statements about the data-generating structure of the model verifiable or falsifiable? And, finally, from the standpoint of scientific explanation, to what degree does good statistical fit permit statements about the confirmation of a theory? The paper attempts to address several of these questions and closes with some recommendations for the proper use and scientific interpretation of these models.