Conference Agenda
Session: Causal Inference Longitudinal

Presentations
Towards a Clearer Understanding of Causal Estimands: The Importance of Joint Effects in Longitudinal Designs with Time-Varying Treatments
1: LMU, Germany; 2: Charlotte Fresenius Universität München

Longitudinal study designs pose unique challenges for causal reasoning. Joint effects are central in the causal inference literature because they extend average treatment effects to repeated interventions, offering a practical measure of combined intervention effects over time.

Estimating Causal Effects of Time-Varying Treatments in Latent State-Trait Models for Intensive Longitudinal Data
1: Friedrich-Schiller-Universität Jena; 2: Universität Leipzig

With advances in data collection methods such as experience sampling, new challenges arise in identifying causal effects in intensive longitudinal data. If allocation to a (possibly time-varying) treatment is not randomized, as in most observational studies, causal effects may be confounded by (possibly time-varying) covariates. However, adjusting for time-varying covariates in the outcome model may then lead to post-treatment bias. In such cases, g-methods, e.g., inverse probability of treatment weighting (IPTW; Robins et al., 2000), can be used to estimate causal effects. In this talk, we show how g-methods can be used with latent state-trait (LST; Steyer et al., 1992; Steyer et al., 2015) models for intensive longitudinal data to estimate causal changes in trait stability and situational carry-over. Building upon the recently introduced moderated nonlinear LST approach (MNLST; Oeltjen et al., 2023), we illustrate how time-varying treatment variables can be included to explain key model parameters in LST models, such as mean trait level, trait variability, or autoregressive effects.
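The weighting idea behind IPTW can be illustrated in a few lines: estimate each unit's probability of receiving its observed treatment given its covariate history, then weight by the inverse of that probability so that treatment becomes independent of the measured confounders in the weighted sample. The sketch below is a toy single-time-point version with one binary confounder; the variable names (L, A, Y) and the data are illustrative assumptions, not material from the talk.

```python
# Minimal IPTW sketch for one time point and one binary confounder L.
# In the time-varying setting described in the abstract, the same logic
# is applied per time point and the weights are multiplied over time.
from collections import defaultdict

def iptw_effect(data):
    """data: list of (L, A, Y) with binary confounder L and binary treatment A."""
    # Estimate the propensity score P(A = 1 | L) empirically per stratum of L.
    n_l = defaultdict(int)   # units observed at each value of L
    n_la = defaultdict(int)  # treated units at each value of L
    for l, a, _ in data:
        n_l[l] += 1
        n_la[l] += a
    ps = {l: n_la[l] / n_l[l] for l in n_l}

    # Weight each unit by 1 / P(A = a_i | L = l_i); in the weighted sample,
    # treatment is no longer associated with L.
    num1 = den1 = num0 = den0 = 0.0
    for l, a, y in data:
        w = 1.0 / (ps[l] if a == 1 else 1.0 - ps[l])
        if a == 1:
            num1 += w * y; den1 += w
        else:
            num0 += w * y; den0 += w
    return num1 / den1 - num0 / den0

# Toy data: treatment is more likely when L = 1, and L also raises Y,
# so the naive difference in means overstates the treatment effect.
data = ([(1, 1, 3.0)] * 6 + [(1, 0, 2.0)] * 2 +
        [(0, 1, 1.0)] * 2 + [(0, 0, 0.0)] * 6)
print(iptw_effect(data))  # prints 1.0
```

In this toy sample the naive treated-minus-control difference is inflated by confounding through L, while the weighted contrast recovers the constant per-stratum effect of 1.0.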
Combining Factor Scores and G-estimation to Handle Unmeasured Confounding in Latent Mediation Analysis
Methods Center, University of Tübingen, Germany

Modelling mediation processes in longitudinal intervention studies provides a valuable framework for understanding underlying causal mechanisms. However, most standard mediation analyses rely on the often unrealistic assumption of no unmeasured confounding between the mediator and the outcome, called sequential ignorability.

A powerful arsenal of theory-building tools, or just hopelessly lost in 'modeling'? A look at structural equation models from a perspective of scientific logic.
Goethe University Frankfurt, Germany

Explanation, representation, and falsifiability are critical functions of a scientific model. Against the background of philosophy of science writings by Hempel, Carnap, and Suppes, we discuss the features of structural equation and related statistical models. What claims to scientific explanation can be made by applying such a model, and which claims are clearly not substantiated and hardly more than wishful thinking? Which aspects of the model structure are verifiable and which are not, and what are the consequences for scientific explanation? To what degree are statements about the data-generating structure of the model verifiable or falsifiable? And, finally, from the standpoint of scientific explanation, to what degree does good statistical fit permit statements about the confirmation of a theory? The paper addresses several of these questions and closes with recommendations for the proper use and scientific interpretation of these models.