Session
Statistical Inference

Presentations
A Simulation Study to Compare Inferential Properties when Modelling Ordinal Outcomes: The Case for the (Plain but Robust) Proportional Odds Model

1 Department of Statistics, TU Dortmund University; 2 Research Center Trustworthy Data Science and Security, University Alliance Ruhr (UA Ruhr); 3 Core Facility Biostatistics, Central Institute of Mental Health, Medical Faculty Mannheim, Heidelberg University

Ordinal measurements are common outcomes in studies within psychology as well as in the social and behavioral sciences. Choosing an appropriate regression model for the analysis of such data is a difficult task.

Relation Analysis: A New Approach to Logical and Statistical Analysis of Hypotheses and Data

Faculty of Psychology, University of Vienna, Austria

Introduction: Statistical hypotheses are often formulated too simply relative to the complexity of the empirical phenomena being studied (Humphreys, 1989; Sanbonmatsu & Johnston, 2019), and logical relationships between the variables are rarely considered, although they are often present in empirical systems. This is one reason why the effect sizes and power of many studies are likely to be very small, making them difficult to replicate. Relation Analysis is a new approach to data analysis, based on prediction analysis (Hildebrand, Laing & Rosenthal, 1977; von Eye, 1991), that aims to help overcome this shortcoming.

No Analytical Power Formula? No Problem. A General Model-Implied Simulation-Based Approach to Power Estimation for Likelihood Ratio Tests with Applications in Complex SEM

Humboldt-Universität zu Berlin, Department of Psychology, Psychological Research Methods, Berlin, Germany

Power analysis is a critical step in study planning, guiding researchers in determining the sample size needed to detect effects with adequate sensitivity. While analytical solutions for power exist in structural equation modeling (SEM) frameworks, such as confirmatory factor analysis (CFA) and traditional SEM under multivariate normality, many advanced applications involve models or data structures for which such formulas are unavailable or intractable. Examples include nonlinear models, continuous-time SEM (CTSEM), and various violations of distributional assumptions. To address these challenges, we propose Model-Implied Simulation-Based Power Estimation (MSPE), a flexible simulation-based approach to power estimation for likelihood ratio tests (LRTs). MSPE models the relationship between sample size and statistical power by fitting a generalized linear model to simulated significance decisions based on the LRT. Building on Irmer et al. (2024), who identified the MSPE as a probit regression on the square root of sample size for z-tests and demonstrated its performance in nonlinear SEM, we extend this idea to LRT-based inference using maximum likelihood estimation. We illustrate the method in three SEM contexts: CFA and SEM under multivariate normality, where analytical power formulas exist, and CTSEM, where they do not. This allows us to validate MSPE against known benchmarks and to demonstrate its utility in more complex scenarios. We also compare MSPE to alternative parametric, non-parametric, and machine learning approaches for modeling the power–sample size relationship. Results highlight MSPE's strengths in flexibility, interpretability, and accuracy, making it a valuable tool for power analysis in both standard and advanced modeling settings.
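A minimal sketch of the proportional odds model discussed in the first presentation above. This is illustrative only, not the authors' simulation code: it assumes statsmodels' OrderedModel and an invented data-generating process with a single predictor.

    import numpy as np
    import pandas as pd
    from statsmodels.miscmodels.ordinal_model import OrderedModel

    rng = np.random.default_rng(1)
    n = 500
    x = rng.normal(size=n)
    # latent-variable view: y* = beta * x + logistic error,
    # with fixed thresholds cutting y* into 4 ordered categories
    latent = 0.8 * x + rng.logistic(size=n)
    y = pd.cut(latent, bins=[-np.inf, -1.0, 0.5, 2.0, np.inf], labels=False)

    # distr="logit" gives the proportional odds (cumulative logit) model
    res = OrderedModel(y, x[:, None], distr="logit").fit(method="bfgs", disp=False)
    print(res.summary())

The proportional odds assumption is visible in the fit: a single coefficient on x applies to every cumulative threshold, which is what keeps the model plain but robust.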
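The second presentation builds on prediction analysis, whose central quantity is Hildebrand, Laing and Rosenthal's Del: one minus the ratio of observed to chance-expected proportions in the cells a hypothesis forbids. A small sketch under that textbook definition follows; the table and error cells are invented for illustration.

    import numpy as np

    def del_measure(table, error_cells):
        """Del = 1 - observed error proportion / expected error proportion."""
        p = np.asarray(table, float)
        p /= p.sum()                                   # joint proportions
        exp = np.outer(p.sum(axis=1), p.sum(axis=0))   # independence baseline
        return 1.0 - p[error_cells].sum() / exp[error_cells].sum()

    # toy 2x2 table: the hypothesis "state A implies outcome 1" forbids cell (0, 1)
    obs = np.array([[40, 10],
                    [20, 30]])
    errors = np.array([[False, True],
                       [False, False]])
    print(del_measure(obs, errors))   # > 0: fewer forbidden-cell cases than chance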
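The MSPE idea in the third abstract (fit a probit regression of simulated LRT significance decisions on the square root of n, following Irmer et al., 2024) can be sketched in a few lines. The logistic-regression LRT below is a stand-in example, not one of the SEM applications from the talk.

    import numpy as np
    import statsmodels.api as sm
    from scipy import stats

    rng = np.random.default_rng(7)

    def lrt_reject(n, beta=0.3, alpha=0.05):
        """One simulated LRT decision: does x predict y in a logistic regression?"""
        x = rng.normal(size=n)
        y = rng.binomial(1, 1.0 / (1.0 + np.exp(-beta * x)))
        full = sm.Logit(y, sm.add_constant(x)).fit(disp=False)
        null = sm.Logit(y, np.ones((n, 1))).fit(disp=False)
        return 2 * (full.llf - null.llf) > stats.chi2.ppf(1 - alpha, df=1)

    # simulated significance decisions over a grid of sample sizes
    ns = np.repeat(np.arange(50, 501, 50), 200)
    rejects = np.array([lrt_reject(n) for n in ns], dtype=float)

    # MSPE-style model: probit regression of the decision on sqrt(n)
    X = sm.add_constant(np.sqrt(ns))
    glm = sm.GLM(rejects, X,
                 family=sm.families.Binomial(sm.families.links.Probit())).fit()
    b0, b1 = glm.params

    # invert the fitted power curve Phi(b0 + b1*sqrt(n)) for 80% power
    n80 = ((stats.norm.ppf(0.80) - b0) / b1) ** 2
    print(f"estimated n for 80% power: {n80:.0f}")

Because the link is probit, the fitted curve can be inverted in closed form for any target power, which is what makes this parametrization convenient.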
Post-selection Inference in Linear Mixed Models

Universität Kassel, Germany

In psychological research, statistical models are often refined through data-driven variable selection. However, testing the significance of parameters in a model selected using the same data distorts the sampling distribution of the estimates. Consequently, under the null hypothesis, p-values are no longer uniformly distributed, inflating the (selective) Type I error rate beyond the nominal level. While correction methods have been proposed for general linear models, valid post-selection inference in linear mixed models (LMMs) remains underexplored. We conducted a simulation study to evaluate the performance of different correction methods in LMMs with respect to the p-value distribution, (selective) Type I error rate, and statistical power for fixed effects of both level-1 and level-2 predictors. The simulation settings were designed to reflect plausible scenarios in psychological research.
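A small simulation in the spirit of the abstract above shows why naive post-selection tests in an LMM are anti-conservative. All predictors are pure noise; picking the smallest of five p-values and then testing it at the 5% level yields a selective Type I error well above 0.05. The model, sizes, and selection rule here are illustrative choices, not the authors' design.

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(3)

    def min_pvalue(n_groups=30, group_size=10, n_pred=5):
        """Random-intercept LMM with noise-only fixed effects; return the
        smallest fixed-effect p-value (the naively 'selected' predictor)."""
        n = n_groups * group_size
        g = np.repeat(np.arange(n_groups), group_size)
        df = pd.DataFrame(rng.normal(size=(n, n_pred)),
                          columns=[f"x{j}" for j in range(n_pred)])
        # null model: random intercepts plus residual noise, no fixed effects
        df["y"] = rng.normal(size=n_groups)[g] + rng.normal(size=n)
        df["g"] = g
        fit = smf.mixedlm("y ~ " + " + ".join(f"x{j}" for j in range(n_pred)),
                          df, groups=df["g"]).fit()
        return fit.pvalues[[f"x{j}" for j in range(n_pred)]].min()

    pvals = np.array([min_pvalue() for _ in range(200)])
    print("selective Type I error:", (pvals < 0.05).mean())   # roughly 0.2, not 0.05

One simple (if wasteful) correction is sample splitting: select on one half of the data and test on the other, which restores the nominal level at a cost in power.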