Conference Agenda

Session: Statistical Inference
Time: Monday, 29 Sept 2025, 11:30am - 1:00pm

Session Chair: Julien P. Irmer
Location: Raum L 115

Presentations

A Simulation Study to Compare Inferential Properties when Modelling Ordinal Outcomes: The Case for the (Plain but Robust) Proportional Odds Model

Stefan Inerle1, Markus Pauly1,2, Moritz Berger3

1Department of Statistics, TU Dortmund University; 2Research Center Trustworthy Data Science and Security, University Alliance Ruhr (UA Ruhr); 3Core Facility Biostatistics, Central Institute of Mental Health, Medical Faculty Mannheim, Heidelberg University

Ordinal measurements are common outcomes in studies in psychology as well as in the social and behavioral sciences. Choosing an appropriate regression model for the analysis of such data is a difficult task.
This paper aims to facilitate modeling decisions for quantitative researchers by presenting the results of an extensive simulation study on the inferential properties of common ordinal regression models: the proportional odds model, the category-specific odds model, the location-shift model, the location-scale model, and the linear model, which incorrectly treats ordinal outcomes as metric.
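
For orientation, a notational sketch of the central model in this comparison (standard cumulative-logit notation; the paper's own parameterization may differ):

```latex
% Proportional odds model for a k-category ordinal outcome Y: a single slope
% vector \beta is shared across all k-1 cumulative splits. The category-specific
% odds model relaxes this constraint by allowing a separate \beta_j per split.
\operatorname{logit} P(Y \le j \mid x) = \theta_j - x^\top \beta, \qquad j = 1, \dots, k - 1
```
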
The simulations were conducted under different data generating processes based on each of the ordinal models and varying parameter configurations within each model class. We examined the bias of parameter estimates as well as the type I error rates ($\alpha$-errors) and power of the statistical testing procedures corresponding to the respective models.
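
As a rough illustration of one cell of such a simulation (placeholder values, not the paper's actual design), data can be generated from a proportional-odds process and analyzed with both a correctly specified proportional odds model and a misspecified linear model:

```python
# Sketch of one simulation cell (illustrative values, not the paper's design):
# data from a proportional-odds DGP, analyzed with the correctly specified
# proportional odds model and a misspecified linear model treating Y as metric.
import numpy as np
import statsmodels.api as sm
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(1)
n, beta = 200, 0.8                         # hypothetical sample size and effect
thresholds = np.array([-1.0, 0.0, 1.0])    # 4 outcome categories
n_reps, alpha = 500, 0.05
est_po, rej_po, rej_lm = [], 0, 0

for _ in range(n_reps):
    x = rng.normal(size=n)
    latent = beta * x + rng.logistic(size=n)   # logistic errors => PO model holds
    y = np.searchsorted(thresholds, latent)    # ordinal categories 0..3
    po = OrderedModel(y, x[:, None], distr="logit").fit(method="bfgs", disp=False)
    est_po.append(po.params[0])                # slope estimate (thresholds follow)
    rej_po += po.pvalues[0] < alpha
    lm = sm.OLS(y, sm.add_constant(x)).fit()   # ordinal outcome treated as metric
    rej_lm += lm.pvalues[1] < alpha

print("PO slope bias:", np.mean(est_po) - beta)
print("power  PO:", rej_po / n_reps, "  LM:", rej_lm / n_reps)
```
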
Several findings stand out. For parameter estimates, we observe that cumulative ordinal regression models show large biases when parameter values are large and the outcome distribution of the true data generating process is highly skewed. Regarding statistical hypothesis testing, the proportional odds model and the linear model produced the most reliable results. Given its better fit and interpretability for ordinal outcomes, we recommend the proportional odds model unless there are relevant contraindications.



Relation Analysis: A New Approach to the Logical and Statistical Analysis of Hypotheses and Data

Rainer Maderthaner

Faculty of Psychology, University of Vienna, Austria

Introduction: Statistical hypotheses are often formulated too simply relative to the complexity of the empirical phenomena being studied (Humphreys, 1989; Sanbonmatsu & Johnston, 2019), and logical relationships between variables are rarely considered, although they are often present in empirical systems. This is one reason why the effect sizes and power of many studies are likely to be very small, making them difficult to replicate. Relation Analysis is a new approach to data analysis, based on prediction analysis (Hildebrand, Laing, & Rosenthal, 1977; von Eye, 1991), that aims to overcome this shortcoming.
Method: Relation Analysis rests on mathematical relations between variables and their description in propositional logic. The available beta version of the programme (www.relan.at) supports the testing, simulation, and exploration ("data mining") of hypotheses with up to ten variables. The logical structure of a hypothesis is examined for tautology, contradiction, and similarity among its components. The advantages of Relation Analysis are that it accommodates complex hypotheses (with alternative hypothesis testing), accounts for effect structures, assesses interactions exhaustively, and quantifies the empirical content (precision) of hypotheses. Numerous significance and effect size statistics are calculated.
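
The RELAN programme itself is not reproduced here; as a purely hypothetical sketch of the logical screening step, a brute-force truth-table check for tautology and contradiction is tractable at the stated scale of up to ten variables (at most 2^10 assignments):

```python
# Hypothetical illustration only (not the RELAN implementation): brute-force
# truth-table screening of a propositional hypothesis for tautology or
# contradiction over all 2^n truth assignments.
from itertools import product

def classify(formula: str, variables: list[str]) -> str:
    """Evaluate a Boolean expression under every truth assignment."""
    values = [
        eval(formula, {}, dict(zip(variables, assignment)))
        for assignment in product([False, True], repeat=len(variables))
    ]
    if all(values):
        return "tautology"
    if not any(values):
        return "contradiction"
    return "contingent"

print(classify("(not A) or B", ["A", "B"]))  # material implication: contingent
print(classify("A or not A", ["A"]))         # tautology
print(classify("A and not A", ["A"]))        # contradiction
```
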
Conclusion: Worked evaluation examples show that correlational measures of association cannot handle logically related variables in empirical reality. Because simple hypothesis tests mask complex regularities, complex hypotheses (including moderators, mediators, and alternative causal variables) should always be tested before moving on to simpler explanations.



No Analytical Power Formula? No Problem. A General Model-Implied Simulation-Based Approach to Power Estimation for Likelihood Ratio Tests with Applications in Complex SEM

Julien P. Irmer, Manuel C. Voelkle

Humboldt-Universität zu Berlin, Department of Psychology, Psychological Research Methods, Berlin, Germany

Power analysis is a critical step in study planning, guiding researchers in determining the sample size needed to detect effects with adequate sensitivity. While analytical solutions for power exist in structural equation modeling (SEM) frameworks such as confirmatory factor analysis (CFA) and traditional SEM under multivariate normality, many advanced applications involve models or data structures for which such formulas are unavailable or intractable. Examples include nonlinear models, continuous-time SEM (CTSEM), and various violations of distributional assumptions. To address these challenges, we propose Model-Implied Simulation-Based Power Estimation (MSPE), a flexible simulation-based approach to power estimation for likelihood ratio tests (LRTs). MSPE models the relationship between sample size and statistical power by fitting a generalized linear model to simulated significance decisions based on the LRT. Building on Irmer et al. (2024), who showed that for z-tests the MSPE reduces to a probit regression on the square root of the sample size and demonstrated its performance in nonlinear SEM, we extend this idea to LRT-based inference using maximum likelihood estimation. We illustrate the method in three SEM contexts: CFA and SEM under multivariate normality, where analytical power formulas exist, and CTSEM, where they do not. This allows us to validate MSPE against known benchmarks and to demonstrate its utility in more complex scenarios. We also compare MSPE to alternative parametric, non-parametric, and machine learning approaches for modeling the power–sample size relationship. The results highlight MSPE’s strengths in flexibility, interpretability, and accuracy, making it a valuable tool for power analysis in both standard and advanced modeling settings.
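
A minimal sketch of the core MSPE idea under strongly simplified assumptions (a single regression coefficient tested by an LRT rather than the paper's SEM examples; the effect size and sample-size grid are placeholders):

```python
# Minimal MSPE sketch: simulate LRT significance decisions at several sample
# sizes, then model power as a probit regression on sqrt(n), following the
# relationship identified in Irmer et al. (2024). Values are illustrative only.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
beta, alpha, n_reps = 0.25, 0.05, 500
sample_sizes = [50, 100, 150, 200, 300]

rows = []  # (sqrt(n), rejected) pairs
for n in sample_sizes:
    for _ in range(n_reps):
        x = rng.normal(size=n)
        y = beta * x + rng.normal(size=n)
        full = sm.OLS(y, sm.add_constant(x)).fit()
        restricted = sm.OLS(y, np.ones((n, 1))).fit()   # H0: beta = 0
        _, p, _ = full.compare_lr_test(restricted)      # likelihood ratio test
        rows.append((np.sqrt(n), float(p < alpha)))

rows = np.asarray(rows)
X = sm.add_constant(rows[:, 0])                         # intercept + sqrt(n)
probit = sm.GLM(rows[:, 1], X,
                family=sm.families.Binomial(sm.families.links.Probit())).fit()

# Interpolate the power curve: smallest n with predicted power >= .80.
grid = np.arange(50, 501)
power = probit.predict(sm.add_constant(np.sqrt(grid)))
print("estimated n for 80% power:", grid[np.argmax(power >= 0.80)])
```
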



Post-Selection Inference in Linear Mixed Models

Anna Nikolei, Florian Scharf

Universität Kassel, Germany

In psychological research, statistical models are often refined through data-driven variable selection. However, testing the significance of parameters in a model selected using the same data distorts the sampling distribution of the estimates. Consequently, under the null hypothesis, p-values are no longer uniformly distributed, inflating the (selective) Type I error rate beyond the nominal level. While correction methods have been proposed for general linear models, valid post-selection inference in linear mixed models (LMMs) remains underexplored. We conducted a simulation study to evaluate the performance of different correction methods in LMMs with respect to the p-value distribution, (selective) Type I error rate, and statistical power for fixed effects of both level-1 and level-2 predictors. The simulation settings were designed to reflect plausible scenarios in psychological research.
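
As a minimal sketch of the underlying problem, shown with an ordinary linear model rather than an LMM for brevity (all values below are illustrative):

```python
# All predictors are null here; selecting the one with the largest |t| and
# then testing it with the same data inflates the selective Type I error
# well above the nominal 5% level.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n, n_predictors, n_reps, alpha = 100, 10, 2000, 0.05
rejections = 0

for _ in range(n_reps):
    X = rng.normal(size=(n, n_predictors))
    y = rng.normal(size=n)                              # y independent of all X
    fit = sm.OLS(y, sm.add_constant(X)).fit()
    selected = np.argmax(np.abs(fit.tvalues[1:])) + 1   # data-driven selection
    rejections += fit.pvalues[selected] < alpha         # naive test, same data

print("selective Type I error:", rejections / n_reps)   # far above 0.05
```
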