A Comparison of Different Approaches for Estimating Study-Level Moderation Effects in Meta-Analytic Structural Equation Modeling
Julian F. Lohmann1, Oliver Lüdtke1,2, Alexander Robitzsch1,2
1Leibniz Institute for Science and Mathematics Education Kiel, Germany; 2Centre for International Student Assessment (ZIB)
Meta-analytic structural equation modeling (MASEM) is a widely used method for synthesizing results across multiple primary studies examining the same multivariate relationships. Different MASEM approaches exist that can be distinguished by (1) whether they employ a one-stage or two-stage procedure, (2) whether they are correlation-based or parameter-based, and (3) whether between-study heterogeneity is explicitly modeled by random effects (Jak & Cheung, 2020). In many meta-analyses, a key objective is to identify study-level moderators that explain heterogeneity across studies. To address such research questions, it has been proposed to extend MASEMs with study-level moderators. However, MASEMs with study-level moderators have not yet been systematically compared in the literature. In the present study, we conducted a simulation study evaluating the parameter bias, the root mean squared error, and the coverage of moderation effects incorporated in different MASEM variants. The three MASEM approaches evaluated in the simulation study differ in how they model between-study heterogeneity in SEM parameters. We compare an extension of Ke et al.'s (2019) parameter-based approach, assuming either uncorrelated or correlated random effects, with a fixed-effects MASEM approach (using resampling-based standard errors) that does not incorporate random effects. The following factors were varied across simulation conditions: number of studies (20, 50, 100, and 500), sample size of the primary studies (100, 300, and 1000), and correlation of the random effects (0 and 0.5). Preliminary results suggest that the three approaches performed similarly across many conditions, yielding reliable point estimates and standard errors of moderation effects in MASEM.
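To make the moderation setup concrete (this is an illustrative sketch, not the authors' MASEM implementation), a univariate random-effects meta-regression with a single study-level moderator can be fit via a method-of-moments tau² estimate and weighted least squares; the function name and estimator choice here are our own illustrative assumptions:

```python
import numpy as np

def meta_regression_re(y, v, x):
    """Illustrative random-effects meta-regression of study effects y
    (with sampling variances v) on a study-level moderator x, using a
    method-of-moments tau^2 estimate and weighted least squares."""
    y, v, x = map(np.asarray, (y, v, x))
    X = np.column_stack([np.ones_like(x), x])  # intercept + moderator
    W = np.diag(1.0 / v)                       # inverse-variance weights
    # Fixed-effects fit to obtain the residual heterogeneity statistic Q
    beta_fe = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
    resid = y - X @ beta_fe
    Q = float(resid @ W @ resid)
    k, p = X.shape
    # Method-of-moments tau^2, truncated at zero
    P = W - W @ X @ np.linalg.solve(X.T @ W @ X, X.T @ W)
    tau2 = max(0.0, (Q - (k - p)) / np.trace(P))
    # Random-effects weights and final estimates of intercept and slope
    W_re = np.diag(1.0 / (v + tau2))
    cov = np.linalg.inv(X.T @ W_re @ X)
    beta = cov @ X.T @ W_re @ y
    se = np.sqrt(np.diag(cov))
    return beta, se, tau2
```

Here `beta[1]` is the moderation effect whose bias, RMSE, and coverage a simulation study such as the one described above would track; the MASEM approaches compared in the abstract generalize this idea to full SEM parameter vectors.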
MetaEGM - Combining Evidence Gap Maps and Meta-Analysis
Julian Gregor Scherhag, Michael Bosnjak
Trier University, Germany
Evidence Gap Maps (EGMs) are tabular and visual summaries of systematic review findings, highlighting well-studied and under-researched areas within a research field. As such, EGMs are useful for directing future research efforts, addressing knowledge gaps, providing justifications for funding, and evaluating the evidence gathered. However, typical EGMs remain at the descriptive level and are thus limited, lacking in-depth substantive conclusions. Our presentation aims to show how EGMs can be combined with meta-analysis to increase their substantive conclusiveness, generalizability, and impact. Using an example from personality psychology, we mapped and synthesized trait-outcome associations across three taxonomies (affect-behavior-cognition-desire; Theory of Planned Behavior constructs; and life domains) to evaluate the predictive validity of the HEXACOD model, which adds Disintegration (i.e., psychotic-like experiences and behaviors) to the HEXACO model. We review the procedures and merits of this MetaEGM approach and discuss its minimum requirements.
Prediction Intervals for Meta-Analysis using Combined P-value Functions
David Kronthaler, Leonhard Held
Epidemiology, Biostatistics and Prevention Institute, Department of Biostatistics, University of Zurich, Switzerland
In random-effects meta-analysis (REMA), the trustworthiness of statistical inference about the average effect has been criticized, especially in the presence of substantial between-study heterogeneity. In such cases, a prediction interval (PI), which captures a plausible range of potential future effects, is considered a more informative summary. Standard additive PIs, such as the Higgins-Thompson-Spiegelhalter PI, are restricted to be symmetric by their additive construction and therefore cannot reflect skewness in the data.
Held et al. (2025) have proposed the Edgington combined p-value function as an alternative to standard REMA, providing confidence intervals for the average effect that reflect the skewness of the data. We extend their approach by proposing PIs based on the confidence density (the derivative of the combined p-value function) of the average effect and incorporate uncertainty in the between-study heterogeneity estimate through its distribution, derived via the generalized heterogeneity statistic.
Unlike standard PIs, the proposed PIs are not restricted to be symmetric around the point estimate and can thus reflect skewness in the data. Coverage levels and bias will be investigated in a simulation study for varying skewness, between-study heterogeneity, and number of studies included. In addition to PIs, our method also provides the complete predictive distribution for visualization or the computation of other summary statistics.
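For reference, the standard Higgins-Thompson-Spiegelhalter PI that the proposed method generalizes can be sketched as follows (an illustrative implementation with a DerSimonian-Laird tau² estimate; function and variable names are our own):

```python
import numpy as np
from scipy import stats

def hts_prediction_interval(y, v, level=0.95):
    """Illustrative Higgins-Thompson-Spiegelhalter prediction interval
    for the effect in a new study, given observed effects y with
    sampling variances v (DerSimonian-Laird tau^2 estimate)."""
    y, v = np.asarray(y), np.asarray(v)
    k = len(y)
    w = 1.0 / v
    mu_fe = np.sum(w * y) / np.sum(w)
    Q = np.sum(w * (y - mu_fe) ** 2)
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (Q - (k - 1)) / c)          # DerSimonian-Laird
    w_re = 1.0 / (v + tau2)
    mu = np.sum(w_re * y) / np.sum(w_re)        # random-effects average
    se_mu = np.sqrt(1.0 / np.sum(w_re))
    # t quantile with k - 2 degrees of freedom, per Higgins et al.
    t = stats.t.ppf(1 - (1 - level) / 2, df=k - 2)
    half = t * np.sqrt(tau2 + se_mu**2)
    return mu - half, mu + half
```

Because the interval is `mu ± half`, it is symmetric by construction, which is exactly the restriction the confidence-density PIs described in the abstract are designed to relax.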
References
Held, L., Hofmann, F., & Pawel, S. (2025). A comparison of combined p-value functions for meta-analysis. arXiv preprint. https://www.arxiv.org/abs/2408.08135v2
Reliability Multiverse Analyses: Chances & Challenges
Mario Reutter
Julius-Maximilians-University Würzburg, Germany
During data analysis, researchers have to make numerous analytical decisions (e.g., cut-offs for outliers or model specifications) whose options are often similarly justifiable. This "garden of forking paths" facilitates questionable research practices like p-hacking and selective reporting. While preregistration of analysis pipelines offers one solution to this problem, data exploration enables new (meta-)scientific insights that can potentially advance whole research fields.
In this talk, I will argue that a systematic exploration of our researcher degrees of freedom in the form of multiverse analyses comprises an important complement to preregistered hypothesis testing. Specifically, elucidating the reliability of measurements in such a fashion may increase the robustness of results and thus contribute to overcoming the replication crisis as well as improving practical applicability of scientific results.
Of course, multiverse analyses do not only offer chances but also entail challenges: especially for multivariate time series data like EEG or fMRI, the combinatorial explosion of preprocessing steps results in virtually unlimited specifications that cannot be tested in a world with finite resources. Consequently, a prerequisite for multiverse analyses with this kind of data would be a systematic review of existing analysis pipelines in order to soundly derive the specifications to be tested. The feasibility and meta-scientific potential of such an endeavor are discussed.
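The combinatorial explosion described above is easy to demonstrate: even a handful of binary or ternary preprocessing decisions multiplies into a large multiverse. The option names below are hypothetical EEG-pipeline choices invented for illustration:

```python
from itertools import product
from math import prod

# Hypothetical preprocessing decisions for an EEG pipeline (illustrative only)
options = {
    "highpass_hz": [0.1, 0.5, 1.0],
    "reference": ["average", "mastoids", "Cz"],
    "artifact_rejection": ["ICA", "threshold", "none"],
    "baseline_ms": [(-200, 0), (-100, 0)],
    "epoch_end_ms": [800, 1000],
}

# Total number of distinct pipelines: product of options per decision
n_specs = prod(len(v) for v in options.values())
print(n_specs)  # 3 * 3 * 3 * 2 * 2 = 108 pipelines from only 5 decisions

# Enumerating the full multiverse of specifications
specs = [dict(zip(options, combo)) for combo in product(*options.values())]
assert len(specs) == n_specs
```

With realistic pipelines involving dozens of decisions, this count quickly exceeds what finite computing resources can evaluate, which is why the abstract argues for a systematic review to prune the specification space first.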