Session: Heterogeneity

Presentations

Title: As Easy as It Gets: Exploring Parameter Heterogeneity with Individual Parameter Contribution Regression
Humboldt-Universität zu Berlin, Germany

Researchers working with empirical data are often asked: “Interesting finding, but did you consider the effect of variable X on parameter Y?” Especially in observational studies, there are countless ways in which model parameters may vary as a function of covariates. While the ideal is to identify and include all relevant covariates before analyzing the data, the reality is often different: concerns about covariate-related heterogeneity frequently arise only once the data are being analyzed, or during peer review. In this talk, we introduce individual parameter contribution regression (IPCR), an exploratory technique that helps identify whether and how the parameters of a fitted model vary as functions of covariates. The key strengths of IPCR are its flexibility (IPCs can be computed for virtually any parameter and a wide range of models), its simplicity (it is as easy to use as linear regression), and its low computational cost (results are available immediately). The main disadvantage is its data-driven and approximate nature, which makes other methods more suitable in fully confirmatory settings. Although IPCR can be applied to a wide range of parametric models, this presentation focuses on exploring parameter heterogeneity in structural equation models. We compare IPCR to approaches such as moderated nonlinear factor analysis, local structural equation modeling, and multi-group models, and discuss the strengths and weaknesses of each. We also provide hands-on R code using the latest version of the ipcr package.

Title: Reasoning with conditionals: A Bayesian latent-mixture model
University of Potsdam; University of California, Irvine

Suppose Jane and Jill were planning to go to Berlin last weekend. How justified is it to infer "they travelled by train" on the basis that "If they went to Berlin, then they travelled by train" and "they went to Berlin"? To answer this question, we need to assess how the statements that make up the inference are related to one another and what the individual statements mean, in particular what the if-then, or conditional, statement means. Theories of reasoning and of the meaning(s) of conditionals are often difficult to test empirically because they are specified only at the verbal level and make overlapping predictions. To address this limitation, we specify current competing theories of reasoning with conditionals computationally. Building on the hypothesis that people try not to contradict themselves when reasoning, we derive sets of consistent conclusions for a range of inferences and interpretations of conditionals, and compare them using a Bayesian latent-mixture model. Applying the model to simulated and existing datasets, we illustrate how different combinations of inferences provide more or less information for distinguishing between competing theories, depending on the specificity and the degree of overlap in their predictions.
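A minimal R sketch of the IPC idea behind the first talk above, not the ipcr package interface itself: it approximates individual parameter contributions (IPCs) from the casewise scores of a fitted lavaan model and regresses them on a covariate. The example model, the choice of covariate, and the scaling are illustrative assumptions only.

```r
# Sketch of IPC regression, assuming a lavaan-fitted CFA and approximating the
# IPCs as theta-hat plus the casewise scores rescaled by the inverse information
# (n * vcov). This mirrors the basic logic only, not the ipcr API.
library(lavaan)

dat <- HolzingerSwineford1939
fit <- cfa('visual =~ x1 + x2 + x3', data = dat)

scores <- lavScores(fit)                     # casewise score contributions
n      <- lavInspect(fit, "nobs")
ipc    <- sweep(scores %*% (n * vcov(fit)),  # linear approximation around the estimates
                2, coef(fit), FUN = "+")
colnames(ipc) <- names(coef(fit))

# Does the loading of x2 vary with age? Regress its IPCs on the covariate.
summary(lm(ipc[, "visual=~x2"] ~ dat$ageyr))
```

The ipcr package wraps these steps (and refined variants of them) behind a single interface; the sketch only makes the underlying regression-on-IPCs logic visible.

For the second talk above, a toy R illustration of the latent-mixture logic, with entirely hypothetical numbers rather than the authors' model: two candidate interpretations of the conditional predict different endorsement patterns across the four classic inference forms (MP, MT, AC, DA), and Bayes' rule then yields a posterior over the latent interpretation for a single reasoner, given an assumed response-error rate.

```r
# Toy latent-mixture classification of one reasoner (hypothetical numbers).
# Rows: predicted endorsement (1) or rejection (0) of modus ponens (MP),
# modus tollens (MT), affirming the consequent (AC), and denying the
# antecedent (DA) under a conditional vs. a biconditional reading of "if p then q".
pred <- rbind(conditional   = c(MP = 1, MT = 1, AC = 0, DA = 0),
              biconditional = c(MP = 1, MT = 1, AC = 1, DA = 1))
y     <- c(MP = 1, MT = 1, AC = 1, DA = 0)   # one person's observed endorsements
eps   <- 0.10                                # assumed response-error probability
prior <- c(conditional = 0.5, biconditional = 0.5)

lik  <- apply(pred, 1, function(p) prod(ifelse(p == y, 1 - eps, eps)))
post <- prior * lik / sum(prior * lik)       # posterior over latent interpretations
round(post, 3)
```

The model in the talk works hierarchically over many reasoners and inference sets; this snippet only shows how inferences on which the interpretations overlap (here MP and MT) carry no diagnostic information, while AC and DA responses do.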
Title: Covariate-informed classification of qualitative individual differences with Bayesian hierarchical latent-mixture models
University of Mannheim; University of California, Irvine

Psychological researchers commonly rely on group-level means to describe psychological phenomena. This practice assumes – often implicitly – that individual effects vary only quantitatively. However, this assumption is often violated. For instance, while some individuals may exhibit a positive effect, others may show no effect or even an effect in the opposite direction. In such cases, where individual effects differ qualitatively, the mean may mask this heterogeneity and mislead the analyst. Therefore, distinguishing between qualitative and quantitative individual differences has received increasing attention in psychological research. A major advance in this area has been the development of Bayesian hierarchical models for assessing the global structure of individual differences. More recently, these models have been extended to hierarchical latent-mixture models, which support better generalization beyond the observed sample, enable explicit classification of individuals, and allow for fine-grained regularization of individual effect estimates based on latent class membership. In the present project, we further develop this approach by incorporating person-level covariates as predictors of latent class membership. This extension facilitates the assessment of potential explanatory factors underlying qualitative differences in psychological effects. Furthermore, our implementation in user-friendly software flexibly accommodates various data structures via link functions. We evaluate the validity and robustness of the method through extensive simulation studies and demonstrate its practical utility using empirical data. Thus, our development provides a powerful framework for detecting and explaining qualitative individual differences in psychological effects.

Title: Score-Based Tests for Parameter Instability in Ordinal Factor Models
Deutsches Jugendinstitut (DJI), Germany; University of Zurich and École Polytechnique Fédérale de Lausanne; Department of Statistics, University of Munich (LMU)

We present a novel approach for computing model scores for ordinal factor models, i.e., Graded Response Models (GRMs) fitted with a limited-information (LI) estimator. The method makes it possible to compute score-based tests of parameter instability for ordinal factor models, which enables the rapid execution of numerous parameter instability tests for Multidimensional Item Response Theory (MIRT) models. We compare the performance of the proposed score-based tests for ordinal factor models with that of tests for GRMs fitted with a full-information (FI) estimator. The new method shows a well-controlled Type I error rate, has high power, and is computationally faster than FI estimation. We further illustrate that the proposed method works well with complex models in real-data applications. The method is implemented in the lavaan package in R.
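A hedged sketch of the covariate-informed component described in the third talk above (the link choice, names, and numbers are illustrative assumptions, not the authors' implementation): a person-level covariate enters the probability of latent class membership through a logistic link, and class membership then determines whether an individual effect is drawn from a positive-effect or a null-effect distribution.

```r
# Illustrative generative structure: covariate -> latent class -> individual effect.
set.seed(1)
n      <- 300
covar  <- rnorm(n)                                # person-level covariate (e.g., a trait score)
gamma0 <- -0.5; gamma1 <- 1.2                     # hypothetical class-membership coefficients
p_pos  <- plogis(gamma0 + gamma1 * covar)         # P(positive-effect class | covariate), logit link
class  <- rbinom(n, 1, p_pos)                     # latent class membership (simulated here)
theta  <- ifelse(class == 1, rnorm(n, 0.5, 0.2), 0)  # qualitative difference: effect vs. no effect

# With class treated as known for illustration, the covariate effect is recoverable;
# in the actual model the classes are latent and inferred hierarchically.
summary(glm(class ~ covar, family = binomial))
```

In the hierarchical latent-mixture model of the talk, the class indicators and individual effects are estimated jointly; the simulation above only makes the assumed covariate-to-class structure explicit.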
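Finally, a rough R sketch of the generic ingredients of a score-based instability test, related to the last talk above: it uses an ordinary ML-fitted lavaan model rather than the limited-information ordinal estimator from the abstract, and the model, the ordering covariate, and the scaling conventions are illustrative assumptions.

```r
# Cumulative score process along a covariate, as used by score-based
# parameter instability tests (illustrative ML example, not the LI/GRM method).
library(lavaan)

dat <- HolzingerSwineford1939
fit <- cfa('visual =~ x1 + x2 + x3', data = dat)

sc   <- lavScores(fit)                 # casewise scores of the free parameters
n    <- nrow(sc)
info <- solve(vcov(fit)) / n           # average per-observation information

# Decorrelate the scores, order cases by the covariate, and cumulate
ord <- order(dat$ageyr)
B   <- apply(sc[ord, ] %*% solve(chol(info)), 2, cumsum) / sqrt(n)

# Double-maximum statistic: large values signal parameter instability along ageyr
max(abs(B))
```

P-values for such statistics come from functionals of Brownian bridges and are handled by dedicated implementations rather than by this sketch, as is the limited-information estimation for ordinal indicators described in the abstract.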