Conference Agenda

Overview and details of the sessions of this conference. Please select a date or location to show only sessions on that day or at that location. Please select a single session for a detailed view (with abstracts and downloads, if available).

 
 
Session Overview
Session
1.1: Disrupting Media Psychological Views (Position Paper)
Time:
Thursday, 11/Sept/2025:
8:45am - 10:15am

Session Chair: Hannah LOGEMANN
Location: LX1205


Presentations

A New Generation of Screen Time Research: Taking Users’ Feelings Seriously

Jana DOMBROWSKI, Sabine TREPTE

University of Hohenheim, Germany

Screen time, i.e., the time users spend engaged with electronic devices, is widely studied but criticized for methodological and theoretical shortcomings (Kaye et al., 2020). This position paper revisits the debate and addresses how to move beyond these criticisms. We integrate previous research with new evidence from our own work to systematize the concept of screen time while also proposing a fresh perspective – one that embraces rather than demonizes the subjectivity of screen time.

Generations of Screen Time Research

The first generation of screen time research relied on self-reports, asking how much time users spend with media. Assuming that self-reports accurately reflect actual media usage, scholars have examined for decades whether the time spent on violent games is related to aggression (Przybylski & Weinstein, 2019) or whether digital technology harms mental health (Orben & Przybylski, 2019). However, people tend to overestimate their screen time (Burnell et al., 2021; Scharkow, 2016; Verbeij et al., 2021), raising serious concerns about the validity of these self-reports (Parry et al., 2021).

Thus, the next generation of research called for objective measures, such as those provided by digital trace data from browsers, smartphones, or wearables (Ohme et al., 2024). For instance, studies in this area have shown that ‘raw’ screen time generally neither harms nor benefits users' psychological functioning (e.g., Johannes, Vuorre, et al., 2022), further supporting the negligible (though inconsistent) effects found regarding screen time’s impact (Meier & Reinecke, 2020; Sanders et al., 2023). However, such approaches imply that media use can be treated like the dose of a substance (e.g., cigarettes, medication), even though the effects do not increase linearly with each additional minute of usage (Johannes, Dienlin, et al., 2022; Vanden Abeele et al., 2022). Moreover, using duration as a primary predictor overlooks person-specific factors, the content users engage with, and underlying psychological mechanisms (Kaye et al., 2020).

While objective screen time poorly predicts individual media effects, self-reports still reveal meaningful patterns (Segovia Vicente et al., 2024; Sewall & Parry, 2021; Wu-Ouyang & Chan, 2022; [own evidence, study comparing objective and subjective measures, N=385]). Users seem to feel that screen time affects them and view managing it as an important part of their self-care (Vanden Abeele et al., 2022). Inspired by this, we call for a new generation of screen time research that does not neutralize screen time but recognizes that it is laden with meaning, affect, and expectations (Lee & Hancock, 2023; Wolfers, 2024). Recognizing how users feel about time matters, and it opens up avenues for testing the implicit assumptions about time perception in the theories and methods we employ.

Mapping Screen Time

Current research treats screen time as a monolithic quantity. Adopting this perspective, 'objective screen time' describes how long users actually engage with screens in clock units. Logging screen time may help to describe and characterize users’ daily media repertoire (e.g., Hasebrink & Domeyer, 2012), but it also aids scholars in studying the effects of mere exposure to media (e.g., “Does screen time cause eye fatigue?”).

Viewing time as an objective measure neglects qualitative nuances that we propose incorporating into screen time research. We advocate studying 'subjective screen time', i.e., the perceived time users spend engaged with screens based on their self-assessments. This includes 'absolute subjective screen time', which reflects users’ estimated duration of past media use, a common measure in previous studies (e.g., "How long did you use social media yesterday?"). Here, our field can learn from laboratory studies conducted in other subdisciplines of psychology. These suggest that time estimates function as performance measures, as they are influenced by users’ ability to memorize and pay attention to temporal cues (Matthews & Meck, 2016). Thus, the previously observed overestimations of self-reported screen time are likely systematic, since methodological decisions such as sample characteristics, item/task wording, and stimulus characteristics affect users' estimation accuracy (Matthews & Meck, 2016). However, studying these estimates can also deepen our understanding of media phenomena: for example, fragmented screen use – as evident in media-multitasking – inflates estimates because each switch between tasks creates temporal markers, which are easily remembered by users (Xu & David, 2018).
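
To illustrate the segmentation idea, the following is a minimal, purely hypothetical simulation (not a model from the cited work): each task switch adds a remembered temporal marker, and the duration estimate is assumed to scale with the number of such markers. The marker weight and noise level are illustrative placeholders.

```python
import numpy as np

rng = np.random.default_rng(42)

def estimated_duration(true_minutes, n_switches, marker_weight=0.08):
    """Hypothetical segmentation model: each task switch adds a
    remembered temporal marker that inflates the duration estimate."""
    noise = rng.normal(0, 2)  # idiosyncratic recall noise (minutes)
    return true_minutes * (1 + marker_weight * n_switches) + noise

true_minutes = 60
for n_switches in (0, 5, 15):
    estimates = [estimated_duration(true_minutes, n_switches) for _ in range(1000)]
    print(f"{n_switches:2d} switches -> mean estimate {np.mean(estimates):.1f} min")
```

Under these assumptions, more fragmented sessions yield systematically larger overestimates of the same objective duration, mirroring the pattern described above.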

By contrast, 'relative subjective screen time' describes users’ phenomenological experience of time passage. Screen activities can alter the flow of time by changing the pace and context in which events unfold (Stojić & Nadasdy, 2024; [own evidence; survey study developing a measure of relative subjective screen time, N=1,030]). This aspect might be the most central to media psychology, since users’ experiences lie at the heart of our field. For example, entertaining content makes time seem to fly (Xu & David, 2018) and immersive VR sessions detach users from clock-time during states of flow (Rutrecht et al., 2021). Disciplines beyond media entertainment also discuss intriguing links that require further exploration. For example, habitual scrolling may erode time, leading to feelings of guilt (Segovia Vicente et al., 2024), time distortion may reinforce compulsive media use behaviors (e.g., doomscrolling, Salisbury, 2023; addiction, Turel et al., 2018), and experiencing synchronicity can create feelings of interpersonal connectedness (Özkul & Humphreys, 2022), as well as threats to privacy (Liu, 2024).

Outlook and Discussion

Recognizing the subjectivity of screen time may help to explain why users experience its effects even when objective measures fall short. Future studies can contribute by validating previous suggestions in naturalistic settings. Media psychologists are familiar with the mediated timescapes users engage with and can simultaneously address a hidden aspect of our theorizing and methods: the subjective perception of time. Contributing such evidence to public debates might be crucial, given the widespread concerns, but also misconceptions, about screen time’s psychological impact that are central to discussions on restrictive mediation, smartphone bans in schools, and national policy (e.g., Australia’s social media ban; [own evidence, content analysis of articles in German news outlets referencing ‘screen time’, work-in-progress]). Additionally, temporal inequalities exist, highlighting how those who design mediated timescapes in politics, work settings, and leisure enact power over others (Sharma, 2014) – maybe even by intentionally manipulating users’ time perception (Wajcman, 2019).



Too Imprecise and Too Inconsistent! Challenges for Media Psychology Theorizing and Possible Solutions

Daniel Possler1, Adrian Meier2

1Hanover University of Music, Drama and Media, Germany; 2Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), Germany

The unsuccessful attempts to replicate many well-established psychological findings (Klein et al., 2014, 2018; Open Science Collaboration, 2015) have raised serious concerns. The causes of this ‘replication crisis’ are likely multifaceted (e.g., De Boeck & Jeon, 2018), but it can be attributed, at least in part, to inadequate theorizing (e.g., Eronen & Bringmann, 2021; Oberauer & Lewandowsky, 2019). In media psychology, the extent of the replication crisis is as yet unclear; however, similar problems in theorizing can be observed (e.g., Trepte, 2013; Vorderer et al., 2021). In this position paper, we highlight these problems and propose a way forward based on recent advances in psychological theory construction methodologies (e.g., Robinaugh et al., 2021; Scheel et al., 2021). Hence, this position paper aims to (i) systematize the problems in our field, (ii) identify the underlying, field-specific causes, and (iii) point the community toward potential solutions that address these causes.

A Few Preliminaries

There is no consensus definition of a theory or the theorizing process. Following Jaccard and Jacoby (2020), we understand a theory in a rather broad sense as “a set of statements about the relationship(s) between two or more concepts or constructs” (p. 28). To further improve conceptual clarity, we follow Borsboom et al. (2021) and distinguish phenomenon (i.e., a stable feature of the world; explanandum) and data (i.e., observations of the phenomenon) from theories (i.e., a set of statements explaining the phenomenon; explanans). Finally, we assume that the necessary qualities of a theory are internal logical consistency, consistency with known data (postdiction), and testability (or empirical falsifiability; Jaccard & Jacoby, 2020). Moreover, a good theory should also display the desirable qualities of high explanatory and predictive power (i.e., provide plausible explanations and accurate predictions for a wide range of phenomena), a high capacity to integrate and organize existing knowledge, high clarity, and originality which, ideally, inspires future research (DeAndrea & Holbert, 2017; Jaccard & Jacoby, 2020; Shoemaker et al., 2004).

Two Core Problems

Drawing on previous literature and group discussions with media psychologists at all career stages during four theory workshops (N = ca. 70 participants), we identify two core problems in media psychological theory building. First, many of our theoretical concepts are too imprecise due to ambiguous terminology (i.e., jingle fallacies, Hanfstingl et al., 2024; Schmälzle & Huskey, 2023), poorly established definitions, and parallel but disconnected conceptualizations (i.e., jangle fallacies; Lawson & Robins, 2021). Additionally, we often lack the necessary descriptive information on the underlying phenomena (Besbris & Khan, 2017; Scheel et al., 2021). For example, to test mood management theory in non-experimental settings, we lack crucial information on the prevalence and boundary conditions of the phenomenon or the functional form (e.g., linear, sigmoid) that links media choice to mood fluctuations (Ernst et al., 2024).

Second, theory development is too inconsistent, as no theory canon exists that is collectively advanced; theories are therefore often incommensurable. For example, in research on interactivity, Bucy (2004, p. 375) identified a pattern that likely transfers to other areas: “following a period of initial fascination, researchers tend to lose interest and (after introducing their competing definition or typology) move on to other topics.” Yet theoretical progress comes from continuous collective efforts to test, critique, extend, and revise a theory. Due to these two key problems (imprecision and inconsistency), our field’s theories often fall short of the necessary and desirable quality criteria (see above).

Six Underlying, Field-Specific Challenges

To suggest solutions to these problems, we first identify underlying challenges. The problems can partly be attributed to general challenges in psychological research, especially the complexity of the human psyche (Meehl, 1978) and academic incentive structures that reward hypothesis testing over phenomenon description (Scheel et al., 2021) and innovation (e.g., “new” concepts) over theory refinement. Additionally, we argue that media psychologists face six field-specific challenges: (1) a large influence of media discourses on terminology, (2) rapid sociotechnical development that constantly changes the explanandum (i.e., the moving target problem), (3) high variance from two sources, humans (Meehl, 1978) and media (e.g., content, design; Reeves et al., 2016), (4) concepts embedded in convoluted construct hierarchies (Meier & Reinecke, 2021), (5) the need to distinguish media-specific from universal psychological effects (Walther, 2013), and (6) the ‘import’ rather than development of genuine media psychological theories (Trepte, 2013).

A Way Forward

Against this background, we advocate a more systematic approach to theory development, building on recent advances in psychological theory construction methodologies. Specifically, theories may best be constructed along a “theoretical cycle” (Borsboom et al., 2021, p. 762) or a “derivation chain” (Scheel et al., 2021, p. 746). While these approaches differ somewhat, they can be summarized in five steps: (I) identify phenomena and form concepts, (II) develop valid measures, (III) establish relationships between concepts, (IV) specify boundary conditions and auxiliary assumptions of effects, and – only then – (V) derive and test concrete statistical predictions that would falsify the assumptions (Borsboom et al., 2021; Scheel et al., 2021).

The abovementioned field-specific challenges particularly obstruct phenomenon identification (step I). Following Scheel et al. (2021), we argue that media psychologists should therefore focus more strongly on non-confirmatory research practices such as describing, typologizing, and categorizing distinct media-related phenomena. We need to explore media psychological phenomena and their interrelations more thoroughly before we explain them. Additionally, the field-specific challenges complicate establishing logically consistent propositions about the relationships of concepts and their boundary conditions (steps III and IV). Formalizing theories (i.e., mathematical modeling) can help to overcome these challenges. The process of formalization not only requires scholars to specify their assumptions but also provides them with the means to evaluate their plausibility (e.g., simulations; Robinaugh et al., 2021; Van Rooij & Blokpoel, 2020). Additionally, non-confirmatory research practices such as parameter-range exploration help uncover the functional forms of concept relationships and boundary conditions.
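
To make the formalization step concrete, here is a minimal, hypothetical sketch (not any published model): a mood-management-style proposition written as a simple update rule, so that its assumptions (the choice rule, the repair rate, the noise level) become explicit and open to simulation-based plausibility checks. All parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_mood_management(steps=200, setpoint=0.7, repair_rate=0.3):
    """Toy formalization of a mood-management proposition:
    users choose mood-repairing content when mood falls below a
    setpoint; that choice nudges mood back toward the setpoint."""
    mood = np.empty(steps)
    mood[0] = 0.5
    chose_repair = np.zeros(steps, dtype=bool)
    for t in range(1, steps):
        chose_repair[t] = mood[t - 1] < setpoint          # media choice rule
        pull = repair_rate * (setpoint - mood[t - 1]) if chose_repair[t] else 0.0
        mood[t] = np.clip(mood[t - 1] + pull + rng.normal(0, 0.05), 0, 1)
    return mood, chose_repair

mood, chose = simulate_mood_management()
print(f"share of repair choices: {chose.mean():.2f}, final mood: {mood[-1]:.2f}")
```

Sweeping repair_rate across a plausible range is exactly the kind of parameter-range exploration mentioned above: it exposes which functional forms linking media choice to mood fluctuations the verbal theory actually permits.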

Overall, this position paper argues that while media psychologists face some field-specific challenges in theory development, our field can benefit greatly from recent methodological developments in general psychological theory building.



Trivial or relevant? Making inferences about “small” media effects

Yannic MEIER

Universität Duisburg-Essen, Germany

The field of media psychology is concerned with the study and interpretation of media effects, which are often described as “small”. However, there is no clear consensus on how to determine when these effects are too small to be of practical relevance. Though not clearly defined, practical relevance can depend on several factors: whether an effect is statistically reliable, whether it impacts a sufficiently large number of people, or whether its consequences are severe. This lack of consensus can undermine both the planning of studies and the validity of empirical findings, leaving room for subjective interpretation. In this position paper, I argue that media effects should not be evaluated solely based on their absolute size. Rather, additional criteria need to be considered when deciding about the practical relevance of media effects.

Media effects are the within-person changes in variables such as cognition (e.g., attitudes or knowledge), affect (e.g., sadness or joy), or behavior (e.g., health-related behavior or aggression) that occur during or after media use (Valkenburg et al., 2016). Researchers often use established effect size thresholds both when planning studies (e.g., in power analyses using a smallest effect size of interest (SESOI)) and when interpreting the practical relevance of observed effects. However, widely used benchmarks for “small”, “medium”, and “large” effects (e.g., β = |.10|, |.30|, |.50|) are arbitrary, not empirically derived, and should only be used as a last resort when no better information is available (Anvari & Lakens, 2021; Cohen, 1988; Correll et al., 2020; Funder & Ozer, 2019). More recent attempts to establish empirical benchmarks for within-person effects suggest that traditional thresholds are often too high, leading researchers to dismiss meaningful effects as trivial; some even argue that small effects can be as low as β = |.02| (Adachi & Willoughby, 2015; Orth et al., 2024). One question that remains unanswered, however, is how researchers can be sure that a particular effect is relevant in practice. I argue that relying solely on effect size benchmarks and magnitudes cannot provide a sufficient answer to the question of relevance.
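
As a rough illustration of how a SESOI drives study planning, the sketch below computes the sample size needed to detect a correlation-sized effect via the standard Fisher z approximation; the SESOI values echo the benchmarks discussed above and are purely illustrative.

```python
from math import atanh, ceil
from scipy.stats import norm

def n_for_correlation(r_sesoi, alpha=0.05, power=0.80):
    """Sample size to detect a correlation of size r_sesoi
    (two-sided test) using the Fisher z approximation."""
    z_a = norm.ppf(1 - alpha / 2)
    z_b = norm.ppf(power)
    return ceil(((z_a + z_b) / atanh(r_sesoi)) ** 2 + 3)

# Illustrative SESOIs drawn from the benchmarks discussed above.
for r in (0.02, 0.10, 0.30):
    print(f"SESOI r = {r:.2f} -> n = {n_for_correlation(r)}")
```

The steep growth in required n (roughly 20,000 participants for r = .02 versus under 100 for r = .30) shows why the choice of SESOI is consequential, and why it should rest on substantive criteria like those below rather than on arbitrary benchmarks.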

In the following, I present five criteria that can be considered either as theoretical arguments or as part of the study itself in order to reasonably determine whether an observed effect size is practically (ir)relevant. This list is neither exhaustive, nor is each criterion meant to stand in isolation from the others. Rather, a combination of the following criteria should help to assess whether a seemingly trivial effect is actually (ir)relevant.

1) Subjective Perception. Asking participants whether they subjectively perceived any difference in a variable of interest (e.g., mood) after media exposure, or between two time points, can easily inform researchers about practical relevance. By using anchor-based methods in surveys, a smallest subjectively experienced difference can be calculated, which can in turn inform both setting a SESOI and evaluating the relevance of an effect (Anvari & Lakens, 2021; Devji et al., 2020). Including anchor items in studies in which a change score can be computed is a low-effort technique from which the field could benefit greatly, as it can produce empirical evidence about the subjective relevance of effects. However, these methods also have drawbacks, as they assume that people have accurate memories of past states. Also, even effects that are not instantly perceivable might be (or become) practically relevant.
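
A minimal sketch of the anchor-based logic, using simulated, purely illustrative data (not from the cited studies): the smallest subjectively experienced difference is estimated as the mean change score among participants whose anchor response indicates the smallest noticeable improvement. Variable names and parameter values are hypothetical.

```python
import numpy as np
import pandas as pd

# Simulated data: pre/post mood change (0-100 scale) plus an anchor item
# ("How did your mood change?": -1 worse, 0 no change,
#  1 a little better, 2 much better).
rng = np.random.default_rng(7)
n = 400
anchor = rng.choice([-1, 0, 1, 2], size=n, p=[0.15, 0.35, 0.35, 0.15])
change = rng.normal(loc=anchor * 4.0, scale=3.0)  # true shift tied to anchor

df = pd.DataFrame({"anchor": anchor, "change": change})

# Anchor-based estimate: mean change among those who noticed
# the smallest subjective improvement ("a little better").
sesoi = df.loc[df["anchor"] == 1, "change"].mean()
print(f"smallest subjectively experienced difference: {sesoi:.1f} points")
```

The resulting value can then serve directly as the SESOI in a power analysis or as the threshold against which an observed effect is judged.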

2) Appearance. Media effects differ in their appearance, which can either be immediate, cumulative, or delayed (Thomas, 2022). Knowledge about an effect’s appearance can lead to concluding that an effect has already reached its maximum or that it might accumulate over multiple occasions of media use or exposure (Funder & Ozer, 2019; Götz et al., 2022). Accordingly, deciding about the relevance of a “small” effect should also include evaluating the most likely frequency of media use or exposure. If the frequency is typically high, even effect sizes that appear to be trivial might cumulate over longer time periods.

3) Duration. Speculating about cumulative media effects requires evaluating the duration of the effect, too. Media effects are theorized to either vanish immediately, fade over time, or persist continuously (Baden & Lecheler, 2012; Thomas, 2022). Media effects that vanish shortly after media use or exposure are very unlikely to cumulate and must accordingly be larger in size than effects that have the potential to cumulate. The duration of an effect should either be investigated or at least be sufficiently justified when arguing about the cumulation of effect sizes.

4) Habituation. Repeated media use or exposure does not necessarily result in cumulative effects but can also be subject to counteracting mechanisms such as habituation (Anvari et al., 2023; Funder & Ozer, 2019). Habituation describes the process by which effect sizes decrease with repeated media exposure or use. This might even be true for media effects that appear to be immediately relevant. Thus, when assessing the practical relevance of a found effect, researchers should either measure or evaluate the potential occurrence of habituation.
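
A back-of-the-envelope sketch of how criteria 2 to 4 interact: a small per-exposure effect is summed over repeated exposures, with an optional habituation parameter shrinking each successive contribution. All numbers are hypothetical and serve only to show that cumulation and habituation pull in opposite directions.

```python
import numpy as np

def cumulative_effect(per_exposure=0.02, exposures=200, habituation=0.0):
    """Sum a small per-exposure effect over repeated exposures;
    habituation shrinks each successive effect multiplicatively."""
    effects = per_exposure * (1 - habituation) ** np.arange(exposures)
    return effects.sum()

print(f"no habituation:  {cumulative_effect():.2f}")
print(f"2% habituation:  {cumulative_effect(habituation=0.02):.2f}")
print(f"10% habituation: {cumulative_effect(habituation=0.10):.2f}")
```

Without habituation, a per-exposure effect of .02 sums to 4.0 over 200 exposures; with 10% habituation per exposure it caps near 0.2. Whether a "trivial" effect becomes practically relevant thus hinges on its appearance, duration, and habituation jointly, not on its one-shot magnitude.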

5) Context. Although effect sizes can be standardized to be comparable across studies, researchers must still consider the context in which an effect occurs: even a “large” effect might not be of practical relevance in one context whereas a seemingly trivial effect might be in another (Cortina & Landis, 2010). For example, when media effects are assessed through self-reports, effect sizes might be artificially inflated or biased and even “medium” or “large” effects might not be of practical relevance. Conversely, when actual media content or user behavior are assessed, even very “small” effects might be labelled as relevant because any non-zero effects might be informative.

To summarize, when assessing the practical relevance of media effects, researchers should either assess or theorize about the subjective experience, appearance, duration, and potential habituation of media effects, as well as the study context in which the effect occurs. When these (and potentially further) factors are considered, the field can move beyond answering the question of practical relevance by interpreting absolute magnitudes alone.

References can be found by following this (anonymized) OSF-link: https://osf.io/ay5kq?view_only=490e8edfad86417983f04d4d4ea5191b



A Theoretical and Methodological Framework for Analyzing Processes in Communication Research

Jens VOGELGESANG1, Hannah FRÜH2

1Universität Hohenheim, Germany; 2HAW Hamburg, Germany

Processes are central to all scientific disciplines, though their conceptualization and measurement vary. Generally, a process is a sequence of states evolving over time. In the social sciences, most processes exhibit stochastic regularities, with exceptions like Engel’s and Hick’s Laws. In media psychology, a process perspective aids research on entertainment, mood management, suspense, affect, and knowledge acquisition. Despite methodological advancements, measuring processes with rigor — particularly regarding validity — remains challenging. While modern AI-based techniques have become a strong driver of automatic statistical model identification, purely inductive approaches are insufficient, especially for causal hypotheses. This paper introduces the concept of the Data Generating Process (DGP) and illustrates how DGP-based thinking supports a systematic approach to process modeling. The proposed DGP framework, tailored for applied researchers, is rooted in Karl Popper’s hypothetico-deductive methodology.

Econometrician David F. Hendry introduced the DGP concept to highlight that statistical modeling aims to approximate the underlying stochastic mechanisms producing observed data (see Figure 1).

While originally applied to economic processes, this concept is equally relevant in media psychology. The DGP represents the mechanism that produces the observable phenomenon, serving as a key reference point for both theory and statistical modeling. Note that the true DGP, as a ground truth, is never observable, except in the case of simulated data, where the underlying process is explicitly defined. Since the true DGP is unknown and often too complex to model directly, media psychologists must systematically simplify it while preserving its essential characteristics. The goal is to derive a well-specified statistical model that captures the key relationships governing the data.
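
As a minimal illustration of DGP-based thinking, the following sketch defines a simulated ground-truth DGP (knowable only because the data are simulated) and then fits a statistical model intended to approximate it. The functional form and parameter values are hypothetical, chosen merely to show the logic.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)

# A simulated ground-truth DGP (observable only because we define it):
# well-being depends nonlinearly on daily use, plus person-level noise.
n = 1000
use = rng.uniform(0, 6, n)                       # hours of daily media use
wellbeing = 0.8 * use - 0.12 * use**2 + rng.normal(0, 1, n)

# A deliberately simplified statistical model of that DGP.
X = sm.add_constant(np.column_stack([use, use**2]))
fit = sm.OLS(wellbeing, X).fit()
print(fit.params)   # should approximate (0, 0.8, -0.12)
```

Because the DGP is known here, one can check directly whether the fitted model recovers its key relationships; with real data, this correspondence can only be argued for theoretically, which is precisely why the framework matters.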

Before applying the DGP framework, we first present a process model for illustrative purposes. This model is intentionally kept simple for clarity.

The following example models the psychological effects of Instagram use on well-being through a two-level temporal dynamic framework, capturing both short-term (situational) and long-term (trait) effects.

Expectation Prior to Instagram Use (Baseline): Before using Instagram, individuals have high expectations that their situational well-being (State Well-Being, SWB) will improve. These expectations are generally positive, based on the belief that Instagram use will enhance their well-being.

Development of Situational Well-Being During Usage: While engaging with Instagram, situational well-being initially increases significantly. However, this increase reaches a plateau, where well-being stabilizes at a high level regardless of continued platform use.

Negative Discrepancy Following Use: After finishing an Instagram session, individuals experience a gap between their initial expectations and their actual post-usage well-being. Although situational well-being remains relatively high, it does not always match or exceed anticipated levels. This discrepancy is particularly significant due to its long-term psychological effects.

Long-Term Impact on General Well-Being: Over a 28-day period, the repeated experience of this post-usage discrepancy contributes to a gradual decline in trait-related well-being (General Well-Being, GWB). The more frequently an individual uses Instagram and encounters this expectation-reality gap, the greater the cumulative decline in long-term well-being.

Before proceeding with data simulation and statistical model parameterization, it is essential to systematically address key methodological considerations related to process modeling.

Processes are Temporally Extended: The process of Instagram use and its impact on well-being unfolds over time, with situational well-being (SWB) transitioning through distinct phases, including pre-use expectation, in-use elevation, post-use discrepancy, and long-term decline in general well-being (GWB). This sequence highlights the importance of accumulation effects, as repeated post-usage discrepancies contribute cumulatively to the decline in GWB.

Processes Have Identifiable States and Transitions: The Instagram well-being process consists of identifiable states and systematic transitions between them. Before usage, individuals enter the expectation state, characterized by high anticipated well-being. During usage, situational well-being initially increases but stabilizes due to a ceiling effect. After usage, individuals experience a discrepancy between their expected and actual well-being. Over time, repeated experiences of this discrepancy contribute to a gradual decline in GWB. In addition to these systematic transitions, the model must account for stochastic elements, such as daily mood variations, which may introduce fluctuations in state transitions.

Processes Have a Characteristic Dynamic: The process dynamics of Instagram use and well-being vary depending on the temporal perspective. The initial increase in SWB during usage occurs rapidly, while the long-term decline in GWB unfolds gradually over a 28-day period. The observed speed of change depends on the measurement perspective. When measured immediately after use, SWB appears relatively high, whereas longitudinal assessments reveal the cumulative decline in GWB.

Processes Can Be Regular or Chaotic: The Instagram well-being process follows a systematic trajectory in which expectations, in-use experiences, and post-use discrepancies contribute predictably to GWB decline. However, individual differences in usage patterns and external social factors introduce stochastic variability, making the exact trajectory difficult to predict for any single user. While the overall trend is regular, incorporating random variations in expectation formation, discrepancy intensity, and susceptibility to well-being decline is essential.

Processes Can Be Influenced by Internal and External Factors: Internal factors, such as prior expectations, mood states, and habitual Instagram use, may affect how users anticipate and respond to media consumption. External factors, such as algorithm-driven content exposure, may also play a crucial role in shaping the perceived discrepancy.

Processes Have Multiple Levels of Analysis: The well-being effects of Instagram use manifest across multiple levels of analysis. At the micro-level, users experience moment-to-moment fluctuations in SWB before, during, and after usage. At the macro-level, the aggregation of individual experiences leads to broader trends in social media-related well-being concerns. To accurately capture these dynamics, the model should be structured to allow for hierarchical analyses.

Processes Can Be Reversible or Irreversible: The effects of Instagram use on well-being can be either reversible or enduring, depending on the timescale and individual susceptibility. In the short term, situational well-being exhibits a reversible pattern, with increases during usage and declines post-usage. However, the long-term decline in GWB may exhibit partial irreversibility, especially if frequent and prolonged negative discrepancies accumulate over time. By differentiating between transient and enduring effects, the model can more effectively capture whether individuals can recover from short-term well-being fluctuations or whether repeated exposure leads to lasting psychological consequences.
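
Drawing these considerations together, a minimal simulation sketch of the illustrative Instagram model might look as follows. All parameter values (expectation level, plateau, decay rate, noise) are hypothetical placeholders, not empirical estimates; the sketch only encodes the states, transitions, and stochastic elements described above.

```python
import numpy as np

rng = np.random.default_rng(3)

def simulate_user(days=28, expectation=0.9, plateau=0.75,
                  decay=0.004, noise_sd=0.03):
    """Minimal sketch of the illustrative two-level model:
    daily SWB rises to a plateau during use; the post-use gap
    between expected and actual SWB chips away at trait GWB."""
    gwb = np.empty(days + 1)
    gwb[0] = 0.8                                   # baseline trait well-being
    for d in range(days):
        mood = rng.normal(0, noise_sd)             # stochastic daily variation
        swb_post = min(plateau + mood, 1.0)        # in-use rise capped at plateau
        discrepancy = max(expectation - swb_post, 0.0)
        gwb[d + 1] = gwb[d] - decay * discrepancy  # cumulative long-term decline
    return gwb

gwb = simulate_user()
print(f"GWB day 0: {gwb[0]:.3f} -> day 28: {gwb[-1]:.3f}")
```

Even this toy version makes the methodological points tangible: the process is temporally extended, has identifiable states and transitions, mixes systematic dynamics with stochastic variation, and operates on two timescales (daily SWB versus the 28-day GWB trajectory).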