Conference Agenda

Overview and details of the sessions of this conference.

Session Overview
Session
3.5: Ideological Bias and Motivated Reasoning in Digital Contexts
Time:
Thursday, 11/Sept/2025:
2:30pm - 4:00pm

Session Chair: Hannah DECKER
Location: LK061


Presentations

Blinded by the Lies or Lifting the Blinds? Using Signal Detection Theory to Examine Correlates of Belief in Plausible vs. Implausible Conspiracy Theories

Lukas Fock1, Stephan Winter1, Lotte Pummerer2, Roland Imhoff3

1Rheinland-Pfälzische Universität Kaiserslautern-Landau, Germany; 2Universität Bremen; 3Johannes Gutenberg-Universität Mainz

Past research on conspiracy theories (CTs) has mostly either not been concerned with their plausibility or has implicitly assumed that CTs are inherently implausible (e.g., Nera & Schöpfer, 2023). We argue, however, that the plausibility of a specific CT should be explicitly considered, because this allows for a better understanding of the factors that contribute to (im)plausible conspiracy beliefs (cf. Batailler et al., 2022). Hence, we provide and test a framework that considers the possibility that conspiracies do happen and can be plausible (e.g., the Iran-Contra affair), while simultaneously accounting for the epistemically problematic aspects of belief in implausible CTs (e.g., the belief that 9/11 was an inside job). Drawing on Poth and Dolega (2023), we distill normative criteria that make a specific CT plausible (e.g., a balanced assessment of the available evidence) or implausible (e.g., lacking evidence, weakly grounded auxiliary beliefs, incorrect inferences). On this basis, we ask: which psychological abilities help individuals better discern plausible from implausible CTs? To address this question, we assess the perceived credibility of 16 specific CTs (eight plausible, eight implausible) in two pre-registered (e.g., https://aspredicted.org/bdzh-4k7w.pdf) online surveys in Germany (N=1144) and the UK (N=1181). We examine the tendency to believe in CTs regardless of their plausibility (response bias) vs. the ability to distinguish between plausible and implausible ones (discernment). We expect conspiracy mentality to be associated with response bias (H1), as it may shift the perceived prior probability of conspiracies being at play, increasing belief in any specific CT regardless of its plausibility. For discernment, we predict that cognitive reflection and actively open-minded thinking increase analytical processing, fostering accurate judgments about the plausibility of any specific CT (H2 & H3).
Additionally, news media knowledge could help to situate information obtained through media use, possibly leading to better discernment (H4).
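The discernment/response-bias distinction maps onto standard signal-detection indices (d' and the criterion c). The abstract does not state the exact scoring used in the study; the following is a minimal sketch of those standard indices, with the 6/8 and 2/8 endorsement rates being a purely illustrative example:

```python
from statistics import NormalDist

def sdt_indices(hit_rate: float, fa_rate: float) -> tuple[float, float]:
    """Standard signal-detection indices.

    hit_rate: proportion of plausible CTs rated credible ("hits").
    fa_rate:  proportion of implausible CTs rated credible ("false alarms").
    Returns (d_prime, c): discernment and response bias.
    """
    z = NormalDist().inv_cdf  # probit transform
    d_prime = z(hit_rate) - z(fa_rate)       # discernment: sensitivity to plausibility
    c = -(z(hit_rate) + z(fa_rate)) / 2      # response bias: overall tendency to believe
    return d_prime, c

# Illustrative respondent: endorses 6 of 8 plausible and 2 of 8 implausible CTs.
d, c = sdt_indices(6 / 8, 2 / 8)
```

With these rates the respondent shows positive discernment (d' > 0) and no net response bias (c = 0); a conspiracy-mentality-driven tendency to endorse all CTs would show up as a lower (more liberal) c rather than a higher d'.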



Replying to Dissenters in Online Discussion Forums: An Experimental Test of Uncongeniality Bias

Moritz Paul VOGEL

RPTU University Kaiserslautern-Landau

Individual behaviour in online environments is often linked to a preference for like-minded content. Typically, this preference refers to individuals' reception of information (selective exposure). However, people's behavioural patterns differ when there is an opportunity to reply (selective response). The uncongeniality bias (Buder et al., 2023) captures this circumstance: it refers to the tendency of people in online discussion forums to be more likely to reply to a discussion comment the greater the conflict between the person's attitude and the attitude expressed in the comment. Various studies provide empirical evidence for the uncongeniality bias (e.g., Buder & Anders, 2024; Buder & Said, 2023). So far, however, all studies on this topic have used a correlational design (i.e., the predictor variable was measured, not manipulated). The current study (with at least 210 participants) will examine the uncongeniality bias in an experimental design. To manipulate the independent variable (conflict), we intervene on participants' attitudes using framing techniques based on Cialdini's principles. The study therefore begins with an intervention on participants' attitudes, followed by an attitude measurement that serves as a manipulation check and as the basis for calculating conflict. Attitudes are measured with semantic differentials. Each discussion comment is presented together with the item for the dependent variable (willingness to reply). The forum topic should be controversial yet associated with weak prior attitudes; we therefore use a scenario about reducing cars in a fictitious German city. Due to the manipulation, we expect a low-conflict and a high-conflict group, and we hypothesize that willingness to reply is greater in the high-conflict group than in the low-conflict group. The current study is the first that would allow a causal interpretation of the uncongeniality bias.
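The abstract derives conflict from the measured attitude and the attitude expressed in a comment but does not specify the formula. One simple operationalisation, stated here purely as an assumption (attitude and comment stance on the same semantic-differential scale, conflict as normalised absolute distance):

```python
def conflict(attitude: float, comment_stance: float, scale_max: float = 7.0) -> float:
    """Hypothetical conflict score: absolute distance between a participant's
    attitude and the stance of a discussion comment, both on a 1..scale_max
    semantic-differential scale, normalised to [0, 1]. Not the study's actual
    formula, which is not given in the abstract."""
    return abs(attitude - comment_stance) / (scale_max - 1.0)

# A participant at 7 (strongly pro) reading a comment at 1 (strongly contra)
# yields maximal conflict; identical positions yield zero.
max_conflict = conflict(7.0, 1.0)
no_conflict = conflict(4.0, 4.0)
```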



Recognizing Truth, Updating with Bias? Critical Thinking and Motivated Belief Updating

Amancay Ancina1, Nicole Krämer1,2

1University of Duisburg-Essen, Germany; 2Research Center Trustworthy Data Science and Security of the University Alliance Ruhr, Dortmund, Germany

False information significantly shapes internet communication and societal perceptions, especially on social media, where disinformation spreads rapidly. Distinguishing true from false information is therefore crucial. Research indicates that individuals with higher critical thinking skills are better at differentiating fake from real news, while motivated reasoning suggests that preexisting beliefs may distort veracity assessments, leading individuals to attribute greater credibility to false information that aligns with their beliefs. Much of this research attributes such distortion to partisanship. In Germany, however, drawing clear links between political preferences and partisanship is challenging because parties hold overlapping stances. Moreover, solely asking for veracity assessments may not fully capture motivated reasoning. Therefore, we examine the interplay between motivated reasoning and critical thinking in shaping beliefs about information veracity and in belief updating, using a novel experimental approach introduced by Thaler (2024) that separates motivated reasoning from Bayesian updating and thereby makes it identifiable. In an online experiment with 933 participants, we first assessed critical thinking abilities using the Cognitive Reflection Test (CRT), the Critical Thinking Disposition Scale (CTDS), and the Need for Cognition Scale (NCS-6), and asked participants about their preferences on topics currently polarizing German society (e.g., migration). We then used an adapted version of Thaler's motivated reasoning paradigm, in which participants answered numerical questions on current German political topics (e.g., refugees and violent crime). After providing their median belief, participants received randomly assigned true or false information that was either consistent or inconsistent with their preferences.
Subsequently, participants assessed the information's veracity and had the opportunity to update their initial belief. Our study aims to determine whether participants are more inclined to assess preference-consistent information as truthful, whether they adjust their beliefs in the direction of their preferences, and to what extent critical thinking abilities influence these processes.
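The logic of separating motivated reasoning from Bayesian updating rests on comparing reported beliefs against a Bayesian benchmark. A simplified sketch of such a benchmark (the function and parameter names are assumptions for illustration, not the study's implementation): starting from a median belief, the prior that the true value lies above it is 0.5, and a message of known accuracy shifts that probability by Bayes' rule; systematic deviations from this posterior in a preference-consistent direction would indicate motivated reasoning.

```python
def posterior_above(prior_above: float, accuracy: float, message_above: bool) -> float:
    """Bayesian posterior that the true value lies above the participant's
    initial median, after a message claiming it is above (True) or below
    (False), where `accuracy` is the probability that the message is truthful.
    Illustrative benchmark only; see Thaler (2024) for the actual paradigm."""
    like_above = accuracy if message_above else 1 - accuracy   # P(message | above)
    like_below = 1 - accuracy if message_above else accuracy   # P(message | below)
    num = like_above * prior_above
    return num / (num + like_below * (1 - prior_above))

# From a median belief (prior 0.5), a message that is truthful 80% of the time
# should move the posterior exactly to 0.8 (or 0.2 for the opposite claim).
p_up = posterior_above(0.5, 0.8, True)
p_down = posterior_above(0.5, 0.8, False)
```

Because the Bayesian posterior from a median prior depends only on message accuracy, any asymmetry in how far participants actually move toward preference-consistent versus preference-inconsistent messages is attributable to motivated reasoning rather than rational updating.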



Illuminating the Ideological Lens: The (Dis-)Connect Between Ideological Bias Awareness and Ideological Bias Expression and its Relevance for Political Communication

Tobias Rothmund, Carolin-Theresa Ziemer, Christine Finn, Arne Stolp

Friedrich-Schiller University Jena

Political thinking and behavior are shaped by ideological beliefs, a phenomenon that we describe as ideological bias. However, it is largely unclear to what extent individuals are aware of their own ideological bias, and how this awareness might be linked to bias expression in a polarized information environment. We introduce the concept of ideological bias awareness, a metacognitive belief structure reflecting how individuals recognize, worry about, and aim to control their own ideological biases. We distinguish three theoretical perspectives on the empirical relationship between ideological bias awareness and the expression of ideological bias, predicting a negative relation, no relation, or a positive relation. We outline the development and validation of an ideological bias awareness scale using three samples from Germany and one from the US (N1, Germany = 338; N2, US = 455; N3, Germany = 1,229; N4, Germany = 442). Finally, we empirically examine the proposed theoretical perspectives on the link between ideological bias awareness and bias expression. In an experimental feedback study with two points of measurement (Nt1 = 1,229; Nt2 = 1,001), we find evidence that people are able to control their ideological bias expression based on communicative feedback. However, across three correlational studies, we find no meaningful relationship between ideological bias awareness and ideological bias expression. Our findings are in line with research on bias blindness, advance our understanding of the challenges of mitigating ideological bias expression through metacognitive introspection, and highlight the complexity of addressing ideological polarization in political communication.