Conference Agenda

Overview and details of the sessions of this conference.

 
Session Overview
Session 17: Position Papers
Time: Friday, 08/Sep/2017, 9:00am – 10:00am

Session Chair: Markus Huff
Location: Room CIV 260

Presentations

Rediscovering field theory: Media psychological research from a situational point of view

Philipp Masur

Universität Hohenheim, Germany

Much of what we know about media effects is based on results from online surveys. Yet surveys should only be used when the variables of interest are reasonably stable over a longer period of time (Brosius, Haas, & Koschel, 2016, p. 166) or when people are asked to recollect some form of historical information (Schwartz & Stone, 2007, p. 12). Assessing irregular yet frequent user behavior in this way is problematic, as survey respondents have to aggregate several potentially diverging experiences in order to give coherent responses to a standardized set of items. These self-reports tend to be theory-driven and biased (Schwartz & Stone, 2007): situational variance is flattened into broad measures that represent unnecessarily aggregated, artificial, and ultimately unrealistic accounts of the behavior of interest.

As much research uses such aggregated measures, scholars may have placed too much importance on person-specific variables and failed to recognize both the intrapersonal variance in behavior and the power of the situation in determining people's behavior. This is somewhat surprising, as the fundamental attribution error – the tendency to overestimate the effect of dispositional factors and to underestimate the influence of situational factors – was described and studied much earlier (Nisbett & Ross, 1980; Ross & Nisbett, 2011). It is particularly surprising because almost all media psychological theories that aim to explain behavior acknowledge that the person and the situation are inextricably interwoven in explaining the likelihood of a certain behavior (Rauthmann, Sherman, & Funder, 2015, p. 363).

In light of this, I argue for a situational point of view in media psychological research. Researchers do, of course, use experiments to assess people's feelings, perceptions, or behavior within a situation, and in that respect experiments are a valuable approach to measuring situational behavior. However, experiments have several weaknesses of their own (Czienskowski, 1996; Hoyle, Harris, & Judd, 2008). Experimental designs generally maximize internal validity (i.e., isolating single effects and ruling out most alternative explanations) at the cost of external validity (i.e., the generalizability of the findings). This is consequential in two ways: first, laboratory experiments represent highly artificial situations; second, such a setting is just one situation among the multitude of diverging situations encountered in daily life. Hence, experiments are not a viable alternative for research that aims to explain general behavioral patterns.

As an alternative, I propose a theoretical framework that allows studying the influence of person factors as well as environmental factors on affects, perceptions, behaviors, and thinking processes across several different situations. Drawing on classical field theory (Lewin, 1951) and contemporary conceptions of person-environment interactions (Rauthmann et al., 2015), I define a situation as the entirety of circumstances that affect the perceptions, thinking processes, and behaviors of a person at a given time. I thereby adopt Lewin's basic formula and differentiate between person factors and environmental factors. I further classify person factors into personality traits (e.g., neuroticism, extraversion…) and trait-like qualities (e.g., opinions, attitudes, skills, long-term goals…) that are generally stable across situations, and internal factors such as perceived duties, motivations, goals, or feelings that fluctuate and thus vary across situations. Environmental factors, on the other hand, are mostly non-stable characteristics (i.e., they vary across situations) and can further be distinguished into interpersonal factors and external factors. Interpersonal factors refer to the characteristics of other people present at the given time (e.g., the number of people, the interpersonal assessment of these people…). External factors refer to what Rauthmann, Sherman, and Funder (2015) describe as physical cues of the environment (e.g., objects, artifacts…).
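For reference, the basic formula invoked here is Lewin's (1951) claim that behavior is a function of the person and the environment. Mapped onto the clusters just described (a rendering of mine, not notation from the abstract), it reads:

```latex
% Lewin's (1951) field-theoretical formula: behavior B is a function f
% of the person P and the environment E.
B = f(P, E)
% Illustrative mapping onto the clusters described above:
% P: personality traits and trait-like qualities (stable across situations)
%    plus internal factors (fluctuating across situations)
% E: interpersonal factors plus external factors (varying across situations)
```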

Differentiating these broad clusters of situational circumstances makes it possible to grasp the entire diversity of situations through a limited number of variables. Although more precise differentiations are possible, I believe that the proposed factors provide a reasonable framework for characterizing situations with sufficient specificity. Predicting variation in a certain behavior across situations then becomes possible by identifying the combinations of enforcing and inhibiting factors that should, theoretically, affect the respective behavior in a given situation.

In the paper, I will exemplify the application of this framework in the context of self-disclosure in smartphone-based communication situations. This application will show how the proposed framework makes situational circumstances amenable to empirical investigation. I will additionally advance a multi-method research design that combines traditional surveys, log data, and experience sampling methods. Such a design is well suited to explaining situationally varying behavior because it quantitatively measures the hierarchical structure of person-environment interactions across several situations. Results from an empirical study with N = 164 participants who took part in a two-week experience-sampling study (yielding 1,112 self-disclosure events) will be presented and discussed in light of the proposed framework. The study reveals that taking such a situational perspective yields surprisingly different findings compared to traditional research designs, as well as more accurate assessments of media psychological phenomena.

References

Brosius, H.-B., Haas, A., & Koschel, F. (2016). Methoden der empirischen Kommunikationsforschung: Eine Einführung (7th rev. and updated ed.). Studienbücher zur Kommunikations- und Medienwissenschaft. Wiesbaden: Springer VS.

Czienskowski, U. (1996). Wissenschaftliche Experimente: Planung, Auswertung, Interpretation. Weinheim: Beltz, Psychologie-Verl.-Union.

Hoyle, R. H., Harris, M. J., & Judd, C. M. (2008). Research methods in social relations (7th ed.). Fort Worth, TX: Wadsworth.

Lewin, K. (1951). Field theory in social science: Selected theoretical papers. New York: Harper & Brothers.

Nisbett, R. E., & Ross, L. (1980). Human inference: Strategies and shortcomings of social judgment. Englewood Cliffs, NJ: Prentice-Hall.

Rauthmann, J. F., Sherman, R. A., & Funder, D. C. (2015). Principles of situation research: Towards a better understanding of psychological situations. European Journal of Personality, 29(3), 363–381.

Ross, L., & Nisbett, R. E. (2011). The person and the situation: Perspectives of social psychology. London: Pinter & Martin.

Schwartz, J. E., & Stone, A. A. (2007). The analysis of real-time momentary data: A practical guide. In A. A. Stone, S. Shiffman, A. A. Atienza, & L. Nebeling (Eds.), The science of real-time data capture: Self-reports in health research (pp. 76–113). New York: Oxford University Press.


The Empirical Foundation of Media Psychology: Standards, Old and New

Malte Elson1, Andrew K. Przybylski2,3

1Ruhr University Bochum, Germany; 2Oxford Internet Institute, University of Oxford; 3Department of Experimental Psychology, University of Oxford

Concerns have been raised about the integrity of the empirical foundation of psychological science, such as average statistical power and publication bias (Schimmack, 2012), the availability of data (Wicherts et al., 2006), and the rate of statistical reporting errors (Nuijten et al., 2015). Currently, there is little information on the extent to which these issues also exist within the media psychology literature. Therefore, to provide a first prevalence estimate, we surveyed the availability of data, errors in the reporting of statistical analyses, and statistical power in all 146 original research articles published in the Journal of Media Psychology (JMP) between volumes 20/1 and 28/2.

DATA AVAILABILITY

Historically the availability of research data in psychology has been poor (Wicherts et al., 2006). Our sample of JMP publications suggests that media psychology is no exception to this, as we were not able to identify a single publication reporting a link to research data in a public repository or the journal’s supplementary materials.

STATISTICAL REPORTING ERRORS

Most conclusions in empirical media psychology are based on Null Hypothesis Significance Tests (NHSTs). Therefore, it is important that all statistical parameters and NHST results be reported accurately. We scanned all JMP publications with statcheck (1.2.2; Epskamp & Nuijten, 2015), an R package that works like a spellchecker for NHSTs by automatically extracting reported statistics from documents and checking whether reported p-values are consistent with reported test statistics and degrees of freedom.
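To make that check concrete, the sketch below re-implements its core logic (statcheck itself is an R package; this simplified Python function is illustrative, not statcheck's actual API): recompute the p-value from the reported test statistic and degrees of freedom, then compare it with the reported p-value, allowing for rounding.

```python
# Illustrative re-implementation of statcheck's core consistency test
# (statcheck is an R package; this simplified Python sketch is not its API).
from scipy import stats

def check_nhst(test, stat, df, reported_p, alpha=0.05, tol=0.005):
    """Recompute a two-sided p-value from a reported test statistic and
    compare it with the reported p-value, allowing for rounding (tol)."""
    if test == "t":
        recomputed = 2 * stats.t.sf(abs(stat), df)
    elif test == "F":
        recomputed = stats.f.sf(stat, *df)   # df is a (df1, df2) pair
    elif test == "chi2":
        recomputed = stats.chi2.sf(stat, df)
    else:
        raise ValueError(f"unsupported test: {test}")
    inconsistent = abs(recomputed - reported_p) > tol
    # "Grossly inconsistent": the significance decision at alpha flips.
    gross = inconsistent and ((recomputed < alpha) != (reported_p < alpha))
    return recomputed, inconsistent, gross

# Example: t(58) = 1.99 reported as p = .04 recomputes to p ~ .051,
# so it would be flagged as grossly inconsistent.
print(check_nhst("t", 1.99, 58, 0.04))
```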

Statcheck extracted a total of 1,036 NHSTs reported in 98 articles. Of these, 129 tests were flagged as inconsistent (i.e., the reported test statistic and degrees of freedom do not match the reported p-value), of which 23 were grossly inconsistent (the reported p-value is < .05 while the recomputed p-value is > .05, or vice versa). Forty-one publications reported at least one inconsistent NHST, and 16 publications reported at least one grossly inconsistent NHST. Thus, a substantial proportion of publications in JMP seem to contain inaccurately reported statistical analyses, some of which might affect the conclusions drawn from them.

STATISTICAL POWER

High statistical power is paramount in order to reliably detect true effects in a sample and, thus, to correctly reject the null hypothesis when it is false. Low power reduces the confidence that a statistically significant result actually reflects a true effect. A generally low-powered field is more likely to yield unreliable estimates of effect sizes and low reproducibility of results.

We are not aware of any previous attempts to estimate average power in media psychology. Further, searching all papers for the word “power” yielded only a single article reporting an a priori determined sample size.

Another strategy is to examine power for different effect sizes given the average sample size found in the literature. As in other fields, surveys tend to have healthy sample sizes, apt to reliably detect medium to large relationships between variables. The median sample size for survey studies is 327, allowing researchers to detect small bivariate correlations of r = .1 at 44% power (r = .3 and r = .5 both at > 99%).
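As a rough check, these survey figures can be reproduced with a Fisher z approximation for the test of a correlation; the Python sketch below assumes a two-sided test at alpha = .05 (the approximation is mine, and the abstract does not state which software the authors used).

```python
# Power of a two-sided test of H0: rho = 0, via the Fisher z
# approximation, at the median survey sample size of n = 327.
from math import atanh, sqrt
from scipy.stats import norm

def correlation_power(r, n, alpha=0.05):
    ncp = atanh(r) * sqrt(n - 3)        # noncentrality under H1
    z_crit = norm.ppf(1 - alpha / 2)
    return norm.cdf(ncp - z_crit) + norm.cdf(-ncp - z_crit)

for r in (0.1, 0.3, 0.5):
    print(f"r = {r}: power ~ {correlation_power(r, 327):.2f}")
# r = 0.1 -> ~0.44; r = 0.3 and 0.5 -> > 0.99, matching the figures above
```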

For (quasi-)experiments, the outlook is a bit different, with a median sample size of 107. Across all types of designs, the median condition size is 30.67. Thus, the average power of experiments published in JMP to detect small differences between conditions (d=.20) is 12% (d=.50 at 49%, d=.80 at 87%).

Again, we currently do not have reliable estimates of the average true, expected, or even observed effect size in media psychology. But even assuming that the effects examined could be as large as those in social psychology (average d = .43, according to Richard et al., 2003), our results indicate that the chance that an experiment published in our field will detect them is worse (at 38%) than flipping a coin – an operation that would also be considerably less expensive.
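The experimental figures in the two preceding paragraphs follow from a standard two-sample t-test power computation at the median condition size of 30.67; the sketch below (again my reconstruction, here using statsmodels) reproduces them, including the d = .43 case.

```python
# Power of a two-sided, two-sample t-test at alpha = .05 with the
# median condition size of ~30.67 participants per group.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
for d in (0.20, 0.43, 0.50, 0.80):
    power = analysis.power(effect_size=d, nobs1=30.67, ratio=1.0, alpha=0.05)
    print(f"d = {d:.2f}: power ~ {power:.2f}")
# d = 0.20 -> ~0.12, d = 0.43 -> ~0.38, d = 0.50 -> ~0.49, d = 0.80 -> ~0.87
```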

WHERE TO GO FROM HERE

Media psychologists use a wide range of methodologies to enhance our understanding of the role of media in human behavior. Unfortunately, as in other fields of social science, much of what we think we know may be based on a tenuous empirical foundation. Our analysis indicates that materials and data of few, if any, media psychology reports are openly available; that many reports lack the statistical power required to reliably detect the effects they set out to detect; and that a substantial number contain statistical errors, some of which might alter the conclusions drawn. Although these observations are deeply worrying, they also provide some clear guidance on how to improve our field.

SELF REFLECTION

Similar analyses in other fields suggest that the issues we discuss here go far beyond media psychology (or the JMP): About half of all articles in major psychology journals report at least one inconsistent NHST (Nuijten et al., 2015). Estimates of average statistical power in social psychology are similar to those in JMP (Fraley & Vazire, 2014), but as low as 18% in neuroscience (Button et al., 2013). Thus, we would like these findings, troubling as they are, to be taken not as a verdict, but as an opportunity for researchers, journals, and organizations to reflect similarly on their own practices and hence improve the field as a whole.

PREREGISTRATION

Exploratory work – research meant to introduce new ideas and generate hypotheses – often involves taking a look at the results of a study and flexibly listening to “what the data have to say”. This mode is fundamental to social science work in general and also creatively informs media psychology. Confirmatory work, by contrast, involves theory testing, a process that requires research questions and hypotheses to be clearly stated in advance of data collection. It also requires researchers to propose a sampling plan with sufficient power based on the expected effects. This mode allows researchers and audiences to trust the results of a study that rigorously tests a specific prediction.

CONCLUSIONS

The promise of building an empirically-based understanding of how we use, shape, and are shaped by technology is an alluring one. We firmly believe that incremental steps taken towards scientific transparency and empirical rigor will help us realize this potential.



 