"They are observing us anyway" - Exploring the correlates of social media users' knowledge of vertical privacy
Universität Duisburg-Essen, Germany
In the age of information, data protection and the protection of privacy are continuously discussed in the public sphere. In this context, the privacy paradox, referring to the mismatch between people’s cognitions about privacy and their privacy-related behavior, has received prominent attention. Recently, scholars have proposed that the privacy paradox may be resolved by focusing on users’ specific literacy in online privacy and data protection. To this end, this study examined social media users’ literacy in vertical privacy, defined as the knowledge about how institutions such as governments, companies, or social networking platform providers can access users’ data, and about how to regulate one’s communication behavior in light of this knowledge. In contrast to other forms of online privacy literacy (e.g., regulating whether and how peers can access one’s personal information), users’ understanding of vertical privacy is less intuitive and commonly requires active knowledge gathering. Following the idea of the privacy paradox, we were interested in how users’ vertical privacy concerns, knowledge, and protective behaviors online are interconnected. These relationships were investigated in an online survey with 221 participants (Facebook users). Subjects answered different self-report scales measuring their privacy protection behavior (e.g., "I have restricted access to who can see my postings."), concerns about vertical privacy (e.g., “How concerned are you that providers of social network sites could sell your data (e.g., pictures) to a third party?”), attitudes toward psychological, social, and informational privacy (e.g., “I think giving information about my identity online is… very bad – very good.”), and their knowledge of vertical privacy online (e.g., “According to European law, it is legal to hand anonymous data to marketing research.”).
First, results demonstrate that neither users’ privacy concerns (rs = .12, p > .05) nor their attitudes towards privacy (rs = -.05, p > .05) was significantly associated with their actual privacy protection behavior, supporting the notion of the privacy paradox. Considering knowledge as an intervening variable in the privacy paradox, results showed that users’ knowledge about vertical privacy was positively related to the amount of reported privacy protection behavior: the more users knew about vertical privacy, the more measures they took to protect their privacy and their data (rs = .14, p < .05). Moreover, it was tested whether knowledge about vertical privacy moderates the relationship between privacy attitudes/privacy concerns and protective behavior. However, the results of multiple moderated regressions did not support this assumption.
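As an illustrative sketch only (not the study's actual analysis code or data), the moderated regression described above can be expressed in Python with simulated data; the variable names, effect sizes, and simulation are assumptions for illustration:

```python
import numpy as np

# Simulated data mimicking the study's setup (n = 221 Facebook users).
# Effect sizes and distributions are illustrative assumptions.
rng = np.random.default_rng(42)
n = 221
concern = rng.normal(size=n)                      # privacy concerns
knowledge = rng.normal(size=n)                    # vertical privacy knowledge
behavior = 0.14 * knowledge + rng.normal(size=n)  # protection behavior

# Moderated regression: the interaction term (concern x knowledge)
# tests whether knowledge moderates the concern -> behavior link.
X = np.column_stack([np.ones(n), concern, knowledge, concern * knowledge])
beta, *_ = np.linalg.lstsq(X, behavior, rcond=None)

# beta = [intercept, concern, knowledge, interaction]; an interaction
# coefficient near zero is consistent with the reported lack of moderation.
print(beta)
```

The same logic extends to the attitude measure by swapping the predictor; a significant interaction coefficient would have indicated moderation.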
The correlational pattern of the present findings suggests that social media users’ knowledge about vertical privacy may increase their protective behavior online. Based on this evidence, it seems promising to empower users to protect themselves by increasing their knowledge about how institutions are capable of collecting, and potentially abusing, user data. Notably, female participants knew less about this topic even though they valued informational privacy more highly than male participants did. Future research and intervention projects therefore seem necessary to foster users’ literacy and knowledge about vertical privacy in social media.
If my friend thinks that’s wrong, I do so, too. The interplay of an in-group member’s opinion and epistemic trust on opinions about news shared on Facebook
University of Hohenheim, Germany
The social networking site Facebook is popular for redistributing and consuming news articles. Here, users come into contact with topics and opinions that people they are related to consider worth sharing. Thus, the perception of journalistic news is embedded in interpersonal communication. The question guiding my research is how this specific condition of news perception affects an individual’s interpretation of news content. I assume that users draw on cues such as the sharing person’s opinion when making sense of the information. According to shared reality theory (Hardin & Higgins, 1996), this is rooted in humans’ strong need to share their understanding of the world with others. However, individuals do not want to achieve a shared reality with just anyone. Rather, epistemic trust in the other person is crucial. This usually applies to in-group members (Echterhoff, Higgins, & Levine, 2009). In this study, I investigated the effects of Facebook user group membership (FUGM) on perceived in-group membership, epistemic trust, and opinion about the topic of a news article. I tested whether FUGM indirectly affects opinion through perceived in-group membership and epistemic trust.
I conducted an experiment with 226 university students (70% female, age M = 21 years, SD = 2.66) based on a 2 (FUGM: in-group vs. out-group) x 2 (Facebook-user opinion: positive vs. negative) + 1 (control) between-subjects design. To manipulate FUGM, I presented a status update with a linked news article posted by 22-year-old Julia, who was either a student from the same university (in-group) or an electronics technician (out-group). The article was a balanced synopsis of European Internet policies published on Spiegel Online. In the status update, Julia either praised the European Union for going in the right direction or criticized it for being on the wrong track. Participants first saw Julia’s post with her comment and a teaser of the article and subsequently read the article. The control group received a neutral Facebook post containing the same article from the Spiegel Online Facebook channel.
There was no effect of FUGM on participant opinion in the positive comment condition (β = -.18, p = .14). However, in the negative comment condition, participants evaluated the EU Internet policy more negatively when Julia was from their in-group rather than from the out-group (β = -.25, p = .047). This effect was serially mediated (b = -.0387, 95% CI [-.1088; -.0005]): a stronger perception of Julia as a member of one’s in-group resulting from the FUGM manipulation (β = .39, p < .001) predicted epistemic trust in Julia (β = .29, p = .04), which in turn was associated with a more negative evaluation of the EU Internet policy (β = -.33, p = .008).
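As a hedged illustration of the serial mediation analysis reported above (FUGM → perceived in-group membership → epistemic trust → opinion, with a percentile-bootstrap confidence interval for the indirect effect), the following Python sketch uses simulated data; all variable names, effect sizes, and the simulation itself are illustrative assumptions, not the study's materials:

```python
import numpy as np

# Simulated data mimicking the design (n = 226); coefficients are
# illustrative assumptions loosely based on the reported path estimates.
rng = np.random.default_rng(1)
n = 226
x = rng.integers(0, 2, size=n).astype(float)  # FUGM: out-group (0) vs. in-group (1)
m1 = 0.39 * x + rng.normal(size=n)            # perceived in-group membership
m2 = 0.29 * m1 + rng.normal(size=n)           # epistemic trust
y = -0.33 * m2 + rng.normal(size=n)           # opinion about the policy

def coef(crit, preds, k):
    """OLS coefficient of the k-th predictor (intercept added)."""
    X = np.column_stack([np.ones(len(crit))] + list(preds))
    return np.linalg.lstsq(X, crit, rcond=None)[0][k + 1]

def indirect(idx):
    """Serial indirect effect a*d*b on a (resampled) index set."""
    a = coef(m1[idx], [x[idx]], 0)                   # X -> M1
    d = coef(m2[idx], [x[idx], m1[idx]], 1)          # M1 -> M2, controlling X
    b = coef(y[idx], [x[idx], m1[idx], m2[idx]], 2)  # M2 -> Y, controlling X, M1
    return a * d * b

# Percentile bootstrap of the serial indirect effect.
boot = np.array([indirect(rng.integers(0, n, size=n)) for _ in range(2000)])
ci = np.percentile(boot, [2.5, 97.5])
print(ci)  # a CI excluding zero indicates a serial indirect effect
```

The two mediator regressions control for all causally prior variables, mirroring the standard serial (two-mediator) mediation model.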
The study provides first insights into how the opinion of Facebook users who are considered in-group members and enjoy epistemic trust affects opinions about the topic of a news story. As I observed the proposed effects only in the negative comment condition, further investigations of the interplay between Facebook user opinion, news article properties, and epistemic trust are necessary.
The effect of information processing of political news about kept and broken election pledges on political trust
1University of Koblenz-Landau, Germany; 2University of Mannheim, Germany
Citizens receive almost all information on politics through the media. This is also the case for parties’ election pledges and their enactment. However, little attention has been given to the question of how the processing of political news about broken and kept election pledges affects citizens’ attitudes towards the political elite. This is surprising, given that news on a government’s performance should have consequences not only for citizens’ evaluation of the government, but also for their subsequent voting behavior. A central psychological mechanism underlying this expectation might be a positive or negative assessment of politicians’ credibility and trustworthiness as a consequence of their reliability. In our research, we thus expect information on broken election pledges to affect trust in political actors negatively, while information on kept election pledges should have a positive effect. The results of four completed studies are mixed. In a first study (N = 470), we found no direct link between media coverage on broken or kept pledges and the perceived trustworthiness of politicians in general. However, in a second study (N = 250), we found that information on both broken and kept election pledges led to lower trust towards the federal government and politicians in general, but only when compared to a control group. The findings of two smaller follow-up studies indicated that neither the low credibility of information regarding kept election pledges nor negative priming effects of the keyword “election pledge” can account for the negative effect of kept election pledges. In an ongoing two-wave panel experiment (N = 750), we therefore measure changes in trust levels due to information on broken/kept election pledges at the intra-individual level.
With this research design, we draw a comprehensive picture of the effects of information on election pledge fulfillment on trust in political actors and thus contribute to explaining the role of the media in the context of political opinion-forming and decision-making processes.
An alternative strategy for regulating privacy on online Social Networking Sites: Motives and influencing factors for using the Super Log-Off
University of Duisburg-Essen, Germany
Nowadays, Social Networking Sites (SNSs), such as Facebook, are firm components of people’s everyday lives. In view of the myriad of personal information disclosed on SNSs, it becomes important to focus on users’ awareness of online privacy. Providing sensitive information to others on SNSs is usually grounded in a process of weighing up the risks (loss of privacy) and benefits (e.g., social support, relationship development, self-presentation; Taddicken & Jers, 2011; Trepte & Reinecke, 2013) of respective self-disclosures. On the one hand, users of SNSs state that it is important to care about online privacy; on the other hand, their intention to withhold information sometimes fades into the background (Barnes, 2006). Nevertheless, prior research revealed that some users implement alternative privacy risk reducing strategies such as Whitewalling (deleting postings or comments from one’s profile; boyd, 2010; Raynes-Goldie, 2010), Social Steganography (using a secret language; boyd & Marwick, 2014), and the Super Log-Off (deactivating one’s Facebook account; boyd, 2010). So far, it is not entirely clear how users’ intentions to use such strategies are related to their personality and individual intentions, and whether implementing the Super Log-Off, which describes the deactivation of a user’s personal Facebook account for a certain period of time, is exclusively based on privacy protection motives or also on other ones. Since the Super Log-Off has mainly been analyzed qualitatively (boyd, 2010), we conducted a quantitative online study that aims at analyzing the relations between users’ personality, characteristics, and literacy and their intentions to use the Super Log-Off, as well as investigating the actual usage of the Super Log-Off.
127 participants (53.5% female), aged between 19 and 48 years (M = 25.70, SD = 4.45), took part in the study. To ensure a balanced level of knowledge concerning the Super Log-Off, a short explanation was provided. For further analyses, a sub-sample comprising only users who already had experience with the Super Log-Off (22 persons, 40.9% female) was created.
Results revealed a positive relation between users’ privacy concerns and their attitude toward using the Super Log-Off (R² = .17, F(1, 125) = 24.69, p < .001) as well as between their privacy concerns and the intention to use the Super Log-Off as a privacy risk reducing strategy (R² = .15, F(1, 125) = 22.94, p < .001). Furthermore, a bivariate analysis revealed a positive relation between users’ online self-disclosure behavior and their intention to use the Super Log-Off as a privacy risk reducing strategy (r(125) = .28, p = .001); this relation was mediated by users’ impression management. The strongest motives for using the Super Log-Off were “protection against personal attacks” (R² = .20, F(1, 125) = 30.51, p < .001, β = .44, t(125) = 5.52) and “avoid distraction” (R² = .26, F(1, 125) = 44.58, p < .001, β = .51, t(125) = 6.68), whereby the second motive is not grounded in privacy protection motivation. Finally, users’ motives for using Facebook were positively related to their intention to use the Super Log-Off (r(125) = .22, p = .015).
The current findings offer useful insights into an alternative privacy regulation strategy for improving online privacy. They further provide valuable implications for future studies on motives for using the Super Log-Off or other privacy protection strategies.