Conference Agenda

Social Media and Discourse
Wednesday, 18/Mar/2020:
2:10pm - 3:40pm

Session Chair: Kristoffer Nielbo
Location: Hall D
Level -1

Long Paper (20+10min)

“Memes” as Activism in the Context of the US and Mexico Border

Martin Camps

University of the Pacific, US

Memes function as “digital graffiti” in the streets of social media, a cultural electronic product that satirizes current popular events, and can be used to criticize those in power. The success of a meme is measured by its “virality” and the mutations that are reproduced like a germ or a part of the genetic trend of subculture societies. I am interested in analyzing these ekphrastic texts in the context of the construction of the wall between the US and Mexico. I examine popular memes in Mexico and the US from both sides of the border. I believe these “political haikus” work as an escape valve for the tensions generated in the culture wars that consume American politics. The border is an “open wound” (Mexican writer Carlos Fuentes dixit) that was opened after the War of 1847 and resulted in Mexico losing half of its territory. Currently, the wall functions as a political membrane barring the “expelled citizens” of the Global South from the economic benefits of the North. Memes help to expunge the gravity of a two-thousand-mile concrete wall in a region that shares cultural traits, languages, and natural environment, a region that cannot be domesticated with symbolic monuments to hatred. Memes are rhetorical devices that convey the absurdity of a situation, as in a recent popular meme that shows a colorful piñata on the edge of the border, a meme that infantilizes the State-funded project of a fence. The meme’s iconoclastography sets in motion a discussion of the real issues at hand—global economic disparities and the human planetary right to migrate.

The term meme was coined by Richard Dawkins, a British evolutionary biologist, in 1976 in his book The Selfish Gene as a unit of cultural transmission. He wrote: “We need a name for the new replicator, a noun which conveys the idea of a unit of cultural transmission, or a unit of imitation. ‘Mimeme’ comes from a suitable Greek root, but I want a monosyllable that sounds a bit like ‘gene’. I hope my classicist friends forgive me if I abbreviate mimeme to meme.” (The Selfish Gene 192). There are many popular memes that relate to different cultural trends, such as “Leave Britney Alone,” “Gangnam Style,” “Situation Room,” “Advice Dog,” “LOLcats,” and “Success Kid,” but in this presentation I will concentrate on the genre of “border memes”. I offer a cross-cultural study of border memes, a kind of cultural software, produced in Mexico and the United States about the mutual issue of the border wall, an issue that was raised during the 2016 American presidential campaign and continues to this day.

Short Paper (10+5min)

Assembling the Unrepresentable: Allegories of Violence on Digital Platforms in Latvia

Kārlis Lakševics

University of Latvia, Latvia

While ‘the Internet’ is often criticized for providing access to graphic materials representing spectacular forms of violence, the designed, managed and loosely regulated spaces of social media, news platforms and e-learning interfaces engage with violence in a rather singular digital aesthetic (Galloway, 2012). Set by the rules of platform participation (Pangrazio, 2019), regimes of visibility (Bucher, 2012) and dominant databases of stock pictures, violence on digital platforms ranges from spectacles of intimacy to affective fusion of text and the stock image.

In the paper I discuss three recent studies of violence within various digital platforms: (1) qualitative research on youth perspectives on cyberbullying on social media; (2) qualitative research on the production and user experience of an e-learning course for kindergarten safety; (3) a media analysis of the representation of violence on Latvian news platforms. By comparing these cases, I argue that the affordances of platforms proliferate singular images of violence, thus demeaning it and individualizing harm as a personal reaction ‘behind the screen’ that replicates neoliberal ideologies and other forms of victim-blaming. “It’s just the internet!”, a phrase often used by students, becomes both a digitally literate reading of an encounter and a demeaning of violence, harm and othering.

Following Galloway’s framework, I argue that engagements with violence within the digital aesthetic of the studied interfaces work in the realm of the allegorical and unrepresentable. On one side, there are harmful variations of messaging, commenting and unsolicited publishing of content that can be recognized as having violent effects and that can proliferate on loosely regulated media. At the same time, these often consist of messages that relate to online othering and normative shaming and are in this way legitimized by the people contributing the content. On the other side, there are designed and curated efforts to combat various forms of violence through media publications and e-learning courses that use the digital aesthetic to condemn violence and provide strategies for non-violent practices. Whereas mean comments often feel the same and in proliferating cases become memes and objects of ridicule themselves, articles and e-learning courses use singular affective fusions of data. What is common to both sides is the singularity of the image and the ambiguity of violence, which at the same time provokes and resists the control of the interface and moderating bodies. In this way, the interface not only reproduces but embodies ideologies of violence.

Short Paper (10+5min)

Impact of Technologies on Political Behaviour: What Does it Mean to be a “Good Digital Citizen”

Ieva Strode

University of Latvia, Latvia

Although the opinion that the political activity of society has declined is relatively common (and confirmed by studies), there are objections that activity may not have diminished but rather changed its forms, replacing traditional forms of participation with others. Technological development, which offers a new environment and new forms of participation in the political community, plays a particular role in these changes. Both the “digital citizens” that have emerged as a result of the technological transformation and political institutions now need to redefine what it means to be a “good citizen” in terms of rights, duties and participation in this digital world.

Some changes in political behaviour mean just the “movement” of existing norms and traditional behaviour patterns to the digital environment: interest in politics, expression of political views and participation in some political activities may essentially preserve their traditional content, while the activities take place using new tools and platforms (e.g. social networks, new political communities etc.). However, the new digital environment also requires and creates new norms related to the duties of “good digital citizens” (behaviour such as following etiquette and obeying laws and rules specific to the digital world, or obtaining specific education and knowledge such as digital literacy) and to their rights (e.g. access to digital resources, and equality within society regarding access to these resources). It is also important to assess the activity of digital citizens in the political events of the non-digital world (e.g. elections, if e-voting is not available).

In my proposed presentation, I will discuss how the traditional rights and duties of “good citizens” are transferred to the digital world and what new rights and duties have emerged in the digital environment, and I will analyse Latvian population survey data on citizens’ opinions about the norms of good digital citizens and their actual behaviour.

Short Paper (10+5min)

Human-Centered Humanities: Using Stimulus Material for Requirements Elicitation in the Design Process of a Digital Archive

Tamás Fergencs, Dominika Illés, Olga Pilawka, Florian Meier

Aalborg University Copenhagen, Denmark

This study proposes the use of so-called stimulus material during interviews for requirements elicitation as part of the design process of a digital archive. Designing complex systems like digital archives is not straightforward, as users of these systems have specific needs and tasks that designers need to be aware of before the implementation phase can begin. Stimulus material can support requirements elicitation by capturing domain- and content-specific user tasks and needs that might otherwise be overlooked. We supplemented semi-structured interviews with observational sessions in which print-outs of historical pamphlets and office supplies were handed to participants, giving them the opportunity for an in-depth study of the material. We found that the use of stimulus material helps participants focus on the task at hand and articulate their actions and workflow steps more easily. Via thematic analysis, the participants’ statements were turned into a coding schema that serves as the requirements specification for an initial prototype.

Short Paper (10+5min)

3D and AI technologies for the development of automated monitoring of urban cultural heritage

Tadas Ziziunas, Darius Amilevicius

Vilnius University, Lithuania

Preservation of urban heritage is one of the main challenges for contemporary society. It is closely connected with several dimensions: global-local rhetoric, cultural tourism, armed conflicts, immigration, cultural change, investment flows, infrastructure development, etc. Organizations responsible for heritage management often have to deal with a lack of the resources that are crucial for proper heritage preservation, maintenance and protection. This is particularly problematic for countries with a low GDP or an unstable political situation.

A possible solution to these problems could be an automated heritage-monitoring software system based on 3D data and AI technologies, which increases monitoring efficiency (in cost, time and data objectivity). A system prototype was developed and tested by Vilnius University and Terra Modus Ltd. within the project “Creation of automated urban heritage monitoring software prototype” (2014). The next step is the creation of full-capability software, currently under development by Vilnius University within the project “Automated urban heritage monitoring implementing 3D and AI technologies”, financed by the Research Council of Lithuania (2018-2022). This paper presents only the general pipeline of the first stage of the project.

The proposed digital monitoring technique is based on efficient reality capture and the comparison of data over time. 3D laser scanning and digital photogrammetry are the most capable and sufficiently accurate data-collection methods. Information collected from measurements at different times can serve as input for artificial-intelligence analysis, which can automatically identify the valuable elements of interest and their changes over a given period. Such monitoring can be performed in a remote, non-destructive and cost-effective way. Accordingly, the main principles of the suggested solution are listed below.

Digital monitoring is based on seven conditions:

1. All objects in the monitoring process are tangible.

2. Physical valuables can be expressed as simple geometrical forms or mathematical expressions.

3. Monitored objects can be fully scanned or photogrammetrically processed.

4. Data from Lidar devices and data derived from photogrammetry are of the same quality (density, coverage, etc.).

5. The detection of cultural heritage can be performed by statistical and machine-learning algorithms.

6. Digitally processed results can be verified.

7. Digital monitoring is based on non-destructive, non-invasive 3D imaging and analytical technologies.

Regarding the digital data, there are two possible ways to detect and compare the selected valuables. In the first scenario, comparable data on the older status quo are lacking: no earlier 3D data of the selected cultural heritage exist. Newly collected data are then compared against mathematical rules that can be written in coded form; this set of rules describes the geometrical parameters of the selected valuables of the cultural heritage. In the second scenario, there are two data sets from different time periods, which are compared with each other. In both cases the comparison needs interpretation.
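As a toy illustration of the second scenario, two point clouds captured at different times can be compared point by point. The pure-Python point lists and the tolerance value below are assumptions for illustration, not the project's actual pipeline:

```python
import math

def changed_points(cloud_old, cloud_new, tol=0.05):
    """Flag points of the older cloud that have no counterpart within
    `tol` (same units as the coordinates) in the newer cloud.
    Brute-force nearest neighbour, O(n*m); real clouds need a spatial index."""
    return [min(math.dist(p, q) for q in cloud_new) > tol for p in cloud_old]

# A removed element shows up as points that were present before but are absent now.
old = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (2.0, 0.0, 0.0)]
new = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]   # the third point has disappeared
print(changed_points(old, new))   # [False, False, True]
```

In the first scenario, where no earlier cloud exists, the newly captured points would instead be checked against coded geometric rules describing the valuables.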

The first level of interpretation demonstrates facts of geometrical change. The second level depends on the particular legal status and the local legislation for managing cultural heritage (i.e. the meaning of detected changes depends on legislation). The first level of interpretation can be expressed with logical operators, for example describing an alteration as “status quo unchanged” or “reduction in volume by 65%”. The second level of interpretation can be a legal analysis of the first-level results, for example “reduction in volume = fact of illegal demolition works”.
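The two interpretation levels can be sketched as a pair of functions; the threshold, labels and legal reading here are invented for illustration and would in practice come from measurement tolerances and local legislation:

```python
def first_level(volume_before, volume_after):
    """Level 1: state the geometrical change as a fact."""
    change = (volume_after - volume_before) / volume_before
    if abs(change) < 0.01:  # assumed measurement tolerance
        return "status quo unchanged"
    direction = "reduction" if change < 0 else "increase"
    return f"{direction} in volume by {abs(change):.0%}"

def second_level(fact):
    """Level 2: a (hypothetical) legal reading of the level-1 fact."""
    if fact.startswith("reduction"):
        return f"{fact} = fact of illegal demolition works"
    return f"{fact} = no legal action required"

fact = first_level(100.0, 35.0)
print(fact)                 # reduction in volume by 65%
print(second_level(fact))   # reduction in volume by 65% = fact of illegal demolition works
```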

According to the most frequent alterations of the valuables of Vilnius Old Town’s buildings, the following list can be stated: a) elements of the roof; b) shapes of the roof; c) cornices; d) doors; e) gates; f) the primary height and width of the buildings; g) the primary housing intensity of the site; h) windows; i) chimneys. These are the main valuables that can be traced in the form of geometrical changes.

In order to perform the detection of valuables, we first need to train the AI algorithms to identify the desired valuables in the data – 2D pictures or 3D point clouds. Google’s TensorFlow with DeepLab v3+ and default settings was used.

These are semantic segmentation procedures, for which already annotated and trained data could in principle be reused. However, there is very little quality open data for this topic. Hence, for performing the digital monitoring processes, a new database was established. For future use of the software on other European old towns, only a database with additional 2D pictures of elements or 3D scans is needed.

The newly established database consists of pictures collected from the main streets of Vilnius’ Old Town. Labelbox is used for data annotation. Currently there are 420 high-resolution photos (12 megapixels) in which the first two classes (valuables) have been created: windows and doors. All doors and windows in the 420 photos were annotated manually, so that an algorithm can learn which pixels denote windows and which pixels stand for doors. For the training task, the open-source algorithms of Google’s TensorFlow were used. The result of annotation is an XML file: the annotated information is described in XML according to the Pascal VOC standard, one of the most popular and widely used annotation standards. To sum up, two types of files are exported from Labelbox: XML and JPG. The further process can be described as follows:

1. JPG and XML files are converted into RGB; the results are PNG files with segmentation masks – SegmentationClass;

2. Additionally, PNG raw files with the semantic-segmentation object contours are exported – SegmentationClassRaw;

3. The JPG files, PNG files (SegmentationClass) and PNG files (SegmentationClassRaw) are manually separated into two parts: “Train” (for training) and “Val” (for validation). The Train part is also automatically separated into train and test parts in order to identify how accurate the training results are compared with the human manual annotation. Hence, Train, Val and TrainVal index files are generated;

4. According to the index of the JPEG, PNG and PNG (raw) files, we generate the special file format required for TensorFlow training – TFRecord (Train, Val and TrainVal);

5. The system is trained using the TFRecord files. In order to get the most accurate results, many hyperparameters should be optimized. This process is analysed in detail by Bergstra and Bengio.
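The Pascal VOC XML files mentioned in the pipeline can be read with Python's standard library alone; the tag layout below follows the VOC convention for object annotations, while the sample content itself is invented:

```python
import xml.etree.ElementTree as ET

SAMPLE = """<annotation>
  <filename>facade_001.jpg</filename>
  <object>
    <name>window</name>
    <bndbox><xmin>34</xmin><ymin>50</ymin><xmax>120</xmax><ymax>210</ymax></bndbox>
  </object>
  <object>
    <name>door</name>
    <bndbox><xmin>300</xmin><ymin>90</ymin><xmax>380</xmax><ymax>260</ymax></bndbox>
  </object>
</annotation>"""

def read_voc(xml_text):
    """Return a list of (class_name, (xmin, ymin, xmax, ymax)) tuples."""
    root = ET.fromstring(xml_text)
    boxes = []
    for obj in root.iter("object"):
        name = obj.findtext("name")
        bb = obj.find("bndbox")
        box = tuple(int(bb.findtext(k)) for k in ("xmin", "ymin", "xmax", "ymax"))
        boxes.append((name, box))
    return boxes

print(read_voc(SAMPLE))
# [('window', (34, 50, 120, 210)), ('door', (300, 90, 380, 260))]
```

In the project itself, this kind of parsing is handled inside the conversion scripts that produce the segmentation masks and TFRecord files.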

One of the biggest problems with hyperparameter optimization is overfitting. In the context of heritage monitoring, overfitting would mean that newly presented valuables – windows, for example – could not be identified properly. Various techniques can be applied to avoid it, e.g. early stopping: once progress shows that the error has stopped decreasing, training is stopped. The quality of prediction is measured by a loss function; there are various ways to calculate it, but in this experiment the default cross-entropy loss is used. The experiment results demonstrated that training progressed properly, because the loss function was gradually decreasing and the data were not overfitted. However, powerful computing resources are needed to finalize the whole experiment with all groups of valuables.
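A minimal sketch of the early-stopping rule mentioned above: training halts once the validation loss has not improved for a given number of checks. The loss values and parameters here are invented; a real run would read them from the TensorFlow trainer:

```python
def early_stopping(losses, patience=3, min_delta=1e-3):
    """Return the index at which training would stop, or None if it never triggers."""
    best = float("inf")
    since_best = 0
    for i, loss in enumerate(losses):
        if loss < best - min_delta:   # a real improvement
            best = loss
            since_best = 0
        else:
            since_best += 1
            if since_best >= patience:
                return i
    return None

# A loss curve that plateaus: training stops three checks after the last improvement.
losses = [1.0, 0.7, 0.5, 0.45, 0.44, 0.449, 0.448, 0.4485, 0.45]
print(early_stopping(losses))   # 7
```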

To sum up, the presented project is still at an early stage; however, the first laboratory experiments with the primary version of the pooled data resource, which achieved 80% accuracy in the semantic segmentation of objects into two classes (windows and doors), suggest that the chosen technology solutions and the developed methodology can be successfully adapted to achieve the project objectives.