Conference Agenda

Session: Plenary Session 1
Time: Wednesday, 21 Oct 2020, 10:30am - 12:30pm

Session Chair: Mikko Tolonen, University of Helsinki
Location: Ziedonis Hall (ground floor) / Zoom

Session Abstract

Virginia Dignum
Professor of Social and Ethical AI at Umeå University; Scientific Director of WASP-HS; member of the EU High-Level Expert Group on AI

The last few years have seen enormous growth in the capabilities and applications of Artificial Intelligence (AI). Hardly a day goes by without news about technological advances and the societal impact of AI. There are high expectations of AI's potential to help solve many current problems and to support the well-being of all, but concerns are also growing about its impact on society and human well-being. In response, many principles and guidelines for trustworthy, ethical, or responsible AI have been proposed.

In this talk, I argue that ensuring responsible AI involves more than designing systems whose behavior is aligned with ethical principles and societal values. It is, above all, about how we design them, why we design them, and who is involved in designing them. This requires novel theories and methods for putting in place the social and technical constructs that ensure AI systems are developed and used responsibly and that their behavior can be trusted.


External Resource: https://zoom.us/j/92290824685
Presentations
Keynote speaker (50 min)

Responsible Artificial Intelligence

Virginia Dignum

Umeå University, Sweden




Keynote speaker (50 min)

A Vaccine Against Fake News

Jon Roozenbeek

Cambridge Social Decision-Making Lab, University of Cambridge

Jon Roozenbeek will talk about online misinformation and what can be done about it. The problem is pervasive, and governments, social media companies, think tanks, and civil society have struggled to find sustainable, scalable solutions. Jon will discuss what he and his colleagues have been doing to combat online misinformation by combining insights from social psychology with gamification. He will present Bad News, an online browser game in which players take on the role of a fake news creator and must spread as much fake news as they can in order to win. He will then discuss the research that has been conducted on the effects of the game: does it actually make people better at spotting misinformation? And if so, how can this solution be used at scale?