Keynote talk by Tarleton Gillespie
Public debate about content moderation has overwhelmingly focused on removal: social media platforms deleting content and suspending users, or opting not to. But removal is not the only available remedy. Reducing the visibility of problematic content is becoming a commonplace part of platform governance. Platforms use machine learning classifiers to identify content that is misleading, harmful, or offensive enough that, while it does not warrant removal under the site’s guidelines, it does warrant reduced visibility: demoting it in algorithmic rankings and recommendations, or excluding it from them entirely. This talk documents this shift and explains how reduction works. Tarleton Gillespie raises questions about how and why platforms now use the decision of what and what not to recommend as a form of content moderation. Despite our distrust of platforms and of how they have conducted content moderation up to this point, reduction policies may be the most mature step platforms have taken yet. On the other hand, reduction concentrates even more curatorial power in the hands of these elite, private, Western, profit-oriented intermediaries – using techniques that, at least so far, remain completely invisible to existing apparatuses of accountability.
Short bio
Tarleton Gillespie is a senior principal researcher at Microsoft Research and an affiliated associate professor in the Department of Communication and Department of Information Science at Cornell University. His most recent book is Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions that Shape Social Media (Yale, 2018).
Practical info
Digital Society Conference 2021
29 November 2021
13.00 – 13.55 CET
This talk will be in English
To join this keynote talk, please register for free via the link.