Data and algorithms are nowadays ubiquitous in many aspects of our society and lives. They play an increasing role in scenarios that may be perceived as low-risk, such as recommending which movies to watch, but they are also being deployed at or by various institutions in high-stakes domains such as healthcare provision and criminal justice. As part of the Responsible Data Science theme of the Digital Society research programme, we analyzed a variety of case studies in which the irresponsible use of data and algorithms created or fostered inequality and inequity, perpetuated bias and prejudice, or produced unlawful or unethical outcomes. From these case studies we identified a set of requirements that we believe both data science researchers and practitioners need to address to make the use of data and algorithms a responsible practice. Requirements and responsibility, however, come with challenges. We therefore also attempt to distill some general research challenges that we consider important. Finally, we make a series of suggestions for changes that could better facilitate research into the responsible use of data and algorithms, spanning computer science, social science, and ethical, legal, and societal perspectives, among others.
Our roadmap on the responsible use of data and algorithms can be read below. It was written by (with the support of the coordinators of the Responsible Data Science theme):
- Peter Bloem – Vrije Universiteit Amsterdam
- Oana Inel – Delft University of Technology
- Linda Rieswijk – Maastricht University