Recent developments in smart technologies that seamlessly link networks and user-generated data via tracking tools and algorithms have added another dimension to the unequal distribution of digital benefits. Algorithmic processing assigns categorical meaning to the data that users generate during their online and digital activities, without the users’ direct participation. Algorithms are often designed for a specific purpose that may not benefit all users equally, creating algorithmic discrimination. We contextualise this issue within a social and digital exclusion frame, in which a lack of digital engagement creates new forms of disadvantage while reinforcing existing social inequalities. The aim of this paper is to investigate how automated digital tools that aim to provide better services to users are linked to existing issues of social and digital exclusion. Are there exclusionary practices built into the design of the algorithms that generate users’ data? How are these systems shaped by existing perceptions and biases that exclude certain forms of participation? What are the potential consequences of the entanglement between data inclusion and exclusionary practices? To investigate the interplay between social, digital and data exclusion, we adopt a case study approach, examining how exclusionary practices are embedded in the design of welfare services: the Australian Government’s Centrelink automated debt recovery system, the Department of Human Services’ client avatar ‘Nadia’ and the cashless welfare card. We examine how asymmetries in data access, control and use result in exclusionary practices at the design stage of tools that are intended to improve user interactions with services.
|Number of pages||1|
|Publication status||Published - 2018|
|Event||Data Justice Conference - Cardiff University, Cardiff, United Kingdom|
|Duration||21 May 2018 → 22 May 2018|