Levels of Abstraction and Morality

Richard Lucas

Research output: Chapter in Book (peer-reviewed)

Abstract

In this chapter, I examine the work of Floridi (with Sanders) on the notion of Levels of Abstraction (LoA) and its importance for the morality of artificial agents. I critique their attempt to characterise artificial agents specifically (and systems generally) as moral agents through the use of LoA, threshold functions, and computer-systems concepts such as state transitions and interactivity. I do this by first examining their notion of morality and then their notion of agency, particularly contrasting agent versus patient and the agent as system. Essentially, they view moral agents as systems viewed through a particular LoA; this moral level of abstraction they specify as LoA 2. Their use of interactivity, autonomy, and adaptability is criticised and difficulties are noted. To cash out levels of abstraction, they give several examples; these, I claim, are particularly problematic. I then provide a systematic and comprehensive table of the relationships between interaction, autonomy, and adaptability to suggest where these relationships might be strengthened. Finally, I take issue with the notion of natural LoAs, claiming that there are none. In the end, I conclude that the construction of LoA 2 is too artificial and too simple to count as a natural characterisation of morality.
Original language: English
Title of host publication: Luciano Floridi's Philosophy of Technology: Critical Reflections
Editors: Hilmi Demir
Place of Publication: Netherlands
Publisher: Springer
Pages: 43-63
Number of pages: 21
Volume: 8
Edition: 1
ISBN (Electronic): 9789400742925
ISBN (Print): 9789400742918
Publication status: Published - 2012