In this chapter, I examine the work of Floridi (with Sanders) on the notion of Levels of Abstraction (LoA) and its importance for the morality of artificial agents. I critique their attempt to characterise artificial agents specifically (and systems generally) as moral agents through the use of LoAs, threshold functions, and computer-systems concepts such as state transitions and interactivity. I do this by first examining their notion of morality and then their notion of agency, in particular contrasting agent versus patient and the agent as system. Essentially, they view moral agents as systems viewed through a particular LoA; this moral level of abstraction they specify as LoA 2. Their use of interactivity, autonomy, and adaptability is criticised and difficulties are noted. To cash out levels of abstraction, they give several examples; these, I claim, are particularly problematic. I then provide a systematic and comprehensive table of the relationships between interactivity, autonomy, and adaptability to suggest where these relationships might be strengthened. Finally, I take issue with the notion of natural LoAs, claiming that there are no natural LoAs. I conclude that the construction of LoA 2 is too artificial and too simple to count as a natural characterisation of morality.
Title of host publication: Luciano Floridi's Philosophy of Technology: Critical Reflections
Place of publication: Netherlands
Number of pages: 21
Publication status: Published - 2012