
EMOTION RECOGNITION TECHNOLOGIES AND DIGNITY IN AI-BASED SURVEILLANCE CAPITALISM

Research output: Contribution to journal › Article › peer-review


Abstract

Businesses, governments and other entities are increasingly presented
with AI-based ‘emotion recognition’ biometric systems, promoted as tools
offering robust insights into the honesty, comprehension or health support
needs of individuals, particularly students and employees. Australian
universities may consider adopting this technology as they expand their AI
engagement in learning/assessment platforms and student support
systems. Automated emotion recognition systems pose legal and human
rights challenges arising from their potential to be used deterministically;
their potential lack of reproducibility, replicability and validity; and their
susceptibility to bias, notwithstanding their possible utility. Further, they
rely on non-consensual or co-opted participation of individuals whose
dignity is eroded by consequent reduction from persons to data subjects.
This article evaluates such systems through a dignitarian human rights
lens, highlighting the need for a precautionary approach.
Original language: English
Pages (from-to): 1-17
Number of pages: 17
Journal: Griffith Journal of Law & Human Dignity
Volume: 12
Issue number: 2
DOIs
Publication status: Published - 2025

UN SDGs

This output contributes to the following UN Sustainable Development Goals (SDGs)

  1. SDG 3 - Good Health and Well-being
