Abstract
Businesses, governments and other entities are increasingly presented
with AI-based ‘emotion recognition’ biometric systems, promoted as tools
offering robust insights into the honesty, comprehension or health support
needs of individuals, particularly students and employees. Australian
universities may consider adopting this technology as they expand their AI
engagement in learning/assessment platforms and student support
systems. Automated emotion recognition systems pose legal and human
rights challenges arising from their potential to be used deterministically;
their potential lack of reproducibility, replicability and validity; and their
susceptibility to bias, notwithstanding their possible utility. Further, they
rely on non-consensual or co-opted participation of individuals whose
dignity is eroded by consequent reduction from persons to data subjects.
This article evaluates such systems through a dignitarian human rights
lens, highlighting the need for a precautionary approach.
| Original language | English |
|---|---|
| Pages (from-to) | 1-17 |
| Number of pages | 17 |
| Journal | Griffith Journal of Law & Human Dignity |
| Volume | 12 |
| Issue number | 2 |
| DOIs | |
| Publication status | Published - 2025 |
UN SDGs
This output contributes to the following UN Sustainable Development Goals (SDGs)
- SDG 3 Good Health and Well-being
Title: EMOTION RECOGNITION TECHNOLOGIES AND DIGNITY IN AI-BASED SURVEILLANCE CAPITALISM