Abstract
While Large Language Models (LLMs) such as ChatGPT offer exciting opportunities in education and society more broadly, one major issue emerging in almost all discussions of LLMs is our tendency to interpret the interactions we have with these systems as ‘conversations’ with another sentient being. This pattern of anthropomorphising (attributing human traits to non-human constructs) and over-attribution bias (assuming inappropriate causal explanations) has potentially problematic and even dangerous consequences. As use of LLMs expands, users need to understand that, despite being very powerful in certain ways, these systems are not sentient: they do not think, reason, know, or remember as we do, despite our propensity to assume they do. As experts in human behaviour and cognition, psychology educators have an important role to play in enabling students to develop a more accurate and nuanced understanding of what these systems are, and what they are not.
| Original language | English |
| --- | --- |
| Pages | 1-1 |
| Number of pages | 1 |
| Publication status | Published - 9 Sept 2023 |
| Event | Australian Psychology Learning and Teaching Conference 2023 - Hobart, Australia. Duration: 8 Sept 2023 → 10 Sept 2023 |
Conference
| Conference | Australian Psychology Learning and Teaching Conference 2023 |
| --- | --- |
| Country/Territory | Australia |
| City | Hobart |
| Period | 8/09/23 → 10/09/23 |