No, ChatGPT isn’t ‘lying’ to you: How psychology educators can help address anthropomorphism to scaffold more accurate understanding of Large Language Models

    Research output: Contribution to conference (non-published works) › Other › peer-review

    Abstract

    While Large Language Models (LLMs) such as ChatGPT offer exciting opportunities in education and society more broadly, one major issue emerging in almost all discussions of LLMs is our tendency to interpret the interactions we have with these systems as 'conversations' with another sentient being. This pattern of anthropomorphising (attributing human traits to non-human constructs) and over-attribution bias (assuming inappropriate causal explanations) has potentially problematic and even dangerous consequences. As use of LLMs expands, users need to understand that, despite being very powerful in certain ways, these systems are not sentient - they do not think, reason, know and remember as we do, despite our propensity to assume they do. As experts in human behaviour and cognition, psychology educators have an important role to play in enabling students to develop a more accurate and nuanced understanding of what these systems are, and what they are not.
    Original language: English
    Pages: 1-1
    Number of pages: 1
    Publication status: Published - 9 Sept 2023
    Event: Australian Psychology Learning and Teaching Conference 2023 - Hobart, Australia
    Duration: 8 Sept 2023 - 10 Sept 2023

    Conference

    Conference: Australian Psychology Learning and Teaching Conference 2023
    Country/Territory: Australia
    City: Hobart
    Period: 8/09/23 - 10/09/23
