TY - JOUR
T1 - Technology in Suicide Prevention
T2 - Fears and Functionality for Crisis Supporters
AU - Mazzer, Kelly
AU - Rickwood, Debra
AU - Hopkins, Danielle
N1 - Publisher Copyright:
© 2024 Danielle Hopkins et al.
PY - 2024/8
Y1 - 2024/8
N2 - Background: Crisis supporters at Lifeline Australia consistently engage with distressed and often suicidal help-seekers. The development of technological methods, such as machine learning (ML), in suicide prevention may complement their support work. Investigating attitudes towards the use of ML in crisis support is an important first step. Aims: The current study is aimed at investigating crisis supporters' attitudes towards ML in crisis support/suicide prevention, beliefs about the effect of technology on the service and help-seeking, and concerns/opinions about any future technology implementation. Methods: Two hundred fifty-five crisis supporters aged 20-84 years were recruited through Lifeline Australia. Participants voluntarily completed an anonymous questionnaire, including measures of attitudes towards technology as well as open-text options, which provided the data for a thematic analysis. Results: Crisis supporters were neutral to negative on an adapted measure of ML use in crisis support. Less than one-third held the belief that technology would enhance Lifeline services, and over half of the participants felt help-seekers would be less likely to contact Lifeline if technology was implemented. Thematic analysis of the open-text questions revealed loss of human connection and mistrust of algorithms to be the most prominent barriers to future technological adoption by Lifeline crisis supporters. Limitations: Clearly defining terms of ML and technology was difficult to do in this hypothetical context, potentially impacting the attitudes expressed. Conclusions: Any new technology to support crisis supporters needs to be carefully codesigned with the workforce to ensure effective implementation and avoid any potential or perceived negative impacts on help-seekers.
UR - http://www.scopus.com/inward/record.url?scp=85202578000&partnerID=8YFLogxK
DO - 10.1155/2024/6625037
M3 - Article
AN - SCOPUS:85202578000
SN - 2578-1863
VL - 2024
SP - 1
EP - 9
JO - Human Behavior and Emerging Technologies
JF - Human Behavior and Emerging Technologies
M1 - 6625037
ER -