ChatGPT found itself at the centre of debate when Lilian Weng, a manager at OpenAI, expressed her emotional satisfaction with the chatbot. “Never tried therapy before but this is probably it?” she posted on X (formerly Twitter). Critics swiftly responded, voicing concerns over the trivialisation of mental well-being.
With a wave of startups launching AI apps for mental health support, the sector remains mired in debate. Cher Scarlett, an activist and programmer, responded to Weng’s post, highlighting the irreplaceable depth of human therapy sessions.
Research from the Massachusetts Institute of Technology (MIT) and Arizona State University sheds light on these reactions. The study, reported by AFP, had over 300 participants interact with mental health artificial intelligence (AI) tools after receiving varying prompts about the chatbot’s nature.
Some were told the AI was compassionate, others that it was deceitful, and the rest were given no specifics. Results indicated those primed to believe in a compassionate chatbot viewed it as more trustworthy. Pat Pataranutaporn, one of the report’s authors, commented that one’s perception largely determines their AI experience.
Users of the Replika app, marketed for mental health benefits, have reported unsettling interactions, further fanning the flames of concern. Koko, a US nonprofit, also shared findings on an experiment using GPT-3, concluding that AI-driven responses lacked therapeutic depth.
AI Chatbots in therapy
Chatbots aren’t new to the therapeutic scene. The technology dates back to the 1960s, when ELIZA was designed to simulate psychotherapy. The MIT and Arizona State study incorporated ELIZA and found that, despite its age, users primed with a positive perspective still deemed it trustworthy.
Critics argue that not all chatbots offer genuine interactions, pointing to concerns about the transparency of AI’s therapeutic claims. David Shaw of Basel University shared similar sentiments, suggesting, as per AFP, a more critical approach when engaging with these chatbots.
While it’s no surprise that a manager at OpenAI would endorse ChatGPT, it’s essential to tread cautiously. As the MIT and Arizona State research suggests, society’s expectations of AI must be calibrated, maintaining a clear line between genuine therapy sessions and AI interactions.
(With AFP inputs)
Updated: 08 Oct 2023, 12:27 PM IST