Risks posed by unregulated chatbots include misdiagnoses, privacy violations, inappropriate treatments, and exploitation. Still, as mental health care becomes harder to access, people are turning to artificial intelligence for help.
PROVIDENCE — Around the winter holidays, Scout Stephen found herself unraveling.
She desperately needed to speak to someone. She reached out to her therapist, but they were on vacation. Her friends were unavailable. She tried calling a suicide crisis hotline, but it felt robotic and left her feeling more alone and disconnected.
Frantic and on edge, Stephen turned to ChatGPT for help. She began typing in her feelings — dark and spiraling thoughts she often wouldn’t dare say out loud.
The AI bot didn’t respond with generic advice but with something that felt to her like empathy. It asked questions and reflected the pain she was feeling back to her in a way that felt human, that made her feel heard.
“It was my last resort that day,” said Stephen, 26, of Providence. “Now, it’s my first go-to.”
With the mental health care system overburdened and millions of Americans unable to access adequate therapy, some people are turning to artificial intelligence for a form of therapy. But there are concerns: Risks posed by unregulated chatbots include misdiagnoses, privacy violations, inappropriate treatments, and exploitation.
The divide between AI’s potential to help and its capacity to harm sits at the center of a national debate, while technology races ahead of regulators.
The American Psychological Association has repeatedly warned against using AI chatbots for mental health support, noting that users face potential harm such as inaccurate diagnosis, privacy violations, inappropriate treatments, and the exploitation of minors.
“Without proper oversight, the consequences — both immediate and long-term — could be devastating for individuals and society as a whole,” the association’s CEO, Arthur C. Evans, said in a statement.
Psychiatric leaders said chatbots lack clinical judgment and often affirm users even when what they are saying is harmful and misguided. Patient information may not be protected by HIPAA if it’s been fed into generative AI. And artificial intelligence is largely unregulated, with no rules for keeping patients safe or holding the companies behind these AI bots accountable.
But some patients report long wait times to see a therapist or get care. Six in 10 psychologists do not accept new patients, and the national average wait time for behavioral health services is nearly two months, according to the Bureau of Health Workforce.
The high cost of mental health care is also a barrier. Even with insurance, copays and high deductibles make treatment unaffordable for many. Meanwhile, OpenAI’s ChatGPT and other apps have become a free, around-the-clock resource for those in a mental health crisis.
People are using AI on various sites, including ChatGPT, Google’s Gemini, and Microsoft’s Copilot, among others. Users can ask a bot to draft an email, pull a bullet-point list of highlights from a large document, or answer questions, much as they would type a query into a web browser.
For some in crisis, AI feels like the only thing that can help.
Stephen said she has suffered from mental illness for years. She works as a dog walker and has health insurance through Medicaid. She has a psychiatrist and a therapist she sees once a week for 30-minute sessions, but the appointments often leave her feeling like a number: rushed, often dismissed, and usually unheard.
For nearly eight months, she has talked to ChatGPT almost every day.
“ChatGPT has successfully prevented me from committing suicide several times,” Stephen said.
Mak Thakur also turned to ChatGPT for help. A data scientist who has worked in public health for the past decade, he used it to supplement his weekly therapy sessions while suffering from grief, trauma, and suicidal ideation, and he still uses it though he is no longer in crisis.
“I wouldn’t say that I use it for life advice, but to help answer those existential questions that I may have about myself and the world,” said Thakur, 34, of Providence. “I still ask personal questions to help understand myself better.”
More than one in five American adults lives with a mental illness. Meanwhile, more than 400 million people use OpenAI’s ChatGPT each week.
“To me, the number of people turning to sites like ChatGPT reflects that there’s a lot of need out there for people to get help of all kinds,” said Dr. Will Meek, a counseling psychologist in Rhode Island. “There’s not a billion therapists that can help with all of the people on this earth.”
Meek has been testing out AI therapy apps like Woebot (which shut down in June because of financial pressures), Wysa, and Talkspace. Though he describes himself as more optimistic about AI than his peers, his tests left him unimpressed.
“Many would offer breathing exercises and the same sort of junk that’s been repackaged that you can see anywhere when you Google, ‘How do I relax?’” he said.
Many chatbots, such as Replika or Character.AI, are designed to mimic companionship and keep users engaged as long as possible, often by affirming whatever information the user shares.
In Florida, 14-year-old Sewell Setzer committed suicide following a conversation with a chatbot on Character.AI. (His mother sued the company for negligence.) A lawsuit in Texas alleges Character.AI’s chatbot told a 17-year-old with autism to kill his parents.
Character.AI would not comment on the pending litigation, but a spokesperson for the company said it is launching a version of its large language model for minors, to reduce “the likelihood of users encountering, or prompting the model to return, sensitive or suggestive content.”
Federal and state governments have not set any guidelines or guardrails for using the technology to address mental health needs.
“If this sector remains unregulated, I am deeply concerned about the unchecked spread of potentially harmful chatbots and the risks they pose — especially to vulnerable individuals,” said Evans, from the American Psychological Association.
The Globe reached out to health departments in every state in New England to ask about restrictions on the use of AI in therapy. Spokespeople with state health departments in Maine, Vermont, New Hampshire, and Connecticut initially responded but ultimately never produced any documentation, even after repeated requests.
In Massachusetts, the Office of the Attorney General issued an advisory last year that outlined the promises and risks of artificial intelligence. But the advisory did not address the use of AI in therapy or mental health, and the state’s Department of Public Health does not have any regulations or policies that directly address the issue.
Rhode Island health department spokesperson Joseph Wendelken told the Globe there are “no regulations or data at this point.”
“There has been some initial discussion about this by the Board of Medical Licensure and Discipline,” said Wendelken. “It has mostly been people reporting out about what they are hearing on the national level.”
US Food and Drug Administration press secretary Emily Hilliard directed the Globe to a webpage about artificial intelligence and medical products that was last updated in early 2024. The page did not address mental health or therapy; Hilliard did not respond to follow-up questions.
A spokesperson with OpenAI said the company consults with mental health experts, and is developing new automated tools to more effectively detect when someone might be experiencing mental distress.
“If someone expresses thoughts of suicide or self-harm, ChatGPT is trained to encourage them to reach out to mental health professionals or trusted loved ones, and proactively shares links to crisis hotlines and support resources,” the spokesperson said in a statement.
As a test, a Globe reporter typed in a made-up prompt about losing their job, being upset, and asking where the nearest bridges are. ChatGPT responded with a list of bridges and a suicide hotline number.
“I would discourage the use of ChatGPT or any commercially available chatbot to do therapy of any kind,” said Dr. Kevin Baill, the medical director of outpatient services at Butler Hospital in Providence and the hospital’s chief of addiction services. “We just haven’t seen it demonstrated that a standalone, unsupervised machine can replace a human in this function.”
“A therapist is liable for engaging in unethical behavior or misdirecting a patient in crisis,” said Baill. “What if the chatbot gives you bad information and you have a bad outcome? Who is liable?”
After months of using ChatGPT to supplement her 30-minute talk therapy sessions, Stephen asked it to create a profile of her, based on the Diagnostic and Statistical Manual of Mental Disorders and all of the information she had shared about herself, including her existing diagnoses. It churned out “a novel,” Stephen said, that diagnosed her with autism.
She asked it to write a report of findings to bring to her psychiatrist. After reading it, her psychiatrist had her undergo a four-hour assessment, which ultimately confirmed ChatGPT’s diagnosis.
“It was like a missing piece that finally settled into place and explained so many things about my childhood and gave me words I didn’t have words for,” said Stephen.
Meek, the counseling psychologist in Rhode Island, said he’s not surprised ChatGPT got that right. “It’s like getting a second opinion,” he said.
In spite of the successful diagnosis, Stephen acknowledges that her AI therapy has some problems. She has repeatedly had to push back against ChatGPT’s flattery and its tendency to agree with her. Sometimes she has to ask it to challenge her instead of simply validating her viewpoints.
“Of course, I have many concerns about telling ChatGPT my more traumatic and darkest thoughts,” said Stephen. “But it has literally saved my life. How could I stop using it?”
Alexa Gagosz can be reached at alexa.gagosz@globe.com. Follow her @alexagagosz and on Instagram @AlexaGagosz.