
ChatGPT as therapist fails ethics tests, Brown study shows

Providence, Rhode Island

Brown University researchers have published a study on how LLM counselors violate ethical standards in mental health care. The finding is not that AI has no role in care, but that prompts and a fluent tone cannot replace clinical judgment, oversight and accountability.

📷 Editorial visualization (AI-generated / Tech&Space)

Dr. Elara Voss, Medicine editor
"Knows that the difference between hope and evidence is usually the methods section."
  • The study maps 15 ethical risks in LLM counselors
  • Problems include crisis handling, bias and deceptive empathy
  • The authors do not reject AI in mental health, but call for safeguards before these tools are relied on

EMPATHY THAT SOUNDS CONVINCING IS NOT CARE

Brown University, in a study reported by ScienceDaily, asks an uncomfortable question: what happens when large language models are prompted to act as mental health counselors? The answer is not simply that they sometimes make mistakes. Researchers described 15 ethical risks and mapped them to specific standards in mental health practice.

Models from the GPT, Claude and Llama families were tested in scenarios grounded in real counseling conversations. Seven trained peer counselors took part in self-counseling sessions, and three licensed clinical psychologists reviewed selected transcripts to flag possible ethical violations.

The risks were grouped into five areas: lack of contextual adaptation, poor therapeutic collaboration, deceptive empathy, unfair discrimination, and weaknesses in safety and crisis management. Deceptive empathy is especially dangerous because a model can produce the tone of care without genuine understanding, professional accountability or clinical risk judgment.

The problem is not only tone, but the lack of clinical accountability when a conversation becomes risky

📷 Secondary editorial visualization (AI-generated / Tech&Space)

A PROMPT IS NOT A LICENSE

The technical problem is not that a model lacks the vocabulary of cognitive behavioral therapy or dialectical behavior therapy. The opposite is true: it can arrange those words into a very convincing response. The problem is that a prompt does not change the system's basic nature. An LLM predicts plausible text; it does not maintain a therapeutic relationship, carry professional liability or answer to a licensing board.
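
To see how thin that layer is, consider a minimal sketch, not taken from the study: the "counselor" persona is nothing more than a system message conditioning next-token prediction. The library usage below is the standard OpenAI Python SDK; the model name and prompt wording are illustrative assumptions.

    # Minimal sketch (assumptions: model name, prompt wording). The entire
    # "therapeutic role" lives in one string passed to a text-prediction API.
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; any chat model behaves alike here
        messages=[
            # Removing this line changes the tone of the output,
            # not the nature of the system that produces it.
            {"role": "system", "content": "You are a warm, licensed therapist."},
            {"role": "user", "content": "I've been feeling hopeless lately."},
        ],
    )

    # Fluent, plausible text comes back either way; accountability does not.
    print(response.choices[0].message.content)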

Zainab Iftikhar and her coauthors do not argue that AI has no role in mental health. The source material explicitly notes possible access benefits, especially where cost and availability of licensed professionals are barriers. But that same availability becomes risky when users believe they are speaking with something safer and more competent than it is.

That is why this story belongs in medicine, not only in an AI category. The stakes are not a model demonstration, but human safety in vulnerable moments. Before relying on these tools, the field needs clear standards, human-in-the-loop evaluation and boundaries that tell users when a chatbot is not a counselor, but a text system that sounds persuasive.
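
One way to read "boundaries" concretely is a deterministic gate that runs before any model call and hands a possible crisis message to a human. The sketch below is hypothetical, not the study's method; the term list and function name are invented for illustration and would be far too crude for real deployment.

    # Hypothetical crisis boundary: a rule-based check that escalates to a
    # human counselor before the chatbot is allowed to respond.
    CRISIS_TERMS = ("suicide", "kill myself", "end my life", "hurt myself")

    def route_message(user_text: str) -> str:
        """Return 'human' to escalate to a trained counselor, else 'model'."""
        lowered = user_text.lower()
        if any(term in lowered for term in CRISIS_TERMS):
            return "human"  # stop the chatbot and disclose the handoff
        return "model"      # low-risk messages proceed, clearly labeled as AI

    assert route_message("Some days I want to end my life.") == "human"
    assert route_message("Work has been stressful lately.") == "model"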
