"We provided mental health support to about 4,000 people — using GPT-3. Here’s what happened"

rvallee

This Twitter thread has been making a splash, and I am very confused by the arguments, considering the mindlessness of the BPS model and especially the massive growth in apps for mental health.



How is that any different from CBT apps? In the end, GPT has learned from the professionals; the only difference is the absence of an editor choosing exactly what will be shown to patients. It's even pretty well established by now that those apps give outcomes identical to even "highly-trained" therapists; the whole thing is fully generic and automatable.

But recently several apps have incorporated "AI", although only as a gimmick. This is all coming from official sources and is beloved, no wait, that probably requires an all-caps BELOVED, by the medical profession.

I don't even understand the controversy, given that the participants couldn't tell the difference. It's not clear how they learned that an AI was writing the answers, but the fact is they couldn't tell the difference, and the sympathy in specific phrases like "that sounds hard" and "I understand" is just as performative as in the fast-food model of mental health.

In the BPS model they force on us, clinicians are explicitly instructed to feign sympathy to build trust, which shows how little they understand what trust actually means. You can't build trust on the basis of lies; this is perfidy.

Lots of talk about ethics, even though this is guaranteed to happen soon in the usual BPS circles anyway. The patients can't tell the difference; that's how generic the whole thing is. And the only goal is to cut costs. All this criticism seems hollow and performative to me.

I don't know whether being able to tell the difference was part of the experiment; if it was, that would be one reason not to tell the patients. But the whole ethical issue seems to hinge on this, even though it changes nothing, since the patients can't even tell the difference.

Really bizarre: there is a lot of obsessive focus on meaningless trivia from people who don't even object to invalid claims being made out of open trials with subjective, overlapping outcomes. An awful misallocation of priorities.
 
It seems the goal was to see whether therapists could learn from GPT's answers, which is a bit odd considering where GPT's answers come from.
The model was used to suggest responses to help providers, who could opt in to use them or not.


All this crap about ethical approval, considering all the shady stuff from our BPS overlords, especially Crawley's many violations that were whitewashed by mislabeling research as service evaluation. The double standards are ridiculous.

I mean FFS the entire basis of the BPS model for chronic illness is manipulation and gaslighting. Having manipulation and gaslighting approved by an IRB sounds massively more problematic to me than the concerns here. The entire BPS approach to chronic illness is 100x more unethical than this and everyone loves it.
 
So the patients weren't directly interacting with the machine. All it was doing was offering therapists possible replies they could use with their patients. It reminds me of when I was teaching decades ago and some people provided lists of possible phrases/sentences for teachers to write on reports to parents about their children.
 
We were not pairing people up to chat with GPT-3 without their knowledge. (In retrospect, I could have worded my first tweet to better reflect this.)
We offered our peer supporters GPT-3 to help craft their responses. We wanted to see if this would make them more effective. People on the service were not chatting with the AI directly.


The only concern I see voiced in the scathing responses is the bureaucratic checkbox, even though that same checkbox approves a lot of unethical crap. It would be a strong point if IRBs were an actual check against unethical pseudoresearch, but as we know they rubber-stamp all the unethical BPS crap. It's a very arbitrary process in which friends in high places can do wonders for getting the unethical approved.

This is all about compliance, not ethics. Unethical studies get approved all the time. Hell, unethical practices that ignore basic consent aren't even a problem in our case.
 
More machine-ifying of the sausage machine

In my experience of low-intensity CBT with IAPT, the young woman was already pretty much robotically sticking to a script for our 30-minute sessions. It would be more honest to have it delivered by an AI.
 
That's really the thing, though. Is a scripted response from a human who was taught by another human actually less robotic than an adaptive response from a machine that learned from how professionals handle those situations? The responses are so scripted they could be turned into a small program. Which they are, with the same results: not much, nothing objective that can be counted. Either way it's the words of humans being delivered by a program; one is just smart-ish while the other is fully editorialized.

I'd say the scripted response is even more robotic. Lots of people are focusing on how the same words being delivered by a machine makes them fake. But they're the same words, and they're just as fake coming from a human who is only following a script. Even coming from humans they are obviously a mere simulation of caring and empathy. It seems what people want is for other humans to care, but that's a different thing, and it's definitely not what happens in the fast-food model of cheap BPS mental health care.

Since I posted this, the comments have been flowing and people seem basically ready to chase him out of town with torches and pitchforks, for something that is barely average in terms of ethics. It's really weird, especially considering the complete lack of concern over far worse violations of ethical behavior in mental health. If only people could care 1% as much about the blatantly unethical way people with chronic illness are mistreated.

It seems like the issue is more about what they feel is impersonation than anything else, which doesn't really apply here. If the issue is about lying, then it's clearly very selective.
 