rvallee
Senior Member (Voting Rights)
This Twitter thread has been making a splash, and I am very confused by the arguments, considering the mindlessness of the BPS model and especially the massive growth in apps for mental health.
How is this any different from CBT apps? In the end, GPT has learned from the professionals; the only difference is the absence of an editor who chooses exactly what will be shown to patients. It's pretty well established by now that those apps give identical outcomes to even "highly-trained" therapists, which means the whole thing is fully generic and automatable.
But recently several apps have incorporated "AI", although only as a gimmick. This is all coming from official sources and is beloved, or rather it probably requires an all-caps BELOVED, by the medical profession.
I don't even understand the controversy, given that the participants couldn't tell the difference. It's not clear how they learned that an AI was writing the answers, but the fact remains that they can't tell the difference, and the sympathy in stock phrases like "that sounds hard" and "I understand" is just as performative as in the fast-food model of mental health.
In the BPS model they force on us, clinicians are explicitly instructed to feign sympathy to build trust, which shows how little they understand what trust actually means: you can't build trust on a foundation of lies. This is perfidy.
There is lots of talk about ethics, even though this is guaranteed to happen soon from the usual BPS circles. The patients can't tell the difference; that's how generic the whole thing is. And the only goal is to cut costs. All this criticism seems hollow and performative to me.
I don't know if being able to tell the difference was part of the experiment, which would be one reason not to tell the patients. But the whole ethical issue seems to hinge on this, even though it changes nothing, since the patients can't even tell the difference.
Really bizarre. There is a lot of obsessive focus on meaningless trivia from people who don't even object to invalid claims being made out of open-label trials with subjective, overlapping outcomes. An awful misallocation of priorities.