
"We provided mental health support to about 4,000 people — using GPT-3. Here’s what happened"

Discussion in 'Other health news and research' started by rvallee, Jan 7, 2023.

  1. rvallee

    rvallee Senior Member (Voting Rights)

    Messages:
    12,299
    Location:
    Canada
    This Twitter thread has been making a splash, and I am very confused by the arguments, considering the mindlessness of the BPS model and especially the massive growth in apps for mental health.

    https://twitter.com/user/status/1611450197707464706


    How is that any different from CBT apps? In the end, GPT learned from the professionals; the only difference is the absence of an editor who chooses exactly what will be shown to patients. It's even pretty well established by now that those apps give outcomes identical to even "highly trained" therapists; the whole thing is generic and automatable.

    Several apps have recently incorporated "AI", although only as a gimmick. This is all coming from official sources and is beloved, no wait, it probably requires an all-caps BELOVED, by the medical profession.

    I don't even understand the controversy, given that the participants couldn't tell the difference. It's not clear how they learned that an AI was writing the answers, but the fact is they can't tell the difference, and the sympathy in specific phrases like "that sounds hard" and "I understand" is just as performative as in the fast-food model of mental health.

    In the BPS model they force on us, clinicians are explicitly instructed to feign sympathy to build trust, which shows how little they understand what trust actually means. You can't build trust on the basis of lies; that is perfidy.

    Lots of talk about ethics, even though this is guaranteed to happen soon in the usual BPS circles. The patients can't tell the difference; that's how generic the whole thing is. And the only goal is to cut costs. All this criticism seems hollow and performative to me.

    I don't know if testing whether people could tell the difference was part of the experiment; that would be one reason not to tell the patients. But the whole ethical issue seems to be about this, even though it changes nothing, since the patients can't tell the difference anyway.

    Really bizarre. There is a lot of obsessive focus on meaningless trivia from people who don't even object to invalid claims being made from open-label trials with overlapping subjective outcomes. An awful misallocation of priorities.
     
    alktipping, Peter Trewhitt and Ariel like this.
  2. rvallee

    rvallee Senior Member (Voting Rights)

    Messages:
    12,299
    Location:
    Canada
    Seems that the goal was to see whether therapists could learn from GPT's answers, which is a bit odd considering where GPT's answers come from.
    https://twitter.com/user/status/1611582827224797185


    All this crap about ethical approval, considering the many shady things done by our BPS overlords, especially Crawley's many violations, which were whitewashed by mislabelling research as service evaluation. The double standards are ridiculous.

    I mean, FFS, the entire basis of the BPS model for chronic illness is manipulation and gaslighting. Having manipulation and gaslighting approved by an IRB sounds massively more problematic to me than the concerns here. The entire BPS approach to chronic illness is 100x more unethical than this, and everyone loves it.
     
    alktipping, Peter Trewhitt and Ariel like this.
  3. rvallee

    rvallee Senior Member (Voting Rights)

    Messages:
    12,299
    Location:
    Canada
    And the concerns over informed consent? Puh-lease. The entirety of psychosomatic medicine does worse every single day.
     
    alktipping, Peter Trewhitt and Ariel like this.
  4. CRG

    CRG Senior Member (Voting Rights)

    Messages:
    1,857
    Location:
    UK
    RedFox and Peter Trewhitt like this.
  5. Trish

    Trish Moderator Staff Member

    Messages:
    51,871
    Location:
    UK
    So the patients weren't directly interacting with the machine. All it was doing was offering therapists possible replies they could use with their patients. It reminds me of when I was teaching decades ago and some people provided lists of possible phrases and sentences for teachers to write in reports to parents about their children.
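
    To make the setup concrete, here's a minimal sketch (in Python) of that kind of human-in-the-loop flow, assuming the model only drafts a candidate reply and the human helper approves, edits, or replaces it before the patient sees anything. The draft_reply stub and its wording are illustrative assumptions, not the actual code the service used.

    # Sketch of a human-in-the-loop assistant: the machine drafts, a human decides.
    # draft_reply is a hypothetical stand-in for whatever model call was used.
    def draft_reply(patient_message: str) -> str:
        """Hypothetical model call that proposes a supportive reply."""
        return "That sounds hard. It makes sense that you feel this way."

    def assisted_reply(patient_message: str) -> str:
        """Show the machine's draft to a human helper, who can edit or replace it."""
        draft = draft_reply(patient_message)
        print("Suggested reply:", draft)
        edited = input("Press Enter to send as-is, or type your own reply: ").strip()
        return edited or draft  # the human's wording always wins

    print("Sent to patient:", assisted_reply("I feel overwhelmed at work"))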
     
  6. rvallee

    rvallee Senior Member (Voting Rights)

    Messages:
    12,299
    Location:
    Canada
    https://twitter.com/user/status/1611775514590740480


    The only concern I see voiced in the scathing responses is the bureaucratic checkbox, even though that same checkbox approves a lot of unethical crap. This would be a strong point if IRBs were actual checks against unethical pseudoresearch, but as we know they rubber-stamp the unethical BPS stuff. It's a very arbitrary process, where friends in high places can do wonders for getting the unethical approved.

    This is all about compliance, not ethics. Unethical studies get approved all the time. Hell, unethical practices that ignore basic consent aren't even a problem in our case.
     
    alktipping and Peter Trewhitt like this.
  7. Ariel

    Ariel Senior Member (Voting Rights)

    Messages:
    1,055
    Location:
    UK
    I hope the CBT people played themselves and will all someday be replaced by chatbots.
     
    Sean, alktipping and Peter Trewhitt like this.
  8. NelliePledge

    NelliePledge Moderator Staff Member

    Messages:
    13,145
    Location:
    UK West Midlands
    More machine-ifying of the sausage machine

    In my experience of low-intensity CBT with IAPT, the young woman delivering it was already pretty much robotically sticking to a script for our 30-minute sessions. It would be more honest to have it delivered by AI.
     
    CRG, RedFox, Sean and 7 others like this.
  9. Solstice

    Solstice Senior Member (Voting Rights)

    Messages:
    1,154
    Think it might be less harmful to have them administer CBT to chatbots tbh.
     
  10. rvallee

    rvallee Senior Member (Voting Rights)

    Messages:
    12,299
    Location:
    Canada
    That's really the thing, though. Is a scripted response from a human who was taught by another human actually less robotic than an adaptive response from a machine that learned from how professionals handle those situations? The responses are so scripted they can be turned into a small program (sketched below). Which they are, with the same results: not much, nothing objective that can be counted. Either way it's the words of humans being delivered by a program, except one is smart-ish and the other is fully editorialized.

    I'd say the scripted response is even more robotic. Lots of people are focusing on how the same words being delivered by a machine makes them fake. But they're the same words, and they're just as fake coming from a human who is only following a script. Even coming from humans they are a mere simulation of caring and empathy. What people seem to want is for other humans to care, but that's a different thing, and it's definitely not what happens in the fast-food model of cheap BPS mental health care.
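
    As a sketch of just how small such a program can be, here's an illustrative Python version of a fixed, keyword-triggered script. The trigger words and canned phrases are invented for illustration, not taken from any real app or manual.

    # A fixed support script reduced to a lookup table plus keyword matching.
    # Triggers and phrases are invented examples, not from any real service.
    CANNED_REPLIES = {
        "tired": "That sounds hard. Have you tried pacing your activities?",
        "anxious": "I understand. Let's write that thought down and examine it.",
        "sad": "I hear you. What small activity could you plan for tomorrow?",
    }
    DEFAULT_REPLY = "Thank you for sharing. Can you tell me more about that?"

    def scripted_reply(message: str) -> str:
        """Return the first canned phrase whose trigger appears in the message."""
        lowered = message.lower()
        for trigger, reply in CANNED_REPLIES.items():
            if trigger in lowered:
                return reply
        return DEFAULT_REPLY

    print(scripted_reply("I've been so anxious lately"))
    # -> "I understand. Let's write that thought down and examine it."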

    Since I posted this, the comments have been flowing, and people seem basically ready to chase him out of town with torches and pitchforks. For something that is barely average in terms of ethics. It's really strange, especially given the complete lack of concern over far worse ethical violations in mental health. If only people could care 1% as much about the blatantly unethical way people with chronic illness are mistreated.

    Seems like the issue is more about what they feel is impersonation than anything else, which doesn't really apply here. If the issue is lying, then it's clearly very selective.
     
  11. Sean

    Sean Moderator Staff Member

    Messages:
    7,044
    Location:
    Australia
    An algorithmic circle-jerk of confirmation bias.
     
    alktipping, CRG and Solstice like this.
