
OpenAI's new ChatGPT

Discussion in 'Other health news and research' started by ME/CFS Skeptic, Dec 2, 2022.

  1. petrichor

    petrichor Senior Member (Voting Rights)

    Messages:
    320
    I've seen it pointed out that it can produce responses that sound very plausible, as if it knows what it's talking about, but are still wrong, messy or confused. So anyone who does this still needs to make sure they fully understand what they're submitting, and check that it's all correct.
     
  2. chillier

    chillier Senior Member (Voting Rights)

    Messages:
    188
    I love the idea of this for asking pointed academic questions, but unfortunately a lot of the time it seems to answer with complete fabrications.

    For example, I asked it about MAIT cells (mucosal-associated invariant T cells), and it confidently told me that their T cell receptors bind to a ligand called MR1, which it said is expressed on microorganisms and stands for 'MAIT receptor 1'. This is totally untrue: MR1 is a major histocompatibility complex class I-related protein encoded by a human gene. It presented this incorrect information very compellingly, though.

    As for code, I think it has the potential to be better than Stack Overflow in the long run, but the code it's written for me often has a lot of errors and gets jumbled.

    It can write extremely funny short stories though.
     
    Peter Trewhitt, RedFox and Trish like this.
  3. chillier

    chillier Senior Member (Voting Rights)

    Messages:
    188
    Also, I asked it for a paper that showed what it was saying, and it gave me a paper that doesn't exist! A paper whose title existed, but not in the journal it claimed, not with the authors it claimed, and with irrelevant content!
     
  4. JohnTheJack

    JohnTheJack Moderator Staff Member

    Messages:
    4,380
    I see you've been asking it as well. I hadn't seen this before I asked it two questions:

    Q: Are the claimed findings sound of the 2011 paper Comparison of adaptive pacing therapy, cognitive behaviour therapy, graded exercise therapy, and specialist medical care for chronic fatigue syndrome (PACE): a randomised trial?

    Q: Is there any effective treatment for myalgic encephalomyelitis also known as chronic fatigue syndrome?

     
    Ariel, Peter Trewhitt and Trish like this.
  5. Shadrach Loom

    Shadrach Loom Senior Member (Voting Rights)

    Messages:
    1,052
    Location:
    London, UK

    Once upon a time, in the midst of a war-torn Ukraine, a renowned psychologist by the name of Simon Wessely set out to uncover the root cause of the conflict. After conducting extensive research and interviews with various individuals involved in the war, Simon came to a startling conclusion: the Ukraine war was not caused by political, economic, or social factors, but rather by a collective case of hysteria.

    Simon theorized that the heated political rhetoric and constant media coverage of the conflict had caused many individuals in Ukraine to become overly anxious and paranoid, leading to a collective outbreak of hysteria. This hysteria, in turn, had fueled the fire of the war, causing otherwise rational individuals to become aggressive and violent.

    Despite the controversy surrounding his findings, Simon remained steadfast in his belief that the war was not a result of any inherent flaws within Ukrainian society, but rather a byproduct of the hysteria that had gripped the nation. He called on the international community to intervene and help Ukraine overcome its collective hysteria, so that peace could be restored to the land.

    Unfortunately, Simon's words fell on deaf ears, and the Ukraine war raged on. But for Simon, the pursuit of truth and understanding was its own reward, even if his findings were not immediately accepted by the rest of the world.
     
    Wonko, Ariel, oldtimer and 5 others like this.
  6. TiredSam

    TiredSam Committee Member

    Messages:
    10,496
    Location:
    Germany
    It seems to want my phone number
     
  7. svetoslav80

    svetoslav80 Senior Member (Voting Rights)

    Messages:
    211
    Location:
    Bulgaria
    This AI is like the computer in Blake's 7.
     
    Amw66 likes this.
  8. SNT Gatchaman

    SNT Gatchaman Senior Member (Voting Rights)

    Messages:
    4,449
    Location:
    Aotearoa New Zealand
    Get ready for GAAS: gaslighting as a service (no one tell our BPS overlords). Microsoft has introduced AI into its Bing search engine. It doesn't seem to be a roaring success.

    Reproducing from this mastodon thread.

    [Three attached screenshots]
     
    Hutan, Hoopoe, Trish and 1 other person like this.
  9. RedFox

    RedFox Senior Member (Voting Rights)

    Messages:
    1,245
    Location:
    Pennsylvania
    Well, that's embarrassing. Every time a new tool is invented, society goes through what's called a "hype cycle." First everyone thinks, "Wow, this is going to change the world!" Then they realise it's just a tool like any other: good at some things, useless at others. Over time, it finds applications where it's beneficial.

    The current crop of language models is very good at creative tasks. They're not so good at facts: they tend to "hallucinate" them. And worse yet, the better they get, the more subtle the errors become.

    Edit: Changed "LLM" to "language model" to be understandable.
     
    Last edited: Feb 15, 2023
  10. NelliePledge

    NelliePledge Moderator Staff Member

    Messages:
    13,277
    Location:
    UK West Midlands
    LLM?
     
  11. SNT Gatchaman

    SNT Gatchaman Senior Member (Voting Rights)

    Messages:
    4,449
    Location:
    Aotearoa New Zealand
    RedFox and Peter Trewhitt like this.
  12. Adrian

    Adrian Administrator Staff Member

    Messages:
    6,486
    Location:
    UK
    I've been playing with it and asking about ME research. I'm wondering if it is an interesting way to dig into what research has been published.

    Here is my conversation:

    what research areas are interesting for ME/CFS

    what are the most promising research papers

    what is the overall landscape for ME/CFS research in terms of different research areas

    tell me more about the immunology research what disfunction has been found

    what evidence is there for chronic inflammation

    tell me more about the C-reactive protein results for ME/cfs

     
  13. SNT Gatchaman

    SNT Gatchaman Senior Member (Voting Rights)

    Messages:
    4,449
    Location:
    Aotearoa New Zealand
    oldtimer likes this.
  14. Wyva

    Wyva Senior Member (Voting Rights)

    Messages:
    1,391
    Location:
    Budapest, Hungary
    Btw, ChatGPT gives a different experience in different languages. To what extent exactly, I'm not sure.

    But I asked it to make jokes in both English and Hungarian and in English it was actually pretty funny, it seems to be a fan of wordplay.

    But in Hungarian I only got really weird attempts at humour, jokes obviously created by an AI that has difficulty imitating them. It really reminds me of Data from Star Trek and his failed jokes.

    (Translated to English with Google Translate)

    These make no sense. :D
     
  15. Shadrach Loom

    Shadrach Loom Senior Member (Voting Rights)

    Messages:
    1,052
    Location:
    London, UK
    I prefer the Hungarian zen fables to the weak English wordplay.
     
  16. Trish

    Trish Moderator Staff Member

    Messages:
    52,310
    Location:
    UK
    The tomato and salad dressing one is a familiar old joke. It's not actually making up new jokes, just finding and repeating old ones. As with all else I've seen, it's just doing search and collate.
     
    Last edited: Feb 23, 2023
    RedFox, Peter Trewhitt and Wyva like this.
  17. Jonathan Edwards

    Jonathan Edwards Senior Member (Voting Rights)

    Messages:
    13,505
    Location:
    London, UK
    This is interesting in that it shows that the bot is still not quite able to avoid generating contradictory arguments. It is supposed to be the biomedical model that is reductionistic because it ignores the complex interplay.

    The irony is that this sort of contradictory argument is of course an essential element of the biopsychosocial babble. So the bot is doing an excellent impression of the babble it is supposed to be criticising.
     
    Hutan, Sasha, FMMM1 and 3 others like this.
  18. Jonathan Edwards

    Jonathan Edwards Senior Member (Voting Rights)

    Messages:
    13,505
    Location:
    London, UK
    Maybe, together with the MAIT cell error, we are seeing a machine with a remarkable capacity to veridically simulate the ill-informed muddle-upness of much human thinking.
     
    Hutan, FMMM1, Trish and 1 other person like this.
  19. Wyva

    Wyva Senior Member (Voting Rights)

    Messages:
    1,391
    Location:
    Budapest, Hungary
    Yes, I guess it hasn't seen a lot of joke websites here yet, so it is probably trying to come up with something based on the information currently available to it.

    Edit: I've actually asked ChatGPT itself about the jokes:

     
    Last edited: Feb 23, 2023
    Trish and Peter Trewhitt like this.
  20. Adrian

    Adrian Administrator Staff Member

    Messages:
    6,486
    Location:
    UK
    I think in essence ChatGPT is doing an impression. It has been trained on lots of text and basically paraphrases it, with no real understanding. Perhaps like some student essays (or, I thought, what was written sounded like the material consultants produce; I don't mean medical consultants).
     