Illness beliefs and treatment outcome in chronic fatigue syndrome, 1998, Deale, Chalder and Wessely

Discussion in 'Psychosomatic research - ME/CFS and Long Covid' started by Hutan, Apr 13, 2025.

  1. Jonathan Edwards

    Jonathan Edwards Senior Member (Voting Rights)

    Messages:
    17,059
    Location:
    London, UK
    That was my thought. When doing a trial in-house, as I have done, it is very easy to pick patients who suit your narrative. For RA we have objective measures that reflect that selection. Here it is much looser.
     
  2. Evergreen

    Evergreen Senior Member (Voting Rights)

    Messages:
    470
    Right, so in RA papers you can see at a glance whether selection bias might be affecting results?

    As always, the devil is in the detail. This is the detail of Fulcher & White's selection:
    In Deale et al. 1997:
    They get points for consecutive referrals.

    @Hutan, here's the detail of that exclusion:
     
  3. Trish

    Trish Moderator Staff Member

    Messages:
    58,975
    Location:
    UK
    Would exclusion of those with somatisation disorder, whatever that is, mean the sample selected primarily experienced fatigue, and not other somatic symptoms such as pain? Given that these studies were done in the 1990s, they probably used Oxford criteria, which also allowed inclusion of the milder forms of depression and anxiety. So what were they actually studying?
     
    Sean, alktipping, Deanne NZ and 4 others like this.
  4. jnmaciuch

    jnmaciuch Senior Member (Voting Rights)

    Messages:
    571
    Location:
    USA
    I had the same question. I’m guessing they were using DSM-IV criteria, which came out in 1994:

    So… excluding probably everyone who might have met any of the more stringent ME/CFS criteria, unless they happened to insist that they didn’t have a certain number of those miscellaneous symptoms. Because I’m sure that the vast majority of pwME would eventually say “okay sure, yeah, I’ve experienced that at some point” if probed with a laundry list of symptoms, especially when they’re being asked about symptoms experienced at any point in time.

    The “starting before age 30” criterion also sticks out to me: it would end up excluding most people who have been living with the illness for years, since they developed it during one of the most common onset periods for ME.

    So I think you’re right that it’s just…people who said they were tired?
     
    alktipping, Deanne NZ, Hutan and 3 others like this.
  5. rvallee

    rvallee Senior Member (Voting Rights)

    Messages:
    14,568
    Location:
    Canada
    What bothers me is the framing of an opinion as a belief. They are not the same, and the opinion that this is a physical problem (in an infectious illness kind of way, not a broken bone one) is rational and arrived at using the facts available, senses, experiences and so on. Framing this as a belief is a choice, one made without actually validating anything. Because beliefs cannot be validated.

    But it's framed as beliefs. The questions are very oddly phrased for this reason. Their ambiguity makes it precisely moldable to a desired outcome. Anyone answering the questionnaire might ask "what do you mean by this question?", and the universal answer, of course, is: "answer it as you understand it". So everyone is answering a slightly different question, but it's always interpreted in a particular way. Which they don't share.

    But they're not asking about beliefs. They're asking about opinions, while analyzing them as beliefs. Which is highly dishonest.

    I can't say I agree that this study is compelling. As a rhetorical device, sure, but scientifically speaking it is biased and useless. It takes prior beliefs to accept the interpretation as given. Without those beliefs, the interpretation is just weird. Change the people running the study, and you can change the outcome entirely. Same with the participants, but here they are clearly a reflection of the researchers and what they sought. What they sought was confirmation of their model. They were always going to argue in support of it, and the design reflects it.

    In my (old) profession, software development, I wouldn't even look at information built in such a lousy way. It's an entirely useless way of assessing knowledge or making decisions. It would be so easy to take random attributions and argue similarly about anything.
     
    alktipping, Deanne NZ, Hutan and 2 others like this.
  6. Yann04

    Yann04 Senior Member (Voting Rights)

    Messages:
    2,061
    Location:
    Romandie (Switzerland)
    It’s very intentional. Every word they pick is chosen with predetermined conclusions in mind.

    When they measure inactivity they call it “fear avoidance”, which is assuming an unproven mechanism.
    When they measure pain levels they call it “central sensitivity”, assuming an unproven mechanism.
    When they measure the effect of symptom burden on mental health, they call it catastrophising.
    When they measure idiopathic chronic fatigue, they call it CFS/ME.

    Again and again, they choose their words with much thought; every word they use, every thing they measure with questionnaires, comes with unproven underlying assumptions. It’s the creation of narratives from survey results. Choose the right surveys, and you can make anything true. The overreliance of medicine on poorly thought-out surveys, which have so many biases in administration, is terrible methodology.
     
    Sean, rvallee, alktipping and 6 others like this.
  7. Nightsong

    Nightsong Senior Member (Voting Rights)

    Messages:
    1,109
    The service is located at the Maudsley. How many patients would accept a referral to one of the most well-known psychiatric hospitals in the country if they didn't think they had a psychiatric condition? Their patients are almost certainly going to have a much higher rate of psychiatric conditions than ME/CFS patients elsewhere.
     
    Last edited: Apr 14, 2025
    Trish, alktipping, Liie and 6 others like this.
  8. SNT Gatchaman

    SNT Gatchaman Senior Member (Voting Rights) Staff Member

    Messages:
    6,688
    Location:
    Aotearoa New Zealand
    Sean, rvallee, Trish and 4 others like this.
  9. Evergreen

    Evergreen Senior Member (Voting Rights)

    Messages:
    470
    I thought this was a nice illustration of how one centre can be different to others.

    In the Trial Management Group minutes from the PACE trial, a number of issues are mentioned regarding King's:
    So patients being referred from King's for the trial wanted CBT, were less disabled, and fewer of them had CFS.

    A particular problem is also noted at King's with regard to recording adverse events:
    This is echoed in the Trial Steering committee minutes:
    My understanding is that King's is the centre Chalder and Wessely are, and were, affiliated with, but I'm happy to be corrected on that.
     
    bobbler, Wyva, Trish and 1 other person like this.
  10. Robert 1973

    Robert 1973 Senior Member (Voting Rights)

    Messages:
    1,684
    Location:
    UK
    I think they were studying whatever they thought had the best chance of producing a positive result, in trials designed to maximise the probability of a result which could be reported as positive, with the intention of applying the reported findings to anyone who met their broad criteria, including people who would have been unlikely to be included in their trials and may even have been excluded from them.
     
    rvallee, Deanne NZ, Utsikt and 2 others like this.
