BMJ: Rapid response to 'Updated NICE guidance on CFS', 2021, Jason Busse et al, Co-chair and members of the GRADE working group

Grading gets you into the business of 'you've got to have it to make you healthy' or 'I'm not going to offer it because we are skint and I don't think you deserve it'.

Presumably one of the reasons it was introduced? In order to make money out of healthcare, there has to be a way of disposing of patients who keep needing interventions but just stay obstinately ill anyway.

It won't be long before it's grading more apps than medicines...until they can grade themselves, of course.
 
They have evidence that doctors like their advice:

From:
Ignacio Neumann, Holger J. Schünemann (2020), Guideline groups should make recommendations even if the evidence is considered insufficient
CMAJ Jan 2020, 192 (2) E23-E24; DOI: 10.1503/cmaj.19014, https://www.cmaj.ca/content/192/2/E23
There is an interesting response from a GP / PCP to Schünemann's article:
RE: Guideline groups should make recommendations even if the evidence is considered insufficient
This commentary is all about how physicians should respond to uncertainty; and I would strongly disagree with the approach advocated by the authors. They point to evidence that physicians are uncomfortable with uncertainty and wish someone would just tell them what to do. Fair enough, but uncertainty will not go away just because we don't like it. Graded recommendations based on the strength of available evidence and the magnitude of benefit are what we need from guidelines. When enough uncertainty exists that no one really knows what to do, then we should admit that.

It is easily plausible that guideline recommendations based on expert opinion alone could standardize care without providing any benefit. They could also make it harder to obtain evidence. If surgery X is the standard of care despite very poor quality evidence, then it will be harder to convince ethics boards that it is reasonable to withhold surgery X from a control group.

Physicians already struggle with the concept that guidelines contain both strong and weak evidence-based recommendations. Let us not confuse them further by adding more consensus-based recommendations. When the evidence is strong we should use it to standardize practice; when it is very weak there is no benefit to standardization. If physicians want suggestions on what to do in the face of uncertainty, then they can look to narrative reviews. Uncertainty is part of life, and pretending otherwise is not healthy for the medical community.

Competing Interests: None declared.
 
Mind you there are certain forms of finger lifting, involving two at a time.....which may be quite appropriate for patients to use!
Vigorously.

I never got beyond one eyebrow, but my husband can alternate.
I also got stuck on one.

They point to evidence that physicians are uncomfortable with uncertainty and wish someone would just tell them what to do. Fair enough, but uncertainty will not go away just because we don't like it. Graded recommendations based on the strength of available evidence and the magnitude of benefit are what we need from guidelines. When enough uncertainty exists that no one really knows what to do, then we should admit that.

Yep. Just admit it. Patients will respect you far more than if you lie to them, which is basically what they are advocating.
 
That's weird. So are they saying 'recommend something, even if the evidence for it is very weak and it may cause harm, so doctors feel more comfortable, and to prevent them doing something that may be even worse (or better).'

Surely the aim when you don't have an effective treatment should be 'first do no harm', and be honest with patients that there is no known effective treatment. And provide support, including ensuring the patient has care and financial support, and symptomatic treatment where available.

That was my problem with the draft NICE guideline for ME/CFS. They rejected unevidenced treatments, but then made recommendations for CBT and physical activity provision that are equally unevidenced. It was like saying: we have all these CBT therapists, OTs and physios, so we have to give them something to do to keep them happy and employed, and somewhere for the GP to send patients to do something that's in the guideline.
Especially as it systematically leads to:
Doing so should not preclude, and may encourage, future relevant research
Deliberately. In fact they use it to justify doing more research that denies the problem, which itself further stifles relevant research. It creates a cycle of failure, demonstrably. The problem is that some failures have a very high approval rating and happen to be very convenient for easy career advancement. People always forget that people will be people, no matter what, and will err.
 
There is an interesting response from a GP / PCP to Schünemann's article:
This is especially relevant given the weird boasts in BPS circles about personalized this and holistic that. Treatment that is fully unique to the person cannot be standardized, and this is something they are finally having to deal with: it is essentially useless at scale. At the very least it cannot scale, in the same way that an artisanal industry that manufactures everything by hand simply cannot meet the needs of a large population.

By definition treatment that requires mostly judgment and is wholly personalized (even if it actually isn't and people are pretending, like psychics and astrologers do) is too unreliable to use in practice, for reasons that include economics but also go to the fact that people are simply too flawed to do this reliably. It is precisely because individual judgment tends to be poor, largely because it is uninformed, that standardized treatments are valuable.

There is a whole range between holistic-woo-stuff and one-size-fits-all, and this is clearly where most treatments should be. Ironically both are happening: applying generic CBT-GET to everyone while pretending there are customized elements that make it unique to every individual. In reality the people behind this know very well that lengthy sessions with therapists who received "advanced training" have the exact same outcomes as a 10-minute pamphlet, and that their thing is fully generic; the "customization" is strictly a matter of branding.

It's frankly hard to even give proper context to how irrelevant physician discomfort is, especially as they have to do far more uncomfortable things on a daily basis anyway. Another cheap excuse.
 
The pressure that people who do not like the result are putting on NICE is a clear demonstration of the insanity of a system like GRADE. If it generates results that are open to lobbying, it is subjective; and if it is subjective, then there is no point in having what appears to be an objective numbering system.
Hopefully NICE will recognise that the pressure being applied to them is no different to that which has been applied down the years to anyone disagreeing with them, be they ME/CFS patients, scientists, whoever. They respond with a mixture of poor-science flimflam, and/or attacks against their critics to cover their lack of scientific counter argument.
 
I think this is great, because we have Gordon Guyatt, Mr GRADE himself, weighing in and saying his system would have rated PACE as reliable evidence.

It would have been easy to see the ME/CFS kerfuffle as a backwater in the evidence-based world, but I think this makes it clear that the NICE committee decision is a real threat to the cosy EBM system.

Perhaps the problem is that EBM was intended to beat up the drug companies, not to point out the failings of academics. So those who pushed for EBM are now faced with the issue that their and their friends' work doesn't stand up to scrutiny. Of course the issue is made worse because the information is public, and those outside of a small clique can look, comment and assess just how poor quality the work is.

Whereas the drug companies are cleaning up their act, as the FDA require them to present proper evidence.
 
I see it as great for people who would like to see unfounded opinions out in the open rather than hidden behind dissembling and obfuscation.
What we are seeing now is enthusiasts for bad trials fighting amongst themselves. Turner-Stokes and Wade have upset Guyatt and Garner. Bring it on. The more these people argue with each other, the more they will expose the idiocy of their analysis.
Exactly. When people whose modus operandi is to fight from the shadows start coming out into the light, it often signals desperation to prevent their control slipping away. I see it as recognition their game is up. Hopefully NICE will see through them.
 
The approach by Cochrane is useful if you want to see the effect when all interventions are combined, but there are also some arguments for why they shouldn't be combined. Some say, for example, that the Wallman trial was more like pacing than GET. Other trials used pragmatic rehabilitation, which included not only GET but also patient education inspired by CBT. Some trials used treatment as usual as the comparison, while others used relaxation therapy, etc. The results in the Cochrane meta-analysis suffer from high heterogeneity, possibly because they combined these different approaches.
Yes, if you mash together too many variables that are supposedly similar but in reality have nuanced but significant differences, then in a way all bets are off. You end up with too many unknowns, whose existence is potentially not even realised and presumed not to exist.
 
Perhaps the problem is that EBM was intended to beat up the drug companies, not to point out the failings of academics. So those who pushed for EBM are now faced with the issue that their and their friends' work doesn't stand up to scrutiny. Of course the issue is made worse because the information is public, and those outside of a small clique can look, comment and assess just how poor quality the work is.

Nail. On. Head.
 
I’ve just jumped in and read this one comment, apologies if it’s not helpful.

I was struck by how eminently sensible this approach would be (and wondered whether it is done already):
thank you Roger Suss of Manitoba!

Suss's letter seems to be saying something a bit different as a whole - which I agree is very sensible. He is against recommendations where evidence is weak. I think in this sentence he is just saying that if evidence is weak, do not give a strong recommendation; say we are pretty uncertain.

To me the whole idea of grading is phoney. I think a guideline committee can helpfully give their opinion on the strength of the evidence - in whatever words are applicable to that situation. Any concession to a grade is by definition a blunting of meaning because the grade can only approximate to some clear words.

I don't see any value in grading recommendations though, at least in most cases. The prescribing doctor and the patient should decide whether the evidence justifies using the treatment. If it is considered too expensive to provide on public funds or insurance cover, that is a different issue, which can be decided by the body paying.
 
Here's an extract, but it's all pretty damning.

Although we originally planned to use actigraphy as an outcome measure, as well as a baseline measure, we decided that a test that required participants to wear an actometer around their ankle for a week was too great a burden at the end of the trial. We will however test baseline actigraphy as a moderator of outcome.

From the original FAQs:

[image attachment: q25_pace_faqs.png]

However, looking at the trial management committee minutes, the decision to drop actigraphy as an outcome measure seems to have been made because they knew it wouldn’t confirm that participants had increased their activity, and not because it was “too great a burden” for participants. If anything, it seems that it was too great a burden for the analysis team, as there seemed to be problems extracting the baseline data and getting it into an analysable format. Five pages of data needed to be completed by the centre staff for each participant, so I suspect there were loads of issues with missing or incomplete data that hampered any chance of getting any useful measurements even at baseline. There were also issues with the availability of the actiwatches themselves (too few per centre had been ordered). It very much seems that use of actigraphy was not properly field tested before the trial started. For such an important trial, this seems to be a massive oversight.

From TMG minutes (5 Nov 2004):

Umh ---- you're contracted to do a study, paid for out of public money [who - which Government Department "oversaw" the contract?], which would become the basis of Government policy, and you and a charity [Action for ME doubtless] decide to scrap objective monitoring. Sorry, why did they get paid? You don't do what you're contracted to do and you still get paid --- OK, I've done that in a private capacity, but this was public money.

Was the change to the project signed off by the management group, i.e. the Government side? I wonder if the Public Accounts Committee looked at this - it seems that the contract wasn't properly completed, but it was paid for!

Finally emailed my MP to try to raise the change in the methodology (outcome indicators - objective to subjective) in the PACE protocol. If there's revised NICE guidance which (at least in part) undoes the mistake, then this may be an opportune time to remind Government of the need to learn lessons.
Probably a lot of flaws in this*!

*"My MP,
there's a Government-funded study on the use of Graded Exercise Therapy (GET) and Cognitive Behavioural Therapy (CBT) in Myalgic Encephalomyelitis (ME)
- the study is called PACE. PACE was used for the current NICE guidance for ME.

The original protocol for the PACE study used objective monitoring of activity levels - actimetry - think of a Fitbit-type device. However, the study protocol was revised to use subjective outcome measures (questionnaires). The reasons given for the change to the study protocol differed and were not convincing. Subjective measurement of activity consistently overestimates activity levels [https://www.sciencedirect.com/science/article/abs/pii/S0022399921000623]. In this case the use of subjective outcomes led to a policy [NICE guidance] which caused harm to people with ME/CFS.

The NICE guidance is currently being revised, and the review panel have downgraded studies, like PACE, which used subjective outcome measures.

I think the change to the study protocol, i.e. to replace objective outcome measures with (biased) subjective outcome measures, should be raised through the relevant oversight committee in Westminster - the Public Accounts Committee?

I would be grateful if you would advise how to raise the issue of the change to the PACE study protocol to subjective (biased) outcome measurements.

Thank you for your assistance & happy to discuss
Francis
 
Was the change to the project signed off by the management group, i.e. the Government side? I wonder if the Public Accounts Committee looked at this - it seems that the contract wasn't properly completed, but it was paid for!

Unfortunately, I think the inadequate HRA report whitewashed PACE to a great extent and will make it hard to get official and legitimate concerns about misconduct to stick. The dropping of the actimeters was clearly based not on concern about it being a burden but because they learned that it didn't match subjective results. The whole study is bogus because they didn't have informed consent, given that they violated their promise to observe the Helsinki Declaration. But getting people to act on that has also been a challenge.
 
Unfortunately, I think the inadequate HRA report whitewashed PACE to a great extent and will make it hard to get official and legitimate concerns about misconduct to stick. The dropping of the actimeters was clearly based not on concern about it being a burden but because they learned that it didn't match subjective results. The whole study is bogus because they didn't have informed consent, given that they violated their promise to observe the Helsinki Declaration. But getting people to act on that has also been a challenge.
There's also this really important point that @Mithriel has just made on the general PACE thread (https://www.s4me.info/threads/a-general-thread-on-the-pace-trial.807/page-48#):
Mithriel said:
They claim that PACE looked for harms but found none, so GET is safe. So they do not deny that one of the aims was to see whether the treatments were safe. Yet the patients were told AT THE BEGINNING that the treatment was safe and could not cause them any harm or worsening of the disease, so they should ignore it if they felt bad.

This is mentioned as being unscientific but just imagine the outcry if they gave AIDS sufferers a drug, told them it was safe and then sat back to see how many became sicker.

Being told all risks of harm from a trial you enter is a basic tenet of medicine, so how could they get away with this? It seems criminal as well as unethical to me.
 
"Although we originally planned to use actigraphy as an outcome measure, as well as a baseline measure, we decided that a test that required participants to wear an actometer around their ankle for a week was too great a burden at the end of the trial."
I find this interesting. If it was not too great a burden at the beginning of the trial, why was it too great a burden at the end of the trial, when people were supposedly improved?

Moreover, if it was a burden, then the very act of removing that burden would have inevitably resulted in improved subjective impressions of how participants would have felt. If I went on a hike with a backpack, and then the backpack was removed, any subjective impressions I gave would be improved because I could not help but feel more positive, given the lightened burden.
 
From pages 81-82 of the PACE GET Participant Manual:

"Example of a setback plan: (your plan might have some differences)
  1. Setbacks are a normal part of recovery: it is the overall trend that is important
  2. Setbacks are likely to become less severe and last for less time than previously as I get stronger
  3. I should try to maintain as much physical activity as I can, even though this may feel more difficult than normal
  4. I need to remember that there is no evidence to suggest that my symptoms are causing me any harm, even though they feel very uncomfortable
  5. I should try to keep to my physical activity or exercise plan as much as possible, in order to maintain my physical health during this time
  6. Resting too much may feel like the right thing to do now, but in the long run is likely to worsen my condition
  7. Resting for a week could lead to my muscles weakening by 10% - this will make it much harder to get back to the activity I was doing
  8. I can reduce activity if I absolutely have to, but should try to avoid this where possible and build up again as soon as I can
  9. I should try to get back into any activity I have avoided as soon as I can"
[my underline/italics]

This, in a trial supposedly testing the efficacy and safety of the very treatment the participants are being told is safe! Talk about expectation bias! A scientific disaster on every level.
 
Unfortunately, I think the inadequate HRA report whitewashed PACE to a great extent and will make it hard to get official and legitimate concerns about misconduct to stick. The dropping of the actimeters was clearly based not on concern about it being a burden but because they learned that it didn't match subjective results. The whole study is bogus because they didn't have informed consent, given that they violated their promise to observe the Helsinki Declaration. But getting people to act on that has also been a challenge.

Thanks @dave30th - could you explain what "HRA" is? If I can get this moving (unlikely), then it may be useful to have details of the project/study protocol (links etc.).
 