In a review posted to Amazon UK of Fiona Fox's book about the Science Media Centre, discussed here:
United Kingdom: Science Media Centre (including Fiona Fox)
@adambeyoncelowe gives a great deal of detail about the NICE guideline review process. With his permission I have copied it in its entirety below, and I have bolded the first paragraph of the guideline section. The original can be found here; if you have an Amazon account, you can vote the review helpful if you found it to be so.
Review - Bad science, sloppy writing and a history of dodgy stories
"This is an outright attempt to diminish critics by juxtaposing them with the horrors of the Holocaust. The implication is that ME/CFS patients (and their advocates) who deign to object to medically obsolete treatments are, in fact, sinister malefactors aiming to overthrow the healthcare system by incrementally purging it of innocent doctors who just want to administer CBT. You know, just like the Nazis." This quote from Brian Hughes shows just what the problem with this book -- and the SMC -- is. The subject is the third chapter of the book, where Fox portrays all opposition to the science she touts (in this case, about the illness ME) as coming from Nazis. In doing so, she also repeats an old antisemitic trope about the evil cabal who control things from the shadows for inexplicable reasons (yes, the cognitive dissonance is rife in this book). This time, the cabal comprises sick patients all over the world, largely on benefits or living off family, who magically have the power to bend organisations like NICE, NAM (formerly the IOM) and the CDC to their whim.
George Monbiot has written eloquently about Fox and the SMC, and their ties to both the Revolutionary Communist Party (of which she was a leading member, not just someone who half-heartedly joined for a while as a student, as she claims) and Living Marxism (later LM).
The same clique eventually created the Institute of Ideas and Spiked. Yes, that Spiked, which so assiduously speaks for the alt-right and peddles fake news.
Let's be clear here: the Science Media Centre is a misnomer. It is often a propaganda machine supporting vested interests (from GM to oil to cosmetics companies), posing as objective scientific advocacy, founded and run by people with past form in pretending to be one thing while pushing a contrarian, libertarian agenda -- such as denying there was a genocide in Rwanda (as Fox did in one of her columns). Sometimes they publish material of actual use, but look at their past list of funders and you'll quickly see the problem.
This book is poorly researched, as she herself sloppily admits in her disclaimer (it's all from memory; she boasts that she refused to have it fact-checked, despite offers). So she runs with gossip and presents it as established truth.
For instance, she claims the NICE guideline committee for ME was subverted by (very sick and disabled) patient activists -- i.e., people who just want to get better. She notes herself that she doesn't understand why patients would object to such ostensibly useful research -- research that would, if its findings were true, make them better -- but shows no curiosity about why that might be. She just makes uninformed assumptions and leaves it at that.
But the fact is, patients have been very consistent for decades: GET makes many people worse, and curative CBT for ME ranges from useless to gaslighting. (Despite that, most patients say they're happy with supportive CBT that helps them live better with their ME, so long as it doesn't try to cure them by telling them their illness isn't real.)
As it happens, I was on the NICE committee. I was there for the meetings -- where there was a huge level of consensus on most issues right from the beginning, and, in the end, consensus on the entire guideline. It couldn't have been signed off without that, and unlike most of the studies Fox supports on this issue, we were painfully transparent (NICE had records of comments I had made under articles, for example, because their conflict-of-interest checks were so thorough).
Those who left did so after the guideline had been agreed in meetings. They had also signed off the draft guideline the year before, which was very similar. I can't speak for them, but there were no huge fallings-out. People got on and we liked each other. We had a lot in common, and it was only on the precise interpretation and expression of a few issues that we seemed to diverge for a time. Even then, we got to a place where everyone in the room was happy, and we signed off the guideline together (twice). So it seems disingenuous to portray this as a huge scandal when that wasn't the experience in the room.
There were 21 of us on the committee, plus a technical team of NICE experts who ran the numbers, analysed the data, and presented it all to us. Outside observers from NICE and the Royal College of Physicians' National Guideline Centre recorded every decision we made and our rationale for it. If you read the rationale sections, you'll see the reasons we made the decisions we did with the evidence we had. The evidence reviews and supporting documentation run to thousands of pages in total. That's far more transparent than most clinical trials, including all the trials Fox writes about.
There were five lay members (three patients, one patient who was also a carer, and one carer). Of the patients, one was only 17 at the time and represented children and young people on the committee. Most NICE guidelines used to have two lay members, and this has recently increased to four for most guidelines; we had a fifth because children and young people had been identified as a specific area we needed to address. It also has to be borne in mind that four of us were ill, and not every patient could attend every meeting, so five lay members ensured decent representation of the patient voice at every meeting.
There were 16 professional members on the committee, by comparison, with degrees, qualifications, and decades of experience between them. There's no way we could have subverted them or the process, especially not in front of the technical team (and the observers who were there to record every single decision we made). Each person in the room could stand their ground, and they did.
Together, we read 900 pages of analysis of clinical trials (plus new qualitative data commissioned by NICE), which showed two things. First, 89% of the evidence was of very low quality; the other 11% was merely low quality. Second, the qualitative data consistently showed frustration with the existing NHS treatment, diagnosis and management of ME -- across all ages and severities. Hardly a ringing endorsement, is it?
There was another issue, of course, with the evidence Fox speaks of. Even approaching it in good faith, and assuming the reported benefits were real despite the poor quality, the cost of treating patients was above the threshold NICE and the NHS set. That alone might not have been fatal, as NICE will recommend treatments that cost a little more, provided a committee can show such treatments are unique and exceptional. Sadly, given the low and very low quality of the evidence, we could hardly make that case.
Even the professionals in the room who delivered such treatments agreed that GET as described in the clinical trials -- based on the 'deconditioning and exercise intolerance [and avoidance]' theories of ME, with fixed weekly or biweekly increases in exercise (usually 10% or 20%) regardless of any increase in symptoms -- was not helpful for patients. In other words, even the barely-there gains some trials seemed to show, in a context of high risk of bias and indirectness, were for a treatment modality most professionals disagreed with and which everyone felt risked harming patients.
There were other issues factored into the decision. Firstly, all the trials used lax and outdated criteria for diagnosing ME. This wasn't inevitable: half-decent criteria have existed since 2003. We agreed that trials using older criteria could be upgraded by one category (or rather, not downgraded for indirectness in the first place) where they recorded that patients had the cardinal symptom of post-exertional malaise (PEM -- a symptom recognised since about 1988-89).
Many studies mentioned PEM but failed to report whether it was a required symptom, how prevalent it was, or how it was measured. When they did describe it, they seemed to misunderstand it, labelling it a type of fatigue rather than what it is: a worsening of existing symptoms (and onset of new ones) after too much exertion. PEM can be triggered by physical, cognitive or emotional exertion; it often appears after a delay, is disproportionate to the triggering exertion, and takes a prolonged time to recover from.
Our description of PEM was detailed and based on thousands of observations from clinical trials and clinical practice. Their descriptions were cursory and often shallow, suggesting PEM wasn't important to their diagnoses or to their understanding of the illness.
In other words, we couldn't be sure they were looking at the same patient group we are talking about today. A big problem. The data became harder and harder to rely on.
Secondly, the trials all shared the same inbuilt weaknesses: subjective measures in unblinded trials, very broad questionnaires whose items often overlap with the symptoms of depression and other illnesses, and poor use of control arms.
Subjective measures are fine when a trial is blinded. Unblinded treatments are fine when objective measures are used. But when subjective measures and unblinded treatment occur together, the results become much less reliable.
You can mitigate some of that with control groups, but they aren't foolproof. Where a control group receives standard medical care (SMC) but the main treatment arm receives SMC plus something else, you have to ask whether any effects are due to the treatment itself or simply to *more* treatment (i.e., is more attention from a clinician more beneficial than less?).
Control arms also need to be free of bias and unfair expectation-setting. If, for example, the newsletters for your trial talk about how wonderful GET and CBT are (which happened in the PACE trial, by the way), those in the control arms -- who, because treatment is unblinded, know what they're getting -- feel they're missing out, while those getting the 'good stuff' feel grateful.
This contributes to expectation bias: we expect the good treatment to work, so we rate it better on surveys, and we expect the bad treatment not to work, so we rate it worse. People's displeasure at getting the supposed dud treatment makes the control arms look worse than they really are.
This also exacerbates what's called role play in medical practice and clinical trials: we all want to be grateful patients, so there is pressure to say thank you and to report that the treatment helped a lot, even if it didn't.
This was made worse in the PACE trial when the researchers relaxed their own recovery criteria after the trial had begun. You needed a physical function score of 65 or below to enter the trial, but a score of just 60 counted as recovered. So you could lose 5 points on the questionnaire and still be 'recovered'. Somehow.
In their own words, they did this because it would give results more in line with what they expected from clinical practice -- i.e., to make the results align with their preconceived notions. That is not how you conduct a clinical trial, unless you want to prove what you already believed to be true.
Finally, in PACE and other trials, any gains seemed to disappear at long-term follow-up. By two years, there was no difference between those who had undertaken costly GET or CBT and those who had received nothing. Any initial 'improvements' recorded on surveys, perhaps due to placebo or expectation, had vanished. The PACE triallists said this was because the trial arms had been contaminated (patients in the control arms clamoured for CBT and GET, so everyone was eventually allowed those treatments). That adds two more problems: it was sloppy and made the results harder to interpret, and it also confirms that the treatment and control arms were presented to patients with bias, with clearly higher expectations set for CBT and GET. The trial was therefore subject to all kinds of biases that likely influenced the results.
All of these problems together made the data all but uninterpretable. And the results were still uniformly weak! At best, we could say there was a small chance the treatments had had a small short-term benefit -- and that's giving them the benefit of the doubt on everything.
Bear in mind that an unblinded trial of rituximab showed far larger effects than GET and CBT ever did, only for a blinded trial to show a null result. It therefore seemed entirely plausible that patients were so desperate for a treatment -- any treatment -- that expectation had a huge impact on subjective reporting in unblinded trials. People wanted to get better, and perhaps felt a duty to help other patients get better, and that coloured their questionnaire responses.
The final deathblow was struck when we analysed the data on harms. Before the meetings began, patients had completed a survey in which almost 80% reported being made worse by GET. That's an outrageously high number! A meta-analysis of other surveys showed at least 54% worsened by the treatment, with a general trend of older surveys showing lower rates of harm and more recent ones higher rates (reflecting, perhaps, a loss of goodwill once the initial excitement of having any kind of treatment -- and therefore hope -- wore off; see above).
Even bearing in mind response bias (people with worse outcomes are more likely to respond to such surveys), the numbers are stark, and covered about 15,000 people over a decade or so. That's a lot of people.
But it gets worse. 20,000 people wrote in to NICE asking for the old guideline (which recommended GET and CBT) to be reviewed and a new one written. That was the biggest response NICE had ever had -- exceeded only later, by the same patient community, when NICE briefly delayed publication of the new guideline to placate the Royal Colleges (25,000 signed that time). There are thought to be 250,000 people with ME in the UK, so the equivalent of one in ten of them signed that petition.
This shows that there was a huge desire for change, whether some researchers accepted it or not. Moreover, most clinics couldn't provide data on improvement, recovery or harms: a Freedom of Information request sent to numerous clinics found they didn't record this information at all. Some boasted high recovery rates without any evidence to support them. So there was no proof the treatments worked in practice, either.
In the end, the NICE guideline was welcomed by all the national ME charities, politicians from across the political spectrum, and even the British Association for CFS/ME. That's right: even the official body representing the clinics and the people who work in them accepted the guideline and said they could work with it. It matches guidelines put out by the CDC and NAM (then IOM).
The hissy fits that followed came from two main quarters: the Royal Colleges (essentially trade unions whose role is to keep their members in jobs) and a few researchers who had made careers out of the research now labelled low and very low quality.
On the whole, the Royal Colleges play very little role in the day-to-day treatment of ME. They were invited to a roundtable anyway, where they got to put forward their views. It became clear to those present that the Colleges had been given half a story by people with vested interests and cushy roles (some within the Royal Colleges themselves), and their arguments didn't stand up to scrutiny. So it was agreed at the meeting (as you can see from the minutes) that the guideline would be published with some extra clarifications -- mainly material pulled from the rationale section and placed in text boxes in the shorter, bulleted guideline, so the context for each decision was easier to see. The Royal Colleges were there and either agreed too, or at least didn't disagree (the latter being more likely, because they knew they couldn't win a fair discussion).
But once the guideline was published, they had another hissy fit: they issued their own statements rejecting the guideline for ignoring their opinions (which hadn't stood up to scrutiny, remember) and then did the equivalent of stamping their feet and going, 'lalalalala, we can't hear you'. It was disingenuous, cowardly and entitled. Their main argument was that 'what clinicians offer isn't GET anyway, so you can't ban GET or it'll stop them offering their not-GET' -- which is as silly and ignorant as it sounds. If they really don't offer GET, a ban on GET shouldn't affect them; it would only matter if they do offer GET and simply want to be allowed to keep doing so, exactly as they always have.
And now Fiona Fox is happy to be on the side of those who have behaved so abominably. Those who admitted under oath at a QMUL tribunal that the 'harassment' they suffered consisted of being heckled once. Those who said that politicians discussing their research in Parliament was harassment. Those who wrote to a woman MP and told her that her behaviour was 'unbecoming' for challenging their research (patronising, or what?). Because that's what it boils down to: they are patronising, and think they always know best. We plebs (especially women, since 80% of ME patients are women) don't know what's best for us, so we should shut up and obey the rules as laid down by these researchers, pushed by the SMC and protected by the Royal Colleges.
If you're happy to side with those people, then buy Fiona's book. If you want a proper conversation, the rest of us are at the adults' table.
For some good reading, check out Brian Hughes. David Tuller is also well worth reading, though be warned: his forthright American style may take some adjusting to for those of us brought up on British ideas of politeness (but he is oh so very funny!)."