Evidence based care for people with chronic fatigue syndrome and myalgic encephalomyelitis, 2021, Sharpe, Chalder & White

See 1.6 in the Standards in Public Life:

holders of public office should be truthful
Just like the "duty" of candour.

Words have no power. Enforcing them does. It's people who give power to words, by acting on them. Refusing to act on them is also a way of acting on them, as it is here: the refusal to apply standards is a choice, a deliberate action.

Without enforcement, everything is an honor code, and not everyone is honorable, especially when they stand to benefit from being dishonest, even more so when they have approval from their immediate culture.
 
They repeat the canard of representing "Three clinicians resigned from the NICE guideline committee before publication" as though the three were dissociated from the final publication, which they certainly were not. The duplicity is clearly conscious: they avoid acknowledging that Charles Shepherd (also a clinician, as it happens) likewise resigned from the committee before final publication, because doing so would have made their intended point oblique.

Not sure if it's worth the effort for @Michelle (or someone else?) to follow up on that point; it is a gross misrepresentation, though.

It is incredibly naughty because those clinicians cannot speak for themselves; for all we know, they could have thought the guideline had been too soft on BPS. And I'm very sure that if the biomed side had made insinuations in the way these guys have, the BPS side would have noted that fact.

Putting words in the mouths of people who cannot speak for themselves, to suggest something that the written record (the minutes) shows, on timing alone, did not happen: how is a professional allowed to get away with that without repercussions to their career? Yet the Chinese whispers, and the willingness to pass it on, make it systemic.

Not a good look for a profession, is it: either "we don't check our facts before we spout them out" or "we tell lies and help spread them so we are all on the same page". Either way, everyone who repeated it is responsible for doing so.
 
"Gold standard" is double-blinded randomized placebo-controlled with objective outcomes, which none of their trials even come close to be, and they know it. The journal editor knows it and lets it pass anyway because this is all politics.
The same argument is used in Norway regarding the new Lightning Process study. It's "gold standard", so patients have nothing to complain about. It wouldn't have taken a journalist long to see that it in no way comes close to being "gold standard", but instead they are allowed to claim it continuously while patients speaking out are ridiculed.
 

Perhaps we should start labelling it the fool's-gold standard.

A diversion about fool's gold has been moved here
 
My letter has now been published: https://link.springer.com/article/10.1007/s11606-022-07715-x

I hope it raises some eyebrows.


Robert Saunders (aka McMullen) on Twitter:

"My letter on ME/CFS in the Journal of General Internal Medicine in response to Sharpe et al: https://t.co/0SR2aawqRc The history of how GET & CBT have been promoted, prescribed & researched is alarming. Sadly, these mistakes are now affecting people with #LongCovid. 1/“ / Twitter



Robert Saunders (aka McMullen) on Twitter:

"In response, the authors point out that 3 clinicians resigned from the NICE guideline committee before publication. This is true but misleading. NICE’s minutes state: “The whole guideline was agreed by the committee … before there were resignations." https://t.co/8f6Ogwigrj 2/ https://t.co/aUmjZ512Pd / Twitter



Robert Saunders (aka McMullen) on Twitter:

"It should also be noted that an FOI revealed that a representative of one the Royal Colleges privately texted the NICE chief executive before the guideline was published to try to persuade her tamper with the independent evidence review: https://t.co/3lcB94K8Ag 3/ https://t.co/McuhANHiui / Twitter



Thank you to the forum member who provided a transcript of that text message:

"There is a way. You go ahead with the recommendations but hold back with the evidence. You take over the evidence and correct the errors of fact. You over rule the committee to ensure that the evidence documents are now accurate. So you correct the mistakes. And publish the evidence when you can. Ok there is now a disconnect between the evidence abd the recommendations. But well, you are doing what the committee want but to those who look under the bonnet it's clear that the committee made they decisions not supported by evidence. Which is totally true they didn't. But the patient groups may not care too much because they have what they wanted. And your reputation for fairness and competence remains intact. It's messy. The committee will scream but all that does is draw more attention to the fact that they got it wrong. So Perhaos they won't. But the 135 week thing was their idea. They refused to admit it was indefensible. . And also went abd did the direct opposite with the Powell trial. And you have to conclude that although the committee continues to insist on their own

"diagnostic criteria actually they can keep their views if they want but they have to accept that as there is no good evidence for indirectness on that basis ,and that the evidence in so much as it exists points the other way, the trials are regraded to reflect this. NICE is neutral on this. You stay honest. The researchers at least know they are not incompetent patient hating charlatans. The guidelines are as they are but it woukd mean they won't have the cachet and status of the full NICE quality approval stamp. They are what they - opinion based.

"That's the best I can do !
After all. You have said you will correct errors of fact. The 135 week thing is an error of fact. The fact is that is totally daft and would never happen in any other review. And indeed the committee stupidly know this because they correctly did not use the last recorded outcome on Powell for exactly the reasons that they shouldn't have used the 135 week on pace."

"Finally. I am very very sorry !!!!you and paul have been dealt a crap hand. Not your fault."


Source:

https://domsalisbury.github.io/mecfs/nice-mecfs-guideline-pause/

Edit:
If anyone could add a short explanation here of why the "135 week thing" was not an error of fact, and why the person is so angry about it, that would be great.
 
Edit:
If anyone could add a short explanation here of why the "135 week thing" was not an error of fact, and why the person is so angry about it, that would be great.

If I understood correctly, Adam @adambeyoncelowe explained it very well in his review of Fiona Fox's book: [*]

"[...] in PACE and other trials, any gains seemed to disappear at long-term follow-up. By two years, there was no difference between those who undertook costly GET or CBT and those who got nothing.

"This means that any initial 'improvements' recorded on surveys, perhaps due to placebo or expectation, vanished.

"The triallists in PACE said this is because they contaminated the trial arms (i.e., patients in the control arms clamoured for CBT and GET, so they let everyone have those treatments) which adds two more problems: again, it was sloppy, and made the results harder to interpret; but also, it confirms that there was bias in how the treatment arms and control arms were presented to patients, making it clear that there was higher expectation for CBT and GET."

"The trial was therefore subject to all kinds of biases which likely influenced the results."

And more about the resignations from the NICE committee etc:

For instance, she claims the NICE guideline committee for ME was subverted by (very sick and disabled) patient activists. I.e., people who just want to get better. She notes herself that she doesn't understand why people would object to such ostensibly useful research -- research that would, if true, make them better -- but has no curiosity to explore why that is. She just makes uninformed assumptions and leaves it at that.

But the fact is, patients have been very consistent for decades: GET makes many people worse, and curative CBT for ME ranges from useless to gaslighting (despite that, most patients say they're happy with supportive CBT that helps them live better with their ME, so long as it doesn't try to cure them via telling them their illness isn't real).

As it happens, I was on the NICE committee. I was there for the meetings -- where there was a huge level of consensus on most issues right from the beginning, and in the end, there was consensus on the entire guideline. It couldn't have been signed off without it, and unlike most of the studies Fox supports on the issue, we were painfully transparent (NICE had records of comments I had made under articles, for example, because their COI process was so thorough).

Those who left did so after the guideline had been agreed in meetings. They had also signed off the draft guideline the year before, which was very similar. I can't speak for them, but there weren't huge fallings out. People got on and we liked each other. We had a lot in common, and it was only on the precise interpretation and expression of a few issues where we seemed to diverge for a time. Even then, we got to a place where everyone in the room was happy and we signed off the guideline together (twice). So it seems disingenuous to portray this as a huge scandal when that wasn't the experience in the room.

There were 21 of us on the committee, plus a technical team of NICE experts who ran the numbers, analysed the data, and presented it all to us. Outside observers from NICE and the Royal College of Physicians' National Guideline Centre recorded every decision we made and our rationale for doing so. If you read the rationale, we go into a number of reasons why we made the decisions we did with the evidence we had. The evidence reviews and supporting documentation run to thousands of pages in total. It's much more transparent than most clinical trials, and all the trials Fox writes about.

There were five lay members (three patients, one patient who was also a carer, and one carer). Of the patients, one was only 17 at the time, and came to represent children and young people on the committee. Most NICE guidelines used to have two lay members, but this has recently increased to four for most guidelines. We had five because children and young people were identified as a specific area we needed to address, so that accounts for the fifth lay member. It also has to be borne in mind that four of us were all ill and not every patient could attend every meeting, meaning five lay members would ensure decent representation of the patient voice in all meetings.

There were 16 professional members on the committee, by comparison, with degrees, qualifications, and decades of experience between them. There's no way we could have subverted them or the process, especially not in front of the technical team (and those who were there to record every single decision we made). Each person in there could stand their own ground, and they did.

Together, we read 900 pages of analysis on clinical trials (plus new qualitative data commissioned by NICE), which showed two things: 89% of the evidence was of very low quality. The other 11%? That was merely low quality. Qualitative data consistently showed frustration with the existing NHS treatment, diagnosis and management of ME -- across all age ranges and severities. Hardly a ringing endorsement is it?

There was another issue, of course, with the evidence Fox speaks of. Approaching it in good faith, and assuming the benefits reported were true, despite the poor quality, the cost to treat patients was above the limit NICE and the NHS set. This might not have been an issue, as NICE will recommend treatments that cost a little more, provided a committee can show that such treatments are unique and exceptional. Sadly, given the low and very low evidence quality, we could hardly make that case.

Even the professionals in the room who delivered such treatments agreed that GET, as described in clinical trials (based on the 'deconditioning and exercise intolerance [and avoidance] theories' of ME, with fixed weekly or biweekly increases of exercises (usually 10 or 20%), despite increases in symptoms) was not helpful for patients. In other words, even the barely-there weak gains some trials seemed to show, in a context of high risk of bias and indirectness, were for a treatment modality most professionals seemed to disagree with and which everyone felt had the risk to harm patients.

There were other issues which were factored into the decision. Firstly, all the trials used lax and outdated criteria for diagnosing ME. This wasn't inevitable, as we have had half-decent criteria since 2003. We agreed to allow trials using older criteria to be upgraded by one category (or rather, not downgraded for indirectness in the first place), where they recorded that patients had the cardinal symptom of post-exertional malaise (a symptom known about since about 1988-9).

Many studies mentioned PEM, but failed to report whether it was a required symptom, how prevalent it was, or how they measured it. When they did describe it, they seemed to misunderstand what it was -- labelling it a type of fatigue, rather than the upswing of symptoms (and onset of new symptoms) that occur after too much exertion, which can result from physical, cognitive and emotional exertion, often appear with a delay, are disproportionate to the triggering exertion, and take a prolonged time to recover from.

Our description of PEM was detailed and based on thousands of observations in clinical trials and clinical practice. Theirs were cursory and often shallow, suggesting it wasn't important to diagnosis or their understanding of the illness.

In other words, we couldn't be sure they were looking at the same patient group we are talking about today. A big problem. The data became harder and harder to rely on.

Secondly, the trials all had the same inbuilt weaknesses: a blend of subjective measures in an unblinded trial, using very broad questionnaires that often have overlap with the symptoms of depression and other illnesses, and poor use of control arms.

Subjective measures are fine when blinding occurs. Unblinded treatments are fine when objective measures are used. When both occur together, the results become that much less reliable.

You can negate some of that by having control groups, but that isn't foolproof. Where a control group consists of standard medical care (SMC) but the main treatment arm is SMC + something else, you have to ask if any effects are due to the treatment or simply to *more* treatment (i.e., is more attention from a clinician more beneficial than less attention from a clinician).

Control arms also need to be free of bias or unfair expectation setting. So if, for example, the newsletters for your trial talk about how wonderful GET and CBT are (which happened in the PACE trial, by the way), those on the control arms (because treatment is unblinded, so they know what they're getting, remember?) feel they're missing out, and those getting the 'good stuff' feel grateful.

This contributes to expectation bias -- we expect the good treatment to work and so rate it better on surveys; while we expect the bad treatment not to work, so rate it worse. People's displeasure at getting the supposed dud treatment makes the control arms appear worse than they might be.

This also exacerbates what's called roleplay in medical practice and clinical trials -- we all want to be grateful patients, so there is a pressure to say thank you, and that the treatment helped a lot, even if it didn't.
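To make that concrete, here is a minimal simulation sketch (my own illustration, with invented numbers, not data from PACE or any other trial) of how expectation bias alone, in an unblinded trial with a subjective outcome, can produce an apparent between-group difference even when the true treatment effect is set to zero:

```python
import random

random.seed(1)

def simulate_arm(n, reporting_shift):
    """Self-rated improvement scores for one trial arm.

    reporting_shift is a stand-in for expectation bias: positive for the
    unblinded arm told it is getting the 'good' treatment, negative for a
    control arm that knows it is only getting standard care.
    The true treatment effect is zero in both arms.
    """
    return [random.gauss(0, 10) + reporting_shift for _ in range(n)]

treatment = simulate_arm(n=160, reporting_shift=+4)   # hyped in newsletters
control = simulate_arm(n=160, reporting_shift=-2)     # feels it missed out

mean_diff = sum(treatment) / len(treatment) - sum(control) / len(control)
print(f"Apparent benefit on the subjective outcome: {mean_diff:.1f} points")
print("True treatment effect built into the simulation: 0 points")
```

On an objective outcome that participants cannot nudge (an actometer reading, say), the reporting_shift terms would not apply and the apparent difference would disappear, which is exactly the problem described above.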

This was worsened in the PACE trial when the researchers relaxed their own recovery criteria after the trial began. You needed a 65 to enter the trial but only a 60 to be recovered. So you could lose 5 points on the questionnaire and be better. Somehow.

In their own words, they did this because it would give results more in line with what they expected from clinical practice. I.e., to make the results align with their own preconceived notions. This is not how you conduct a clinical trial, unless you want to prove what you already wanted to be true.
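As a quick illustration of that entry/recovery overlap (a hypothetical participant, using only the two scores quoted above: 65 or less on the questionnaire to enter the trial, 60 or more to count towards "recovery" on that measure):

```python
# Hypothetical participant; the only inputs are the two thresholds quoted above.
ENTRY_MAX = 65      # a score of 65 or less was considered ill enough to enter
RECOVERY_MIN = 60   # a score of 60 or more counted towards 'recovery'

score_at_entry = 65
score_at_follow_up = 60   # five points lower, i.e. worse, than at entry

assert score_at_entry <= ENTRY_MAX         # ill enough to be recruited
assert score_at_follow_up >= RECOVERY_MIN  # yet meets the relaxed recovery threshold

print(f"Dropped {score_at_entry - score_at_follow_up} points and still counts as recovered.")
```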

Finally, in PACE and other trials, any gains seemed to disappear at long-term follow-up. By two years, there was no difference between those who undertook costly GET or CBT and those who got nothing. This means that any initial 'improvements' recorded on surveys, perhaps due to placebo or expectation, vanished. The triallists in PACE said this is because they contaminated the trial arms (i.e., patients in the control arms clamoured for CBT and GET, so they let everyone have those treatments) which adds two more problems: again, it was sloppy, and made the results harder to interpret; but also, it confirms that there was bias in how the treatment arms and control arms were presented to patients, making it clear that there was higher expectation for CBT and GET. The trial was therefore subject to all kinds of biases which likely influenced the results.

All of these problems together made the data quite uninterpretable. And the results were still uniformly weak! At best we could say there was a small chance the treatments had had a small benefit in the short-term, if you give them the benefit of the doubt on everything.

Bearing in mind that an unblinded trial of rituximab showed far greater results than GET and CBT ever did, only to show a null result when blinded, it seemed very feasible that patients were so desperate for a treatment -- any treatment -- that expectation had a huge impact in subjective reporting in unblinded trials. People wanted to get better, and perhaps felt a duty to help other patients get better, so that coloured their questionnaire responses.

A final deathblow was struck when we analysed data on harms. Prior to the start of the meetings, patients had completed a survey showing almost 80% of them got worse from GET. That's an outrageously high number! A meta-analysis of other surveys showed at least 54% were worsened by the treatment, but the general trend was that older surveys showed a smaller rate of harm while more recent ones showed a higher rate of harm (reflecting, perhaps, the general loss of goodwill when the initial excitement of having any kind of treatment, and therefore hope, wore off -- see above).

Even bearing in mind response bias (people with worse outcomes are more likely to respond to such surveys), the numbers are stark, and covered about 15,000 people over a decade or so. That's a lot of people.

But it gets worse. 20,000 people wrote into NICE to get the old guidelines (which recommended GET and CBT) reviewed and a new guideline written. That's the biggest response NICE has ever had, beaten only by the same patient community when NICE initially delayed publication of the guideline to placate the Royal Colleges (25,000 signed this time). There are thought to be 250,000 people with ME in the UK, so the equivalent of 1/10 of them signed that petition.

This shows that there was a huge desire for change, whether some researchers accepted it or not. Moreover, most clinics couldn't provide data on improvement, recovery and harms. A Freedom of Information request sent to numerous clinics found they didn't record this information at all. Some boasted high recovery rates but without any evidence to support it. So they had no proof that the treatments worked in practice, either.

In the end, the NICE guideline was welcomed by all the national ME charities, politicians from across the political spectrum, and even the British Association for CFS/ME. That's right: even the official body representing the clinics and the people who work in them accepted the guideline and said they could work with it. It matches guidelines put out by the CDC and NAM (then IOM).

And of course there was also the roundtable, moderated by Dame Carol Black:

On the whole, the Royal Colleges play very little role in the day to day treatment of ME. They were invited to a roundtable anyway, where they got to put forward their views. It was clear from those present that they had been given half a story by people with vested interests and cushy roles (some of them in the RCs), and their arguments didn't stand up to scrutiny.

So it was agreed at the meeting (as you can see from the minutes) that the guideline would be published with some extra clarifications (mainly pulled from the rationale section and put in text boxes in the shorter, bulleted guideline, so that it was easier to see the context for each decision). The RCs were there and either agreed, too, or at least didn't disagree (as is more likely, because they knew they couldn't win in a fair discussion).

But once the guideline was published, they [the Royal Colleges, MSE] had another hissy fit, published their own statements rejecting the guideline as ignoring their opinions (which hadn't stood up to scrutiny, remember), and then did the equivalent of stamping their feet and going, 'lalalalala we can't hear you'. It was disingenuous, cowardly and entitled. Their main argument was that 'what clinicians offer isn't GET anyway, so you can't ban GET or it'll stop them offering their not-GET'. Which is as silly and ignorant as it sounds -- if they totally don't offer GET, it shouldn't matter if we banned GET; it would only matter if they do offer GET and they just want to be allowed to keep doing it exactly as they always have.

I think Adam's conclusion about Fiona Fox's misleading take on the NICE committee also applies to the article by Chalder, Sharpe and White, as well as to the Lancet commentary they reference (perhaps no surprise, given that authors of the latter have a history of co-working with the authors of the former):
And now Fiona Fox is happy to be on the side of those who have behaved so abominably. Those who admitted under oath at a QMUL tribunal that their harassment comprised of being heckled once. Those who said that politicians discussing their research in Parliament was harassment. Those who wrote to a woman MP and told her that her behaviour was 'unbecoming' for challenging their research (patronising or what?). Because that's what it boils down to: they are patronising, and think they always know best. We plebs (especially women, because 80% of ME patients are women) don't know what's best for us, so we should shut up and obey the rules as laid down by these researchers, pushed by the SMC, and protected by the Royal Colleges.

Why is it so hard for those people to realize, to quote Robert Saunders' comment, that...

"it would be a far greater disservice to patients to prescribe ineffective and potentially harmful therapies than to tell them the truth."

If some people still object to the work done by the NICE guideline committee, they are also objecting to all those members who left the committee after they had signed off on the new guideline, and to the view that the roundtable was a success; I think this also implies that they even object to the roundtable's moderator, Dame Carol Black.

The Round table minutes are available on the NICE guidelines website:

https://www.nice.org.uk/guidance/ng206/history

Direct link to the minutes:
https://www.nice.org.uk/guidance/ng206/documents/minutes-31

Direct link to the presentation:
https://www.nice.org.uk/guidance/ng206/documents/workshop-notes-4


[*] In her book, Fiona Fox misrepresented the BMJ's speculation about the resignations -- basically a quote from Paul Garner (referenced as the source in the Lancet commentary by Flottorp et al mentioned above) -- as if it had been reported as factual news. Fiona Fox: "the media reported that three members of the committee had resigned because they felt unable to sign up the final guideline".

(Edited to add some links.)
 
The Scottish Government has also just stated that clinicians' refusal to accept the new NICE guidelines is symptomatic of the general disbelief patients experience when dealing with clinicians.

So we need to start bringing that up too. It is clearly a continuation of their gaslighting.
 
Excellent response by Michiel Tack, a masterpiece of clarity and brevity, yet covering all the key points.
The response by Chalder et al. goes back over the old flawed arguments. I just hope readers can see through them.

I doubt most readers will -- Tack's and Saunders' comments aren't even referenced in their response.

But I hope someone will ask the editors and authors to reference the comments, and also to correct some crass misrepresentations -- not only of the comments by Tack but also of other sources they cite. They also made misleading implications in their response to Saunders.

Three points I'm aware of:

1) Relying on subjective outcomes alone in unblinded trials:

Chalder, White and Sharpe write:

"Tack is concerned about the use of patient-reported outcome measures (PROMS) in trials of CBT and GET for patients with CFS/ME. On the contrary, we think that PROMS are essential for illnesses that are defined entirely by patient report."

Of course, Tack is only concerned about relying solely on subjective outcomes in trials that can't be blinded. Yet Chalder et al. imply that Tack rejects PROMs in general, rather than only in that particular setting.

Chalder, White and Sharpe also write:

"Furthermore, any response bias from the use of such measures has been reported to be minimal.3 "

They reference the MetaBlind study [*] to back up their claim.

There has been some critique of that study [**]. Yet "any response bias from the use of such measures has been reported to be minimal" is not even what the MetaBlind authors claim -- they say it is only a possibility. They conclude their findings could also reflect...

"meta-epidemiological study limitations, such as residual confounding or imprecision".

The authors of the MetaBlind study also say: "At this stage, replication of this study is suggested and blinding should remain a methodological safeguard in trials."


2) Measuring trial participants' expectations before the start of treatment isn't sufficient to negate bias in PROMs

Chalder, White and Sharpe also misrepresent Tack's rebuttal of their claim to have addressed the risk of bias in PROMs in the PACE trial:

Tack wrote:

"In contrast to what Sharpe and colleagues claim, measuring the expectations of patients before the trial begins, does not address how therapists might have influenced symptom reporting during the trial."

Chalder et al write:
"Tack acknowledges that the PACE trial found that patient expectations had no obvious role in determining outcomes..."

That's a crass misrepresentation. Tack highlighted that PACE only measured patient expectations before the trial, not during it. So these measures cannot rule out differences in participants' expectations and perceptions during the trial, when they were exposed to the therapists' expectations and to the suggestive leaflets that accompanied the interventions.


3) The NICE guideline: The whole guideline was agreed by the committee, including the recommendations on graded exercise therapy (GET) before there were resignations


Chalder, White and Sharpe write in response to Saunders:

"Saunders point out that NICE (the UK clinical guideline organisation) downgraded its recommendations for cognitive behaviour therapy (CBT) and graded exercise therapy (GET) in its recently revised guideline.

"However, these recommendations have been strongly disputed. Three clinicians resigned from the NICE guideline committee before publication [...] ”. A commentary published in the Lancet journal was equally critical, stating that “In our view, this guideline denies patients treatments that could help them.”2

There is no evidence for the implication of their statement that the committee members resigned because they didn't agree; see the NICE minutes:
https://www.nice.org.uk/guidance/ng206/documents/minutes-31

“The whole guideline was agreed by the committee, including the recommendations on graded exercise therapy (GET) before there were resignations.” [my bolding]

The referenced commentary in the Lancet by Flottorp et al [***] is misleading, as it also refers to the resignations from the guideline committee as if the committee members didn't agree with the new guideline. Flottorp et al reference a BMJ news article [****] with the same misleading implication about the motivation of those who resigned, based on a quote from Paul Garner which was mere speculation.


So at least three obviously misleading arguments by Chalder, Sharpe and White:

1) A misrepresentation of another study's findings and the authors' conclusions (MetaBlind study),

2) A misrepresentation of Tack's rebuttal of their defence of their own trial's findings (the claim that measuring expectations before the PACE trial addressed the risk of bias),

3) A reiteration of misleading speculation from an earlier article (Flottorp et al's Lancet commentary about the NICE committee, which references the equally speculative first coverage by the BMJ -- see also Saunders' tweets and my posts here and here).

It would be great if someone could write to the JGIM about those misrepresentations, and also to the Lancet and the BMJ [****], where the speculation about the NICE committee appeared first.


References:

Tack, M. Bias in Exercise Trials for ME/CFS: the Importance of Objective Outcomes and Long-term Follow-up. J GEN INTERN MED (2022). https://doi.org/10.1007/s11606-022-07704-0

Saunders, R. Evidence-Based Care for People with Chronic Fatigue Syndrome and Myalgic Encephalomyelitis. J GEN INTERN MED (2022). https://doi.org/10.1007/s11606-022-07715-x -- Twitter thread here.

[*] Moustgaard H, Clayton GL, Jones HE, Boutron I, Jørgensen L, Laursen DRT, Olsen MF, Paludan-Müller A, Ravaud P, Savović J, Sterne JAC, Higgins JPT, Hróbjartsson A. Impact of blinding on estimated treatment effects in randomised clinical trials: meta-epidemiological study. BMJ. 2020 Jan 21;368:l6802. doi: 10.1136/bmj.l6802. Erratum in: BMJ. 2020 Feb 5;368:m358. PMID: 31964641; PMCID: PMC7190062.
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7190062/

[**] Forum thread here: https://www.s4me.info/threads/bias-due-to-a-lack-of-blinding-a-discussion.11429/page-2#post-231694

[***] Flottorp SA, Brurberg KG, Fink P, Knoop H, Wyller VBB. New NICE guideline on chronic fatigue syndrome: more ideology than science? Lancet. 2022 Feb 12;399(10325):611-613. doi: 10.1016/S0140-6736(22)00183-0.

[****] Torjesen I. Exclusive: Four members of NICE's guideline committee on ME/CFS stand down. BMJ. 2021;374:n1937. https://www.bmj.com/content/374/bmj.n1937


Edit: Thank you to the forum members who helped me write this post.
 

Gosh I was googling for something else and just came across this paper (hence looking up this thread on s4me to check it was on here)

In hindsight, looking back nearly three years to 2021: this was published less than a month after the new guidelines were finally released. I'm not sure of the submission timeline for the Journal of General Internal Medicine, and whether they might have been allowed to submit it 'after' any normal deadline, given that it would normally have had to be in well before that to get published.

Boy does it make me think of the phrase 'pre-bunking'..

Also: is it interesting that these three have chosen the US market / audience / publication specifically for this, to direct their attention there?
 