@NelliePledge, I understand. I hope this may help?
Untangling the MUS Web: How badly have UK proponents of ‘Medically Unexplained Symptoms’ (MUS) misled the medical community?
The person who wrote and researched this has no conflicts of interest other than their wish to ensure that all patients wrongly labelled with ‘MUS’ (as they have been too) get the biomedical care that they deserve. They have been researching aspects of both ME and MUS since 2015, particularly focusing on discrepancies and flaws in the literature.
They have raised discrepancies here which this moderator believes should have a much deeper and broader impact on the BPS coterie than other exposés have done to date. Holding the BPS / MUS architects to account for the detail exposed will require others to bring these issues to public attention. As the author observes, if any of the findings presented here can be countered by good evidence then the author is more than happy to dissect those too, and they can be filtered back to the author via the OMEGA Facebook page here.
EDIT: the author intends to remain anon, but is happy to be known as 'goodelf' #goodelf
**********************************************************************
From the murky mire of ‘medical’ literature on ‘MUS’, one reference emerges as more rancid than the rest. This 2001 paper -
https://www.sciencedirect.com/science/article/abs/pii/S0022399901002239?via=ihub - by Nimnuan, Wessely and Hotopf has been used extensively as a key piece of propaganda for the UK’s MUS management project, but there’s been little mention or scrutiny of its ‘small print’, and that’s a crying shame. The paper documents a study of ‘MUS’ at outpatient clinics at 2 London hospitals and has been repeatedly cited to persuade doctors, other health professionals and NHS commissioners that ‘MUS’ is rife in secondary care (found in 52% of new outpatient referrals overall) and is therefore a great burden on doctors as well as a considerable drain on NHS resources. However, in many or perhaps most cases, readers haven’t been told that the study had significant limitations, including that the required sample size wasn’t reached and that the prevalence of MUS could have been exaggerated because patients weren’t followed up over a longer period. But that’s small potatoes compared to another issue with this study.
The study also yielded another paper by the same authors that was published a year earlier - this Nimnuan et al 2000 paper-
https://academic.oup.com/qjmed/article/93/1/21/1588375 , and close inspection of it, and particularly of Table 4, reveals high MUS misdiagnosis rates in most of the 7 specialties studied. By ‘high’, we’re not talking a few percent. For cardiology, the MUS misdiagnosis rate was a whopping 31.7%, for neurology it was 21.1% and for gastroenterology it was 18.2%. This is misdiagnosis on a catastrophic scale. The authors expressed the misdiagnosis rate as the number of patients initially misdiagnosed with MUS out of all those who were finally given an ‘organic’ explanation for their symptoms. That’s one way of doing it, but readers might reasonably expect the misdiagnosis rate to represent the number misdiagnosed with MUS out of all those initially diagnosed with MUS, i.e. the proportion/percentage of MUS diagnoses that turned out to be wrong. From the data given it’s possible to calculate that too. Expressed that way, the rate of MUS misdiagnosis was around 40% for rheumatology, 38% for cardiology and 19% for neurology, with an overall misdiagnosis rate of more than 25% across all 7 specialties.
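For readers who want to see the distinction between the two denominators spelled out, here is a minimal sketch in Python. The patient counts used below are purely illustrative, NOT the actual Table 4 figures; only the two formulas reflect the definitions described above.

```python
def rate_vs_final_organic(mus_misdiagnosed, total_finally_organic):
    """The authors' definition: patients initially labelled MUS but finally
    given an 'organic' diagnosis, as a share of ALL patients who ended up
    with an 'organic' diagnosis."""
    return mus_misdiagnosed / total_finally_organic

def rate_vs_mus_label(mus_misdiagnosed, total_initially_mus):
    """The alternative definition: the share of MUS labels that turned out
    to be wrong, i.e. misdiagnoses out of all initial MUS diagnoses."""
    return mus_misdiagnosed / total_initially_mus

# Hypothetical clinic: 100 patients initially labelled MUS, of whom 20 later
# received an 'organic' diagnosis; 80 patients in total ended up with an
# 'organic' diagnosis (the 20 re-diagnosed plus 60 diagnosed correctly
# at the outset).
misdiagnosed = 20
finally_organic = 80
initially_mus = 100

print(rate_vs_final_organic(misdiagnosed, finally_organic))  # 0.25, i.e. 25%
print(rate_vs_mus_label(misdiagnosed, initially_mus))        # 0.20, i.e. 20%
```

The same underlying count of misdiagnosed patients yields different headline percentages depending on which denominator is chosen, which is why the same study can be read as showing either figure.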
In many or most of the cases in which the Nimnuan et al 2001 paper has been cited for its high MUS prevalence rates, readers haven’t been told that the accompanying MUS misdiagnosis rates were dangerously high. Shouldn’t doctors tell their colleagues the whole story, not just half of it? If they don’t, aren’t they failing in their duty (as outlined by the GMC) to act with honesty and integrity -
https://www.gmc-uk.org/-/media/docu...hash=DA1263358CCA88F298785FE2BD7610EB4EE9A530 (see page 21) and to disclose important information to the medical community to protect patients and the public -
https://www.gmc-uk.org/ethical-guid...res-for-the-protection-of-patients-and-others (see point 60)?
The 'JCPMH Guidance for commissioners of services for people with MUS' -
https://www.jcpmh.info/wp-content/uploads/jcpmh-mus-guide.pdf – comes across as a prime example of providing such misleading information. Here the MUS prevalence rates of the Nimnuan et al study are published in Table 2 (page 7), but there is no mention of the limitations of the study nor of its high MUS misdiagnosis rates. The JCPMH is a collaboration of 17 organizations co-chaired by the Royal College of Psychiatrists (RCPsych) and the Royal College of General Practitioners (RCGP) that develops guidance for healthcare commissioners -
https://www.jcpmh.info/about/ . Their guidance on MUS was published in 2017, when Wessely was still President of the RCPsych. It was jointly funded by the two royal colleges, with the RCPsych agreeing to fund 50% of the costs. This is clear from the minutes of a meeting of the MUS Working Group (which produced the Guidance), whose members met with Wessely on 16th July 2014 at the RCGP, less than a month after Wessely had become RCPsych President. (These minutes were available to read online but appear to have been removed.) The minutes record that Wessely told group members the news about the funding and also that he would be invited to join any Expert Reference Group. It’s hard to imagine, then, that he didn’t bother to read the finished guidance and didn’t know that NHS commissioners weren’t being informed about the high misdiagnosis rates of his own study. You’d have thought that Hotopf would have taken an interest in this document, and in other MUS papers and articles too, and could have alerted the medical community to the high misdiagnosis rates. (NB Tok Nimnuan was just a PhD student who was supervised by Wessely -
https://nanopdf.com/download/professor-2_pdf .)
Clearly, other authors can’t be accused of concealing high misdiagnosis rates and misleading the medical community if they don’t know that the issue exists. The consequence of Nimnuan et al splitting their study between papers was that people could read and cite the 2001 paper for its high MUS prevalence rates whilst being blissfully unaware of the high misdiagnosis rates of the same study. However, it looks as though some people may have known about both the limitations and the high misdiagnosis rates of the Nimnuan et al study but done little to highlight them or raise the alarm. Among them are Jon Stone, Alan Carson and Michael Sharpe who’ve cited the Nimnuan et al 2001 paper in their work but who also referenced the Nimnuan et al 2000 paper when discussing this study -
https://jnnp.bmj.com/content/74/7/897 and therefore should have been well aware of the misdiagnosis rate of 21.1% for neurology displayed in its Table 4. (After all, Michael Sharpe quite stridently told someone else to ‘read the paper’ on Twitter.) Moreover, Sharpe worked with both Nimnuan and Wessely on a 1999 paper -
https://www.thelancet.com/journals/lancet/article/PIIS0140-6736(98)08320-2/fulltext - so you’d have thought that they might have told him about the misdiagnosis issue then. It’s indisputable though that Alan Carson knew about the high misdiagnosis rates. In his 2001 MD thesis -
https://www.semanticscholar.org/pap...rson/c0dedce75b873f5f8e2744bc521823c8f52a3974 - he documented on page 179 that the Nimnuan et al study showed high misdiagnosis rates for both medically unexplained and medically explained symptoms and put this down to there being a higher proportion of lower-grade staff in this study than in other studies. Carson seems to have been working under or alongside Michael Sharpe in the University of Edinburgh’s Department of Psychiatry at the time he wrote it.
In 2005, Stone, Carson and Sharpe appear to have tried to debunk Eliot Slater’s landmark 1965 paper -
https://www.bmj.com/content/1/5447/1395 - and his conclusion regarding the misdiagnosis of hysteria. With others, they conducted a systematic review of studies in neurology looking at the misdiagnosis of conversion disorder -
https://www.bmj.com/content/331/7523/989 - and then claimed that it showed that misdiagnosis rates had fallen and had been at quite low levels (around 4%) since the 1970s. They judged that this was most likely due to an improvement in the quality of the more recent studies rather than to the development of better diagnostic tests such as imaging, but they didn’t include or mention the 2000/2001 Nimnuan et al study with its high misdiagnosis rates (despite presumably being well acquainted with it), and the inclusion criteria that they set would have excluded it. Hmm. They also pointed out that the rates of misdiagnosis of conversion symptoms/hysteria in older studies (which appear to have ranged from about 12% up to 30-40%) had been unacceptably high. Surely then they must also regard Nimnuan et al’s MUS misdiagnosis rates as unacceptably high, including its 19% misdiagnosis rate for neurology (when expressed in the same way). Why didn’t they mention this issue in their review if they rated the Nimnuan et al study so highly that they cited it in their other work as evidence of high MUS prevalence rates? They followed up with a paper -
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1299341/ - that argued that Slater’s 1965 study (conducted by Slater and Glithero -
https://www.sciencedirect.com/science/article/abs/pii/0022399965900048?via=ihub ) had been of poor quality.
Earlier in 2005, another Stone, Carson and Sharpe paper/article had been published -
https://www.acnr.co.uk/acnr jan feb 2005.pdf - in which they’d cited Nimnuan et al 2001 as evidence for high MUS/‘functional symptoms’ prevalence rates in neurology. They claimed that Slater had been wrong about misdiagnosis, and that the rate of misdiagnosing neurological patients with functional neurological disorders in more recent times had been ‘consistently’ below 10%. Don’t they know what ‘consistently’ means? It seems that these authors will reference the Nimnuan et al study when it suits them, to evidence high MUS/functional disorder prevalence rates in neurology, but will ignore its other findings when they’re discussing and reviewing misdiagnosis rates.
Their 2005 systematic review has been used to persuade and reassure doctors that they need not be overly concerned about the risk of misdiagnosis, and not only in neurology clinics but when labelling their patients with ‘MUS’ in any specialty. It was included as a reference in the RCPsych/RCGP’s 2011 “Guidance for health professionals on medically unexplained symptoms” - [available here on registering -
https://www.scie-socialcareonline.o...unexplained-symptoms-mus/r/a11G00000017xGwIAI ] - as evidence that 4% to 10% of ‘MUS’ patients have their symptoms re-diagnosed as an ‘organic’ condition. Readers weren’t told about the high misdiagnosis rates of the Nimnuan et al 2000 paper, even though that paper was cited in the same section as evidence of high MUS prevalence rates! The same low 4% to 10% MUS misdiagnosis figure was also included on page 7 of this 2018 Paediatric Mental Health Association (PMHA)/RCPsych guidance on MUS for paediatricians and other doctors -
https://paedmhassoc.files.wordpress.com/2018/12/mus-guide-with-leaflet-nov-2018.pdf but the reference given was not the original review paper but the 2011 MUS guidance (discussed above) that had cited it. Paediatricians weren’t warned about the high misdiagnosis rates found in a key MUS study and weren’t advised to proceed to more complex investigations if basic investigations were negative.
Astonishingly, it seems that Stone, Carson, Warlow and Sharpe were concerned that their systematic review of misdiagnosis rates may not have been up to the mark years before those two guidance documents on MUS were published. In 2009 they reported on a newer study of misdiagnosis rates -
https://academic.oup.com/brain/article/132/10/2878/333395 , advancing their lack of full confidence in their review’s conclusion as a reason for conducting the new study. (From the 2009 paper it looks as if they may have decided that their systematic review wasn’t good enough even before the review was published!) Do they know that their shaky review has been used since as evidence to persuade doctors that MUS misdiagnosis rates are quite low? If so, then don’t they have a duty as doctors to point out its shortcomings to the medical community? Perhaps they’ve done this somewhere, but the very next article in that 2009 journal was a paper by Wessely and others -
https://academic.oup.com/brain/article/132/10/2889/328088 - in which the Stone et al 2005 review is referenced as evidence that ‘organic’ explanations can be ‘effectively’ ruled out. The paper opens with Nimnuan et al 2001 again being cited as evidence for high MUS prevalence rates (supposedly making up 30–60% of neurological referrals) but there’s no mention of that study’s 19% (or 21.1%) MUS misdiagnosis rate for neurology that, going by the Stone et al 2005 review paper, would be considered unacceptably high. What excuse is there for Wessely not mentioning those accompanying high MUS misdiagnosis rates?
Last but not least are Chris Burton’s contributions in the 2013 BMJ Book (edited by Burton) entitled “ABC of medically unexplained symptoms” -
https://www.wiley.com/en-gb/ABC+of+Medically+Unexplained+Symptoms-p-9781119967255 - that was written as an expert’s guide to MUS for GPs and other primary care health professionals. Predictably, he reproduced the prevalence figures from the Nimnuan et al 2001 paper in Chapter 2 but in Chapter 3 gave the MUS misdiagnosis rates in neurology as being just 2% to 3%, even lower than the Stone et al 2005 review figures of 4% to 10%. The book gives no reference for his figures but he suggested that the rate could be similarly low in other disciplines too. Shouldn’t an expert in this field have known about the Nimnuan et al 2000 paper and its high misdiagnosis rates and thought them worthy of mention?
It looks as though several or many of the so-called ‘experts’ in the field of ‘MUS’ may have been pulling a fast one on the medical community, not only short-changing doctors and healthcare commissioners but putting patient care in serious jeopardy. If doctors expect half of their outpatients to have MUS then they are unlikely to be concerned about labelling 50% of them with MUS, and may well be predisposed to subjectively hunt for signs of MUS in patients to protect limited healthcare resources. (There’s a similar problem in the way that gender ratios have been wrongly touted for MUS -
https://spoonseeker.com/2019/03/08/mus-international-womens-day/ - so potentially prejudicing women’s care.) But if doctors knew that the accompanying misdiagnosis rates were dangerously high would they still be unconcerned? Is it really acceptable for doctors to knowingly tell their healthcare colleagues half-truths to persuade them to implement their treatment models? Perhaps the GMC would like to answer that one.
It may not have felt like it, but this has been quite a brief summary of the issue. The Nimnuan et al 2001 paper has been cited hundreds of times and there are details, aspects and people that have been left out here. If a key point has been missed, if something’s demonstrably incorrect, or if the ‘experts’ mentioned can show where or how they’ve alerted the wider medical community to the high MUS misdiagnosis rates of the Nimnuan et al study (apart from in Carson’s MD thesis), then efforts will be made to amend this post. If only all authors and journal editors would do the same.