UK:ME Association funds research for a new clinical assessment toolkit in NHS ME/CFS specialist services, 2023

There have been famous cases where I live, for example one where a doctor was very bad at identifying cancerous changes in Pap smears. The inquiry found that quality control practices were inadequate at all levels (in the lab, and nationally). That case could not go undetected for so long now, because of the checks in place.

That sort of quality control is something quite unrelated, which of course we do all the time - or should. Audit of treatment efficacy in individuals is bogus, whether or not you know the treatment works. That is the point.

I don't see any point in bringing in issues that have nothing to do with what we are discussing to be honest.
 
But asking the treating doctor isn't either—they could never be sure or control for all the variables. Surely that has to happen in the trials before the treatment is rolled out.
I'm thinking a lot at the moment about the use of medicines for conditions they were not approved for. It turns out that a very large proportion of the treatments used in paediatrics are not supported by trials in that demographic. Even in general, it is surprising how much of medicine involves treatments without good clinical evidence. Combine that lack of evidence with a culture that is rather permissive of doctors doing whatever they think is best - often with good reason, but frequently without - and there is a big need for better independent scrutiny of outcomes.

So, I'm not suggesting we ask the treating doctor if a treatment worked. I'm saying the opposite, that monitored outcomes need to be objective.

I'm saying that I don't think criticising audits as a concept is a useful angle to our advocacy. Audits are part of the toolbox for improving health care. But, like clinical trials, they can be done badly or well. We don't abandon the concept of clinical trials because of the PACE trial debacle - we call for better trial methodology with objective outcomes and better controls on who is funded to do studies. We should take the same approach with audits.
 
I guess assessment, monitoring and audit are integral. The systematically monitored clinical assessment I appreciate a lot is whatever it was that uncovered the excess deaths in, e.g., mother and baby units, and likewise the unnecessary removal of organs by some "surgeon".

I now gather this is routine quality control. In general, it gave me the statistical incidence of medical error and iatrogenic harm, so I knew I did not imagine or exaggerate it.

Was all that done by systematic audit, evaluation, or what? Has this forum pinpointed what this toolkit will demonstrate or reveal when applied to clinical M.E assessment - an excess death, an unnecessary procedure, all the harm done by errors of medical ignorance, or, failing that, harm done by errant second opinions?

Would that systematic health and safety monitoring had uncovered the culling of patients by overdose in Gosport, before it took decades of squirming to acknowledge. Likewise the infected blood scandal, now warranting lifetime support for survivors!!! Lifetime support is also needed by GET survivors!!!

It's not just hep C that got spread. Then the excess deaths of NHS staff in pandemics went unassessed and unmonitored too. Was the MEA sold a genuine clinical assessment of M.E health & safety?
 
I'm saying that I don't think criticising audits as a concept is a useful angle to our advocacy.

I may have misunderstood or be at cross purposes, but these are not audits. They're nothing like audits.

These are assessment tools, designed ultimately to show that clinics not offering any treatment are somehow of value, and patients are expected to do all the work.
 
Audit of treatment efficacy in individuals is bogus, whether or not you know the treatment works. That is the point.

Think of diabetes clinics. It could be reasonable to assess the rate of limb amputation for the patients in the clinics. That is an objective assessment of treatment efficacy. The outcome is meaningful to patients and health care funders and it is objective. Finding a high rate of limb amputation doesn't automatically mean that the staff of the clinic are doing a bad job with helping their particular patients control their diabetes, but it does mean that there is some problem, some lack of equity that needs thinking about.

This is from a 2025 study protocol:
The application of clinical audits in managing chronic diseases such as type 2 diabetes has shown promising outcomes. For example, interventional audits have led to better management of glycemic levels and blood pressure, along with increased patient satisfaction [4]. Similarly, audits conducted within primary care settings have demonstrated significant improvements in guideline adherence and the appropriateness of diabetes screening and care [5,6]. These initiatives underscore the potential of clinical audits to identify deficiencies, guide targeted interventions, and promote sustained improvements in chronic disease management. In some cases, audits have prompted the development of specialized clinics and longer follow-up visits to ensure ongoing patient care and optimal outcomes [7,8].

Despite these favorable outcomes, there is a notable deficiency in studies investigating the repercussions of an entire audit cycle, particularly regarding chronic disease care, on clinical outcomes and documentation practices. The success of clinical audits largely depends on effectively implementing recommendations in healthcare settings [9]. However, organizational barriers such as limited collaboration between clinicians and management, unclear lines of authority, and differing perspectives can hinder progress. Overcoming these challenges requires fostering a collaborative environment and establishing clear accountability [10].

There are facilities and registries that audit treatments, e.g.:
Improving quality of cancer care through surgical audit
European national audit registries in surgical oncology have led to improvements with a greater impact on survival than any of the adjuvant therapies currently under study. Moreover, they offer the possibility to perform research on patient groups that are usually excluded from clinical trials.

Imagine that we did have a useful treatment for ME/CFS. Would we want monitoring of outcomes in the clinics treating people with ME/CFS? I think, yes, we would. What percentage of people are able to return to work after treatment? What specific treatment regime is being used, and are people completing the treatment course? How quickly are people getting treated? What demographic mix of people is being treated? I think we would want data collected and made available to researchers to answer questions such as: who is most likely to suffer significant side effects or not benefit at all? Does duration of illness make a difference to treatment outcomes?

Increasingly, with improvements in record keeping and data crunching capacity, the lines between clinical trials, quality assurance and research done on data from patient registries and patient records will be blurred.

My point is just that I think we should focus our criticisms on the treatments that the ME/CFS clinics are offering, the type of health care professionals in the clinics, whether the clinics should even exist, and the nature of the outcomes chosen for monitoring. But not whether it is legitimate to do clinical audits at all. Even if you disagree with me about the utility of clinical audits, and I accept that there are some problems with them, I think there are much stronger arguments to make against the MEA ME/CFS PROMS.
 
I may have misunderstood or be at cross purposes, but these are not audits. They're nothing like audits.

These are assessment tools, designed ultimately to show that clinics not offering any treatment are somehow of value, and patients are expected to do all the work.
The MEA has not been at all clear as to what the assessment tools would be used for. But, yes, my understanding is that one of the uses will be to evaluate whether the clinics are delivering benefits. That will be a clinical audit, or at least part of an audit. Just not a very good one.
 
I think the PROM is maybe one arm of an audit, if that helps.

I read somewhere about a PROM being heralded as brilliant because the patients were having hip replacements and, for some reason, two types of hip were being used. PROMs data showed patients had greater improvement, faster recovery and better satisfaction with the cheaper of the two, so that was the only one used going forward (of course, the low cost was the real good news for the NHS).

Now that story has a few red flags for me, but aside from those, I can see why a Patient-Reported Outcome Measure was useful in measuring recovery from a physical surgery. That sounds to me like a PROM with a purpose.

Compare that tangible situation with the current ME clinic situation: I could write a “useful” PROM - did our clinic make your functioning
A) better
B) no change
C) worse
D) something else (free text)

Please explain why you chose that option………….

There was no mention of the hip questionnaires having 90+ questions each, times 5! /s
 
I could write a “useful” PROM - did our clinic make your functioning
A) better
B) no change
C) worse
D) something else (free text)

Please explain why you chose that option………….

You could... and to be fair you've done a better job of it in five minutes than the MEA project has in over a year... but it wouldn't tell anybody anything.

Patients can't know, even if they felt better, whether it was the clinic that made them better. It's of no more evidential value than me telling you I got better because I took [insert name of any food supplement].

This tool isn't audit and it isn't evidence. Confusing a PROM with them not only weakens our argument, it risks giving it credibility it doesn't deserve.
 
I’m quite late to this part of the discussion but figured I’d chime in since one of the first internship projects I ever did was a Rasch analysis. It’s really just a statistical framework for refining questionnaires. I found that it’s primarily helpful for a few things:

1) catching questions that deviate from assessing a single underlying concept. If you are assessing functionality and intend a question to fall on the lower end (e.g. “I struggle to keep up a conversation for more than a few minutes”) but lots of people answer “yes” to it despite also saying they can keep a part-time job, that question would fail some of the fit tests in Rasch analysis (there's a small sketch of these at the end of this post). In this example, it would be because the question is capturing low functionality in some people and social awkwardness in others.

2) in a similar vein, it can help catch questions that are ambiguous and being inconsistently interpreted by respondents.

3) helping to eliminate redundant questions.

4) making sure that the questions are scaled to capture appropriate gradation across the survey population—e.g. you have enough questions assessing the lower end of functionality to be able to differentiate between very severe and severe.

So in that sense it has some utility for improving the structure of a questionnaire. But as @Trish and others immediately realized, it does not in any way ensure that your questionnaire assesses what you intended it to assess. Or that the results of the survey will actually be meaningful and useful, for that matter.

Rasch analysis is more complicated than other methods for questionnaire building and requires a bit of statistics knowledge to fully understand the jargon, but it’s hardly too complicated for a lay person to get the gist, nor is it a magical method that makes questionnaires worthwhile.
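
To make the fit tests in (1) a bit more concrete, here is a minimal sketch of the idea in Python. To be clear, this is illustrative only: it assumes the simplest dichotomous Rasch model, and the item, the "functionality" scores and all the numbers are simulated for demonstration, not taken from any real PROM.

```python
import numpy as np

def rasch_prob(theta, b):
    """P(endorsing an item) under the dichotomous Rasch model."""
    return 1.0 / (1.0 + np.exp(-(theta - b)))

def item_fit(responses, theta, b):
    """Outfit and infit mean-square statistics for a single item.

    Values near 1.0 mean the item behaves as the model expects;
    a common rule of thumb flags values above roughly 1.3 as misfit
    (e.g. an item that is really tapping a second, unintended trait).
    """
    p = rasch_prob(theta, b)
    var = p * (1.0 - p)                       # model variance of each response
    z2 = (responses - p) ** 2 / var           # squared standardised residuals
    outfit = z2.mean()                        # unweighted, outlier-sensitive
    infit = ((responses - p) ** 2).sum() / var.sum()  # information-weighted
    return outfit, infit

# Toy demonstration on simulated data (all numbers invented).
rng = np.random.default_rng(0)
theta = rng.normal(0.0, 1.0, 500)             # latent "functionality" scores
b = 0.5                                       # item location on that scale
good_item = rng.binomial(1, rasch_prob(theta, b))  # answers follow the model
noisy_item = rng.binomial(1, 0.5, size=500)        # answers ignore the trait

print(item_fit(good_item, theta, b))    # both statistics close to 1.0
print(item_fit(noisy_item, theta, b))   # noticeably above 1.0 -> misfit
```

The noisy item fails because its answers carry little information about the underlying trait, which is the same pattern as the "conversation" question above. But notice that nothing in the calculation knows, or can know, whether "functionality" is what the questionnaire actually measures.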
 
I started to write a post but realised I’ve already largely covered the two things I think we need to measure and why I think this achieves neither.

Above all, it’s not patients that need monitoring or educating by healthcare professionals, but the reverse. Because right now they fail at the first hurdle of ‘do no harm’. I don’t think this is deliberate, but it does seem impossible for us to individually overcome. Most of us have tried.

I wish there was a PROMS or whatever for the NHS and other services. There is so much that could be done to make things easier for us. To stop making us worse. Really simple things to remove barriers. But we need something to measure their performance.

The best I’ve been is when doing less than the limits this disease imposes on me would allow. Not pushing or being pushed.

I need people to do the things I cannot. To accept what I say I can and cannot do. To listen and learn and adapt how they work and support me.

It is not my understanding of where my limits are that is the problem. It is other people’s. I don’t see how any of this helps change that. It’s busywork.

One reason often given for these measurements is to measure outcomes of treatments. And hopefully we will need measures of those.

But this level of detail would only seem useful for interventions which have minimal impact (so ones which basically don’t work). And seem completely useless for people who are more severely affected. I can’t speak for those who are milder.

We’ve discussed patient led measures of outcomes and things like FUNCAP which could be used to measure effectiveness of treatments in other threads. But as Jonathan says that is something done as part of a clinical trial.

And for measuring effectiveness or safety of clinics, that should be something which is either done with objective measures for commissioners (do people complete courses of treatment, do they get re-referred, number of adverse incidents etc) or subjective ones, like simply asking if patients referred found it useful.

The focus and language here seem to do neither, and instead to be about rehab and measuring how effective patients are at managing themselves. Which is counterproductive, especially given how people with ME/CFS are seen within much of the NHS.
 
One more point, an obvious one but it seems important to reiterate for this when talking about looking at effectiveness or safety of clinics. If dealing with subjective outcomes where the subject is open to being influenced by someone who has motivation for a particular outcome, the outcomes should not be designed or measured by that someone. This is why we double blind things.

In short, this shouldn’t be measured by the clinics at all. And yet…the clinicians and the clinical services seem to be central to this whole approach.
 
The focus and language here seem to do neither, and instead to be about rehab and measuring how effective patients are at managing themselves. Which is counterproductive, especially given how people with ME/CFS are seen within much of the NHS.

[My bold]

Exactly so.

And we already manage ourselves, free of charge, to the highest standards it's possible to achieve. No one else should be earning money from that, or taking the credit for it.
 
One more point, an obvious one but it seems important to reiterate for this when talking about looking at effectiveness or safety of clinics. If dealing with subjective outcomes where the subject is open to being influenced by someone who has motivation for a particular outcome, the outcomes should not be designed or measured by that someone. This is why we double blind things.

In short, this shouldn’t be measured by the clinics at all. And yet…the clinicians and the clinical services seem to be central to this whole approach.
This is a really good point. I've often fantasised about being able to doorstep ME/CFS clinics and survey patients on their way out. The clinics are marking their own homework.
 
I've often fantasised about being able to doorstep ME/CFS clinics and survey patients on their way out.
I know what you mean, but it needs to be done with much more distance. I know I would have said more positive things after unpleasant experiences and outcomes from specialist services, simply because someone was doing something. And probably out of a need to justify to myself the worth of what I was putting myself through - a sort of sunk cost fallacy. That is a real psychological phenomenon.
 
I know what you mean, but it needs to be done with much more distance. I know I would have said more positive things after unpleasant experiences and outcomes from specialist services, simply because someone was doing something. And probably out of a need to justify to myself the worth of what I was putting myself through - a sort of sunk cost fallacy. That is a real psychological phenomenon.
I agree. The optimum time would be when people have had time to feel the real impact of their experience at the clinic, and to have informed themselves a bit about the politics.

The problem is access, of course - the clinics have a monopoly over quizzing their patients and we (and independent researchers) can't reach them.
 
Thoughts on the purpose of these PROMs and clinical toolkit.

I think the project was set up with muddled objectives. At first I thought the aim was to recognise that the usual use of the Chalder Fatigue Questionnaire and other similar fatigue questionnaires, and the SF-36 physical functioning scale, are not adequate or appropriate for assessing outcomes of clinical trials, or for use in clinics either to guide advice given to patients or for service evaluation.

So the idea was to replace these with a set of more ME/CFS appropriate questionnaires that could be used:

1. for new patients attending clinics to provide useful information to the clinician about the pwME's symptoms, including PEM, severity and function. The idea being that this forms part of the process of diagnosis and of developing care plans and providing advice to patients.

2. As a record that tracks changes over time for individual patients, being filled in again at follow-up and regular reviews with the clinician, to help them see changes and what new care is needed.

3. As a record for clinics to use in service evaluation, to see whether their patients' ME/CFS is stable, improving or worsening, and to evaluate whether the clinic is providing a useful service.

4. As a possible set of outcome measures for clinical trials.

We have seen the harm done on all these fronts from the use of CFQ and SF-36 PF.

So the MEA decided it was worth funding the production of a new set of better PROMs, and paid Sarah Tyson and some BACME and MEA people to produce them.

What they don't seem to have thought through is what those PROMs would be required to do that can be done by questionnaire, and done better than the old ones, nor how the promotion of them might be used to perpetuate a model of care based so strongly around questionnaires, run by the same set of therapists who had misused the old PROMs for so long to justify their existence.

Nor do they seem to have recognised that subjective questionnaires designed to be comprehensive and used as the basis of care planning (pacing-up advice etc) would be wholly unsuitable as 'measures' to be used as clinical trial outcomes.

Nor did they have any provision for oversight and termination of the project if it went off the rails, as this project has clearly done.
 
The MEA has not been at all clear as to what the assessment tools would be used for. But, yes, my understanding is that one of the uses will be to evaluate whether the clinics are delivering benefits. That will be a clinical audit, or at least part of an audit. Just not a very good one.
To evaluate clinics, it needs a survey written and administered independently, one that focuses specifically on asking what patients are receiving and how useful they found it.

For example: whether there is support with forms or statements for employers, benefits, or equipment; whether they did what they could to describe pacing appropriately and to make it more achievable through those adjustments; and whether, at an individual level, they showed sufficient understanding (or willingness to understand) when trying to help someone, for example, plan how to make their house accessible, or work out whether a wheelchair would be useful for them.

There is something quite specific about ME/CFS, both in the type of disability and in how patients themselves might understand it early on, versus once they've been through trial and error with pushing themselves or kidding themselves. So it really matters how any survey tackles the problem of communication styles and of 'telling people what they want to hear' through fake promises of 'maybe this will help'.

So the art of the questions asked would need to rule out any ambiguity at all - no terms like 'pacing' (energy conservation) where everyone can be told a different idea of what it means. It needs to be made clear that twisting the expectation that someone writes a letter to an employer confirming reasonable adjustments are needed, into a clinic overstepping the mark and using that position as power over the individual, would count the opposite way. The same applies to the issue we have heard of where clinics use 'multidisciplinary meetings' to make up untruths and fake narratives about a patient, fake safeguarding concerns, and so on. So there almost certainly need to be red-flag questions too.

The end result being whether the clinic is meeting the basic criteria for which it should be funded, i.e. level 1 = they just about manage to diagnose, but to reach level 4 or 'gold' they need to be providing good adjustment/benefits support at a reasonable level, plus autonomy and people feeling safe and not coerced.

I.e. it needs something very specific and very direct. And who administers it, and how, is just as important.


But on the other hand, it seems like this PROMs project appeared out of nowhere, and then suddenly we were finding out there was something to do with datasets in the implementation plan, etc.

Those datasets are more to do with things like the basic prognosis data we should have - what percentage recover in the first, second, or third year of being ill if they get suitable rest, and so on; past that stage, we believe it is 5%. But medical records were built on silly assumptions, like: if people didn't come back to a GP for x years, it must be because their 'CFS' had 'recovered' - rather than that they still had it and it got worse, but the GP had made it clear there was no help.

And yes, if adjustments and support aren't put in place, then unless those are coming from somewhere else, I think most of us will get more severe. But for some reason the 'system', and almost everyone in it, seems determined to believe that we all just disappear off and don't get more debilitated and unwell.

I don't fully understand how we can tackle that one - it needs a thread all of its own, because the data input points are so compromised. I suspect it is a two-part job to start un-compromising those input points: e.g. how do you make a GP put accurate rather than inaccurate data on a record when beliefs get in the way? And why don't other conditions have this issue?

But putting it into the same project, along with loads of other promises, just compromised what was being measured. So in some ways I'd have sympathy if that expectation had been 'done to' Sarah and the team, because you can't have one size fits all and have it do anything appropriately. Quite simply, you are then basically dropping all research design: you can't truthfully have a research design for 'all possibilities'. The point of research design is to answer a question, accurately, by making sure the data pool is collected to match the question and the intended usage.
 
I think there is an issue, not unique to the MEA, where some organisations do not realise that if you are going to commission something - be it marketing, research or insight - you have to have a member of staff internally with the skills, hired because they are capable and qualified, to do that commissioning and to put briefs and oversight together.

You should never just leave whatever agency, individual or hired team to it. It needs someone whose skills fit what is being commissioned, and who has significant amounts of time to half-design the project on the internal end before liaising with the agency/team to see what technique they would use, what is possible within the costs and recruitment, etc.

Something like this is not just flinging money at a team with an approximate description of what you want, as if it were a grant for an academic's research project that was already heavily defined and wasn't going to produce something like a toolkit or a measurement.

And the agency/team has to bear in mind its own overheads and what can be done for that amount, versus what could be a changeable 'customer' hoping for it to tackle different things and adding bits to the list - a bit like a builder dealing with someone changing their mind about adding another bathroom half-way through.

It still feels like there is a missing oversight role that really should have been running this, with sufficient resource for the commissioning and for deciding what was best to do first. Given how far this has expanded, it was probably big enough to need one individual with their own support team - in effect, an internal MEA research and development team.

There is another reason this 'missing aspect' is key: the person in that role on the MEA side does something pretty hefty in translating the governance and the representation of their target audience into something the project team is then quite specifically commissioned to do (but the project team would not itself be subject to that governance and reporting, etc.).

When you break this down, it is potentially a project - sorry, in reality now a large number of projects, certainly not just one 'objective' - that needed a team for probably years, because of how many 'potentially this or that' items have been allowed to be bundled under it. Although they could have hit the ground running with a long-term strategy and then bitten off the first things to tackle, i.e. the building-block no-brainer items or the highest-priority projects.

It really isn't the same role as signing off standard, ready-made academic research projects, particularly now things like apps are being added in. There is no way oversight and control can be maintained as a minor part of a role that is about something else entirely, because we really are talking about having wandered into new-product-development territory. And that involves quite specific skills, experience, support and structure.
 
I think the project was set up with muddled objectives.

Yep. To do this successfully you'd have to understand the difference between a customer satisfaction survey, a functional assessment, a service audit, and the measurement and analysis of trial outcomes.

Anyone who did know that wouldn't even attempt to roll them up into one. The purposes are at odds and the range of professional expertise required to design and utilise them is vanishingly unlikely to be found in one person.
 