All NHS clinics will have outcomes measured in some way or other, including the ones you mention; there is benchmarking, service evaluation frameworks, etc.
Actually not. In medicine you treat people according to evidence and do not expect to judge whether that has been beneficial. As I mentioned before, you cannot judge that by 'audit', even though that was popular in the 1990s. It rapidly became clear that it was a bogus substitute for reliable trials, just as these PROMS are.
You can usefully evaluate waiting times, and politeness and pharmacy queues, but not treatment. That is done by trials.
The fact that rehab people will look at you as if to say you are mad if you point this out is merely an indication of how dumb they are. I trained in rehabilitation and decided not to join them.
I think the best approach is to target the key issue, which is that, as Jo says, you can't measure the effectiveness of the clinic in this way. You should do proper trials on treatments, and then apply them if the trials show them to be effective. The assessment toolkit is fundamentally misconceived: they should never have attempted to create one.
We're dealing with people who can't even grasp that no tool is needed outside clinical trials.
Catching up on the thread - apologies for the multiple quotes and probably taking some of those quotes out of context.
Summary - I don't think we should be seen to be dismissing the utility of evaluation of outcomes in individuals and in clinics - checking and accountability is important. The real problems are that
1. the BPS/rehabilitation treatments have been shown to be ineffective in trials, and
2. the use of subjective outcomes in the evaluation of benefits is fraught, and deeply problematic in the evaluation of BPS treatments for ME/CFS.
I think there is some nuance that needs to be remembered when we make arguments against the Tyson PROMS. It is entirely legitimate to evaluate clinical outcomes in medicine.
At the individual level, yes, doctors treat according to evidence, but then they should adjust treatment according to the perceived outcome. That perceived outcome will be influenced by what the patient says (e.g. the pain has gone) but also things like physical signs, blood tests and scans.
At the clinic and population level it can be very useful to evaluate outcomes outside of clinical trials. For example, what are the clinical outcomes of treatment for a specific sort of cancer in one clinic as compared to another, or in one region as compared to another, or for people of one sort of demographic as compared to another? Did people of working age return to and stay in employment? Those audits, those evaluations can begin to tell you things that a trial cannot, or at least has not yet.
For example, worse outcomes from one cancer clinic might lead people to realise that more of the patients in that clinic are only being referred when their disease is very advanced. So, you could improve the way that screening programmes work in that region, or educate the GPs to recognise the symptoms better and/or direct more resources so waiting lists are shorter. Or the finding of worse outcomes might lead people to realise that the clinic is not applying best practice treatment regimes or patients aren't complying with treatment requirements. You might find that the clinic has cured the disease but the treatment has left patients permanently disabled. The finding of worse outcomes for people of one sort of ethnic background compared to another can suggest that work is needed to find out why that is happening.
A clinical trial typically gives the treatment the best chance of working, but other factors can result in a potentially effective treatment failing when it is applied beyond that controlled environment.
The problem, as with so much of ME/CFS treatment and BPS, is when you rely only on subjective outcomes to measure clinical outcomes, perhaps out of a misguided belief that how patients feel about their health is the beginning and end of the problem, that there isn't really a disease. As Jonathan said, patient surveys can be fine for measuring things that are best quantified by subjective outcomes. So, 'Did you feel respected? Were you treated with dignity?' are legitimate questions for a survey. But patient surveys are typically not the best way to determine whether a treatment fixed a disease.
I think that is one point we need to push hard on. Assessing benefits at the individual or clinic level based only on whether patients report feeling better (that is, those patients who turn up to answer a flawed survey immediately after a treatment programme aimed at training them to minimise their symptoms) is not valid. It creates enormous opportunities for manipulation of outcomes by those who benefit from suggesting that the current treatments and delivery systems are working.
It's not the auditing, the evaluation of outcomes itself that is the problem. That can be very useful. It is how the assessment of benefit is done. (Of course, if trials indicate that treatment approaches are useless, as is the case with BPS approaches to ME/CFS, then constructing elaborate assessments of them in clinics, subjective or not, is not going to legitimately show that the treatments have become useful. It's a huge waste of time and resources that is likely to produce misleading results.)