UK:ME Association funds research for a new clinical assessment toolkit in NHS ME/CFS specialist services, 2023

I got my daughter to switch on my Samsung Health App last night to measure my activity.
It showed 2,000 steps today because I was feeling post-Covid again and couldn't do more than a little bit of pruning and weeding and help change the duvet.

My wife said it would show about 2,000 steps before we looked.
Seems pretty reasonable!
 
We need objective outcomes to be part of research, but also to be part of patient assessment. We know that subjective assessments are biased.
I agree completely with the need for objective data. However, the reality in England is that these ME/CFS services are not fit for purpose, so there's no chance that they will be interested in measuring objective outcomes. So it's not that I don't agree with you @Hutan, just that I'm cynical about the whole project of service evaluation when the funding is dire and what these services offer is ad hoc across various areas of the UK (service level is not determined at national level, but all the services are time limited and most don't accept any patient who can't travel and engage in their 'set activities').
 
More musings from me as I try to understand the purpose and process of this study.

From the MEA article:
The researchers undertook extensive work in preparation of the grant application. Using recent national guidance, they established the concepts to be measured and completed scoping reviews to identify any existing measurement tools. While these revealed that nothing suitable currently existed for ME/CFS, several were found that could be developed.
It would help our understanding of the study to see the 'concepts to be measured' and the reviews of existing tools.

From the MEA article:
The toolkit will address the assessment needs and research recommendations (for a core outcomes database) identified in the 2021 NICE Clinical Guideline on ME/CFS. It will be produced following consultation with patients and with clinicians to ensure the toolkit can record accurate and reliable data. Then it will be made available to the network of services in England and in Northern Ireland, Scotland, and Wales, when new specialist services are commissioned.
Will the core outcomes database bear any relation to any core outcome measures being developed in other countries for ME/CFS clinical research? USA, Norway??

Is there a difference between what is envisaged as outcome measures suitable for service evaluation and for clinical trials? If so, why?

I wonder what service evaluations are meant to evaluate - presumably there is some sort of standardised system in the NHS. So does it include things like
customer satisfaction, compliance with prescribed treatment, dropout rate, patient uptake of different therapies on offer, provision for very severe patients with regular home visits... Or does it focus on patient outcomes?
Each of those and doubtless other factors would require completely different data. Only the clinical outcome for each patient would use patient symptom/function questionnaires, the rest would be clinic management data.

We have seen service evaluation in ME/CFS being, I think, misused by publishing questionnaire based outcomes with claims that, for example, CBT leads to improvement and recovery.

If the same questionnaires designed in this project are to be used both for enabling better clinical care of individual patients, and also for service evaluation, these are such different purposes that I can't see how one set of questionnaires can serve both.
I have elaborated on this concern on another thread:
https://www.s4me.info/threads/the-m...vid-19-syndrome-2022-sivan.27803/#post-474617
To summarise my concern, lists of scores for separate items added up don't make a good basis for assessing clinically meaningful overall change in severity of disease and level of function, however 'relevant' each item is to the patient, for reasons I spell out in the linked post.

I don't know the solution to this problem of assessing meaningful improvement or deterioration purely by questionnaires.

It was interesting that a study discussed at the conference in Germany, which some of us have watched over the last couple of days, used the Bell disability scale as its main outcome measure.
http://www.oiresource.com/cfsscale.htm
That has the huge advantage that the patient themselves assesses which one of 10 levels best reflects their level of symptoms/disability/severity. Just one box to tick instead of a long series of boxes that then get magically added together in ways that often make no sense clinically.
Checking the forum with the search function, I'm reminded that this scale is often suggested and sometimes used as an outcome measure.

I hope the researchers on this project will consider the possibility of including the Bell scale or the similar MEA scale as part of their toolkit. They are by no means perfect, but I think they are a more realistic measure than any of the questionnaires I've seen.
 
I agree completely with the need for objective data. However, the reality in England is that these ME/CFS services are not fit for purpose, so there's no chance that they will be interested in measuring objective outcomes. So it's not that I don't agree with you @Hutan, just that I'm cynical about the whole project of service evaluation when the funding is dire and what these services offer is ad hoc across various areas of the UK (service level is not determined at national level, but all the services are time limited and most don't accept any patient who can't travel and engage in their 'activities').
That's the beauty of wearable technology. Daily steps and morning heart rate can be automatically sent to a secure data collection facility, and the data can be automatically analysed. The results can be produced with no effort on the clinic staff's part and no chance* for the clinic staff to manipulate them. Yes, there are privacy issues to be worked through, but the gains in accountability are enormous.

*Well, humans are very clever, and there would be ways. But it becomes a lot easier to reduce bias.
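To illustrate the kind of automated pipeline described above, here is a minimal sketch. The field names (`steps`, `morning_hr`), the sample values, and the summary statistics chosen are all assumptions made purely for illustration; no real device API or clinic data system is implied.

```python
from statistics import mean

# Hypothetical daily records, as a wearable might upload them.
daily = [
    (2100, 72), (1800, 75), (2500, 70), (900, 80),
    (1200, 78), (3000, 69), (2000, 73),
]
records = [{"day": i, "steps": s, "morning_hr": hr}
           for i, (s, hr) in enumerate(daily)]

def weekly_summary(records):
    """Aggregate raw daily uploads into the kind of summary a clinic
    could receive automatically, with no staff data entry involved."""
    return {
        "mean_steps": round(mean(r["steps"] for r in records), 1),
        "min_steps": min(r["steps"] for r in records),
        "mean_morning_hr": round(mean(r["morning_hr"] for r in records), 1),
    }

summary = weekly_summary(records)
print(summary)
```

The point is that the whole chain from raw upload to summary is mechanical, so there is no step at which clinic staff re-key or reinterpret the numbers.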
 
I just hope that they remember that the specialist clinics should also be screening for conditions that aren't ME/CFS.
I don't think most of these clinics do any screening that couldn't be done by a competent GP. I remember the list of blood tests my local clinic had (not sure if they even still do) and they were basically all tests that the GP could order. There's no specialist testing for things like POTS, for example, or even any suggestion that the GP should do a basic NASA lean test.
 
Apologies, I went on editing after I'd posted my last post. Here's the last bit I added now posted as a separate post.

My current conclusion:
For individual patient care - questionnaires, diaries, or apps recorded at home may be useful.
For a single number measure to assess change of severity over time - Bell scale, not questionnaire totals.
Wearables and apps where possible for movement and symptom monitoring.

For service evaluation - Bell scale as single score for each patient at each visit.

For research - Bell scale, cognitive testing, movement/body position monitors, severity of a few specific symptoms relevant to specific treatment

My conclusion is that the sort of questionnaires where lots of different things are added together to create single numbers, as measures to show improvement/deterioration, should be abandoned completely. If you want single numbers, use a single scale where the patient chooses which number best reflects their state of health.

That means you don't need any of the fancy statistical fiddling around, as in the Covid questionnaire, over whether you should use the 3 separate question scores for breathlessness in your total, or the worst one, or the best one, or the average, or whatever. It may be 'gold standard' methodology, but I think it can't see the wood for the trees.
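The adding-up problem can be made concrete with a toy example. The item names, the 0-3 scoring, and the two patients are invented for illustration and are not from any real PROM; the single scale only borrows the Bell scale's 0-100-in-steps-of-10 format.

```python
# Two hypothetical patients with very different symptom profiles
# (items and 0-3 scoring invented for this example).
patient_a = {"fatigue": 3, "pain": 0, "sleep": 0, "cognition": 3}
patient_b = {"fatigue": 1, "pain": 2, "sleep": 2, "cognition": 1}

# Summing items produces identical totals for clinically different pictures.
total_a = sum(patient_a.values())
total_b = sum(patient_b.values())
print(total_a, total_b)  # 6 6

# A single self-chosen level (here in the Bell scale's 0-100 format) leaves
# the integration of symptoms to the patient rather than to the arithmetic,
# so the two patients need not come out the same.
bell_a, bell_b = 30, 50
```

The totals are indistinguishable even though one patient is dominated by fatigue and cognitive problems and the other has a spread of milder symptoms, which is exactly the information a clinician would want.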
 
Good comment, Trish. Yes, that makes sense.

I don't think most of these clinics do any screening that couldn't be done by a competent GP. I remember the list of blood tests my local clinic had (not sure if they even still do) and they were basically all tests that the GP could order. There's no specialist testing for things like POTS, for example, or even any suggestion that the GP should do a basic NASA lean test.
That's so wrong, isn't it. If the clinics are CBT sausage machines, they are doing more harm than good. I am sure many people who go to these clinics think they will be properly screened for alternative diagnoses. We need specialist doctors there. Any MEA funded "Clinical Assessment Toolkit" should be striving to embed sensible differential diagnosis screening, not propping up a useless system.
 
We need specialist doctors there. Any MEA funded "Clinical Assessment Toolkit" should be striving to embed sensible differential diagnosis screening, not propping up a useless system.
That's actually another good reason for these clinics to be able to 'loan out' wearable devices (for those who can't afford to buy their own). I only picked up my own POTS when I bought an OMRON wrist blood pressure monitor (this was many years ago). My GP hadn't even suggested I might be suffering from this, but looking back I definitely had the symptoms right back to my teenage years, after my glandular fever had 'resolved'. I'd even gone to the doctor with my dizziness problems in my late teens/early 20s and it was never suggested.
 
If the idea is to detect whether patients' health is improving, we can ask what criteria a clinical assessment approach should meet to be considered valid.

My first thoughts are:
- will it collect accurate, relevant patient data, not susceptible to influence from clinic staff, family etc., whether explicit, implicit or unconscious?
- will it be resistant to the "pushcebo" effect, i.e. patients pushing themselves unsustainably over the short/medium term after coming to the clinic in the hope that the clinic is helping?
- will it capture treatment harms?
- if it had been used in the PACE trial, would it have been vulnerable to the same criticisms as the measures used in that trial?
- will it be trusted by patients as well as by HCPs?

In addition, if the test is supposed to help understand whether an individual patient's health is improving (as opposed to patients in general):
- will it be resistant to distortions caused by normal symptom fluctuations?

As @Trish says, if the focus is not improvement but the general acceptability of clinics to patients then that is an entirely different issue. For that, I'd prefer a simple customer satisfaction survey: "How likely would you be to recommend our clinic to others?" etc.
 
We need specialist doctors there. Any MEA funded "Clinical Assessment Toolkit" should be striving to embed sensible differential diagnosis screening, not propping up a useless system.
It would certainly be good for the suggested screening tests to be consistent across all services. I seem to recall the referral paperwork for the GP gave a list of necessary and then advisory tests to be done. For example, an HIV test was only advisory but I think TSH was necessary.
 
Will the core outcomes database bear any relation to any core outcome measures being developed in other countries for ME/CFS clinical research? USA, Norway??

Is there a difference between what is envisaged as outcome measures suitable for service evaluation and for clinical trials? If so, why?
This seems to be the relevant research recommendation from NICE -

Recommendation ID
NG206/02
Question

A core outcome set: What core set of relevant health outcome measures should be used for trials of treatments for ME/CFS and managing symptoms of ME/CFS?

https://www.nice.org.uk/about/what-we-do/research-and-development/research-recommendations/ng206/02
 
Do we have a thread specifically for this question? If not, maybe it would be worth having one separate from this discussion, but that can inform it.

We have had threads going back over five years discussing optimal outcome measures for trials. The problem with the NICE statement is that it confuses outcome measures for trials with measures for clinical management. The requirements are different for all sorts of complicated reasons. The concept of a 'core set' is pretty meaningless.

Unfortunately, the attraction of 'standardising' everything is hard for a lot of people to resist. But in real life standardising everything simply isn't how we go about things. And it is the opposite of 'patient-centred care' anyway.

The last time we revisited the trial outcome business, it seemed that most PWME agreed that we should stick to objective measures, since subjective outcomes are so obviously open to bias. That more or less rules out PROMs completely.
 
The problem with the NICE statement is that it confuses outcome measures for trials with measures for clinical management. The requirements are different for all sorts of complicated reasons. The concept of a 'core set' is pretty meaningless.
I agree with this and, as a longer-time member, am aware of many of these previous discussions, but wondered if a thread in its own right on the question would make it easier to separate this issue from the other parts of this discussion, making it simple to refer anyone new to this forum to a thread which highlights these problems.
 
Looking on the plus side of this research: we can't change everything just because we see a better approach if others will still use questionnaires/PROMs in their service evaluation and research.

That being the case, it is better to have a really ME-focused set of tools that at least tries to reflect reality for pwME rather than people going on using travesties like the Chalder Fatigue Questionnaire for want of anything better.

Given that this project is happening and can't be stopped or substantially changed, I would still want to participate in trying to make this the best it can be within its defined constraints. If we have to have PROMS, let's at least have ones with genuine input from pwME.

I hope whatever gets published about the end result, and included in any guidance for clinics and researchers, will point out that questionnaires alone do not provide the best resources for tracking or assessing severity and function for pwME, and that both clinics and researchers should move towards making more use of technology to track symptoms and function in pwME.
 
If we have to have PROMS, let's at least have ones with genuine input from pwME.

I was pointing to PROMs being inappropriate for clinical trials rather than for everything. Asking patients to report is likely to be useful in other contexts.

But even Sarah T dislikes 'PROMs', preferring clinical measurement tools. This just illustrates the way you get caught up in the memes of committee sausage machinery. So what if global health admin bodies talk of PROMs? If that is the wrong word, let's say so. 'Outcome' already implies coming out of something or being due to something, and the biggest mistake with all these things is to assume you can deduce cause just from a change. The brainlessness of the establishment system shows through, as always.

I think the idea is to have input from PWME and I too am keen to see this sort of project go ahead. I just think we need to be honest about what people are trying to achieve and how best to do it.
 
I have only read a few posts

Thought occurs that health service management might like something that gets +ve ticks and doesn't cost anything --- much superior to actually measuring something [objectively] and acknowledging that it didn't work!

As a former boss used to say -- let's think about the big picture --- how will this affect me?

In the age of the health trust --- how will this affect the management bonus --
 
Looking on the plus side of this research: we can't change everything just because we see a better approach if others will still use questionnaires/PROMs in their service evaluation and research.

That being the case, it is better to have a really ME-focused set of tools that at least tries to reflect reality for pwME rather than people going on using travesties like the Chalder Fatigue Questionnaire for want of anything better.

Given that this project is happening and can't be stopped or substantially changed, I would still want to participate in trying to make this the best it can be within its defined constraints. If we have to have PROMS, let's at least have ones with genuine input from pwME.

I hope whatever gets published about the end result, and included in any guidance for clinics and researchers, will point out that questionnaires alone do not provide the best resources for tracking or assessing severity and function for pwME, and that both clinics and researchers should move towards making more use of technology to track symptoms and function in pwME.

And there are big gaps that should not exist, starting with the shocking lack of any current 'register' of those with ME. We don't know how many there are, or how many might need certain really serious interventions, or the spread of severities. We certainly don't have a log of prognosis, or of how many who did x ended up worse, and that is pretty outrageous. I'd have thought a health system would be comparing different set-ups on meaningful data, like longer-term health, to see whether those under certain regimes ended up significantly more disabled, e.g. 5 years on, than they might have been if just left alone or given different care that, e.g., just provided adjustments - as a point of best practice.

So if I think about it from that angle, and from that of stopping the bucketing/lump-and-dump - which makes research and basic data like prognosis impossible, allows the PPS and fatigue clinics to carry on with a hand-wave of 'well, we only treat CFS anyway, so the fact we cause issues with PEM isn't a problem', and leaves those with misdiagnoses or treatable comorbidities needlessly worse - then it is pretty important stuff. It might also reveal links where certain comorbidities are common; the OI-type stuff seems one example, because of the PEM issue versus managing that.

I'm glad that severe ME has been mentioned. One big issue is making sure that, if these become descriptors, the scale for once isn't centred on the 'easy to access', lowest-common-denominator patients. I say that because this tends to be a bad habit (not with this new research specifically, but in general) across all sorts of sectors: having worked in a totally different field, I've watched processes and protocols being developed around the easiest cases instead of the most complex ones, and timed accordingly, so that the 'average time/resource' needed wasn't representative at all, and those landed with the complex work could have, e.g., 100 times the workload of those who only had the easy cases.

Whether intended or not, that ends up affecting provision, simply because people can't afford to deal with the cases that actually need to be prioritised, and don't have the skillset provided for them. That is where ME/CFS has been for a long time. If the most severe cases could be made visible, and ideally things turned on their head so that medicine was made most responsible for them, it would transform attitudes towards the rest and help ensure they don't end up there unnecessarily. Sadly, the current set-up has been a driver of this slightly toxic, over-positive 'hope for the best treatment' stuff, which is almost geared towards picking out and helping those who don't have ME/CFS, with those who do as the collateral damage.

I also think the old data gathered under the old clinics' service-assessment tools should be required to be dumped after this. I'm also rather suspicious of their ethics statements, as well as of their assessment processes, which were neither accurate nor unbiased, and their cohorts, which were neither unskewed nor representative (many use drop-outs to include only success stories in those they analyse for results). I really have a problem with conflicted HCP assessments, made from their own angle on filtered patient cohorts who filled in data under different pretences and all manner of pressures and influences, being treated as meaningful. I find it quite violating that these would still be used and kept, especially when you think about children and all the scary situations they and their parents might have been facing when they filled them in.
 