UK:ME Association funds research for a new clinical assessment toolkit in NHS ME/CFS specialist services, 2023

True, but for the future, it is very important to now record the issue of consent: the lack of clear, impartial information on the potential impact of this research, which should have been provided prior to requesting consent, is a 'material risk' which patients have the right to be informed of. The size of the risk is immaterial.

Consent can be withdrawn at any time. If a patient in the future suffers harm from treatment which can be traced back to this PROMS (similar to the use of the SF-36 in PACE) as an outcome measure, they can point to this thread as evidence of what they should have been made aware of. We're just doing what Melvin Ramsay and others did for us: recording. This thread now satisfies the first part of the burden of proof in a future Montgomery claim.
Good to know
 
So it seems Gladwell was working on this report, evaluating the Chalder Fatigue Scale, prior to the pandemic. Luckily, it’s shown that it has issues which could be rectified by PROMS. Just as Sarah Tyson and Gladwell are weeks away from releasing their new MEAQ as part of their PROMS. Neat.
 

Well, that is what they put on it as their suggestion. But of course, the only thing the actual research was doing: Trial Report - Exploring the content validity of the Chalder Fatigue Scale using cognitive interviewing in an ME/CFS population, 2024, Gladwell | Science for ME (s4me.info)

is looking at the CFQ and talking to people at their clinic, prior to the pandemic and the new guideline, in order to critique it.

His report shows no evidence that PROMS would be better rather than worse. And it doesn't test PROMS.

It's like getting to sell your new maths course by saying the old one isn't getting good results, without being expected to say whether yours is any different, never mind 'better', when closing the same gap is what is actually needed.

The bigger issue is that PACE used fatigue and physical function to define its CFS tests. We know what their claimed results were back then, and we know those have now been reanalysed. We also know that Crawley et al (2013) used the same measures of fatigue (CFQ) and physical function (SF-36, with an 11-point improvement defined as recovery) and failed to show a difference in physical function.

So is PROMS now effectively a sneaky way of moving the pesky 'physical function' bit out of 'what needs to be measured'?

And realising that they needed to 're-brand' the fatigue scale, to hide that it is just based on a subjective questionnaire about fatigue (as the name 'Chalder Fatigue Scale' is an issue).
 
Sigh, so the project rolls merrily on.

I hope the PASS questionnaire will be radically revised.

I hope that the emphasis on physical function being the main measure will be a firm line from the ME Association

And, bearing that in mind, there needs to be some standing back and thinking about how that should be appropriately measured, without confining it to the 'constraints' pitched by e.g. Gladwell and the Crawley et al (inc White) paper on PROMS.

Technically, the acronym PROMS just stands for certain words. I'd really like us to investigate whether there is any 'instructive' guidance out there, e.g. from the NHS (if this is about clinics), that constrains and defines it further, as to what it must include and so on.

To make sure that what people think is 'dictated', if it is to be a PROM, really is dictated, and so on.

And indeed the methodology behind it, so we can understand how weightings are translated. As others have said, there will be some sort of algorithm, but those are based on weightings, and so there is stuff underneath setting these things that can, and should be able to, be made more transparent.
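For illustration only: such algorithms are usually just a weighted sum of item responses. The item names, weights, and 0-4 scoring below are entirely invented (nothing from the actual project has been published), but they show why the weighting table is exactly the thing that can and should be made transparent:

```python
# Hypothetical sketch of a weighted PROM composite. All item names,
# weights and the 0-4 response range are invented for illustration;
# they do not come from the MEA/PROMS project.

def composite_score(responses, weights):
    """Weighted sum of item responses, rescaled to 0-100."""
    raw = sum(weights[item] * value for item, value in responses.items())
    max_raw = sum(w * 4 for w in weights.values())  # items scored 0-4
    return 100 * raw / max_raw

# Invented example: fatigue items weighted 2x, physical function 1x,
# so fatigue answers dominate the published score.
weights = {"fatigue_1": 2.0, "fatigue_2": 2.0, "phys_func_1": 1.0}
responses = {"fatigue_1": 3, "fatigue_2": 3, "phys_func_1": 1}

print(round(composite_score(responses, weights), 1))  # prints 65.0
```

With the weights published, anyone can recompute a score and see how much of it is driven by each subscale; without them, the single number is uninterpretable.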
 
A report is out
https://meassociation.org.uk/2024/04/patient-reported-outcome-measures-proms-in-me-cfs/
just going to read it

This is Gladwell’s Chalder Fatigue Scale review.
https://www.tandfonline.com/doi/full/10.1080/21641846.2024.2335861
Ethics granted 2018


Well, this was something of an article/advertorial, put together to PR what has been happening. Yes, it is mostly about selling the PROMS project (trojan-horsing the paper, which was about the CFQ and didn't test PROMS at all, as a 'reason for press release', but then quickly moving on to '... and so the PROMS project...'). Lots of testimonials from people who filled in the survey and are apparently saying wonderful things.

Cynical me might use the term 'bolted onto the front' about the one-liner mentioning the paper Gladwell was involved with (which got its data before lockdown from its 13 participants and didn't ask them about PROMS) that was published on the 6th. The other content might have been written prior to, or in anticipation of, this.
 
MEA article:

"The research team have personal experience of ME/CFS so we understand the energy cost of completing the surveys. Please know that your efforts are greatly appreciated and your feedback is being put to good use. Thank you so much!"

"We have been overwhelmed and humbled by the thousands (literally!) of people who have supported the project so far, by completing the surveys and providing invaluable feedback about the tools. These will be combined with the results of the statistical analysis to revise the tools so the final versions are ‘fit for purpose’."

"With perfect timing, we are putting the finishing touches to the survey to test out the next assessment in the toolkit; The ME Activity Questionnaire (MEAQ). This aims to assess activity levels. We will publicise the link to complete the survey in a couple of weeks, via the MEA newsletter."


VS's response, from which I have cut out some rather inappropriate aspects for various reasons (so as not to distract), merely because it is interesting that the topic of discussion is very specifically 'physical function', and a paper which had used the SF-36 for that.

You haven’t actually presented any evidence that the project would be unsuccessful, or any rationale why completing a questionnaire would cause harm, which is clearly implausible.

There is merely a link to a paper which examines the use of two assessments of health-related quality of life as secondary outcomes for long-term follow up over two years in clinical trials for people with MS. The results merely show that specific measures of physical disability are more sensitive to changes in physical disability than measures of health-related quality life which include assessment of physical functioning, physical role, pain, general health, vitality, social function, emotional role, and mental health. (in the case of the SF-36) and activities of daily living and wellbeing in the case of MSIS. A result that will of no surprise to anyone.

This has no relevance of to the use of a measure of post exertional malaise in clinical practice, which is inevitably short term compared to the time scale of this study. If you would like to find out more about how clinical assessment tools can ‘work ‘ in practice, try these.

I think elsewhere on the thread others read through these papers 'offered as a retort' and found they were paywalled and not relevant; I'll look up the exact comment.


OK here are the relevant posts:

I had a quick skim through what these were about. Many are paywalled, so we can't study them. Most are about measuring patients' progress in stroke rehabilitation, clearly not relevant to us since there is no treatment leading to progress, or rehabilitation, in ME/CFS. And anyway, the research was mostly about how therapists use the PROMS, not about what patients find useful or not.

PROM stands for 'patient reported outcome measure'. Clinical care for ME/CFS is not about outcomes of rehab and progress; it's about coping as well as possible. Clinical encounters need to be about diagnosis, treatment of symptoms with medications where possible, pacing education, support, and ensuring provision of aids and personal care where necessary. I don't understand where a lengthy questionnaire about PEM fits into that. Personally, I'd rather be provided with a wearable step and heart rate monitor, with advice on how to use it to help prevent PEM, than try to analyse which specific activity might in hindsight have triggered an episode of PEM and figure out whether to say it's a strenuous activity or not.

I had not bothered to look at these. This is not even usable data. It is a series of 'qualitative' studies that tell us nothing more than, to quote one, 'Staff were generally positive about the toolkit'.

If this is what is aimed at there is no justification for funding it.
I am afraid it is more a question of 'If you would like to find out more about how rehabilitationists try to convince themselves that clinical assessment tools can ‘work ‘ in practice, try these.'

Why oh why does research for ME have to be based on such poor quality methods?
There is something strange about the psychology of this. One of my most intelligent and capable trainees spent ten years learning about evidence quality in research and was capable of taking apart anything substandard as a registrar and yet became an eminent rehabilitationist happy to go along with this sort of qualitative stuff and to claim that we know what works.


Now, this final bit; I kept that last one, editing out some 'problem areas'. Beyond the tropes

(which one could suggest acted as a 'massive distraction' to divert things well away from the very thing that had just come up as a constructive discussion/fair question further ahead in the thread)

I think it is worth studying this line.

I am not engaging with this thread any further now. I had joined it in anticipation of a constructive, critical discussion which could help progress the project and use of measurement tools in ME.


What she was responding to appears to have been exactly what she claims here she joined the thread for.

I'm now looking up a certain paper she mentioned, and wondering whether its content was a bit too 'near the knuckle' on this exact issue.

She didn't want to answer the question??

- physical function being 'an issue' for e.g. Crawley and White (and PACE?): when you don't get the results on one half of a measure, then changing the measure to phase that half out might be a thought.


Interesting question. I Googled MS and PROMS and got this hit first.

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9326853/

It looks as if they don't work!

That's a very interesting paper. I'll make a thread for it. Thread here.

I do hope, @sarahtyson, that you will check out the thread on PROMS in MS, linked again here:
The MSIS-29 and SF-36 as outcomes in secondary progressive MS trials, 2022, Strijbis et al

Is it worth us, with the new hindsight, having a careful look back through these?
 
Another thing that strikes me as worth looking into/considering, having read the Crawley et al (2013) paper (including P D White), which found no result for physical function,

is how 'rolled in together' these PROMS are aiming to be, regarding the end score or profile or whatever the output is.

If, for example, your research failed because you'd set an 11-point increase on the SF-36 as the hypothesis needing to be met for 'improvement' to claim a treatment works, but you 'got a result' on a subjective fatigue scale, then if you were 'putting all the questions together' there is a risk of:

- weighting certain factors over others on input (easiest example: finding there are 2x each question from the CFQ added into a model and 1x each question from the SF-36)

- or having fewer questions from the physical function side

- or weighting on 'calculation' (don't make people answer the CFQ twice, but just give answers to certain things more weight in the calculation)

- or 'scoring' limits that can impact calculation, e.g. if one measure has ceiling or floor effects that limit how much it can change vs a more sensitive measure, meaning that a small change in the sensitive one is more likely and won't be cancelled out by an equivalent change in the other
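That last point can be sketched with invented numbers (the subscale ranges and cap below are hypothetical, not the real instruments' scoring): once one subscale is at its ceiling, the combined score can only track the other.

```python
# Invented example: a fatigue subscale with a 0-33 range summed with a
# physical-function subscale capped at 10 in this hypothetical scoring.
PF_CEILING = 10

def combined(fatigue, phys_func):
    # Naive sum of raw subscale scores, no rescaling.
    return fatigue + min(phys_func, PF_CEILING)

# A patient already at the physical-function ceiling: further genuine
# change there cannot register, so only fatigue moves the total.
before = combined(fatigue=25, phys_func=10)   # 35
after = combined(fatigue=18, phys_func=12)    # 28: the 12 is capped to 10
print(before - after)  # prints 7: the whole shift comes from fatigue
```

The combined score 'improves' by 7 points even though the real physical-function change was clipped away entirely, which is why the scoring limits and weightings underneath any composite need to be published.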

We might not love the physical function side of the SF-36 specifically; however, the issue with 'delivery' somehow produced a 'change' in the CFQ without a change in physical function, and so I think it is worth our studying the elements close to those questions very carefully.

It's interesting how the real end conclusion from Crawley et al (2013), vs what they thought back then were differing PACE results (although are they now? did reanalysis show any change in physical function?), is that 'in clinic' the scores on the subjective fatigue scale apparently 'got changed' but people's physical function didn't.

Whereas they thought/claimed that PACE did both (although they also changed, pertinently to this thread, some of the 'measures', i.e. what was defined as recovery; so was it that they changed it from '11 points difference'?).

I'm trying to 'get inside the mind': if you really believed your stuff on 'change the mindset, become less disabled', what precisely would you have thought was missing on the delivery of CBT and GET, that needed to be measured to pick up on these deficiencies/differences in 'delivery', which meant the clinic lot weren't 'jumping the shark' and changing physical function despite changing 'fatigue scores'?

Is there even anything logical there (or anything illogical that could be their logic) that doesn't just point to the hidden message of 'better get rid of this measure', followed by claiming you need a new one 'because the results aren't consistent with the trial, because some physios must be doing it wrong'?
 
From MEA press release (my bolding):

"Well-developed assessment tools that represent people’s experience and produce robust, good quality information/data have several benefits for both people with ME/CFS and NHS specialist services. First, and most importantly, they are a way for people to identify and summarise their difficulties,

Secondly, the information the tools provide can act as a starting point for discussions with the clinical team about people’s needs and priorities, and how to manage them. They can also be used as evidence of difficulties and limitations in applications for disability benefits, or workplace adjustments, for example.

Finally, when combined, all the elements of the toolkit can be used to assess how well NHS specialist services are performing, by identifying what they are doing well and areas for improvement.

The final two elements of the toolkit, which will assess patients’ needs (called a clinical needs assessment) and their satisfaction with NHS specialist services (also known as a patient reported experience measure) will examine these issues in more detail.

This information can be invaluable for NHS specialist services to develop a business case for service improvements. For example, demonstrating the need for more staff, input from different professions, or more flexible ways of working. The assessments in the toolkit could also be used as outcome measures in clinical trials, but this is a secondary purpose.

With perfect timing, we are putting the finishing touches to the survey to test out the next assessment in the toolkit; The ME Activity Questionnaire (MEAQ). This aims to assess activity levels. We will publicise the link to complete the survey in a couple of weeks, via the MEA newsletter."



I've bolded a few bits because this does seem to be about using the patient and how they progress as a measure.

Regarding the following sentence in the middle, it is worth noting how they have termed the second of these, 'satisfaction with NHS specialist services', a 'PREM', i.e. a patient reported experience measure:

"The final two elements of the toolkit, which will assess patients’ needs (called a clinical needs assessment) and their satisfaction with NHS specialist services (also known as a patient reported experience measure) will examine these issues in more detail."

I can't help but feel that a lot of the claims and 'look at the shiny keys' early on were sort of implying that the PROM was that PREM, and about measuring satisfaction with the services.


Particularly given @Maat's description of what they experienced when they had to sign off consent for their GP, employer's HR and OH and so on to all be able to talk about them and plan the 'return to work', at the same time as the clinic was 'GET-ting them':

why, instead of measuring 'physical function', have we got a tool that is measuring all of these things in a person, and then, with the final activity questionnaire, I guess their activity levels?

So someone who claims they couldn't use tech to make physical function measures objective now wants clinics to be monitoring activity levels?

I'm sorry, but it seems like some of the discussions that got closed down for certain claimed reasons just do not add up with what is then being sold here.

Why would you 'monitor activity levels' instead of discussing 'physical function', and the methodology that is most appropriate and accurate for that?
 
what precisely would you have thought was missing on the delivery

Not hypnotic enough?

Magic spell not working?

But seriously, I wonder if they're confusing the subjective sense of fatigue with actual physical function?

When I'm more active I experience noticeably less fatigue, and if I was asked about it without knowledge of the way the information would be used, I'd say that. However, my actual physical function will be somewhat lower in the following days, because ... well, I have ME and I can't sustain periods of higher activity.
 


And how on earth is this PROM going to identify what any clinic is 'doing well'?

I'm very familiar with benchmarking and passing on/exchange of best practice in other sectors and scenarios

Often these things begin with someone standing up doing a case study about how they built a service around proper customer input and built-in co-design and co-creation for example. And how they did checks through it.

Or someone who was tackling a specific issue and was able to describe how they investigated what was going on there and how they could improve it, what it meant for staff as they had to change themselves and their way of doing things, and what the benefits to all were.


They rarely involve 'measuring up' a load of customers on so many different factors, while being quite closed-minded about the methods you will possibly consider for each.

And I'm suspicious of the 'patient input' claims. Is that a facade? We can all say we 'talked to 25 people' with a straight face.

"
This was a very well written and relevant paper that illustrated the pitfalls of failing to include people with lived experience when developing measurement tools, leading to poor quality data and misleading results.

In the MEA-funded Clinical Assessment Toolkit project we are working with people with ME/CFS and clinicians from ME/CFS specialist services to produce a suite of measurement tools that overcome these short-comings."


So, if I pick out what the shortcomings of the CFQ were, as identified by 13 people who were at the Bristol clinic in 2018, from those very specific one-to-one interviews with a psychologist...

Firstly, the CFQ at least did show their intended effects in PACE and Crawley et al (2013). So are they saying/agreeing that the only effect in the latter, 'fatigue', from their 'treatment' was the result of a tool that 'leads to poor quality data and misleading results'?

Because if so, they should first be campaigning to get both of those treatments, and anything masquerading as them, out of every clinic?


Then, is this PROM, gathering all of this on individuals, the only, most accurate, and most acceptable-to-patients way of 'fixing that'? So I assume it is all of those questions on the CFQ that have disappeared, and not the 'physical function' scale that didn't get their desired results?
 
Not hypnotic enough?

Magic spell not working?

But seriously, I wonder if they're confused about the subjective sense of fatigue and actual physical function?

When I'm more active I experience noticeably less fatigue, and if I was asked about it without knowledge of the way the information would be used, I'd say that. However, my actual physical function will be somewhat lower in the following days, because ... well, I have ME and I can't sustain periods of higher activity.

More like 'the hypnotism really works for fear of heights, certainly on paper, but it turns out they had a balance issue (say Ménière's disease, or undiagnosed Parkinson's, MS or something), so they still have vertigo', but without the bit where they acknowledge you can't treat all vertigo with hypnotism.

And so, instead of 'getting the issue', they claim the problem is 'the delivery of the hypnotism', in order to justify changing the measure:

in order to remove the test where they have to go up a ladder and stand on one foot for ten seconds, or do other physical function tests,

which sat alongside the questionnaire about whether they like things to do with heights and can read words about tall objects etc. That questionnaire turned out to be the bit irrelevant to treating Ménière's anyway, as the guideline had just confirmed it shouldn't therefore be the focus of the offer.



Well yes, like the people writing about 'managing energy levels',

who don't get that, unfortunately, that is what too many of us have worked out how to do far too well for far too long, given we have an energy limit.
 
The following sentence in the middle, it is worth noting how they have termed the second of these 'satisfaction with NHS specialist services' as a 'PREM' so it is patient reported experience measure:

"The final two elements of the toolkit, which will assess patients’ needs (called a clinical needs assessment) and their satisfaction with NHS specialist services (also known as a patient reported experience measure) will examine these issues in more detail."
With the usual caveats about subjective reporting, and the potential for such measures to be misleading and open to manipulation, there might be some value in a general measure of patient satisfaction with the clinical encounter, separate from actual therapeutic benefit.
 
It’s all a bit much for me just now, but there was discussion with Sarah Tyson about some questions having answer options along the lines of 'some', 'moderate', 'massively', etc. Posters worried that those terms mean different things to different people, so it wasn't an objective or clear question/answer option. The response was that it’s “what it means to the person answering”, which annoyed posters.
Gladwell's criticisms of the CFQ reminded me of that.
 