Evaluating working memory functioning in individuals with [ME/CFS]: a systematic review and meta-analysis, 2026, Penson et al

forestglip

Evaluating working memory functioning in individuals with myalgic encephalomyelitis/chronic fatigue syndrome: a systematic review and meta-analysis

Penson, Maddison; Kelly, Kate

[Line breaks added]


Abstract
Individuals with myalgic encephalomyelitis/chronic fatigue syndrome (ME/CFS) frequently report pronounced cognitive difficulties, yet the empirical literature has not fully characterised how discrete components of working memory are affected. Given that working memory serves as a foundational system supporting complex cognitive processes, differentiating performance across verbal and visual modalities provides critical insight into which higher-order functions may be most vulnerable.

This systematic review/meta-analysis aimed to synthesise current research to investigate how ME/CFS impacts working memory systems. Using PRISMA guidelines, a systematic search of 6 databases was undertaken (MEDLINE, CINAHL, Web of Science Core Collection, PubMed, EMBASE and PsycINFO). Initially, 10 574 papers were imported and following screening 34 studies of good to strong quality met the inclusion criteria. A series of random effects models were utilised to analyse working memory.

Results indicated a significant difference and large effect size between ME/CFS individuals and controls on verbal working memory tasks; however, no significant difference in visual working memory performance was found between the groups. Following the breakdown of these subsystems into span/attentional control tasks and object/spatial tasks, these results remained consistent.

These findings contribute to the body of ME/CFS research by articulating where specific working memory deficits lie. Specifically, they show that individuals with ME/CFS have impaired verbal memory performance. This knowledge can guide future research targeting higher-order verbal cognition and underscores the importance of recognising cognitive manifestations within ME/CFS clinical care.

Web | DOI | Psychology, Health & Medicine | Paywall
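The abstract says "a series of random effects models were utilised", without giving details. As a rough illustration only (not the authors' actual analysis), here is a minimal sketch of DerSimonian-Laird random-effects pooling, the standard textbook approach behind such models; the function name and example numbers are my own.

```python
import math

def random_effects_pool(effects, variances):
    """Pool per-study effect sizes with a DerSimonian-Laird random-effects model.

    effects   : per-study effect sizes (e.g. Cohen's d)
    variances : per-study sampling variances
    Returns (pooled effect, standard error, between-study variance tau^2).
    """
    # Fixed-effect (inverse-variance) weights and pooled estimate
    w = [1.0 / v for v in variances]
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    # Cochran's Q statistic measures between-study heterogeneity
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))
    df = len(effects) - 1
    # DerSimonian-Laird estimate of between-study variance, floored at zero
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)
    # Random-effects weights fold tau^2 into each study's variance
    w_re = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * e for wi, e in zip(w_re, effects)) / sum(w_re)
    se = math.sqrt(1.0 / sum(w_re))
    return pooled, se, tau2
```

When the studies agree, tau² is zero and this reduces to a fixed-effect average; when they disagree (as the heterogeneity noted below suggests), tau² grows and the weighting flattens out, so small studies pull harder on the pooled estimate.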
 
I’d say that they aren’t far off with looking at executive function

And then I see ‘cognitive manifestations’ at the end - a vile term I hate

And note that, as cognitive psychologists, they should be using it while recognising we are people with a condition involving PEM, fatiguability and exhaustion. This could be a useful within-individual measure to calibrate and actually demonstrate these, i.e. the exhaustion and PEM, by measuring their effects on performance

They now seem to be suggesting we just have cognitive issues that, what, can be trained out by unnecessarily wasting our energy?

And I think they’ve missed the point on both counts: good scientific psychology, which focuses on methods and on controlling for the factors the whole of psych knows about, such as human factors, bias etc

And ‘getting’ what the illness is.

Which is a shame, given I suspect there is a huge body of ‘human factors’ research of the type used e.g. for air traffic control, which had to be done in order to recommend shift lengths, breaks etc

It feels like that could easily be built on, given it explicitly looks at the impact of exhaustion and external factors on ‘performance’, not to pathologise but simply to understand human limits. I bet it even covers issues like noise, light and multitasking on top. I wonder if it touches on e.g. pain, or having something else distracting you, like being in an uncomfy chair

On this occasion it would be comparing an individual ‘to themself’, but one could e.g. then use a good test to show how recovery from a different exertion, e.g. an appointment (or, for more well people, an exertion to their limit), shows up over the days afterwards, and measure when PEM hits, whether it affects this, by how much, and on what time cycle
 
10 574 papers were imported and following screening 34 studies of good to strong quality met the inclusion criteria
That really should read "minimal quality". If it's only good enough to be included in a comparison, that doesn't mean "good to strong", it just means it meets the minimum level of quality they consider adequate. The mass of sloppy research in clinical psychology is a scandal that no one dares voice with proper urgency. All the crap about a crisis of replicability has gone completely ignored in real life terms, it just keeps plowing unimpeded. In evidence-based medicine it's routine for 90%+ of trials to be excluded, with the rest being of barely passable quality and high bias, somehow that gets praised as high quality. I've never seen a bar set so low in any profession.

And then you add the fact that so many people report significant problems with memories, and that none of the standard tools used for this have confirmed it, without understanding what it means about the tools they use. It's the lack of self-reflection that I find shocking in health care. There just is never any when it comes to issues like this, so the crisis only worsens.
 
10 574 papers were imported and following screening 34 studies of good to strong quality met the inclusion criteria
That really should read "minimal quality". If it's only good enough to be included in a comparison, that doesn't mean "good to strong", it just means it meets the minimum level of quality they consider adequate.
Someone ought to do that systematic review: "We looked at 10 574 papers and determined none of them met a sufficient standard to be included in a systematic review, due to widespread poor standards and low quality." That is the reality of the psychology industry as it stands and its ability to replicate its findings. None of it is of sufficient standing to use, and someone from their industry needs to say it: reject the entire lot because it's just worthless garbage.
 
Someone ought to do that systematic review: "We looked at 10 574 papers and determined none of them met a sufficient standard to be included in a systematic review, due to widespread poor standards and low quality." That is the reality of the psychology industry as it stands and its ability to replicate its findings. None of it is of sufficient standing to use, and someone from their industry needs to say it: reject the entire lot because it's just worthless garbage.
There's also a problem with methods. Obviously it isn't that there are that many relevant papers; it's that keyword searches flag a lot of stuff that isn't relevant. That gets narrowed down after the fact, but it's just bad information management and a lot of unnecessary workload.

This was a problem with Long Covid, when a few years ago there was this idea that there had been 25K (or whatever number it was) papers published about LC, when actually it was just a keyword search that flagged a bunch of completely unrelated papers that happened to have commentary or other unimportant paragraphs that mentioned the pandemic and its context and whatnot.

I don't know if librarians are involved in academic publishing and its information management, but everything I've seen over the years suggests that if they are, they're completely ineffectual and probably even ignored for the most part. Same thing with information technology and computer science: it's as if none of this expertise has even a slight footprint, and everything still works on the old model of keyword searches because it's just the way it's always been done.

Still, even accounting for this, the vast majority of trials that would be flagged as at least pretending to be legitimate for this review would be excluded on the basis of being of completely unacceptable quality. And it's one thing that no one seems to care, but alongside the never-ending call for more trials, it just falls completely flat.
 
"Impaired verbal memory performance" was the main conclusion of the abstract.
I can't look behind the paywall, but isn't that a very limited description of cognitive difficulties?
Does that mean 10 574 papers reviewed, mostly poor quality, and the review itself is quite inadequate too?
Impaired verbal memory performance does not describe my brainfog.
Bin the papers and the review.
Dear authors: Do some soul searching about your profession(s) and paperwriting, before publishing more junk.
 
Thanks @ME/CFS Science Blog for the quick summary on Bluesky. The nearly consistent direction of effect in the plots is interesting.

2) One of the most used tests is the Digit Span Backwards, where you have to remember and repeat a series of digits in reverse order (e.g., hearing "3-8-9" and saying "9-8-3").

Several studies showed a clear deficit in ME/CFS patients using this test. [Forest plot: digit.jpg]
3) Another common test is called the Paced Auditory Serial Addition Test (PASAT). It's quite similar: you are given a number every couple of seconds and are asked to add it to the one you heard before.

Here, there was also a consistent effect in ME/CFS compared to controls. [Forest plot: pasat.jpg]
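To make the two tests described above concrete, here is a tiny sketch of what the expected answers look like; the function names are my own, and real administrations differ in pacing, list lengths and scoring rules.

```python
def pasat_correct_responses(stimuli):
    """Expected PASAT answers: each new digit is added to the digit heard just before it."""
    return [a + b for a, b in zip(stimuli, stimuli[1:])]

def digit_span_backwards(stimuli):
    """Expected Digit Span Backwards answer: the digits repeated in reverse order."""
    return list(reversed(stimuli))

def score(responses, expected):
    """Proportion of a participant's responses that match the expected answers."""
    correct = sum(r == e for r, e in zip(responses, expected))
    return correct / len(expected)
```

So for the "3-8-9" example, Digit Span Backwards expects "9-8-3", while PASAT expects the running sums 11 (3+8) and 17 (8+9).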
 
The nearly consistent direction of effect in the plots is interesting.
There is quite a lot of heterogeneity though; effects range from minuscule to enormous. A lot of older studies as well. I think it would be good to have a newer, bigger study to pinpoint where cognitive deficits in ME/CFS lie exactly.

My guess is that the tests need some endurance aspect, and that this is where ME/CFS will show the largest differences compared to controls.
 
Think this Dutch ME/CFS study will look into this, but they plan to measure a lot of things, so perhaps the neurocognitive testing will not be very elaborate.
 
My guess is that the tests need some endurance aspect, and that this is where ME/CFS will show the largest differences compared to controls.
Yeah, I like that idea.

Any idea why the digit span plot doesn't include the more recent MCAM study (Lange 2024)? I haven't looked at it in a while, but it was pretty large and seems to have included backwards digit span tests which were non-significant.
 
data file is available

Data File: Evaluating working memory functioning in individuals with myalgic encephalomyelitis/chronic fatigue syndrome: A systematic review and meta-analysis

dataset posted on 2024-12-20, 01:25 authored by Kate Kelly, Maddison Penson

 
Any idea why the digit span plot doesn't include the more recent MCAM study (Lange 2024)? I haven't looked at it in a while, but it was pretty large and seems to have included backwards digit span tests which were non-significant.
Well spotted, thanks. Looks like there might have been a small effect for complex working memory (2-Back Task; TWOB).
Group mean differences were clinically meaningful to a small degree for TWOB-LMN (d = 0.3 for both timepoints),
But not for the Digit Span Backward
No significant difference was found in age-corrected standard scores of DSF and DSB between groups at T0 and T1

Not sure why this paper wasn't included. The review said "Database searches were conducted on 18 April 2024 and on 6 November 2024", and the MCAM study was published on 1 November 2024. Perhaps it was published on the Frontiers website but wasn't in the databases yet?
 