Use of EEfRT in the NIH study: Deep phenotyping of PI-ME/CFS, 2024, Walitt et al

Whatever triggers immediate fatigue may contribute to subsequent PEM, and indeed this rapid fatiguability may be more noticeable or have a more rapid onset if already in PEM, but I agree my subjective experience is that the two are distinct.

If not already in PEM or experiencing concurrent 'rolling' PEM, and I am experiencing rapid fatiguability, immediately stopping the activity and resting will see the developing symptoms fairly promptly dissipate. This is different to PEM, which is delayed and can paradoxically continue to worsen even when resting. Also, the initial symptoms of rapid fatiguability are more likely to be directly related to the activity, such as muscle ache or tremor, whereas the symptoms in PEM may include those related to the activity but also other symptoms not directly related to it.

I have similar experiences with sensory hypersensitivities: for example, if a smell or a noise starts triggering adverse symptoms, escaping the stimulus sees an immediate and reasonably prompt diminution in those symptoms, but persisting in that situation sees the symptoms worsen and can also lead to subsequent triggering of PEM.

And this is probably the important bit to note, because probably all, but certainly the iller people with ME-CFS, would be in rolling PEM when doing the task, and fatiguability can be increased in PEM, as can other symptoms. The 'rolling' part means that you get 'immediate hurt' from exerting in PEM and you are 'adding to the bill' in an exponential-feeling way by finding work-arounds to try and do what you need to do. It is hopping on a broken leg. Except you hopped on it for the last 5 days too?
 
I have contacted Nath and Walitt and asked them to supply additional details that other EEfRT studies have supplied. These details are crucial to the understanding of the trial. I have also contacted Ohmann, and asked @andrewkq how one could coordinate things. I think we should take our time (certainly not months, but at least a couple of days until we've made sure that every angle has been looked at); we don't have much reason for a rushed response.

Regarding "figuring things out" or trying to strategizes within the trial, there's even a study where the experiment is repeated 4 times and participants had a weeks break between the first 2 turns and the last 2 turns, it seems "strategising" wasn't a problem there. Focusing a response on the fact that one can "strategize" based on looking at HV-F alone, wouldn't make sense to me, especially when it is abundantly clear that his strategy is not even optimal and he makes non-optimal decisions multiple times, which makes it clear that he in fact is not beating the game at all, rather than just being an outlier that is gaming differently. Based on what I've seen some other studies might have excluded him as well.

I also find it interesting that in several studies the authors would tell the participants different things about what the pay-out would be, to control for motivation. I think we have to know how exactly these things went in the intramural study, and I think @Hutan's point about getting this information from a participant as well is crucial. Were they all chatting in a room, waiting in line, or what was going on? Is there a slight difference to what is reported in the paper?

I think it could be valuable to have a closer look at this thread I made:
Worth the ‘EEfRT’? The Effort Expenditure for Rewards Task as an Objective Measure of Motivation and Anhedonia, 2009, Treadway et al and look at some of those studies a bit closer.

I don't think it makes sense to focus too much just on the original 2009 paper, as the EEfRT has been used in a tremendous number of different studies. The results of all the different EEfRT studies differ vastly, and so do the interpretations of those results. For example, people not using a "good strategy" is sometimes even argued to be a property of an illness. Furthermore, multiple studies have excluded some participants; I haven't seen what reasons were given, but typically an analysis was provided with and without these people, and the results never drastically changed. I believe it makes sense to see whether standard exclusion criteria were specified anywhere, and whether this was a priori or a posteriori.

Multiple studies also found a difference between groups in people being able to complete tasks; I still have to have a closer look at that. I haven't found a study where hard-task completion was even close to as low as in the pwME group in the intramural study. I think it might make sense to go through some of these studies and see what the authors said when one group had slightly lower completion rates, and what the lowest completion rate on hard tasks was. Perhaps there is a study somewhere where there is a lower completion rate on hard tasks that is statistically significant (I haven't seen one yet), and we could then see what the authors' response to this was, since this could be the line of defence by Walitt et al.

Most trials made adaptations to the original task, very often to account for fatiguability or some other deficits of the participants (for example, people with cognitive problems not having a time limit on making the decision between hard and easy). Not having a calibration phase would be problematic in the intramural study if fatiguability has any influence on the results (which might not seem to be the case, but I don't think anyone has fully looked at this yet). Often they also adapted their analysis accordingly; I have started looking into what this might mean for the results of the intramural study.

Looking at this has made me crash, but I hope to present some graphs in the next few days and once I've gotten some responses via email.

Has anyone else got in touch to help tackle dredging through the table of papers that use the EEfRT to answer these questions? And are there other questions we have?

Would it be helpful if we divided them up somehow (either by the questions we are looking for answers to, or by groups of papers) to do this? Or just a second pair of eyes? I don't know what I can promise on 'how many' (and quality might drop off with quantity), but if one person works top-down and the other bottom-up it might compensate a bit?

Happy if you'd feel better giving explicit instructions on how we do it (whether it's first a quick scan to see if a question is touched on, so we can narrow things down into batches, or whatever), given I might not be as good at this as you are and might miss things you would have picked up?
 
Here's a chart of hard tasks chosen (in % terms) vs expected prize money (2x the mean of the prize awarded for tasks completed). We can see HVF is an outlier in these terms (top left, in blue). PwME are shown in red.

[Attached chart: hard tasks chosen (%) vs expected prize money; HVs in blue, pwME in red]

If this test was really well-designed you'd expect the points to form a tighter upward-sloping line. There would be a tight link between the desired behaviour and the reward.
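For anyone who wants to rebuild this kind of per-participant chart from the published trial-level data, here's a minimal sketch; the file name and column names ('participant', 'group', 'choice', 'reward', 'won') are my assumptions about how the spreadsheet might be laid out, not the actual headers, and 'expected prize' is calculated as described above (2x the mean prize on winning trials).

```python
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("eefrt_trials.csv")  # hypothetical export of the trial-level data

per_person = (
    df.groupby(["participant", "group"])
      .apply(lambda g: pd.Series({
          "pct_hard": (g["choice"] == "hard").mean() * 100,
          # "expected prize money" as described above: 2x the mean prize on won trials
          "expected_prize": 2 * g.loc[g["won"] == 1, "reward"].mean(),
      }))
      .reset_index()
)

colours = per_person["group"].map({"HV": "blue", "ME/CFS": "red"})
plt.scatter(per_person["expected_prize"], per_person["pct_hard"], c=colours)
plt.xlabel("Expected prize money ($)")
plt.ylabel("Hard tasks chosen (%)")
plt.show()
```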

Interesting chart. I'm trying to wrap my brain around how the pure maths would operate here, without any strategy:

OK, in conclusion, there is no way of even guessing this out, so any approximate heuristic is enough. And at the end of the day... even if people could win $8 (which I'm pretty sure they can't without defying probability on the 'pick 2') vs $2, which is the probable outcome if you were to pick 55% of the easy ones that 'counted'...
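To make that back-of-envelope explicit, here is a tiny sketch. It assumes the standard EEfRT payment rule (two winning trials drawn at random are actually paid out, the 'pick 2' above) and the Treadway et al. (2009) reward values (easy wins worth $1.00, hard wins worth $1.24 to $4.30); if the intramural study used different values, the exact numbers shift but the point stands.

```python
# Back-of-envelope for the payout range being discussed. Assumes two winning trials
# are drawn at random and paid out ('pick 2'), easy wins pay $1.00 and hard wins pay
# up to $4.30, per the original EEfRT description.
EASY_REWARD = 1.00
HARD_REWARD_MAX = 4.30

floor_payout = 2 * EASY_REWARD        # both drawn trials are easy wins -> $2.00
ceiling_payout = 2 * HARD_REWARD_MAX  # both drawn trials are top-value hard wins -> $8.60

print(f"floor ~${floor_payout:.2f}, ceiling ~${ceiling_payout:.2f}, "
      f"spread ~${ceiling_payout - floor_payout:.2f}")
```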



Here's what probably really matters:

I'll contextualise. I used to run focus groups.

Over a decade ago (when money was worth more), a youngster with not much money and a lot of energy would say, in response to £10 (offered as an incentive to recruit participants to said groups), that you couldn't buy a magazine with that. And they would be correct.

In fact the hourly rate in a job is now near that. And this was validated on undergraduate students, who may not be taxed (depends on the country, I guess?). I don't know what the other amount for doing the trial itself was, but even $8 vs $2 is only a $6 difference.


Although I doubt many would have clocked the exact reward magnitude that is supposedly being used as a dangling carrot?

It would certainly be interesting to read some of the raw data from the papers, e.g. Ohmann, where they interviewed participants, to see what they thought of it and what it was about.
 
@andrewkq
I've noticed that Treadway is a co-author on the first one, but doesn't appear to be on the second one (which looks at external validity etc). I don't know what the likelihood is of there being a history of commentary from someone who is a co-author of any kind (they wouldn't be a peer-reviewer, but might reply to comments, or there might be other conversations), or how one might seek it out with fuller access to databases of that kind. Or, for the second paper where he isn't a co-author, whether it is worth looking there for comments from him?

I'll take a look!

Would it be helpful if we divided them up somehow (either by questions we are looking for the answers to, or by a group of papers) to do this?

I'd be up for taking on a portion if it'd be helpful.
 
There's history and precedent of booting out the data of people who try to maximise their payout, as shown in the next two screengrabs. This should be evidence that the EEfRT is a mess. But in terms of a fight over whether HVF's data should have been excluded, it's likely to weigh on Walitt's side.

It is further evidence that the best approach is to focus on rates of hard-task non-completion by fatigued participants.

1.

[attached screengrab]

2. This is where footnote 37 in the above screenshot leads:
Neuropsychopharmacology. 2021 May; 46(6): 1078–1085.

Dose-response effects of d-amphetamine on effort-based decision-making and reinforcement learning
[attached screengrab]


:rofl::bawling: You'd think, if it is amphetamine as I'm assuming it is ('speed') and the 'd' doesn't make it something different, that both of these effects are relevant and expected and part of the 'normal response curve' if a population were given that drug: thrill-seeking or whatever behaviour, and feeling particularly 'energised' for doing lots of button-pressing?
 
That is because the results of the EEfRT seem to be somewhat lacking in robustness across all kinds of different trials, and yet others have often drawn some rather strong conclusions themselves, so Walitt et al might be able to argue that their drawing of conclusions is consistent with the literature, even if the literature itself is inconsistent.
But there is criticism within the literature of some of the conclusions. That paper @bobbler linked is helpful.

e.g.
the 'examining the reliability' paper said:
However, self-reported personality traits (e.g., trait BAS) correlated with only some task performance parameters in Study 1, which did not replicate for the original EEfRT in Study 2. Our results indicate complex and sometimes inconsistent relations between different personality traits, task properties, and reward attributes.
They question the conclusions from the studies.

[sorry, deleted a bit I need to double check]

I think it's worth noting that some studies in the literature actually tested participants' ability to perform the tapping before the experiment and then adjusted the targets accordingly, so the results supposedly weren't disrupted by differences in physical capability. That the investigators in the Walitt et al study didn't do this, when dealing with a patient cohort reporting reduced ability to perform repetitive actions, could be criticised. Even that sort of modification is problematic though, because the healthy and patient cohorts will have different responses to prolonged tapping.


As far as I've seen, how the reward money is generated is purposely not told to players, so that optimal strategies are harder to find (this was stated in a different EEfRT paper); furthermore, I've never seen the distribution specified in any of the papers. In this case a truly optimal strategy would involve estimating this distribution within the space of all possible distributions on [$1.24, $4.30], whilst also accounting for multiple other things (estimating your own abilities, what kind of rewards have already been paid out to you, what combinations have already appeared, etc.). The complexity of this would be far too high for any person playing the game.
I don't think a complex strategy is necessary to do well enough in the game, given the uncertainty about both the later selection of the two rewards and the frequency of the reward and probability combinations. I think just knowing that you need to go for the tasks with a high value and a high probability, and flub the tasks with a low value and/or a low probability, is enough.
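To illustrate why a crude rule gets you most of the way there, here is a small sketch of per-trial expected value using the reward structure from Treadway et al. (2009) (easy win $1.00, hard win $1.24 to $4.30, win probabilities of 12%, 50% or 88% shown before each choice); the completion probabilities in it are illustrative assumptions, not data from any study.

```python
# Per-trial expected value under the Treadway et al. (2009) reward structure:
# easy win = $1.00, hard win = $1.24-$4.30, win probability 12%, 50% or 88% shown
# before each choice. The completion probabilities below are illustrative assumptions.
def expected_value(p_win: float, reward: float, p_complete: float) -> float:
    """Expected payout contribution of a single trial, ignoring time costs."""
    return p_win * reward * p_complete

for p_win in (0.12, 0.50, 0.88):
    easy = expected_value(p_win, 1.00, p_complete=1.00)       # easy almost always completed
    hard_low = expected_value(p_win, 1.24, p_complete=0.95)   # low-value hard task
    hard_high = expected_value(p_win, 4.30, p_complete=0.95)  # high-value hard task
    print(f"p={p_win:.2f}: easy {easy:.2f}, low-value hard {hard_low:.2f}, "
          f"high-value hard {hard_high:.2f}")

# Hard only clearly beats easy when both the reward and the probability are high, so a
# crude "go hard on high value + high probability, otherwise easy" rule captures most
# of the achievable expected value without any estimating of distributions.
```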
 
Oh, and PS: given that the claims are based on a difference of, on average, 1.5 more tasks being chosen as 'hard' by HVs vs ME-CFS at the 50:50 probability level,

I also have questions about power. Because maybe you could try and blag that that meant something if you had 1,000 participants. But this was 15 and 17.

The original Treadway et al (2009) used 60 participants

Ohmann et al (2022) used 120 participants

and that latter study (a review: Examining the reliability and validity of two versions of the Effort-Expenditure for Rewards Task (EEfRT) | PLOS ONE) notes the following in its introduction section:

"However, there are also various limiting aspects (see Table 1). First, the number of studies reporting a significant link between the behavioral measurements within the EEfRT and self-reported personality traits related to approach motivation is still small, although many studies refer to this link as validity evidence of the EEfRT.

Second, the number of participants in studies which used the EEfRT has often been relatively small, resulting in low statistical power to detect effects sizes that can be expected in individual difference research [42]. "

So even those larger participant numbers, by comparison to Walitt, are noted as having low statistical power.


PS: in case anyone is curious, that reference '42' at the end of the quote is: Effect size guidelines for individual differences researchers - ScienceDirect
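As a rough illustration of the power point: treating the comparison as a simple two-sample t-test (which is not exactly the analysis the papers used, so these are ballpark figures only), statsmodels can show how little power a 15 vs 17 comparison has for a medium-sized effect, and how many participants per group would be needed for 80% power.

```python
# Ballpark power illustration only; assumes a plain two-sample t-test rather than the
# models actually used in the papers.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Power of a 15 vs 17 comparison (Walitt et al group sizes) for a medium effect (d = 0.5):
p = analysis.power(effect_size=0.5, nobs1=15, ratio=17 / 15, alpha=0.05)
print(f"power with n = 15 vs 17 at d = 0.5: ~{p:.2f}")

# Per-group sample size needed for 80% power at the same effect size:
n_needed = analysis.solve_power(effect_size=0.5, power=0.80, ratio=1.0, alpha=0.05)
print(f"per-group n for 80% power at d = 0.5: ~{n_needed:.0f}")
```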

I'm just adding this as it is an update to the HV vs ME-CFS number of hard tasks selected by level of probability, this time with HVF excluded (I've done a table to the left to show the figures if they were included). These are just crude numbers and % without the extra variables included. I haven't looked to see if there are other things that were 'invalid' that I needed to exclude. If we wanted to send anything off anywhere then I'd need to do a version that was more thoroughly checked, as this was done quickly to give a sense, and as I'm exhausted I'm not beyond having made an error (or two).

[Attached table: number of hard tasks chosen by probability level, HVF excluded]


No. hard/person, ME/CFS, 0.5 level: 5.53
No. hard/person, ME/CFS, 0.88 level: 10.53
Both med + high levels: 16.07

No. hard/person, HV, 0.5 level: 7.44
No. hard/person, HV, 0.88 level: 10.56
Both med + high levels: 18

So, excluding HVF, the difference still sits particularly at the 50:50 level of probability, and is now nearer to 2 hard choices per person difference at that level (it was around 1.5 when HVF was included).

Although there is now slightly more of a % hard-chosen difference between HV (64.75%) and ME-CFS (62.95%), and still a 2% difference at the low probability level, with HVs picking hard 2% more there too, the difference really does seem to still sit in that 50:50 probability area.

Which of course makes sense given the way the game works: that would be where variation would logically take place (particularly if you were 'handicapped' in the metaphorical sense, and so had to focus your choices of hard tasks due to capability limitations).
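If anyone wants to reproduce these per-person tallies from the trial-level data, a minimal sketch follows; the file name and column names ('participant', 'group', 'probability', 'choice') are my assumptions about the layout, not the actual headers.

```python
import pandas as pd

df = pd.read_csv("eefrt_trials.csv")                  # hypothetical export
df = df[df["participant"] != "HVF"]                   # HVF excluded, as in the table above

hard = df[df["choice"] == "hard"]
counts = hard.groupby(["group", "probability"]).size().unstack(fill_value=0)
n_per_group = df.groupby("group")["participant"].nunique()

per_person = counts.div(n_per_group, axis=0)          # hard choices per person, per level
print(per_person.round(2))
```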
 

OK, I spoke too soon, of course. I forgot about the warm-up rounds. So the following tables exclude the warm-ups (trials with a negative trial number) AND also exclude HVF (with the info if they had been included on the right).

[Attached table: number of hard tasks chosen by probability level, HVF and warm-up trials excluded]

Which makes for the following calcs by 'person' (15 ME-CFS participants, 16 HVs):

No. hard/person, ME/CFS, 0.5 level: 4.67
No. hard/person, ME/CFS, 0.88 level: 10.00
Both med + high levels: 14.67

No. hard/person, HV, 0.5 level: 6.38
No. hard/person, HV, 0.88 level: 10.13
Both med + high levels: 16.5

This actually brings out what looks like quite a big % difference at the low-probability level (hard chosen on 12.55% of trials by ME-CFS and 18.41% by HVs at this level), even though the numbers behind it are actually small (HVs chose 44 hard, ME-CFS 29, a difference of 15). It equates to around 1 more hard choice per person, though.

And a 2.5% difference at the high-probability level, but underneath that is a difference of only 12 hard choices.

I've done the calculations per person because of the difference in the number ('N') of participants in each group. At the high-probability level, the difference in the number of hard tasks selected per person is 10 (ME-CFS) vs 10.13 (HV).

At the 50:50 probability level it is a difference of 1.71 hard choices per person: 4.67 were chosen as 'hard' by ME-CFS and 6.38 by HVs (out of an average of around 16 tasks at that level to choose either easy or hard from).
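For the percentage figures above (share of trials at each probability level on which 'hard' was chosen), a similar sketch works; again the column names are assumptions, and warm-up rounds are dropped on the assumption that they carry negative trial numbers in the spreadsheet.

```python
import pandas as pd

df = pd.read_csv("eefrt_trials.csv")                          # hypothetical export
df = df[(df["trial"] > 0) & (df["participant"] != "HVF")]     # drop warm-ups and HVF

pct_hard = (df.assign(hard=(df["choice"] == "hard"))
              .groupby(["group", "probability"])["hard"].mean() * 100)
print(pct_hard.round(2).unstack())
```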
 

Another spot that is puzzling me (always a chance it is a data error) is that in total HVs did 734 tasks vs ME-CFS 707. EDIT: scrap that (as each person did around 40 tasks, and then it starts dropping off if they did lots of hards). That is more than what is accounted for by the different number of participants in each group (16 vs 15).

Treadway et al (2009) noted, I think, that they cut off data at trial 50, although they ran their test for 20 mins, and so I think that meant all participants were still playing.

Does anyone know whether something similar was used by Walitt et al (2024)?

This makes an average for HVs of around 45.875 rounds and for ME-CFS 47.1 rounds, i.e. one more round on average, although I can see from just scrolling the big table I did that it obviously wasn't evenly spread.

I think it is mainly important for data-analysis purposes, to see whether I need to cut out later trials if Walitt used a cut-off like Treadway did.
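A quick sketch of both checks (trials per participant by group, and what a Treadway-style cut at trial 50 would leave), with the usual caveat that the file name and column names are my assumptions about the trial-level data.

```python
import pandas as pd

df = pd.read_csv("eefrt_trials.csv")                       # hypothetical export

trials_per_person = df.groupby(["group", "participant"]).size()
print(trials_per_person.groupby(level="group").mean())      # ~45.9 (HV) vs ~47.1 (ME-CFS)

df_capped = df[df["trial"] <= 50]                           # Treadway-style first-50 cut
print(df_capped.groupby(["group", "participant"]).size())
```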
 
Thought in the meantime I'd point you towards these two papers, which are the closest I've found to it starting to be used in the 'effort' sense:

Effort-Based Decision-Making Paradigms for Clinical Trials in Schizophrenia: Part 1—Psychometric Characteristics of 5 Paradigms | Schizophrenia Bulletin | Oxford Academic (oup.com)
Great find, @bobbler . Right off the bat we have what may be the dealbreaker - Treadway recommended individual calibration of what constitutes a hard task in this schizophrenia study:

The hard task requires an individually calibrated number of button presses to be made within 30 s, with the nondominant pinkie finger. The easy task requires one-third the amount of the individually calibrated hard number of presses to be made within 7 s, with the dominant index finger. The individual calibration phase precedes the practice rounds and choice trials. It requires participants to button-press as many times as possible within 30-s time intervals with both the dominant and nondominant pinkie fingers and after 3 rounds with right and left hands, an average is calculated. The target for the “hard” trials is 85% of this average value; the participant button-presses as rapidly as possible while a computerized graphic illustrates progress toward the goal

The specific modifications to the EEfRT task in this study were based on discussions with the task developer (M.T.T.) [= Treadway] and included removal of a 12% probability level used in previous iterations (only included 50% and 88%), allowing for individual calibration of "hard" to adapt the required number of button presses, and standardizing the number of trials, so that all participants completed 50 trials.

If I'm repeating things others have already said, oops. My individual calibration for S4ME is lower than some others'!
 
And from the discussion (Reddy 2015, https://academic.oup.com/schizophreniabulletin/article/41/5/1045/1921437?login=false#83746476):
Importantly, we adapted the task to include individual titration to adjust the “hard” for subjective levels of difficulty and we standardized the number of trials each subject received, unlike previous versions of the task. Given similar results across studies with these procedural differences, it appears that group differences on this paradigm were not solely driven by general motor speed or dexterity differences between participants with schizophrenia and controls.
 
“Planning a hybrid workshop to explain the findings to the subjects and the general community soon.”
A bit of listening from the NIH would go a long way, certainly on the EEfRT:

a) They got the EEfRT "effort preference" research wrong.
b) The term sucks and doesn't describe ME. Some of the brain and hand-grip findings are interesting, in my view. But why use "effort preference", which the MEAction response piece suggests is a term from mental health? Preference is insulting, as is the paper's claim that patients were trying to avoid "discomfort". They have no idea.
c) They don't seem to have a good grasp of how ME affects people. E.g. the battery of tests over a week for a patient cohort with an average SF-36 physical function score of 30.

Perhaps it's time to listen to patients, or even partner with them (DecodeME) and/or learn from Patient-Led Research?

It would probably lead to better science and less misunderstanding.
 
EDIT: added the latest table, which hopefully is easier to look at colour-wise. I lost the old quote stuff, so I'm quoting myself from the old post in the first bit, but I've removed things I've now changed, as it was a work in progress with rows missing then etc. (apologies for any repetition)

OK, so I've tried to see if I could put this into a table format that could be more pictorial than scrolling down and down looking at each participant in order. I've stripped out data to ration it down to just 'complete' (1 and blue for completed, 0 and pink for not completed), and clicks (anything less than 98 is non-completing on hard tasks). For hard tasks chosen only.

Anyway, it is me trying to show what I've described above, which was quite stark when I looked through. On this table each participant is in a different column, so you are looking 'down' at the colours and noting how there are differing patterns as you go across: that pattern of many HVs who are 'all green' vs those who had lots of non-completions (most of whom were ME-CFS; just one HV had many non-completions).

OK, a new version of this table I posted before the weekend, because there was an error for HVO (that column had actually been copied over from HVN by accident) and it included HVF. But also I've tried to clean it up a bit (!) and changed the colours.

I've also ordered within cohorts (HV and ME-CFS) by the number of completed hard tasks, to see if it helps show any overlap/similarity between HVs and the ME-CFS who are doing better, but also, hopefully, to put the 'middling/in-betweener' ME-CFS in the middle of that cohort, between those who are similar in choices to HVs and those who clearly have capability issues with the hard task and number of clicks (which is the number next to the coloured square, so you can see how far off some were from the 98 needed to complete).

[Attached table: hard-task choices and completions per participant (warm-up trials greyed out)]


If anyone is just glancing at it:

Each coloured square is a hard task: blue where the participant completed it and pink where they didn't.

The participants are in columns, so you 'read down' to see the pattern of complete/non-complete for their chosen hard tasks, and the white/blank is where they chose easy. So hopefully it gives a sense of how spaced out their choices of hard were.

All the HVs are on the left side (ordered by number of hard tasks completed) and all the ME-CFS are on the right (ordered the same way). So those who completed fewest hard tasks are on the far right.

Next to these, if you can see that close up, is the number of clicks (to the left of whether they completed or not), so you can zoom in and get a sense of whether they were far off (e.g. 80 clicks) or just missing (e.g. 96, 97) where it is pink.

One thing you should be able to see now, as it is pretty stark, is how 'blue' the HV side is, apart from one participant who 'just missed' completing quite a few hards at the start and then got it together.

Whereas the ME-CFS side has a lot of pink on the right-hand side (almost down the middle).

When you look closely there are, I think, approximate 'groupings': around 5 who don't complete most of their hard tasks and whose clicks are generally not near-misses; then, in the middle, ones who fail to complete quite a few but tend to be nearer misses to the 98 clicks, and might have spaced things out more, etc.
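For anyone who would rather generate this sort of completion grid programmatically than build it by hand, a rough matplotlib sketch follows; the file name and column names are assumptions, and 98 presses is used as the hard-task completion threshold mentioned above.

```python
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("eefrt_trials.csv")                        # hypothetical export
hard = df[df["choice"] == "hard"].copy()
hard["completed"] = (hard["presses"] >= 98).astype(float)    # 98 presses = completed hard

# rows = trial number, columns = participant; gaps where the participant chose easy
grid = hard.pivot_table(index="trial", columns="participant", values="completed")

plt.imshow(grid.values, aspect="auto", cmap="coolwarm_r", interpolation="nearest")
plt.xticks(range(len(grid.columns)), grid.columns, rotation=90, fontsize=6)
plt.xlabel("Participant")
plt.ylabel("Trial number")
plt.show()
```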
 
Correction:
My original post stated that 7/17 (41%) of patients had a lower success rate for hard tasks than all healthy volunteers. This should have been 7/15, making the correct percentage 47%.

No wonder there were so many zeros on that p-value.


Surely the major finding of this task should have been that patients couldn't do the hard task due to their condition and, as such, it had to be removed from the analysis.
I've made a correction to one of my posts above. 47% of patients had a lower success rate for hard tasks than all healthy volunteers.
 
OK, because there was an error for HVO (that column had actually been copied over from HVN by accident) and it included HVF. But also I've tried to clean it up a bit (!) and changed the colours to red and blue in case of colour-blindness (do feed back if you want it in the red and green, if you found that easier and find it useful; I don't know whether anyone else is using it)

I've also ordered within cohorts (HV and ME-CFS) by the number of completed hard tasks. To see if it helps to see any 'overlap/similarity' between HVs and ME-CFS who are doing better. But also hopefully to put the 'middling/in-betweener' ME-CFS in the middle of that cohort, between those who are similar in choices to HVs and those who clearly have capability-issues with the hard task and number of clicks (which is the number next to the coloured square so you can see how far off some were from the 98 to complete).

[Attached table: hard-task choices and completions per participant]
Great graph. Am sure I'm not the only one with this problem: I find charts/graphs really difficult to look at now. My brain seems to get dazzled by strong contrasts in colour. I find it difficult to look at stripes, dotted patterns (tiny dots can be OK)...geometric patterns are the worst, gah. The less the contrast, the better. We recently decorated the rooms I'm in and I hunted down paints with lower "reflectance values" and reduced the contrast between colours that were going to be next to each other. No bright white ceilings. Brain is grateful.
 
I looked at a few things you might expect to differ between groups if one group were having difficulty with the task. I hypothesised that a group having difficulty with doing a task would have:
  • Longer choice time
  • Lower recorded presses
  • Higher completion time
  • Lower button press rate
compared to the other group.

Here's how it looks (NB mistakes possible, so double check if using)

[Attached table: group comparison of choice time, recorded presses, completion time and button-press rate]

It seems to me that the lower number of recorded presses, higher completion time and lower button-press rate for pwME in hard tasks could support the argument that they found hard tasks more difficult than healthy volunteers, if these differences are statistically significant. Is anyone able to check whether they are?


The paper states

there was no difference in the decline in button-press rate over time for either group for hard tasks (Fig. 3b).

which I understand to be a check for fatigue: if pwME were fatiguing quicker during the task than healthy volunteers, then they would have found a difference here. But what I'm talking about above, with the lower button-press rate, is not pwME becoming less capable over the course of the task, but potentially being less capable to begin with.
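One way someone could run that check from the trial-level data is sketched below, using Mann-Whitney U tests on per-participant means so that individual trials aren't treated as independent observations; the column names are assumptions, and with samples this small, and several outcomes tested, any p-values would need cautious interpretation.

```python
import pandas as pd
from scipy.stats import mannwhitneyu

df = pd.read_csv("eefrt_trials.csv")                        # hypothetical export
hard = df[df["choice"] == "hard"]

measures = ["choice_time", "presses", "completion_time", "press_rate"]
per_person = hard.groupby(["group", "participant"])[measures].mean().reset_index()

for m in measures:
    hv = per_person.loc[per_person["group"] == "HV", m].dropna()
    me = per_person.loc[per_person["group"] == "ME/CFS", m].dropna()
    stat, p = mannwhitneyu(hv, me, alternative="two-sided")
    print(f"{m}: U = {stat:.1f}, p = {p:.3f}")
```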
 

Great graph. Am sure I'm not the only one with this problem: I find charts/graphs really difficult to look at now.

:thumbup:
I did think some mightn't be able to, and nearly wrote the words 'magic eye' etc. in the 'whether people find it useful' bit, but then I was conscious of rambling on.

I'm assuming it wouldn't help, and I don't want to minimise, but if any changes, like me 'going pastel' on the colours or playing with the lines, would make it more accessible to you then let me know x Happy to give it a go etc. if it makes a difference to anyone x

Totally understand, and I'm hoping it doesn't dazzle anyone with how I've posted it as they scroll through?
 