Use of EEfRT in the NIH study: Deep phenotyping of PI-ME/CFS, 2024, Walitt et al

Discussion in 'ME/CFS research' started by Andy, Feb 21, 2024.

  1. Andy

    Andy Committee Member

    Messages:
    22,814
    Location:
    Hampshire, UK
    This post has been copied and the following posts moved from the main thread on the study:
    Deep phenotyping of post-infectious myalgic encephalomyelitis/chronic fatigue syndrome, 2024, Walitt et al

    Abstract

    Post-infectious myalgic encephalomyelitis/chronic fatigue syndrome (PI-ME/CFS) is a disabling disorder, yet the clinical phenotype is poorly defined, the pathophysiology is unknown, and no disease-modifying treatments are available. We used rigorous criteria to recruit PI-ME/CFS participants with matched controls to conduct deep phenotyping. Among the many physical and cognitive complaints, one defining feature of PI-ME/CFS was an alteration of effort preference, rather than physical or central fatigue, due to dysfunction of integrative brain regions potentially associated with central catechol pathway dysregulation, with consequences on autonomic functioning and physical conditioning. Immune profiling suggested chronic antigenic stimulation with increase in naïve and decrease in switched memory B-cells. Alterations in gene expression profiles of peripheral blood mononuclear cells and metabolic pathways were consistent with cellular phenotypic studies and demonstrated differences according to sex. Together these clinical abnormalities and biomarker differences provide unique insight into the underlying pathophysiology of PI-ME/CFS, which may guide future intervention.

    Open access, https://www.nature.com/articles/s41467-024-45107-3
     
    Last edited by a moderator: Mar 1, 2024
    Hutan likes this.
  2. Eleanor

    Eleanor Senior Member (Voting Rights)

    Messages:
    232
    This caught my eye in the Supplementary Information, pp. 9-10:

    "Motivation was assessed using the Effort-Expenditure for Rewards Task (EEfRT) which assesses effort, fatigue, and reward sensitivity (Figure S5A) [...] multiple models were evaluated to assess for group differences and assess for the presence of potential interaction effects. Model 1 tested the effects of reward value, reward probability, expected value, trial number, sex, and PI-ME/CFS diagnostic status on hard-task choice, without any interaction effects. Given equal levels and probabilities of reward, HVs chose more hard tasks than PI-ME/CFS participants (Odds Ratio (OR) = 1.65 [1.03, 2.65], p = 0.04; Figure 3A). For all of the other replicated models, which tested the significance of the effects of the interactions of diagnostic status and reward probability (Model 2), diagnostic status and reward value (Model 3), diagnostic status and expected value (Model 4), diagnostic status, reward probability, and reward value (Model 5), and diagnostic status and prior reward feedback (Model 6), none of the interaction terms were significant, so these models were dropped from consideration in favor of Model 1 for the final analysis."

    So the five models whose interaction terms showed no significant differences aren't important, but the one which did show a difference must be telling us something important? :unsure:
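
    For anyone trying to picture what 'Model 1' versus the dropped interaction models amounts to, here is a minimal sketch in Python. It assumes a logistic model for the trial-by-trial hard-task choice fitted with generalized estimating equations, as in the original Treadway GEE analyses - the exact model specification isn't shown here, so treat this as illustrative only, and the file and column names (trials.csv, hard_choice, reward_value, reward_prob, expected_value, trial, sex, dx, participant_id) are hypothetical:

        import numpy as np
        import pandas as pd
        import statsmodels.api as sm

        trials = pd.read_csv("trials.csv")  # hypothetical: one row per choice per participant

        # Model 1: main effects only, no interaction terms.
        m1 = sm.GEE.from_formula(
            "hard_choice ~ reward_value + reward_prob + expected_value + trial + sex + dx",
            groups="participant_id",                  # repeated choices within a participant
            data=trials,
            family=sm.families.Binomial(),            # logistic link for a binary choice
            cov_struct=sm.cov_struct.Exchangeable(),
        ).fit()
        print(np.exp(m1.params))                      # coefficients expressed as odds ratios

        # Model 2 (one of the dropped models): add a diagnosis-by-reward-probability interaction.
        m2 = sm.GEE.from_formula(
            "hard_choice ~ reward_value + reward_prob * dx + expected_value + trial + sex",
            groups="participant_id",
            data=trials,
            family=sm.families.Binomial(),
            cov_struct=sm.cov_struct.Exchangeable(),
        ).fit()
        # The supplement reports that interaction terms like this one were not significant,
        # and those models were dropped in favour of Model 1.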
     
    JoClaire, SunnyK, EndME and 21 others like this.
  3. Trish

    Trish Moderator Staff Member

    Messages:
    54,800
    Location:
    UK
    Is this single, barely statistically significant, and presumably not corrected for multiple comparisons, probability of p = 0.04 the sole basis of Walitt's stuff about effort preference etc?
     
    Louie41, Medfeb, bobbler and 19 others like this.
  4. Jonathan Edwards

    Jonathan Edwards Senior Member (Voting Rights)

    Messages:
    14,706
    Location:
    London, UK
    'The mills of S4ME grind slowly, but they grind exceeding small.'
     
    Louie41, JoClaire, Hoopoe and 23 others like this.
  5. sneyz

    sneyz Established Member (Voting Rights)

    Messages:
    46
    Is this really a proper way to do it? If the presumption is that they are probing the same underlying feature, would the number of tries not have to be factored into the probability? Or is medicine just exempt from the otherwise generally accepted methods of statistics? I actually can't get my head around this...
     
    Louie41, Simon M, Sean and 5 others like this.
  6. Hutan

    Hutan Moderator Staff Member

    Messages:
    28,865
    Location:
    Aotearoa New Zealand
    More on that and looking at that S8B figure again -

    Screen Shot 2024-02-22 at 8.09.17 pm.png

    For a start, just looking at where the data points are on the x axis - the percentage of hard-task choices a participant made in an investigation of decision-making - there's very little difference between the ME/CFS group and the healthy group. I think @Snow Leopard might have already made the point that some outliers are doing a lot of the heavy lifting in terms of differentiating the two groups. Most of the data points are in that range of 30 to 60%, regardless of what group the person was in.

    And look how many data points there are - I make it 8 ME/CFS and 7 healthy volunteers. It's utterly ridiculous to draw a conclusion about the tendency of people with ME/CFS to make hard-task choices on the basis of so few data points. I don't know what the authors tell us about those 15 people, but surely all sorts of things could influence those choices - self-confidence, intelligence, how they were primed before the study, experience with the type of task offered, and of course, whether someone is feeling ill or not.

    I assume the reason the investigators plotted this combination of data is because they expected to find that the people who had higher peak power also tended to choose hard tasks. And that those people aren't people with ME/CFS. Am I stretching too far to find it all a bit misogynistic?

    The chart is of course rubbish at proving the idea that the people with higher peak power are the ones who also choose hard tasks:
    1. because they have not separated out men and women, or normalised for things like body size that might make a difference to peak power.
    And, 2. crucially, because they didn't find any statistically significant relationship between peak power and proportion of hard-task choices. :) A sketch of the kind of check that would actually be needed is below.
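
    For what it's worth, here is a minimal sketch of that check, if the per-participant data were ever available - the file and column names (participants.csv, sex, peak_power_normalised, prop_hard_choices) are hypothetical, and this is not anything the authors did:

        import pandas as pd
        from scipy.stats import pearsonr

        d = pd.read_csv("participants.csv")  # hypothetical per-participant summary table

        # Look, separately for each sex, at whether normalised peak power tracks the
        # proportion of hard-task choices at all.
        for sex, g in d.groupby("sex"):
            r, p = pearsonr(g["peak_power_normalised"], g["prop_hard_choices"])
            print(sex, "r =", round(r, 2), "p =", round(p, 3))
        # With roughly 15 points split by sex, any such correlation would be very
        # imprecise anyway - which is rather the point.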

    If some of the investigators like Walitt actually had to spend a bit of time as a person with severe ME/CFS, hell, even mild ME/CFS, they might start to understand what 'hard-task choices' are. We make them every day.
     
    Last edited: Feb 22, 2024
  7. chillier

    chillier Senior Member (Voting Rights)

    Messages:
    218
    Good of you to take the time to properly look at the data for what is already a fundamentally flawed premise - worth giving them the benefit of the doubt and checking their data, and sure enough it's poor.

    Perhaps not! It is a bit reminiscent of medical misogyny tropes like patients having poor awareness of their own disease, and it also ignores patients' own testimonies about their illness.

    I think the most straightforward way to interpret their argument is that we are not self-aware of our own bodies, our choices and their consequences. Their language is obscure and hedged in a way that could leave it open to the interpretation that it is a neurological disease affecting behaviour/indirectly affecting biology, but I think to most people it doesn't read that way. That explanation does not fit at all well with the symptoms of the illness, in particular PEM. It does feel to me that if you listened to what patients were saying about their illness you could not possibly come to this conclusion.

    This paper in my opinion could have been OK as a data dump if they hadn't tried to string this very poor narrative onto it. They have 17 IOM ME patients (reading from @EndME's comment), which is acceptable in principle, although their PEM status isn't clear. The cohorts are not particularly well matched, as you say, given the enormous effort they put into recruitment. I haven't looked over it all, but there may be other interesting bits in the data they've generated.
     
    Louie41, SNT Gatchaman, Hutan and 9 others like this.
  8. bobbler

    bobbler Senior Member (Voting Rights)

    Messages:
    3,508
    I wonder as well whether, if you are looking at the psychology and doing the 'hard choices' side of things, it should really be displayed as a curve for each individual against time, rather than just % hard choices.

    Putting aside any exhaustion issues and simply focusing on the psychology, 'confidence' will be one of the main drivers in this type of task. So it makes sense that when someone gets, say, two hard ones wrong in a row, the logical thing is to switch to the easier option (as that is the aim of the task - to maximise reward - unless you've set it up differently to look for people with impulse/risk-taking issues).

    We can't see the pattern here, and if it turns out certain individuals were getting lots wrong then it isn't really 'choice' in the same way. EDIT: it would instead be a representation of performance (which could be linked to exhaustion etc.).

    Even without illness being involved (I don't know the tasks in detail), there might also be a learning effect, as well as tiredness, boredom and a sense of 'having had enough reward'. I don't know enough about either the tasks or the rewards to know whether 'involvement' (the term used for what drives people to e.g. keep playing a game) might be pretty low at any point either.

    Then of course you have the whole list of things that relate to having ME/CFS, some of which might be related to the task itself (cognitive fatigue), others to the set-up and other things during the day (meaning the chair is uncomfy or whatnot).

    It's quite a complex task to choose in order to try and demonstrate something, given how many extra psychological factors are involved in analysing/planning these sorts of things. And then you add in the size of the sample and the reasons the two different cohorts might have signed up for the trial itself (one set might be more reward-driven if, e.g., HCs are there for payment whereas those with ME/CFS are looking for a cure).
     
    Last edited: Feb 22, 2024
    Louie41, Ash, Karen Kirke and 4 others like this.
  9. Evergreen

    Evergreen Senior Member (Voting Rights)

    Messages:
    345
    Can someone explain this bit to me?

    They're saying that patients were pacing for easy tasks but not for hard tasks. Huh?

     
    bobbler, MEMarge, Medfeb and 5 others like this.
  10. Denise

    Denise Senior Member (Voting Rights)

    Messages:
    496
    I apologize if this has already been mentioned.
    The EEfRT has its origins in work on depression and has been validated in schizophrenia. (I know you are all shocked at that. ;))
    Has it been validated in ME? Other conditions?

    Examining the reliability and validity of two versions of the Effort-Expenditure for Rewards Task (EEfRT)

    https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8797221/pdf/pone.0262902.pdf


    Another advocate wonders if the four "recovered" participants were adjudicated as being "spontaneously recovered" by the adjudication team...
     
    bobbler, Sean, Amw66 and 7 others like this.
  11. bobbler

    bobbler Senior Member (Voting Rights)

    Messages:
    3,508
    I'm really struggling with this Effort Preference stuff.

    I've just got the paper up:

    and read the following:

    "Effort preference, the decision to avoid the harder task when decision-making is unsupervised and reward values and probabilities of receiving a reward are standardized, was estimated using the Proportion of Hard-Task Choices (PHTC) metric.

    This metric summarizes performance across the entire task and reflects the significantly lower rate of hard-trial selection in PI-ME/CFS participants (Fig. 3a).

    There was no group difference in the probability of completing easy tasks but there was a decline in button-pressing speed over time noted for the PI-ME/CFS participants (Slope = −0.008, SE = 0.002, p = 0.003; Fig. 3b).

    This pattern suggests the PI-ME/CFS participants were pacing to limit exertion and associated feelings of discomfort16. HVs were more likely to complete hard tasks (OR = 27.23 [6.33, 117.14], p < 0.0001) but there was no difference in the decline in button-press rate over time for either group for hard tasks (Fig. 3b)."

    Apart from the terrible writing, which I assume is a side-effect of trying to wangle in inferences that aren't there through 'careful wording', to the point where the grammar can't cover it:

    He is basing 'the result' merely on 'no difference'

    but

    'a decline in button-pressing speed over time for ME/CFS participants' is the actual finding ????

    Worse, the last line above says there wasn't even a difference between groups in this 'decline in button-press rate over time' for hard tasks.

    So where is this 'difference'???

    I also don't know what he thinks he has found (putting aside the methodological nonsense of using the hard-easy task choice without displaying whether people failed the task or not, among other issues) when I look at the charts: Fig. 3: Impaired effort measures and motor performance were observed in PI-ME/CFS cohort compared to HV. | Nature Communications

    They might look like pretty pictures, but as someone who likes a diagram I'm struggling to see what is in them (the top ones - I haven't even got to the bottom ones on hand-grip).
     
    Louie41, Karen Kirke, EndME and 9 others like this.
  12. bobbler

    bobbler Senior Member (Voting Rights)

    Messages:
    3,508
    I've finally found the description of the task in the methods section:

    I've picked the following out, because I think anyone with ME might relate to this:

    "Next, the participant either completed 30 button presses in seven seconds with the dominant index finger if they chose the easy task, or 98 button presses in 21 s using the non-dominant little finger if they chose the hard task."

    I often can't use my phone because of fatiguability in my arms, and if I'm scrolling and tired I also often end up with my scrolling finger shaking with exhaustion to the point I can no longer use it. I frankly struggle to believe that, in people who get fatiguability, someone would choose a task that involves such small muscles - never mind 98 presses PER go, in a set timeframe of 21 secs. 30 is actually bad enough. And with what sounds like 6 seconds of 'rest' between these?
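
    Just to put numbers on what those quoted press counts and timings imply (simple arithmetic, nothing more):

        # Required press rates implied by the task description quoted above
        easy_rate = 30 / 7    # ~4.3 presses per second, dominant index finger, for 7 secs
        hard_rate = 98 / 21   # ~4.7 presses per second, non-dominant little finger, for 21 secs
        print(round(easy_rate, 2), round(hard_rate, 2))
        # So the 'hard' task is not just three times longer: it also demands a slightly
        # faster sustained rate, from a much weaker finger.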

    What is their 'peripheral fatigue' measure here? Because surely a relevant measure is signs of fatigue in the finger itself? How were they measuring this?

    Having seen the word 'modified' in the title of this methods section, I was intrigued to look up reference 15, which is cited I guess as the originator of said task, to spot the modifications: Worth the ‘EEfRT’? The Effort Expenditure for Rewards Task as an Objective Measure of Motivation and Anhedonia | PLOS ONE


    The first part I want to highlight directly relates to my question of fatigue:

    "Effects of fatigue during the EEfRT
    An important requirement for the EEfRT is that it measure individual differences in motivation for rewards, rather than individual differences in ability or fatigue. The task was specifically designed to require a meaningful difference in effort between hard and easy-task choices while still being simple enough to ensure that all subjects were capable of completing either task, and that subjects would not reach a point of exhaustion. Two manipulation checks were used to ensure that neither ability nor fatigue shaped our results. First, we examined the completion rate across all trials for each subject, and found that all subjects completed between 96%-100% of trials. This suggests that all subjects were readily able to complete both the hard and easy tasks throughout the experiment. As a second manipulation check, we used trial number as an additional covariate in each of our GEE models."

    So there was a paragraph in the original methods section specifically on the effects of fatigue - designed around undergraduate students, who weren't ill, other than perhaps reporting anhedonia.

    I do not see an equivalent paragraph in the method for this Walitt et al study's test.

    The paper itself notes how important a requirement it was that the task not measure individual differences in ability or fatigue. So it did manipulation checks.

    Did this Walitt et al (2024) paper do a new battery of manipulation checks to ensure that this was still the case? And even so, I do not know how, when comparing healthy controls with people with something they call 'chronic fatigue syndrome', they could justify saying this was not just measuring 'individual differences in ability and fatigue'.

    The only difference seems to be that it was 98 instead of 100 presses with their non-dominant little finger on the Walitt et al test.

     
    Last edited: Feb 24, 2024
    Louie41, ME/CFS Skeptic, Ash and 11 others like this.
  13. bobbler

    bobbler Senior Member (Voting Rights)

    Messages:
    3,508

    Hang on, in comparing the two papers further

    Walitt et al (2024) claim:

    "The primary measure of the EEfRT task is Proportion of Hard Task Choices (effort preference). This behavioral measure is the ratio of the number of times the hard task was selected compared to the number of times the easy task was selected. This metric is used to estimate effort preference, the decision to avoid the harder task when decision-making is unsupervised and reward values and probabilities of receiving a reward are standardized."


    Whereas the Treadway et al (2009) paper from which the EEfRT was 'modified' states it isn't just the 'proportion of hard-task choices' but that proportion 'in relation to probability'. The third of these quotes is very pertinent, as it states that the aim was:

    "to validate a novel effort-based decision-making task that could serve as an objective measure of individual differences in reward motivation" - which seems very different as a measure from what Walitt et al (2024) seem to be saying is 'effort preference', based simply on the 'proportion choosing hard tasks' without the probability levels being compared against choices.

    "Analysis Method 1: Repeated Measures ANOVA/Correlations
    Data were analyzed using two statistical approaches. The first approach used repeated measures ANOVA and correlations. For these analyses, mean proportions of hard-task choices were created for all subjects across each level of probability. Proportions of hard-task choices and responses to self-report questionnaires were approximately normally distributed, and therefore parametric tests were used for inferential statistics."

    "Main Effects of the EFFRT
    A Repeated Measures ANOVA found a significant main effect for probability level on the proportion of hard task choices, with higher probability trials levels associated with more hard-task choices (F(2,120) = 139.8, p<.000, partial η2 = 0.7). Across all subjects, proportion of hard-task choices for medium probability trials were moderately correlated with proportion of hard-task choices for both high probability (r = .31, p<.05) and low probability trials (r = .31, p<.05). High probability and low probability trials were uncorrelated (r = −.02, p = ns). We also found a main effect of gender, with men making more hard-task choices than women (F(1,59) = 3.9, p = .05). Consequently, gender was included as a covariate in all subsequent analyses."

    and (from discussion)

    "Discussion
    The present study had two specific aims: 1) to validate a novel effort-based decision-making task that could serve as an objective measure of individual differences in reward motivation; and 2) to explore interactions between anhedonia, probability and reward magnitude so as to determine whether these variables exhibited a pattern that would be consistent with preclinical models of Nacc DA release. In accordance with our first hypothesis, we found that individuals with elevated reports of both trait and state anhedonia exhibited a reduced willingness to make choices requiring greater effort in exchange for greater reward. "
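
    To make the difference between the two summaries concrete, here is a minimal sketch - the file and column names (trials.csv, participant_id, hard_choice, reward_prob) are hypothetical, and this is not the authors' code - of the overall Proportion of Hard-Task Choices that Walitt et al (2024) describe, versus the Treadway-style proportions computed at each probability level:

        import pandas as pd

        trials = pd.read_csv("trials.csv")  # hypothetical: one row per choice per participant

        # Walitt et al-style summary: one overall proportion of hard-task choices per participant
        phtc_overall = trials.groupby("participant_id")["hard_choice"].mean()

        # Treadway-style summary: mean proportion of hard-task choices per participant at each
        # probability level (12%, 50%, 88%), which is what then gets analysed against reward
        # motivation
        phtc_by_prob = (
            trials.groupby(["participant_id", "reward_prob"])["hard_choice"]
            .mean()
            .unstack("reward_prob")
        )

        print(phtc_overall.head())
        print(phtc_by_prob.head())
        # Collapsing over probability levels (the first summary) throws away exactly the
        # reward-sensitivity information the task was designed to capture.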
     
    Last edited: Feb 24, 2024
    Louie41, Karen Kirke, EndME and 9 others like this.
  14. bobbler

    bobbler Senior Member (Voting Rights)

    Messages:
    3,508
    OK, in conclusion from studying the 'reference 16' paper by Treadway et al (2009): Worth the ‘EEfRT’? The Effort Expenditure for Rewards Task as an Objective Measure of Motivation and Anhedonia | PLOS ONE

    Which stated in its results:

    "Analysis Method 1: Repeated Measures ANOVA/Correlations
    Data were analyzed using two statistical approaches. The first approach used repeated measures ANOVA and correlations. For these analyses, mean proportions of hard-task choices were created for all subjects across each level of probability.

    Proportions of hard-task choices and responses to self-report questionnaires were approximately normally distributed, and therefore parametric tests were used for inferential statistics."

    So, if you only look at the proportion of hard-task choices (as Walitt et al, 2024 seem to be suggesting they did in their charts) - rather than cross-comparing them with the 'probability of reward' shown in the 5 secs preceding the task - then the proportions of hard-task choices in their undergraduate student population were 'approximately normally distributed'.

    And then looking at the figures (top 3) from Walitt et al (2024): Fig. 3: Impaired effort measures and motor performance were observed in PI-ME/CFS cohort compared to HV. | Nature Communications

    which simply show 'proportion of hard choices' vs trial number, i.e. time/exhaustion: it is just a downward curve, with the ME/CFS participants averaging (with the same shape) about 0.1 below the healthy volunteers, and with more overlap between the two groups in the first and last trials than in trials 15-25 out of 50+.

    I haven't triple-checked (for the next 2 charts) that what they mean by 'button-press rate' relates to the 98 presses in 21 secs required for the hard tasks and the 30 in 7 secs for the easy ones, but I'm assuming that is what the charts of 'button presses per second' (y axis) vs 'trial number' (x axis) show, with a blue curve for healthy volunteers and pink for ME/CFS - one chart for the easy tasks and one for the hard ones.

    If so, and those charts are showing button presses per second while completing either the hard or easy task, then a decrease in presses per second across trials would/could surely be showing fatigue? For the easy tasks the pink ME/CFS curve goes down over time while the blue one goes up, and for the hard tasks the pink sits underneath the blue the whole time.

    What else could that possibly be measuring? Particularly given Walitt et al (2024)'s failure to do checks beforehand to rule this effect out, and despite the obvious increased likelihood of fatigue in one group compared with the other.


    Where is the measure that Treadway et al (2009) actually said measured this - assuming those fatigue checks had been properly done - where the probability level of the trials is cross-analysed with the choice, to see whether people tended to pick the hard task when there was a higher probability of a reward?

    What has Walitt et al (2024) actually measured?

    Am I missing something here? He seems to have presented a graph showing fatigue and 'performance' and claimed it shows 'effort preference' in relation to reward probability - when that latter part wasn't even included in his calculations, and therefore not in his figures.
     
  15. bobbler

    bobbler Senior Member (Voting Rights)

    Messages:
    3,508
    And in the figures in the Walitt et al (2024) paper for grip strength (bottom set): Fig. 3: Impaired effort measures and motor performance were observed in PI-ME/CFS cohort compared to HV. | Nature Communications

    What on earth is 'figure e'? According to the labelling beneath, and the fact that it is placed alongside the two grip-strength boxplots, it is the grip-strength test. But the axes are 'proportion of hard-task choices' and 'time to failure'.

    Has he taken the results from one test on choice-making (where he has used the wrong measure) and somehow tried to compare it against the time to failure on a grip-strength task?

    Even if you were comparing them as tests of fatigue - which would seem to be the most valid reading of what he is actually measuring in both - one is repetitive finger movement, for which he's used '% hard choices', and the other is a whole-hand grip, for which he's used 'time to failure'.

    If he thought he was showing some dodgy, corrupted twist, as if it were motivation-related, why did he choose not to show the selection of hard vs easy choices over time?

    The graph shows that for ME/CFS the diagonal line is consistent with the hard, more button-pressing option being chosen less by those who failed earlier on the grip-strength test (except it isn't really 'more' button-pressing: because the test keeps running, the hard task only requires 8 more presses than the easy route - 3 x 30 presses (the easy task done three times, 7 secs each) = 90, versus the 98 presses of the hard task in 21 secs). Whereas, weirdly, the pattern is slightly the opposite for healthy controls (but more like a straight line).

    Why has someone done this chart?

    Did they not understand that 'effort preference' requires comparison with the probability of reward in order to measure what they claim it measures?

    Can anyone help me figure this out? I'm starting to wonder whether it's just me being tired and missing things, because this stuff so obviously doesn't add up.
     
    Louie41, Karen Kirke, Hutan and 2 others like this.
  16. bobbler

    bobbler Senior Member (Voting Rights)

    Messages:
    3,508

    I should probably underline this with two things to make clear what I'm reading. The first is that the 'probability level' referred to in Treadway et al (2009) is the 'probability of reward' that participants are shown in the 5 secs preceding the task, while being asked to choose either the hard or easy version.


    The second thing to note is that the Treadway et al (2009) study was looking at anhedonia against this, so you can see it was the proportion by probability level (i.e. how much that choice varied depending on the likelihood of there being a reward) that they compared, not 'whether they selected hard or easy in general':

    In fact, Treadway et al (2009) make a point in their method of explaining that there is no way of winning/predicting in the game, because it continues for 15 mins and the hard tasks take 3 times as long as the easy tasks. In the Walitt et al (2024) version, completing one hard task means pressing the button only 8 more times over 21 secs than you would in completing 3 easy ones - the only difference being that you get more 6-second breaks in between.

    Treadway et al make a note that people have to consider not doing the hard task when the stated probability of reward is low - because you risk wasting 21 secs on it and not getting to what might be higher-probability rewards later in the game. So the point was never about just selecting hard vs easy for its own sake - and on its own that doesn't seem to mean anything as a measure in the context of this method?
     
    Louie41, ME/CFS Skeptic, Sean and 4 others like this.
  17. Hutan

    Hutan Moderator Staff Member

    Messages:
    28,865
    Location:
    Aotearoa New Zealand
    Thanks @bobbler for your useful analysis.

    Here's a bit more:

    The Effort-Expenditure for Rewards Task (EEfRT)

    So, yes, from the Method in the Walitt et al study:

    for each choice, the participant chooses one of two tasks:
    the easy task involving 30 button presses in seven seconds with the dominant index finger
    the hard task involving 98 button presses in 21 seconds with the non-dominant little finger

    Each choice of task has a reward of a specified value and a probability of winning it if the task is completed. And it sounds as if the probabilities are kept the same within a choice, but vary between choices. It isn't clear to me if, in the Walitt et al study, the probabilities tended to increase towards the end, or if participants were told that they would (both of which seem to be the case in the test validation study that bobbler linked).

    So, the first choice might look like (and I'm just making this up)
    Choice 1a: Easy task reward $2.00; probability of winning it if easy task completed - 25%
    Choice 1b: Hard task reward $3.50; probability of winning it if hard task completed - 25%

    Once Choice 1 is done and the participant finds out if they won the reward, the participant goes on to other choices, until the 15 minutes are up.
    At the end, two rewards are chosen at random, and the participant receives the sum of those two rewards.
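
    Laying that made-up example out as a toy calculation (purely illustrative - the real reward values and probabilities come from the task software):

        # Hutan's made-up Choice 1, as described above
        easy = {"reward": 2.00, "presses": 30, "seconds": 7,  "prob_win": 0.25}
        hard = {"reward": 3.50, "presses": 98, "seconds": 21, "prob_win": 0.25}

        for name, task in [("easy", easy), ("hard", hard)]:
            expected_value = task["reward"] * task["prob_win"]
            rate = task["presses"] / task["seconds"]
            print(name, "| expected value:", round(expected_value, 2),
                  "| presses per second required:", round(rate, 2))
        # Note that only two trials are picked at random at the end for actual payment,
        # which further weakens the link between any single choice and money received.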

    From bobbler's posts, it appears that the test in the Walitt et al study differed from the standard method in crucial ways, including the outcome measured, i.e.
    • the ratio of hard-task choices to easy-task choices, versus
    • a measure that takes into account variation in the propensity to choose the hard task based on the probability of receiving the reward.
    It is not at all clear to me yet if or how probabilities and rewards varied between trials, whether participants were told about the variation, and how any such variation was accounted for in the analysis.


    Results
    Screen Shot 2024-02-25 at 5.55.17 am.png

    We aren't shown the actual data in Figures 3a-c, just confidence intervals. I think that's a problem. The statistics used to produce the confidence intervals could be worth looking at. Because the probability of choosing a hard task was much lower (especially towards the end of the trial), there should be less data behind Fig 3c, and I would have thought that less data would make the confidence intervals bigger. But they are not.

    Figure 3a - Probability of choosing the hard task
    The probability of choosing the hard task was a bit lower in the ME/CFS group than in the healthy group, throughout the trial. It wasn't actually that big of a difference. The difference didn't change over time, which I think the authors concluded meant that there wasn't more fatigue in the ME/CFS group. But actually, if your dominant index finger was getting tired, you might choose an occasional hard task using another finger, to give the dominant index finger a break for a while.

    Figure 3b - Button press rate for easy tasks
    What they mean there is that people with ME/CFS got slower at pressing the button for the easy tasks over time, but were still fast enough to reliably complete them. That is what the authors see as 'pacing'. And that approach makes good sense, because the participants were not rewarded on the number of tasks they completed - however many they did, the computer would only pick two rewards at random to pay out on. The only reason you might want to get through the easy tasks fast is to get to later tasks that might have a higher reward.

    (I think it's questionable how big an incentive something like $5 would be to a participant with ME/CFS who probably wanted to make sure that they could get through all of the studies (like the CPET and biopsy and giving blood), many of which probably seemed a lot more important than making choices about button pushing. I assume it was clear to the participants that this wasn't meant to be a test of fine motor control. So, would you really go all out, concentrating and pressing buttons for 15 minutes, in that context, after you had completed the two tasks needed to qualify for some reward, knowing that this was a government study and so it was very unlikely that later rewards would be a lot more than what you had already secured? There was a lot less on the line for the healthy participants, and so they probably felt freer to fully engage in the game. In terms of effort preference then, I think the people with ME/CFS may well have been expressing a very reasonable effort preference, given the circumstances. )

    It is possible that fatigue played a part in the decline in button pushing rate and the substantial difference in button pushing rate between the two groups seen in later easy tasks - I don't see how the authors can rule that out.

    Figure 3c - Button press rate for hard tasks
    Most people undertaking the hard tasks pressed the button fast enough to complete them. Both groups showed a very slight increase in press rate with experience. But Figure 3a tells us that the people with ME/CFS were unlikely to choose to do the hard task, especially towards the end, so I expect that there wasn't much data behind that finding.

    (See my note about the confidence intervals above - I think they are generalised in a way that conceals the paucity of data at different trial numbers. It would have been better to show the button press rates as plotted data points, with the x axis being time. Maybe such a chart can be constructed using the source data.)
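
    For what it's worth, if trial-level source data were ever released, the kind of plot suggested here would be straightforward to make - a sketch only, with hypothetical file and column names (source_data_trials.csv, task_type, group, trial_number, press_rate):

        import pandas as pd
        import matplotlib.pyplot as plt

        trials = pd.read_csv("source_data_trials.csv")  # hypothetical trial-level source data

        for difficulty in ["easy", "hard"]:
            subset = trials[trials["task_type"] == difficulty]
            fig, ax = plt.subplots()
            for group, colour in [("HV", "tab:blue"), ("PI-ME/CFS", "tab:pink")]:
                g = subset[subset["group"] == group]
                ax.scatter(g["trial_number"], g["press_rate"], s=12, color=colour, label=group)
            ax.set_xlabel("Trial number")
            ax.set_ylabel("Button presses per second")
            ax.set_title(difficulty + " tasks: individual data points rather than fitted bands")
            ax.legend()
        plt.show()
        # Plotting the raw points per trial would also make visible how little hard-task
        # data there is at later trial numbers for the ME/CFS group.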

    ****
    Frankly, I find this task a joke in terms of providing insight into the supposed pathological psychology of people with ME/CFS. I feel appalled that conclusions are being made about the 'effort preference' of people with ME/CFS on the basis of this 15-minute study with 15 ME/CFS participants.
     
    Last edited: Feb 24, 2024
  18. bobbler

    bobbler Senior Member (Voting Rights)

    Messages:
    3,508
    So for the Treadway et al (2009) original EEfRT design, the probabilities that might be given at the start, in the 5-sec pause, were:

    "Trials had three levels of probability: “high” 88% probability of being a win trial, “medium” 50% and “low” 12%. Probability levels always applied to both the hard task and easy task, and there were equal proportions of each probability level across the experiment. Each level of probability appeared once in conjunction with each level of reward value for the hard task."

    So, given their note that someone choosing the hard task with a small monetary value and a 12% probability would be picking something that took longer and had a low probability of being paid out, the logic is that people motivated by rewards might choose the 7-second 'easy' task for this combination. The Treadway et al (2009) method therefore at least makes some sense, in that they are looking at the cross-analysis between 'level of probability' and 'whether someone selected hard or easy'.

    "This meant that making more hard-task trials toward the beginning of the experiment could reduce the total number of trials, which could in turn mean that the subject did not get a chance to play high-value, high-probability trials that might have appeared towards the end of the playing time. This trade-off was explained clearly to the subject."

    "This was done to help ensure that subject decisions reflected individual differences in the willingness to expend effort for a given level of expected reward value."

    I just don't get what Walitt et al (2024) think their alternative - a generic 'proportion of hard tasks selected' that is not crossed with the probability of winning - is supposed to mean at all.

    I'm not 100% convinced by the Treadway et al (2009) hopes but at least I can see their cogs turning on what they are trying to think about.

    In the Walitt et al (2024) version, from what I can see, if someone saw £0.50 and a 12% probability of it being paid out, then selecting easy (which takes only 7 secs vs 21 secs) doesn't show anything other than basic common sense - even in the context of Treadway et al (2009), for which the test was designed as a measure of 'effort for reward'.
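
    A quick expected-value-per-second calculation spells that out (illustrative numbers only: the £0.50 at 12% example above, assuming for simplicity that both options pay the same small amount):

        # Illustrative only - not values taken from the paper
        reward, prob_win = 0.50, 0.12

        ev_per_sec_hard = (reward * prob_win) / 21   # non-dominant little finger, 21 seconds
        ev_per_sec_easy = (reward * prob_win) / 7    # dominant index finger, 7 seconds
        print(round(ev_per_sec_hard, 4), round(ev_per_sec_easy, 4))
        # With similar rewards, the easy option returns about three times the expected value
        # per second of effort, so picking it is simply sensible rather than evidence of a
        # pathological 'effort preference'.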
     
    Louie41, EndME, Sean and 4 others like this.
  19. Jonathan Edwards

    Jonathan Edwards Senior Member (Voting Rights)

    Messages:
    14,706
    Location:
    London, UK
    I am reminded (by @bobbler and @Hutan's efforts) of the debate about incompatibility of General Relativity and Quantum Theory. This worries a lot of physicists. But I was reassured by an eminent physicist that it doesn't matter because there is no question to which the two theories would give different answers. In situations where one theory seems to give answers to questions incompatible with the other theory it turns out that you simply cannot ask those questions of the other theory. It would give the answer 'Yer what?'.

    I find myself incapable of analysing these data because I am quite sure that this is a 'Yer what?' situation from the outset. You cannot draw conclusions about the psychological role of effort in any disease by testing patients and controls simply because the patients know they are patients and controls know they are controls.

    BUT, if members can show in lucid terms why the questions being asked are technically illegitimate, as looks very likely, even disregarding the prima facie case for invalidity that I cannot get my effort level past, then that would be a major achievement. Go for it. I think this may justify a Kindlon and Wilshire type reply paper but I happily admit that my effort preference doesn't cope with this.
     
    Louie41, EndME, Dolphin and 18 others like this.
  20. Kitty

    Kitty Senior Member (Voting Rights)

    Messages:
    6,590
    Location:
    UK
    If some of the questions are not asked, or not regarded as important, in other diseases of energy limitation, would that be part of the reason for considering them technically illegitimate?
     
    Louie41, EzzieD, Sean and 5 others like this.
