Use of EEfRT in the NIH study: Deep phenotyping of PI-ME/CFS, 2024, Walitt et al

I'm curious if there are plans for this?

I don't think there are, as far as I know. It would probably be nice to have it all in one place, but I doubt the journal would publish it. It could be posted as a comment on the article or self-published on a preprint server.

Also wondering whether we will ask our key question at the symposium or reserve it for when we share the letter.

I'm not sure whether or not to ask about it. I'm also not sure if they'll be allowing questions from the people on Zoom. I am tempted to, though; it feels wrong to let them promote the findings without addressing the issues. I was really hoping to have the letter in before the symposium, but it didn't happen. Do you have thoughts on this @EndME @Evergreen ?
 
I decided to submit two questions ahead of time. Here is what I submitted. I think it's unlikely that they'll include them but it doesn't hurt to dream.
  1. You reported in the article that the PI-ME/CFS participants had significantly lower successful completion rates for the EEfRT's hard trials compared to healthy volunteers. The creators of the EEfRT state in their original validation article that the EEfRT data is not valid if participants are not able to consistently complete the task. Why did you proceed with interpreting the PHTC data as evidence of impaired effort preference when the data appears to be invalid according to the guidelines set by the task creators?
  2. Your article is the first to frame the EEfRT as a measure of effort preference, despite there being over 70 previous articles using the task. In these other studies, the task is typically framed as a measure of reward motivation, effort allocation, or effort-based decision-making. What is your rationale for reframing the EEfRT as a measure of effort preference, rather than using an established interpretation?
 
Sorry for the delay, @andrewkq. I wasn't sure what to say. Here's what I would have suggested:
You measured what you call effort preference using the proportion of hard tasks chosen in the EEfRT task, and you found that people with ME/CFS chose fewer hard tasks than healthy volunteers. However, the hard task sounds very difficult for people as disabled as your ME/CFS participants were - indeed, you found that patients were less able to complete the hard task successfully than healthy people. Since the hard task was more effortful for your patient cohort than for your healthy controls, how can you reach any conclusions about whether effort preference is different in people with ME/CFS compared to controls? It sounds like comparing apples with oranges.
 
I anticipated this response from AIRIO (Agency Intramural Research Integrity Officer) at NIH after I filed a complaint about this test and Walitt, but I'm sharing it for visibility.

"I am writing in response to your email of March 19, 2023 on the Nature Communications article, “Deep phenotyping of post-infectious myalgic encephalomyelitis/chronic fatigue syndrome.” My office has assessed the allegations, and determined that they do not fall within the definition of research misconduct, in that you are not alleging falsification, fabrication, or plagiarism in this paper. Although there is a scientific difference of opinion about whether the Effort-Expenditure for Reward Task (EEfRT) test was appropriately used and appropriately discussed, this assertion does not meet the definition of research misconduct, as research misconduct does not include honest error or differences of opinion. Therefore, my office has closed this matter with no further action. As you may be aware, NINDS will be hosting a symposium about this topic on May 2, 2024 [https://mregs.nih.gov/ninds/f1p5-f2l3320], which is one of the remedies I believe you had proposed. We wish you the best."

Sidebar: Will see how tomorrow goes now.
 

"Although there is a scientific difference of opinion about whether the Effort-Expenditure for Reward Task (EEfRT) test was appropriately used and appropriately discussed"

Does this mean people at the NIH had different opinions on whether the EEfRT test was appropriately used/discussed?
 
@EndME I asked this question. Sharing reply.

“No, I could not say that. I could say that my office reviewed the literature (including all the material you sent) to confirm that this was, in fact, an ongoing matter of discussion, and that there had not yet been a consensus statement resolving the matter.”
 

I think this is framed well @Evergreen, would you be open to submitting it or having someone else from the thread submit it? I think this version does a good job of highlighting the logical issues without needing to rely on Treadway's perspective.

The email for submitting questions is mecfssymposium@ninds.nih.gov
 
Thanks Andrew. I'd be very happy for someone else to submit it. Just post on here if submitting so that it only gets submitted once.

So many on this thread pointed out that the hard task would be really hard for them, or harder for patients than for healthies, and for me, that's the crux of the argument. You can't compare effort preference if you don't control effort.

The task in writing any response is just to provide evidence for what patients can see at a glance - and yes, Treadway and other researchers have seen it too.

Edit: Didn't mean to make it sound like a response is easy! Making simple things appear simple is notoriously hard work.
 
For those who weren't able to tune in, they did answer (shortened) versions of my two questions. Here are quotes of what the moderator asked and the answers given.

My original question 1:
  1. You reported in the article that the PI-ME/CFS participants had significantly lower successful completion rates for the EEfRT's hard trials compared to healthy volunteers. The creators of the EEfRT state in their original validation article that the EEfRT data is not valid if participants are not able to consistently complete the task. Why did you proceed with interpreting the PHTC data as evidence of impaired effort preference when the data appears to be invalid according to the guidelines set by the task creators?

What the moderator asked:

The creators of the EEfRT state in their original validation article that the EEfRT data is not valid if participants are not able to consistently complete the task. Why did you proceed with interpreting the PHTC data as evidence of impaired effort preference when the data appears to be invalid according to the guidelines set by the task creators?

Nicholas Madian's Answer:

So I wanted to field this one and first I want to express my thanks for this question and the opportunity to provide this clarification. I was informed that this question had been raised and I wanted to field it because I've done a lot of work with this task and with similar tasks. I'm very familiar with the original publication on the EEfRT that the question mentioned. And nevertheless, I still really wanted to make sure that I did my due diligence, so I spent a significant amount of time before this meeting reviewing the publication in question especially closely to make sure that I'm representing it accurately. What the paper describes is that the EEfRT was designed so that the sample of patients used within that original study could consistently complete the task. This does not mean that everyone who takes the task must be able to complete the task without issue for the administration or data to be valid or interpretable. It seems that the creators wanted to ensure that in general, as many people as possible would be able to complete the task but without compromising the task’s ability to challenge participants. Furthermore, I think it bears mentioning that although our ME/CFS participants did not complete the task at the same 96 to 100% rate as the participants in the original study or at the same rate as our healthy controls, they still completed the task a large majority of the time. To wrap things up, to answer the question, consistently completing the tasks is not a requirement for a valid effort test administration, and by all accounts we believe our data is valid and is thus interpretable as a measure of impaired effort discounting.

As a reminder, here is the exact quote from the original EEfRT article we are referencing:

An important requirement for the EEfRT is that it measure individual differences in motivation for rewards, rather than individual differences in ability or fatigue. The task was specifically designed to require a meaningful difference in effort between hard and easy-task choices while still being simple enough to ensure that all subjects were capable of completing either task, and that subjects would not reach a point of exhaustion. Two manipulation checks were used to ensure that neither ability nor fatigue shaped our results. First, we examined the completion rate across all trials for each subject, and found that all subjects completed between 96%-100% of trials. This suggests that all subjects were readily able to complete both the hard and easy tasks throughout the experiment. As a second manipulation check, we used trial number as an additional covariate in each of our GEE models.
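For anyone who wants to see concretely what those two manipulation checks amount to, here is a rough sketch in Python. It's my own reconstruction with made-up column names (subject, completed, chose_hard, reward, probability, trial), not anything from the NIH analysis, and the GEE call just mirrors the kind of model the Treadway paper describes rather than their exact specification.

```python
# Sketch of the two EEfRT manipulation checks described in the original
# validation paper. Column names are hypothetical; this is not the NIH
# team's analysis code.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf


def completion_check(trials: pd.DataFrame) -> pd.Series:
    """Check 1: per-subject completion rate across all trials.

    In the validation study every subject landed between 96% and 100%,
    which is what allowed choices to be read as preference rather than
    ability or fatigue.
    """
    return trials.groupby("subject")["completed"].mean()


def choice_model_with_trial_covariate(trials: pd.DataFrame):
    """Check 2: model hard/easy choice with trial number as a covariate,
    so fatigue accumulating over the session is accounted for."""
    model = smf.gee(
        "chose_hard ~ reward + probability + trial",
        groups="subject",
        data=trials,
        family=sm.families.Binomial(),
        cov_struct=sm.cov_struct.Exchangeable(),
    )
    return model.fit()


# Usage with a hypothetical trial-level dataframe `df`:
# print(completion_check(df).describe())   # are all subjects near 96-100%?
# print(choice_model_with_trial_covariate(df).summary())
```

Check 1 is exactly what the disagreement is about: if a group falls well below that 96-100% band, it becomes hard to separate choice from ability.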

My original question 2:

  2. Your article is the first to frame the EEfRT as a measure of effort preference, despite there being over 70 previous articles using the task. In these other studies, the task is typically framed as a measure of reward motivation, effort allocation, or effort-based decision-making. What is your rationale for reframing the EEfRT as a measure of effort preference, rather than using an established interpretation?

What the moderator asked:

What is your rationale for reframing the EEfRT as a measure of effort preference, rather than using an established interpretation?

Brian Walitt's Answer:

The answer is actually pretty simple. I think Nick did a really wonderful job talking about what effort preference is for us, and particularly the unconscious nature of effort in that aspect of it. The EEfRT task is typically framed as a measure of reward motivation, effort allocation or effort-based decision-making. These terms effort allocation and effort based decision making framed task performance as an entirely volitional action. We chose effort preference to reflect both the conscious and unconscious aspects that guide the moment to moment choices that are made during the effort test.
 
"to answer the question, consistently completing the tasks is not a requirement for a valid effort test administration"

But the problem is that the task likely required more effort from patients than from controls. The lower completion rate is probably just a reflection of that, as was the proportion of hard-task choices. Frustrating response.
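Just to illustrate that point (a toy simulation, not a claim about the actual data): give two simulated groups exactly the same preference for reward, but make the hard task costlier and harder to complete for one of them, and you reproduce both observations at once: fewer hard-task choices and a lower hard-task completion rate. All numbers below are invented for illustration.

```python
# Toy simulation: identical reward preference, different effort cost and
# completion ability. Every parameter is invented for illustration only;
# nothing here comes from the NIH data.
import random
from statistics import mean


def simulate_group(effort_cost: float, completion_prob: float,
                   n_subjects: int = 20, n_trials: int = 50,
                   seed: int = 0) -> tuple[float, float]:
    """Return (mean proportion of hard choices, mean hard-task completion rate).

    Subjects pick the hard task when its expected reward, minus a
    subjective effort cost, beats the easy task. The reward preference is
    identical across groups; only effort_cost and completion_prob differ.
    """
    rng = random.Random(seed)
    hard_choice_rates, completion_rates = [], []
    for _ in range(n_subjects):
        hard_choices = hard_completed = 0
        for _ in range(n_trials):
            reward_hard = rng.uniform(1.24, 4.30)      # illustrative reward range
            win_prob = rng.choice([0.12, 0.50, 0.88])  # illustrative win probabilities
            value_hard = win_prob * reward_hard - effort_cost
            value_easy = win_prob * 1.00               # fixed easy-task reward
            if value_hard > value_easy:
                hard_choices += 1
                hard_completed += rng.random() < completion_prob
        hard_choice_rates.append(hard_choices / n_trials)
        completion_rates.append(hard_completed / max(hard_choices, 1))
    return mean(hard_choice_rates), mean(completion_rates)


controls = simulate_group(effort_cost=0.2, completion_prob=0.98)
patients = simulate_group(effort_cost=1.2, completion_prob=0.80, seed=1)
print("controls: hard choices %.2f, hard completion %.2f" % controls)
print("patients: hard choices %.2f, hard completion %.2f" % patients)
```

The simulated "patients" value reward exactly as much as the "controls"; the only thing that changed is how much the hard task costs them, which is the apples-and-oranges problem in one script.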
 
Well done on getting answers to your questions, @andrewkq , very valuable.

Just highlighting bits from the quotes in @andrewkq 's post above:

From Treadway's paper - the creator of the task:
First, we examined the completion rate across all trials for each subject, and found that all subjects completed between 96%-100% of trials. This suggests that all subjects were readily able to complete both the hard and easy tasks throughout the experiment.[my bold]

From Nicholas Madian's answer:
This does not mean that everyone who takes the task must be able to complete the task without issue for the administration or data to be valid or interpretable. It seems that the creators wanted to ensure that in general, as many people as possible would be able to complete the task but without compromising the task’s ability to challenge participants.[my bold]
 
So basically they are saying it's all OK because they believe it's OK, and their invention of new terminology was OK because they liked the term effort preference.

I thought they were supposed to be doing science, not making stuff up and building their whole report of the study around a hypothesis based on making stuff up.

I think that's insulting to all the real scientists who did real science in the study.
 
Maybe they'll at least get a hint that 'effort preference' was a remarkably stupid choice of term.
Involuntary effort is something you cannot test for, and so it is not a scientific concept.
Psychological paternalism always shows through.
Often accompanied by unstated moralism. And not always unstated.
I think that's insulting to all the real scientists who did real science in the study.
I want to hear from these other scientists. Do they support the smearing of patients and critics?
 

Thanks for submitting these, Andrew.

Re: Walitt's response, I don't recall whether there was precedent for using number of hard choices versus using difference in motivation effect.

Does someone recall?

Also, there is a teleconference on May 6. Does anyone know if they've been asked whether they posed a hypothesis and tested for it, versus doing data exploration?

And going forward, has anyone pressed them to be transparent about the hypotheses they had going into testing, versus allowing 8 years of data exploration and then using p-values as if they are meaningful?
 

Could you say a bit more about what you mean by using difference in motivation effect?

I don't think I've heard anyone explicitly ask about hypothesis testing versus data exploration and the use of p-values. I would love to see them confronted about this. In the call yesterday, Walitt openly admitted that their hypotheses for the fMRI portion were only about the motor cortex, that they were surprised the motor cortex results were not significant, and that the TPJ popped up instead. So he openly admitted it was entirely speculative and not hypothesis-driven, even though that is not at all how it was framed in the paper. It would be nice if someone could convince them to preregister their hypotheses in the future if they are going to use p-values, but there's no way they would agree to that.
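For anyone who wants the intuition in a few lines: run enough unplanned tests on pure noise and some of them come out "significant". This is just a generic illustration of why exploration-then-p-values needs either preregistration or a multiplicity correction; the numbers are made up.

```python
# Generic illustration: many exploratory tests on pure noise still yield
# "significant" p-values by chance. Numbers are arbitrary.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_per_group, n_outcomes, alpha = 20, 40, 0.05

# Two groups drawn from the SAME distribution: every true effect is zero.
group_a = rng.normal(size=(n_outcomes, n_per_group))
group_b = rng.normal(size=(n_outcomes, n_per_group))

pvals = np.array([stats.ttest_ind(a, b).pvalue
                  for a, b in zip(group_a, group_b)])

print("uncorrected 'significant' outcomes:",
      int((pvals < alpha).sum()), "of", n_outcomes)
# Expect roughly alpha * n_outcomes = 2 false positives despite zero real effects.

print("after Bonferroni correction:",
      int((pvals < alpha / n_outcomes).sum()))
```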
 
I don't think the 'study' deserves its own thread; it's just a huge muddle of academic echo-chamber brain rot that tries to pin brain fog on pandemic measures. It uses very odd language that reminds me of microeconomics 101 classes and models of 'homo economicus': simplified models meant only for illustrative purposes, never supposed to translate into the real world, since they reduce human behavior to that of simple automatons.

It reframes brain fog as a simple lack of motivation, or whatever. Everything is blamed on imagined hardship from pandemic measures, while the freaking pandemic itself is completely ignored:
Specifically, potential cognitive consequences and persistent exhaustion, commonly called “brain fog,” attracted considerable attention and stimulated concerns about long-term negative effects, even following mild infections (Stefanou et al., 2022). One overlapping symptom between the Chronic Fatigue Syndrome (CFS)/Myalgic Encephalomyelitis (ME) and possible long-term consequences of COVID-19 was characterized as enduring fatigue unmitigated by rest and unrelated to cognitive or physical activities (Cortes Rivera et al., 2019). In addition, CFS and depressive disorders may mutually intensify each other. Anhedonia – a general lack of motivation and willingness to exert effort for goal-oriented behavior – emerges as a prominent symptom in both conditions (Smith et al., 2021). This holds considerable importance, as in the postpandemic period, emotional and motivational changes endure, including reduced overall well-being, depressive symptoms, and persistent anxiety disorders. Many of these changes may be attributed to pandemic-related restrictions, such as reduced social interactions and less regulated daily routines (Maison et al., 2021).
Reference for "In addition, CFS and depressive disorders may mutually intensify each other. Anhedonia – a general lack of motivation and willingness to exert effort for goal-oriented behavior – emerges as a prominent symptom in both conditions" is:
Smith, L., Crawley, E., Riley, M., McManus, M. & Loades, M. E. (2021). Exploring anhedonia in adolescents with chronic fatigue syndrome (CFS): A mixed-methods study. Clinical Child Psychology and Psychiatry, 26(3), 855–869. https://doi.org/10.1177/13591045211005515
To me it looks a lot like the principle behind 'effort preference', although the closest they come to using the term is:
We evaluated effort investment by identifying the indifference point, employing the COG-ED Paradigm (Westbrook et al., 2013). The indifference point represents the point at which participants exhibit equal preference between two choices, perceiving the options as equally favorable
It's not named as the EEfRT task, but it's described as pretty much the same thing:
The participants engaged in a sequence of choices, selecting between more or less demanding levels of the above N-Back WM task using the left or right key. The lower effort opportunity consistently featured the 1-Back task, while the higher effort opportunity could encompass any other task difficulty. The participants had 5 seconds to respond to each offer before the subsequent choice presentation. If participants did not respond, the offer reappeared in the subsequent sequence of choices. Higher-effort opportunities consistently yielded a fixed monetary reward, either 2 € or 4 €. Simultaneously, the lower effort investment was initially set at half the reward of the higher effort option (1 € or 2 €). Based on the choice, the amount offered for the less-demanding task was titrated until the indifference point was reached (following the methodology of Westbrook & Braver, 2015). We presented the offers in a pseudorandomized manner across eight staircases, with each of the four levels of difficulty encompassing two staircases for the high (4 €) and low (2 €) monetary reward of the difficult task. Each staircase contained six stairs that could not be revisited until reaching the indifference point. The collected data encompassed all decisions made by the participants.
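For what it's worth, the titration procedure they describe is easy to sketch: offer money for the easier option and nudge the offer up or down depending on which option gets picked, until the two feel equivalent. The version below is a generic staircase under my own assumptions (the easy offer starts at half the hard reward, as in the quote, but the halving step sizes and the simulated chooser are mine), not the authors' implementation.

```python
# Generic staircase titration toward an indifference point, in the spirit
# of the COG-ED procedure quoted above. Step sizes and the simulated
# participant are my own assumptions, not the study's code.
from typing import Callable


def titrate_indifference(choose_hard: Callable[[float, float], bool],
                         hard_reward: float = 4.0,
                         n_steps: int = 6) -> float:
    """Adjust the easy-task offer until the participant is indifferent.

    choose_hard(hard_reward, easy_offer) returns True if the harder option
    is picked at those offers. The easy offer starts at half the hard
    reward and the adjustment step is halved after each decision.
    """
    easy_offer = hard_reward / 2.0
    step = hard_reward / 4.0
    for _ in range(n_steps):
        if choose_hard(hard_reward, easy_offer):
            easy_offer += step   # hard still preferred: sweeten the easy offer
        else:
            easy_offer -= step   # easy preferred: make it less attractive
        step /= 2.0
    return easy_offer            # approximate indifference point


# A simulated participant who demands a 1.5 euro premium before choosing
# the hard task settles near hard_reward - 1.5 = 2.5:
print(round(titrate_indifference(lambda hard, easy: hard - easy > 1.5), 2))
```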


The study:

Effects of Perceived COVID-19 Exposure and Action-Outcome Predictability on the Motivation to Invest Cognitive Effort
https://econtent.hogrefe.com/doi/full/10.1024/1016-264X/a000392

Abstract: Everyday life situations characterized by poor controllability because of restrictions and uncertainty about action outcomes may attenuate motivational states and executive control. This article explores the interaction of a prior experience with COVID-19 and the susceptibility to respond to a challenging situation with low action-outcome predictability. We assessed cognitive effort readiness as the willingness to invest in cognitively demanding tasks. Individuals with a COVID-19 history exhibited a more pronounced reduction in cognitive effort readiness after experiencing experimentally induced action-outcome unpredictability compared to controls. These results suggest a generalization of perceived loss of action-outcome control among individuals with a COVID-19 history. These findings contribute to conceptualizing and assessing the long-term consequences of pandemic-induced emotional and motivational problems.
 
I feel sad for the kids who underwent that testing, presumably in the hope that they were contributing to some meaningful research.
 