"Great to hear it is ongoing. Thanks for working on this."

It's still in the works. I crashed and had to take a break from working on it for a bit. The current plan is for me to have the next draft of revisions ready for the co-authors to review in mid to late June. So hopefully looking at sending it to the NIH researchers in July and submission a few weeks after that.
"Does it in some way avoid the 'correlation issue' of what is causing what? Given that the disability existed before the task, the correlation with completion can't run 'in reverse'."

I think this could strengthen your argument if positioned that way.
If PHTC shows effort preference, which (ahem) is "avoiding feelings of fatigue," then it should be strongly correlated with reported disability. Instead, percent complete is more closely tied to disability. The fact that ability to complete wasn't wholly determinative of PHTC is also telling. (They are definitely related: with a small sample, a correlation of 0.3 reaching significance is meaningful enough to break the assumptions you're going after. But it also illustrates that many patients chose hard tasks anyway.)
Part 2 of Jeannette Burmeister's article on the NIH intramural study is posted on her blog. It is long and requires concentration to read, but she shreds the study, including Madian's response to an advance question.
https://thoughtsaboutme.com/2024/06...e-study-lies-damn-lies-and-statistics-part-2/
The blog is complex and I have done no more than skim it. Even that might be an exaggeration.
While this is a very complex article covering the data, this quote sums it up nicely:
"It is easy to see why the authors chose not to generate a visual of what actually happened during the EEfRT and instead resorted to manipulating the data with statistical tools until they arrived at a figure that fit their desired outcome (Figure 3a and Supplemental Figure S5e). The latter allowed them to make it look as though patients chose significantly fewer hard tasks for every single trial throughout the EEfRT while the former shows clearly that their Effort Preference claim has no legs."
Jeannette's blog said:

The assertion that the proportion of hard-task choices is the primary measure of the EEfRT is demonstrably false. Based on the EEfRT instructions, it is improper to use the EEfRT as a measure of motivation—or an alleged false perception of effort as NIH has done. In the structure of the EEfRT, always choosing the hard task over the easy task (even choosing the hard task a large majority of the time) is not optimal if one is trying to receive the maximum reward. The use of rewards is designed to be the motivating factor, and winning as much money as possible is the goal of the test. For example, one EEfRT paper clearly and simply states the following (other EEfRT papers contain similar language):
“The goal of the EEfRT is to win as much money as possible by completing easy or hard tasks.”
Hence, merely looking at the relative proportion of hard versus easy tasks is not the correct way to assess results of the EEfRT because that approach would most definitely not lead to a maximization of rewards. If the instructions had been to choose as many hard tasks as possible, then the proportion of hard tasks chosen would be the primary outcome measure, but that is not the case in EEfRT studies and was not the case in the NIH study.
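To make the blog's point concrete, here is a small sketch of the expected-value reasoning. The parameter values are illustrative, drawn from the published EEfRT task design (easy task about $1.00 in roughly 7 seconds; hard tasks $1.24–$4.30 in roughly 21 seconds; rewards paid only on "win" trials), not the NIH study's exact figures. Because the session window is fixed, a reward-maximizer compares payoff per second and skips low-value hard offers:

```python
# Illustrative sketch (assumed, approximate EEfRT-style parameters),
# showing why "always choose hard" does not maximize winnings.

def expected_value(reward, p_win, p_complete=1.0):
    """Expected payoff of one choice: the reward is earned only if the
    trial is a 'win' trial AND the button-press task is completed."""
    return reward * p_win * p_complete

def better_choice(hard_reward, p_win, easy_reward=1.00,
                  easy_time=7.0, hard_time=21.0):
    """Compare expected payoff per second: the session length is fixed,
    so time sunk into a long hard task forgoes other trials."""
    rate_easy = expected_value(easy_reward, p_win) / easy_time
    rate_hard = expected_value(hard_reward, p_win) / hard_time
    return "hard" if rate_hard > rate_easy else "easy"

# A low-value hard offer is a bad deal even at the same win probability:
print(better_choice(hard_reward=1.24, p_win=0.50))  # easy
# A high-value hard offer is worth the extra time:
print(better_choice(hard_reward=4.30, p_win=0.50))  # hard
```

So a participant playing the stated goal ("win as much money as possible") should mix easy and hard choices depending on the offer, which is why the raw proportion of hard choices is not, by itself, the right outcome measure.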
That is stark indeed, especially the final chart, which clearly shows that, if anything, patients were playing a better strategy than the healthy controls. That hardly supports the view that patients are misinterpreting/incompetent/imagining/delusional/whatever.
It is also dangerously close to straight fraud on the part of the authors. Or at least one of them.
I think there is a whole story behind this that has yet to come out, and it will not be flattering to Walitt. I very strongly suspect he pulled rank to bulldoze his bogus spin into the paper over the objections of others.
I am extremely angry about this. Every single concern expressed about Walitt from the moment he was put in charge of this critical project nearly a decade ago, concerns that were sneeringly dismissed by the head honchos at the NIH, has proven completely justified.
He has to go. Now. No possible good can come of his continued involvement, at any level.
In the meantime, I'm not sure if this was already discussed here, but in case not: her blog references comments about effort preference that NIH made in its response to reviewer comments (page 12, line 244):
"The approach selected with GEE was necessary to determine the primary objective of our study, the existence of EEfRT performance difference between the PI-ME/CFS and HV groups."
"Oh hey, the pwME won more money with their strategy (though just barely):"

[chart from blog]

When I looked at these graphs I thought: basically there's no difference between the two groups, just the sort of "noise" you'd expect (measurement error). Then I looked again and, as per your comment, they're the wrong way around, i.e. it contradicts the claim!
"He has to go. Now. No possible good can come of his continued involvement, at any level."

We can't demand that he goes, but I do think he's clearly making unfounded claims, and that is surely in conflict with NIH's mission. Perhaps I'm wrong, but the controversy around this will be noted by the higher echelons at NIH!