
Yeah, if we take those words at face value, then that would make the HVs stupid and the PI-ME/CFS participants smart. Because if you are faced with a choice of two tasks with equal rewards and equal probabilities of receiving the reward, why would you not choose the easy task?
But I think we can safely say that what the investigators did in this particular experiment is less than clear.
Walitt has pitched it as if it were a game that follows the basic life cliché of 'you work harder, you get more reward'.
But the tool he has used deliberately ensures that there is no particular strategy that 'wins' or is better. You aren't supposed to choose 'hard' when you get a low-probability task (an 88% chance it is a 'no-win' and won't count).
What Treadway invented isn't a test of 'effort preference' but a tool that allows the pattern/shape across all the different factors to be studied (the amount on offer for hard vs easy varies along with the probability), in order to see sensitivity to these different subtle aspects of the test - and 'subtle' is a key point, noting the issue with fatiguability/disability below.
It isn't even the cliché of 'when you have a cold you will pick up that £10 note sitting on your back garden/lawn, but when you have flu you can't'. But he is trying to pitch it like that, by making the game so impenetrable and complicated that no one looks it up and sees it doesn't test even that.
The amounts of money involved are deliberately not huge, so that the task relies on other intrinsic individual differences - but the final total also isn't 'guessable', and there is very little you can do to affect what you end up with.
It wasn't really supposed to be a test of effort (EDIT: at least not with the 'metric' Walitt has claimed of 'just take the % of hard choices'), but of how people interpret all these little subtle varying elements of choice behaviour when they are varied at such a speed that responses have to be instinctive and can't have a social element.
In the Treadway et al (2009) example the session is 20 minutes, a hard trial takes about 30 seconds on average and an easy one about 15. If someone approached the test wanting to pit their little finger against getting the bar up the screen and only chose hard, they would get through about 40 trials. Assuming the probability levels are presented randomly and roughly evenly, that's 40/3 ≈ 13 trials at each level. Only 50% of those even 'count': the low level's 12% win chance is the inverse of the high level's 88%, so if you were choosing blind and averaged across levels, half of all trials would be a 'no-win'. Those trials are literally pointless - you get a screen at the end saying the trial 'doesn't count'.
So yes, someone who just selected 'hard' would, in an averaged sense, spend literally 50% of their time on trials that 'don't count'. But then so would someone who only selected 'easy'.
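A quick back-of-envelope sketch of that arithmetic (assuming, as described above, a 20-minute session, ~30 s hard trials, ~15 s easy trials, and an even split of the three probability levels - the exact durations are the poster's figures, not checked against the paper):

```python
# Back-of-envelope: an all-'hard' run of the EEfRT, using the figures above.
session_seconds = 20 * 60      # 20-minute session
hard_seconds = 30              # ~30 s per hard trial
win_probs = [0.12, 0.50, 0.88] # the three win-probability levels

hard_only_trials = session_seconds // hard_seconds      # ~40 trials
per_level = hard_only_trials / len(win_probs)           # ~13 per probability level

# If you choose blindly, the average chance a trial is a 'no-win':
avg_no_win = sum(1 - p for p in win_probs) / len(win_probs)

print(hard_only_trials)        # -> 40
print(round(per_level, 1))     # -> 13.3
print(round(avg_no_win, 2))    # -> 0.5, i.e. half the session 'doesn't count'
```

The same 0.5 falls out for an all-'easy' run, since the no-win chance doesn't depend on which task you picked.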
So the point of displaying all of these other factors is to give those playing the game the impression that some trials are more 'worth it' than others.
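To make that 'worth it' trade-off concrete, here's a toy expected-value sketch. The dollar amounts are my illustrative assumptions (easy fixed, hard varying), not the actual values in either Treadway's or Walitt's version, and it deliberately ignores fatiguability:

```python
# Toy sketch: whether 'hard' pays more per second flips with the offered
# amount, so no blanket strategy dominates. Amounts are illustrative
# assumptions, NOT the study's values; fatiguability is ignored entirely.

EASY_REWARD, EASY_SECONDS = 1.00, 15
HARD_SECONDS = 30

def dollars_per_second(reward, win_prob, seconds):
    """Expected winnings per second spent on one trial."""
    return reward * win_prob / seconds

for hard_reward in (1.30, 2.40, 3.80):
    for p in (0.12, 0.88):
        easy = dollars_per_second(EASY_REWARD, p, EASY_SECONDS)
        hard = dollars_per_second(hard_reward, p, HARD_SECONDS)
        print(f"hard offer ${hard_reward:.2f}, win prob {p:.0%}: "
              f"{'hard' if hard > easy else 'easy'} pays more per second")
```

Note that in this toy comparison the win probability cancels out of 'which pays more per second' - the offered amount decides it trial by trial, which is exactly why the only way to play is to weigh each offer as it flashes up, not to fix a strategy in advance.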
EDIT: and the 'tool' is more complex, with lots of analysis parts, because it is looking at how sensitive different 'individual differences' are to things like 'reward magnitude' vs 'probability it is a no-win' and so on. Of course, when you add in complications like fatiguability and PEM, that creates a significant additional imbalance that isn't accounted for in the EEfRT, because it was explicitly designed and validated on people without physical issues. So for ME/CFS the balance between all of these factors - validated only on the physically healthy - gets 're-weighted' anyway.
It wasn't about effort on its own - which is why the acronym is unfortunate - but about looking at how (they think) individual differences play out. Within that they sometimes mean things like schizophrenia, other times unvalidated small personality traits. The curiosity is in the little patterns of how people chose: some might be more responsive to, say, 'high probability'; others might be 'sensitive' (which they suggest happens with anhedonia) to the kick in the teeth of doing all your clicks and then being told it's a 'no-win', and so become increasingly likely to choose easy for everything except high-probability trials (or even those) as time goes on.
It wasn't supposed to be about chucking out one figure at the end ('% hard') - particularly if you've modified the task so you have nothing validated from other trials to compare it to - and suggesting that group are the harder workers, which is how Walitt seems to be pitching it. The test just doesn't operate like that.
Treadway et al have thrown in all of these different factors precisely so that the task shows individual differences - and fatiguability would be a huge one, massively outweighing anything else, particularly if aspects have been weighted to outweigh other 'rewards and downsides' within the game.
I think it would help for us to have a separate thread on this, so we can get these points about the game across before people comment based on the assumption Walitt is trying to sell: that the test he has used 'backs up' or 'tests' the 'construct' he has invented. That framing isn't helpful, because ironically the argument itself then inadvertently validates something that isn't valid.
It's hard enough to communicate what he has done even with a clear run at it. I know he has drip-fed his non-concept through other parts of his paper, but interrogating the concept itself in a 'clear run' would let us show where the things he writes - phrased so that readers assume they are there/validated/true - are in fact missing.