Analyzing ordinal data with metric models: What could possibly go wrong?, Liddell et al. 2018

And I think the reason it has been used so much is because it's very short, it’s easy to administer, it includes items related to physical and mental fatigue, and it's got eleven items.

Accuracy and relevance seem to be missing from that list.
Nonetheless, Chalder is probably right - that, no doubt, is why it has been used so much. The implication being that the true validity of such an instrument counts for much less than its apparent low cost of use. As opposed to its true cost-effectiveness, the assessment of which doesn't seem to get a look-in ... but of course that would then need some real science behind the analysis.
 
Then every person using it assumes it is a properly validated, informative set of questions which gives a properly validated, informative answer.

The fact that it needs to be interpreted for patients - what do they mean by "usual": is that last week, before I was diagnosed, before I was ill, or just how I usually feel? - disqualifies it at first glance.
 
Something like this scale is not just subjective in its usage; its whole development is subjective, from its conception onwards. Coming up with the discrete labels, and their presumed adequacy for information capture, is a highly subjective part of the development process. Then there is the need to test it for real - but what reference criteria do you test against, when it is the first measuring instrument of its kind and there are no absolutes, only subjectivity and beliefs? Almost certainly the patients tested will be those with conditions the scale's inventor is convinced their new scale best suits, and when it does - hey presto! ... it works!! And what about "independent" verification? With a new scale, and everyone part of the same club, the notion of independence goes out of the window - all just eager to prove how well it works.

And that is quite apart from the issue of converting a nebulous ordinal value to a continuously variable one, and getting any reliable sense from it.
 
And I think the reason it has been used so much is because it's very short, it’s easy to administer, it includes items related to physical and mental fatigue, and it's got eleven items.

Accuracy and relevance seem to be missing from that list.
Another thing missing from the list might be that it has a high probability of giving a positive result. Whether a positive result is true or false doesn’t seem to matter to some researchers – outcomes which confirm preconceptions are “useful”; those which don’t are not.
 
I think you also have to step back a bit when considering ordinal data - what is the nature of the real-world parameter that you are attempting to represent?

An ordinal scale is invariably a means of trying, simplistically, to map discrete value labels onto sub-ranges of a continuously variable (analogue) parameter, typically a subjective one. So if, for example, the parameter you wish to measure is "how do you feel your temperature to be?", you might have ordinal values ranging from "desperately cold" all the way through to "desperately hot". But the underlying parameter, subjective or not, will itself be an analogue one.

So the process of a person deciding what their answer to the question is will itself be a translation of their real, analogue temperature sensation into the ordinally labelled value that seems the "best fit" for them at the time. It gets tricky to answer if their real, analogue sensation feels close to a boundary between two ordinal labels, and in such boundary cases the answer will also be much more liable to bias, depending on how they are feeling at the time - boundary-condition decisions are often subject to such in-the-moment biases.

So any subsequent mapping of ordinally scaled readings onto an analogue scale is only of any use if it can adequately reverse-map back to a good approximation of what the person's original analogue sensation was at the time they answered the question.
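To make that concrete, here is a rough toy simulation (Python; the five labels, cut-points and bin midpoints are made up for illustration, not taken from any real instrument) of a latent "felt temperature" being squeezed into ordinal categories and then naively reverse-mapped back to a number. The reconstruction error is worst exactly where you would expect: near the cut-points, and in the open-ended end categories.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 5-point "felt temperature" scale: labels, cut-points and bin
# midpoints below are illustrative assumptions, not from any real instrument.
labels = ["desperately cold", "cold", "neither", "hot", "desperately hot"]
thresholds = np.array([-1.5, -0.5, 0.5, 1.5])       # cut-points on the latent scale
midpoints = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])   # one representative value per label

def to_ordinal(latent):
    """Map continuous latent sensations to ordinal categories 0..4."""
    return np.searchsorted(thresholds, latent)

latent = rng.normal(0.0, 1.0, size=1000)   # simulated true (analogue) sensations
codes = to_ordinal(latent)
recovered = midpoints[codes]               # naive reverse-mapping back onto an analogue scale

err = np.abs(recovered - latent)
near_cut = np.min(np.abs(latent[:, None] - thresholds), axis=1) < 0.1
print("example: latent", round(float(latent[0]), 2), "-> answer:", labels[codes[0]])
print("mean abs reconstruction error:", round(float(err.mean()), 3))
print("  near a cut-point:           ", round(float(err[near_cut].mean()), 3))
print("  away from cut-points:       ", round(float(err[~near_cut].mean()), 3))
```

Boundary cases are also where small in-the-moment biases flip the answer into a different category altogether, which the reverse-mapping then treats as a full bin-width shift.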

And this, of course, is about just one question, about a single category being queried of the person. Once you have multiple questions, sub-categorised or not, the potential for error between reality and what is measured will, I would think, be high.

There is an engineering technique called "fuzzy logic", which is a way of mapping natural, human-skill solutions into engineering ones, and the one thing that is for absolute sure is that the mappings are highly unlikely to produce nice neat linear translations - if they did, then fuzzy logic would not have been necessary. (Note: I'm not sure if fuzzy logic is used so much these days.)
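For anyone curious, here is a tiny sketch of what fuzzy membership functions look like (the temperature breakpoints are invented for this example, not taken from any standard): a single crisp reading can belong partly to two labels at once, with membership grades that change non-linearly as the reading moves - quite unlike a hard ordinal cut-point.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function: 0 outside [a, c], peaking at 1 at b."""
    return float(np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0))

# Illustrative fuzzy sets for felt temperature in degrees C; the breakpoints
# are made up for this example.
def memberships(t):
    return {
        "cold":        tri(t, -10.0,  5.0, 18.0),
        "comfortable": tri(t,  12.0, 20.0, 26.0),
        "hot":         tri(t,  22.0, 32.0, 45.0),
    }

for t in (10.0, 17.0, 24.0):
    print(t, {k: round(v, 2) for k, v in memberships(t).items()})
# 17 C comes out as a bit "cold" AND mostly "comfortable" at the same time;
# a hard ordinal scale would force it entirely into one box or the other.
```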
 
found that 100% of the articles that analyzed ordinal data did so using a metric model

Thank you for posting this. I'm shocked that it was 100% of studies they looked at. I took a stats 101 class online and "don't treat ordinal data as interval data" was just about the entirety of the third unit. Do researchers not take stats? How has this happened?
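For what it's worth, here is a quick toy simulation (with invented thresholds and group spreads) of roughly the kind of failure mode the paper warns about: two groups with identical latent means but different spread, pushed through unevenly spaced response thresholds and then compared with a t-test on the 1-5 codes. The metric test happily reports a "significant" location difference that does not exist on the latent scale.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Unevenly spaced response thresholds (invented for the example).
thresholds = np.array([-0.5, 0.5, 1.5, 2.5])

def to_likert(latent):
    """Cut a continuous latent variable into Likert-style codes 1..5."""
    return np.searchsorted(thresholds, latent) + 1

n = 2000
group_a = to_likert(rng.normal(0.0, 1.0, n))   # latent mean 0, sd 1
group_b = to_likert(rng.normal(0.0, 3.0, n))   # latent mean 0, sd 3 (same mean!)

res = stats.ttest_ind(group_a, group_b)
print("mean of codes:", group_a.mean(), "vs", group_b.mean())
print("t =", round(float(res.statistic), 2), " p =", res.pvalue)
# The latent means are identical, yet the means of the ordinal codes differ,
# and the t-test on the codes comes out wildly "significant": the metric model
# has manufactured a location effect out of a variance difference plus
# asymmetric thresholds.
```

An ordered-probit style model, which estimates the thresholds rather than assuming the codes are equally spaced numbers, would not be fooled in the same way.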
 
she was suggesting that this was really Simon's idea and that she just made up a bit of a scale for him or something. So maybe disowned is a bit strong. Maybe she was just too modest to acknowledge her mastery of method

I remember reading this, too. Wish I could find the quote.

Made me think that he or his lab was really the one to come up with it, but she was the lucky one whose last name started with C, so they could get their CFS for cfs. For the people who like such cute names for things and all.
 
Thank you for posting this. I'm shocked that it was 100% of studies they looked at. I took a stats 101 class online and "don't treat ordinal data as interval data" was just about the entirety of the third unit. Do researchers not take stats? How has this happened?
I have actually never seen a paper on ME/CFS that used an ordinal model for count data, while Likert-type scales are used all the time.
 