The Stigma of self-report in health research: Time to reconsider what counts as “Objective”, 2026, Alwan

Dolphin


News Release 28-Jan-2026

De-stigmatizing self-reported data in health care research

Peer-Reviewed Publication
PLOS


Image: Five questions to ask before judging self-reported data as of inferior quality in quantitative health research.
Credit: N. Alwan, 2026, PLOS Global Public Health, CC-BY 4.0 (https://creativecommons.org/licenses/by/4.0/)

Professor Nisreen Alwan calls for de-stigmatizing self-reported data in health care research, highlighting Long COVID as one setting where it has unique strengths over 'objective' data.
 
The author seems to be confusing the ‘self-reported/observed by others’ continuum with the ‘objective versus subjective’ continuum, though lots of others, including the people she is critiquing, do the same. Though these may overlap, it is possible to have objective data that are self-reported and subjective data that are reported by others.

If someone with Long Covid reports that they go out of the house x times in a week, that is objective data, whereas if a neurologist reports seeing x people with FND in a week, that, I would argue, is subjective data.

This discussion also seems to confuse objective/subjective with observer/measurement reliability. For example, people's reports of the quantity of alcohol they drink in a week tend to underestimate it: the number of cans of beer someone drinks is an objective measure, but the drinker may not always be a reliable observer/reporter of it.

For me, patients should be considered at least as accurate in reporting their own symptoms as doctors are in reporting their patients' symptoms, if not more so. However, their reporting of causality, or of theoretical explanations of an underlying mechanism, is something very different. Unfortunately, the language available to describe symptoms often confuses description and explanation.

I commend the intention behind the article, but wish in general we approached the topic with more sophistication.
 
I commend the intention behind the article, but wish in general we approached the topic with more sophistication.

I agree. It is a muddle. Objective means various things and always will. The 'need to change' is something of a straw man. I find 'stigma' unhelpful, too. There are no stigmas floating about attaching to things, just other human beings being kind or unkind, informed or ill-informed.
 
Is this a plea that long covid and ME/CFS should be taken seriously even if there are no reliable objective signs yet?

Or is this an argument that psychology research should be taken seriously even if it consists of poorly blinded studies with subjective outcome measures, in the same spirit as Knoop's infamous studies?

The researchers list "five questions to ask before judging self-reported health data as of inferior quality".

They fail to address the question of whether self-reported subjective data is of good quality, even if objective data is sometimes equally bad.
 
The issue is, of course, not the importance of subjective data. It has its place. The issue has always been: is it adequately controlled to reveal the known problems with subjective self-report data so they can be factored into the analysis?

If it has been, then it is useful data. If it has not, then it is not useful, and may even be highly misleading and dangerous, as the history of ME/CFS et al. shows all too starkly.
 
Nor does the value of subjective self-report measures mean they can overrule objective measures when the two contradict each other.
 
I think there may be some confusion about context. I don't think the article is about the question of subjective or objective outcome measures for research.

I think it's about diagnosis and medical care for individual patients, where subjective refers to the patient's report of symptoms, and objective refers to signs the doctor can see, measure or test.

So in this situation, the problem with ME/CFS is that diagnosis is by symptoms, as there is nothing visible, measurable or testable to convince the doctor you are really sick.
 
It says it is, though.
Yes, but it's about whether a person has a symptom, and whether a diagnosis is valid, as in epidemiological research, not about whether a treatment worked and how symptom severity changed in a clinical trial.

In researching health conditions like Long Covid with loose and variable case definitions and no specific diagnostics, self-reported answers are still frequently considered less valid than using healthcare records of clinical diagnostic codes. Symptoms -by definition- are only assessed by self-reporting. They cannot be ‘objectified’ by clinical examination or medical investigations. These so-called “objective” measures are supplementary at best. However, when someone says they have Long Covid, doubt creeps in—not because of evidence, but because of stigma [4].

So I should have said in my post that it's not about the use of PROMs in clinical trials, which is where we see particular problems.
 
Yes but the author starts off with this example of what they think needs to change:

For years, I stood at the front of classrooms, extolling the virtues of the “objective” over the “subjective,” teaching students to prize “hard” data when examining causal relationships in epidemiology because self-reported data can be subject to bias.

That relates to causal relationships.
I am just unclear what is supposed to be new here.
 
I haven't read the article, but I'm familiar with Dr A's work re LC. I suspect (I may be wrong) she is thinking of causal relationships, specifically re LC. She was one person advocating for "Count Long Covid" during part of the pandemic (actually, she may have started that call to count LC). She was aware a lot of us were going to GPs saying ~"I've been really unwell ever since I caught covid, I haven't recovered" and being fobbed off. I was one. GPs were telling us Covid only lasts a maximum of 2 weeks and denying the causal link to covid if we hadn't recovered several months in.

This led to a massive blind spot, imo, in medicine's response to the pandemic. Some of it willful, of course, and helped along massively by the longstanding blind-cavern regarding post-viral illness. I agree with her point on this (if I've understood it!).
 
Seems to me the problem is more about qualitative vs quantitative data, which is kind of the same thing but also not really. I don't see the standard use of statistical tools, as is common in medical research, as valid on qualitative data; they only work on quantitative data. Questionnaires are the worst form of data gathering, right after randomized trials, especially when those trials consist largely of subjective qualitative questionnaires.

The issue of subjective vs objective isn't really over the data themselves anyway, because some subjective data are amplified to comical levels while other, better, less subjective data are completely dismissed, as befits the agenda of the person doing the analysis, the people sponsoring the research, the institutions, the politics, and so on. That can't be fixed, because the problem isn't even with the data, so there is no way to address it at that level.

It's entirely possible to do a lot of useful research on subjective qualitative data, but I have not seen that in medical research, in large part because it's far too biased. The discipline has not built the relevant skills and expertise to achieve that, and does not apply its own rules consistently.

Most of that ultimately comes down to biases. Medical research is absolutely awful at handling them, and it's most awful precisely where they have the most significance. High-quality objective data mean almost nothing when there is so much bias. The total failure in dealing with LC has fully exposed this giant flaw in the system, and how it's all about biases and agendas, about who does what for whatever purpose they may have.

If the biases were minimized, things would work out OK, but instead they are massively amplified, precisely for the purpose of pushing agendas. It's like the value of a jury trial: terrible if the jurors are biased and corrupted, but good if the process plays out fairly. Same thing with judges and leaders: if they are individually biased and/or corrupt, there is no system of rules that can make them do a good job.

The problem here is not about the data, or what the patients experience; it's a skill issue within the profession, and no one in the bubble is comfortable saying it out loud. The blame-free approach of the profession is not sustainable; no system can work this way. If it wasn't for the relentless progress of science and technology, medicine would actually progress very little. Most of the gains are only possible with technology, where this whole problem goes away because the analysis is ultimately not up to human judgment, which is terrible. That process works out very well where it applies, but on issues where it's still lagging, everything is stuck in pre-science days.

The nexus of evidence-based medicine and biopsychosocial ideology has proven the case, beyond any possible doubt, that the profession cannot handle this on its own. It only works out when measurements are impossible to rig, and they will always try to rig anything they can get their hands on, because they so badly want what they're working on to be important. Everything is built to enable this.

It's not the tools, it's the users. It's not even the data or the statistical tools, it's the profession itself. It's the biases, the agendas, the egos, the politicking, and everything done in secret, behind closed doors. Outside of scientific certainty, everything is politics, and the politics of health care are totally screwed up because the victims have a uniquely low capacity to force anything to happen.
 