Seems to me the problem is more about qualitative vs quantitative data, which is kind of the same thing but also not really. I don't see the standard statistical tools, as commonly used in medical research, as valid on qualitative data; they only work on quantitative data. Questionnaires are the worst form of data gathering, right after randomized trials, especially when they consist largely of subjective qualitative questions.
The issue of subjective vs objective isn't really about the data themselves anyway, because some subjective data are amplified to comical levels, while other, better, less subjective data are completely dismissed, as befits the agenda of the person doing the analysis, the sponsors of the research, the institutions, the politics, and so on. That can't be fixed at the level of the data, because the problem isn't with the data in the first place.
It's entirely possible to do a lot of useful research on subjective qualitative data, but I have not seen that in medical research, in large part because the field is far too biased. The discipline has not built the relevant skills and expertise to achieve that, and does not apply its own rules consistently.
Most of that ultimately comes down to bias. Medical research is absolutely awful at handling it, and it's most awful precisely where it matters most. High-quality objective data mean almost nothing when there is this much bias. The total failure in dealing with LC has fully exposed this giant flaw in the system: it's all about biases and agendas, about who does what for whatever purpose they may have.
If the biases were minimized, things would work out OK, but instead they are massively amplified, precisely for the purpose of pushing agendas. It's like the value of a jury trial: terrible if the jurors are biased and corrupted, good if the process plays out fairly. Same with judges and leaders: if they are individually biased and/or corrupt, no system of rules can make them do a good job.
The problem here is not the data, or what the patients experience; it's a skill issue within the profession, and no one in the bubble is comfortable saying it out loud. The blame-free approach of the profession is not sustainable; no system can work this way. If it weren't for the relentless progress of science and technology, medicine would actually progress very little. Most of the gains are only possible through technology, where this whole problem goes away because the analysis is ultimately not up to human judgment, which is terrible. That process works very well where it applies, but on issues where technology still lags, everything is stuck in pre-science days.
The nexus of evidence-based medicine and biopsychosocial ideology has proven, beyond any possible doubt, that the profession cannot handle this on its own. Things only work out when measurements are impossible to rig, and researchers will always try to rig anything they can get their hands on, because they so badly want what they're working on to be important. Everything is built to enable this.
It's not the tools, it's the users. It's not even the data or the statistical tools, it's the profession itself. It's the biases, the agendas, the egos, the politicking, and everything done in secret, behind closed doors. Outside of scientific certainty, everything is politics, and the politics of health care are totally screwed up because the victims have a uniquely low capacity to force anything to happen.