I don't think we have an open discussion specifically on the problems of bias in academic research. I initially titled this "Bias in science", but really the problem is with academic research, too much of which makes no attempt at being scientific. And I don't mean interpretive dance academia here, no disrespect intended. It's become evident how big a problem this is, and how it has essentially nullified all efforts when it comes to researching chronic illness, which is basically three biases in a trench coat. But it's a wider problem: this is a root cause that affects all of academia, it just has far more severe consequences for us, with a few hundred people having essentially total, unchecked control over millions of lives. Most notably, the occasional discussion of this from academia itself misses the point entirely, or focuses on the wrong things, or is entirely symbolic, like Cochrane's recent guideline on clinical research, which they don't actually apply themselves, or at least have more exemptions from than they have rules. So this is to discuss the problems of bias and how they affect academia and scientific research, generally and specifically.
I am starting this thread in part because of this excellent video from Veritasium. It presents the problem of bias in the interpretation of data, and how in some cases being smarter can lead to being more wrong than average. I think a lot of this explains what is going on in the mediocre research and ineffective health care we are subjected to: the people working on them just don't believe the data they produce unless it validates their expectations, are convinced they know better, and are simply trying to figure out how to apply science to get the results they want. Which is the exact opposite of what the scientific method is about, but they are so biased that they can't see it. The video uses political bias as its example, and I'd argue that the ideological bias in evidence-based medicine is at the extreme end of that scale, at 10/10, basically maximized. And there is the 'tribal' factor, represented in the video by political parties, but it's obvious how much peer pressure there is in academia to report that treatments widely asserted to be effective must be effective. If a treatment is 'effective' and 'evidence-based', how can anyone argue that it's not? It's in the names! But the names are themselves immensely biased. https://www.youtube.com/watch?v=zB_OApdxcno
And this other one, too. I really enjoy Sabine's channel. She is a disgruntled academic who is exposing a lot of the problems in physics, her academic discipline, but it's absurd how much more severe they are in health care. She also talks about news, studies, and other things, so she doesn't just rant for its own sake. For example, a common thing she rails against is massive, expensive projects, like particle accelerators, that don't really move the needle, but even the biggest particle accelerator project, the LHC, cost a fraction of the money lost to the many failures of evidence-based medicine and psychosomatic ideology. It's basically cents to a $100 bill by comparison. When fundamental physics research stalls, it just stalls; people don't die. Unlike with bad research in health care, where millions of lives have basically been sacrificed to a mediocre 19th century belief system, and continue to be, even though all the evidence shows otherwise. Which is a point she emphasizes in this video: they know it doesn't work, they don't care, they can't stop themselves. I wanted to leave a comment, but even though the video was posted yesterday, it already had almost 9K comments, so it would have been lost. It would probably take a while to get her to see the similar problems in health care academia, but judging from her criticism of physics research, she would absolutely lose her mind at how much worse it is here. In this video she gave an example of neurologists making stuff up about something or other (I forget exactly what), but they actually do so much worse than that; even her example is tame by comparison. https://www.youtube.com/watch?v=HQVF0Yu7X24
Just a quote from an article I saw earlier this year. The article was based on an interview with Professor Tamás Horváth, a Doctor of Veterinary Medicine whose main interest is neuroscience; he is Hungarian but has been working at Yale since the fall of communism. Here is the relevant passage (translated by ChatGPT): Steering the conversation toward the world of science more generally, Alinda Veiszer suggested that globally the acceptance of science is declining: opinions are becoming dominant, and facts are increasingly being questioned. Tamás Horváth agreed, but added, "I work in science, and I have to say that science is not objective either, and a lot of rubbish appears even in the most prestigious journals." He cited a survey conducted on behalf of pharmaceutical companies that examined how many scientific articles out of a hundred stand the test of time; they found that 80 percent do not. He believes this is because there are two ways to conduct science: one can clone genes, or one can gather data (which is what they do) and draw conclusions from it, creating a story that is sold. Time will tell whether the story is true or not. So it doesn't hurt to be skeptical, he says.
Hossenfelder did a good video on Long Covid a while back. https://www.youtube.com/watch?v=kJ3l95udXok
In some critiques of studies on here, the lack of reporting of bias is sometimes mentioned. I'm not sure it's necessarily because researchers are unaware of the biases; they may go unmentioned because they are believed to be well known. Then there are problems such as the character limits of articles: it's easy to remove text that is believed to be superfluous. For example, I've been getting into repeat discussions about whether or not "measurement error" should be listed as a source of bias/weakness in a study, when I feel that this type of error is present in everything we do. Did something happen with the machine so some readings are wrong? Did a lab worker or data analyst make a mistake that wasn't picked up because the results fell within the expected range? I don't see the point in mentioning that this can happen, because of course it can. Biases that are specific to the study in question are another matter. But where to draw the line? If you look at repeat measurements in longitudinal studies, you can quickly get some sort of survivor bias as people stop answering or die over time. However, those who are non-responders or dead might not have been part of the subset you are interested in to begin with. So while survivor bias is still there, if few people are expected to fall into this group, due to for example the age range you are interested in, should survivor bias be mentioned or not?
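To make that concrete, here's a minimal, made-up simulation of the kind of survivor bias I mean in repeated measurements. Every number and the dropout rule are invented purely for illustration: the true average decline between two waves is 5 points, but because the people who decline most are the least likely to be measured at the second wave, the completers alone suggest a much milder decline.

```python
import random

random.seed(1)

# Toy two-wave longitudinal study. All parameters below are hypothetical,
# chosen only to demonstrate the mechanism.
N = 100_000
baseline = [random.gauss(50, 10) for _ in range(N)]   # wave-1 health score

# True effect: everyone declines by 5 points on average between waves.
followup = [b - 5 + random.gauss(0, 10) for b in baseline]

# Hypothetical dropout rule: the worse your wave-2 health, the less likely
# you are to be measured at wave 2 at all (death, too ill to respond, etc.).
def measured_at_wave2(f):
    p_respond = max(0.05, min(0.95, (f - 20) / 50))
    return random.random() < p_respond

completers = [(b, f) for b, f in zip(baseline, followup) if measured_at_wave2(f)]

def mean(xs):
    return sum(xs) / len(xs)

true_change = mean(followup) - mean(baseline)
observed_change = mean([f for _, f in completers]) - mean([b for b, _ in completers])

print(f"true mean change:             {true_change:+.2f}")     # about -5
print(f"mean change among completers: {observed_change:+.2f}") # much closer to 0
```

Which, I think, is exactly the judgment call: how much this distorts the result depends entirely on how strongly dropout is linked to the outcome in the group you actually care about. If the link is weak there, maybe it doesn't need a mention; if it's strong, leaving it out is a real omission.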