An idea for improving research quality

Creekside

The present system seems to be about winning points for publishing something that will make other people's papers look better by citing it, regardless of how bad the study actually is. Could a system be set up in which scientists win (desirable, career-benefiting) points for pointing out flaws or weaknesses in published studies? Reward people for actually reading papers and finding which ones are flawed.

Just an idea from an outsider to the research system.
 
I think people would just find ways to abuse it. Who's going to check and determine whether the correction is correct?
Well, journal editors already determine that currently. Maybe the difference could be that the person who reported something to a journal that eventually led to a correction or retraction would get that credit linked to their name.

Although it's probably a good way to be ostracized by some in the scientific community if you become known for causing these kinds of headaches for authors.

Edit: It incentivizes an adversarial landscape in science, which doesn't sound too appealing, with authors knowing there will be people poring meticulously through every word looking for any possible error.
 
I think most alternative models for publishing and rating science, of which a tremendous number exist, with different scientists preferring different systems, often end up suffering from what some would call "Goodhart's law".

Reward people for actually reading papers and finding which ones are flawed.
I think this goes against the fairly general consensus that there's no need to point out the flaws in others' work in the form of publications; you should just do things better yourself (flaws are typically discussed elsewhere, such as in seminars). In your system you'd just end up with people writing studies about nonsense studies instead of doing actual scientific work. People might even start publishing more nonsense collectively, since it would let them "earn points twice".

The problem with the current publication model is that people publish a tremendous amount of material, but very little of it has any influence on the scientific understanding of problems, and that would only be worsened by the above system. You publish to publish. To some extent this has always been the case. A majority of scientists probably write nothing that needs publishing, good ones write a handful of papers that contribute something, and only a handful have more than 20 meaningful contributions.

I think it would be perfectly fine if every scientist were only able to publish 5 papers per year. The rest of their work could still be uploaded to repository servers or published under pseudonyms, as others have done in the past, so nothing would be lost.

I also don't think it makes sense to treat all of science as one thing. There seem to be large differences between disciplines on the above topics, with some having far lower standards than others and the implications varying widely between disciplines.
 
I'm almost a broken record on this, but: this is something the software industry has mostly (/somewhat) solved. Because our job isn't about programs, or computers, or software, or anything like that; it's information. We deal in information: how to create it, transform it, assess it, transfer it, validate it, and so on. So, out of necessity, our information processes are very mature, including the information processes that allow us to do the rest of our work, a giant system of trial-and-so-so-many-errors.

Of course it varies widely, but most large software companies have bounty programs: if you report a bug they didn't know about, you can get rewards, usually cash, and sometimes a job offer, especially if the bug concerns security or is especially tricky. There are open programs for finding and exploiting security holes, and they are taken very seriously, completely unlike the academic publishing model, which basically considers all issues closed once a paper is published. Open source projects, which are a closer model to academic research, also do similar things, mostly for reputation gains, and so would be the best model to emulate.

Of course, one big difference is that software developers want to know about bugs. To correct them. Quickly. Whereas in academic research, every error pointed out is immediately dismissed as a personal attack, and things don't get much better from there. Completely different cultures. Despite the fiction of science being "self-correcting", scientists hate being corrected. Absolutely hate the living hell out of it, because a mistake is treated as a failure rather than as part of a continuous process of iterative improvement, in which after-market service and maintenance are crucial to the industry.

It's seriously demoralizing to think of how much medicine could be improved if it actually did multidisciplinary work, spanning across actual disciplines rather than specialties of the same discipline. Engineering methods, and especially the processes we use to deal with information, could radically improve things there, but of course people trained solely on memorizing bits of human biology don't really see the potential. Which is odd, considering how much of clinical work is basically following scripts, just with far less error- and exception-handling, testing, or validation.
 
I don't think that's going very well, though.
Or at all. Mostly they seem to hate the idea, see them as a black mark on their reputation, and hide behind authors who assured them that whoever thinks they saw an error is actually confused, and that their work is impeccable. Then they act very confused about what to do next, wishing that the problem would just "solve itself" by the complainer simply letting it go.

At least that's what I've noticed happens about 99% of the time. There are exceptions to this rule.
 
I think this goes against the fairly general consensus that there's no need to point out the flaws in others' work in the form of publications; you should just do things better yourself
That was literally Michael Sharpe's response to all the criticism of PACE. He pretty much said, "well, if you don't like it, just do your own $8M trial".

And that's when I truly understood that this industry is simply not serious, and most of what they do is performative horseshit where results are completely irrelevant to everyone involved most of the time, and the rest is rarely any better.
 
Or at all. Mostly they seem to hate the idea, see them as a black mark on their reputation, and hide behind authors who assured them that whoever thinks they saw an error is actually confused, and that their work is impeccable. Then they act very confused about what to do next, wishing that the problem would just "solve itself" by the complainer simply letting it go.

At least that's what I've noticed happens about 99% of the time. There are exceptions to this rule.
After initially contacting customer service to ask why the peer reviews (of some ME/CFS garbage) were not published for a certain piece, I ended up in a brief email conversation with the chief editor of the journal. He didn't respond to my questions and essentially accused me of attacking his integrity by questioning a decision made by one of his editors and whether that decision was in line with their stated values.

After sending a long list of suggestions to my local library's e-book app, essentially implying they had completely neglected what I believe is an important aspect of the user experience, I got a profound thank-you, a promise to pass it on to the developers (but no guarantees), and was asked to keep sharing ideas.

I have no clue whether it will result in any changes, so it might just be empty platitudes, but the difference in how my questions were met is astounding.
 
I'm almost a broken record on this, but: this is something the software industry has mostly (/somewhat) solved. Because our job isn't about programs, or computers, or software, or anything like that; it's information. We deal in information: how to create it, transform it, assess it, transfer it, validate it, and so on. So, out of necessity, our information processes are very mature, including the information processes that allow us to do the rest of our work, a giant system of trial-and-so-so-many-errors.

Of course it varies widely, but most large software companies have bounty programs: if you report a bug they didn't know about, you can get rewards, usually cash, and sometimes a job offer, especially if the bug concerns security or is especially tricky. There are open programs for finding and exploiting security holes, and they are taken very seriously, completely unlike the academic publishing model, which basically considers all issues closed once a paper is published. Open source projects, which are a closer model to academic research, also do similar things, mostly for reputation gains, and so would be the best model to emulate.

Of course, one big difference is that software developers want to know about bugs. To correct them. Quickly. Whereas in academic research, every error pointed out is immediately dismissed as a personal attack, and things don't get much better from there. Completely different cultures. Despite the fiction of science being "self-correcting", scientists hate being corrected. Absolutely hate the living hell out of it, because a mistake is treated as a failure rather than as part of a continuous process of iterative improvement, in which after-market service and maintenance are crucial to the industry.

It's seriously demoralizing to think of how much medicine could be improved if it actually did multidisciplinary work, spanning across actual disciplines rather than specialties of the same discipline. Engineering methods, and especially the processes we use to deal with information, could radically improve things there, but of course people trained solely on memorizing bits of human biology don't really see the potential. Which is odd, considering how much of clinical work is basically following scripts, just with far less error- and exception-handling, testing, or validation.
There are plenty of examples of things going very wrong in the software industry as well.

And software is different from science: the purpose of software is to do things, while the purpose of science is to figure out the truth.

If one of the programs your software relies on breaks, you either have to fix or replace it so it does the same thing, or rewrite your program to achieve its goal without it.

Unless you’re proposing to codify all of science into some kind of unified system, that mechanism will never manifest.
 