Should research teams include some engineers?

Like everything else, biological processes can be modeled by mathematical equations. Many of those processes involve feedback loops. One factor medical researchers might ignore is time. Having a molecular signal arrive 12 ms--or 12 minutes--late might make a big difference in effect, but you won't find that if your testing method doesn't take the time factor into account. Systems theory is taught in engineering schools, but is it taught in medical schools? Adding an engineering perspective to medical research might be helpful.
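To make that concrete, here is a toy sketch (not modelling any real pathway; the rates, the Hill-type feedback and the delay are all invented) of a negative feedback loop where the inhibitory signal arrives late. Without the delay the level settles to a steady state; with a long enough delay the same loop oscillates, so an assay that ignores timing sees very different snapshots of the "same" system.

```python
# Toy negative-feedback loop with a transport delay; purely illustrative.
import numpy as np

def simulate(delay_steps, n_steps=4000, dt=0.01, k_prod=1.0, k_deg=0.5):
    x = np.zeros(n_steps)
    x[0] = 0.1
    for t in range(1, n_steps):
        lagged = x[max(t - 1 - delay_steps, 0)]      # level "seen" delay_steps ago
        inhibition = 1.0 / (1.0 + lagged ** 4)       # Hill-type negative feedback
        x[t] = x[t - 1] + dt * (k_prod * inhibition - k_deg * x[t - 1])
    return x

# Spread of the level over the last stretch of each run:
print("no delay:   ", round(float(np.ptp(simulate(0)[-1000:])), 3))    # ~0: settles
print("with delay: ", round(float(np.ptp(simulate(400)[-1000:])), 3))  # >0: oscillates
```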
 
I think the biggest thing missing from much of research compared to engineering is focus on the priority tasks. I have worked in both R&D and engineering design, and R&D took forever to get things done. In retrospect we spent a lot of time thinking and talking, being "creative", instead of doing, although at the time I would have strongly disagreed with that.

For example, I see researchers re-inventing the wheel often, thinking their experiment will be the best, when many times you can pick up the phone and within three calls you have found a good practical approach. Well, in my time it was the phone. For example, if I wanted to do a Seahorse experiment I would pick up the phone to Daniel, who has used it extensively in different cell lines, and learn the pros and cons.

It's especially true for sample collection, processing, freezing and thawing, controlling batches etc. Find out best practice and use it. One example is culture media. If I'm doing cell experiments I need to understand how the culture media might affect my experiment. If I have doubts, make a few calls.... The same applies to metabolite half-life. If something has a short half-life it's going to be hard to measure accurately and repeatably. It's the practical aspects that engineering brings that scientists can miss. Otherwise your experiment could be useless because no one can replicate it. That could be five years of work down the drain.

I've seen best practice used for a pilot study, and then a new "better" creative team gets assigned, does all sorts of experimental tuning, and one year later chooses the exact same experiment the pilot study used except theirs is somehow presented as being much better. But they don't get the same results because they missed a crucial sample handling step.

In engineering you have project management. That consists of questions like:
* Where are you on the plan?
* What issues do you have that need help or escalating?
* What caused the delay? What can we learn and improve from that?
* Does it still make sense to work on this given that X just happened?
And also regular project reviews where colleagues can give feedback.

And there's too much bureaucracy. For example, sometimes getting access to frozen samples can take many months, and what was meant to be a quick experiment turns into a time-waster that just distracts.

I don't think there is a simple solution unfortunately.
 
My completely uneducated impression is that most medical researchers underestimate how much of a hard science it really is. You need rigour.

But if JE’s anecdotes are anything to go by, some of the solutions are so counter-intuitive that you might have a hard time making any substantial contributions unless you’ve got a free thinking mind.

The problem is combining both, possibly in a team, but ideally in the same person.

And it seems like most of the BPS proponents either don’t have any of them, or are unable to apply them.
 
Like everything else, biological processes can be modeled by mathematical equations. Many of those processes involve feedback loops. One factor medical researchers might ignore is time. Having a molecular signal arrive 12 ms--or 12 minutes--late might make a big difference in effect, but you won't find that if your testing method doesn't take the time factor into account. Systems theory is taught in engineering schools, but is it taught in medical schools? Adding an engineering perspective to medical research might be helpful.

I think the answer is that to get anywhere working out disease processes you need an intuitive understanding of complex dynamics. I am not sure that teaching systems theory can ever provide this. UCL set up a Systems Biology centre which I was invited to join. It was all jargon from people with no intuitive grasp. "Systems Medicine" is to me another of those empty fads.

Some engineers will have the grasp. The ones who retrain in medicine perhaps unsurprisingly don't seem to (those who can, do; those who can't, retrain).

My impression is that very few people have the grasp needed to see intuitively why one model of a disease could fly and another would never fly. And yes, it has as much to do with time as the spatial patterns of biochemicals.

The truth is that we understand very little about disease mechanisms other than some simple concepts like "too much of this". Much of what is taught is wrong. The trouble is that even if you identify more subtle dynamic patterns, nobody understands them, so the received wisdom never changes.

My experience is that you have to work in groups of people with complementary neural skills, but I am not sure it matters much what formal training they come with.
 
I cultivated an interest in software development, and the attention to quality control I saw in those teams is a world apart from how things seem to be done in research and medicine.

They didn't just build software. They built documentation with the explicit goal of allowing a person with no prior involvement to understand how things worked. In non-commercial projects, the code was public and easily accessible to everyone, public participation was welcome, and the project was set up to allow people with a wide variety of skills, experience and time to contribute. There were automated systems that ensured things were working as intended by regularly running tests. The goal wasn't just to build something, but something that worked correctly and reliably.

An approach used was to intentionally try to make the software fail or behave incorrectly. People were encouraged to think about all the ways in which things might fail. Anyone could report issues on a public bug tracker.
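As a concrete (and entirely invented) example of that habit, a tiny automated test suite might look like this, with one test for the happy path and one that deliberately drives the code into its failure mode:

```python
# Minimal sketch of "try to make it fail" testing; the function and values are made up.
import pytest

def safe_ratio(numerator, denominator):
    """Return numerator / denominator, refusing to divide by zero."""
    if denominator == 0:
        raise ValueError("denominator must be non-zero")
    return numerator / denominator

def test_happy_path():
    assert safe_ratio(6, 3) == 2

def test_failure_mode():
    # Deliberately probe how it should fail, not just how it should succeed.
    with pytest.raises(ValueError):
        safe_ratio(1, 0)
```

Tests like these are typically run automatically on every change, so a regression shows up before it reaches users.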

If some of this culture came to research it would help improve replication rates, save time and resources and allow more participation by outsiders.

Imagine if the expectation in medical research was not to publish the most positive result that was good for one's impact factor and career, but to create and share reliable knowledge about a particular experiment and its outcome, making everything as freely and easily accessible as possible, so that everyone could have a good chance of replicating the results or spotting errors.
 
I’ve seen this idea mentioned before, that software development somehow has a better grasp of this.

What you describe was my job for a decade or so. The tests, the quality control, etc. And I can assure you that there are plenty of people in software development who do not take this seriously, who have egos, take shortcuts or ignore problems exactly as you see in research. People who see it as not important, who think they know better than to do the hard boring work of making sure stuff works, or who just want to cut corners for expediency because they can. The industry is very good at projecting an image of itself that is far from reality. Especially when money/business comes into it.

Edit: as ever xkcd has a comic

Software development and research/science/academia both supposedly have systems for challenging, improving and finding what is best. Both are routinely circumvented.

Sure, there are some fantastic examples too, but I just want to give a bit of a counter to the idea that it is an industry to copy. No field has got this right (although we can all learn things from other fields). Instead it’s all about the people and the teams and the cultures they build.

Some of the most interesting developments in biology do seem to be coming from people who also have a physics background. I’m not sure if that is about them or their education though.
 
I think the answer is that to get anywhere working out disease processes you need an intuitive understanding of complex dynamics. I am not sure that teaching systems theory can ever provide this. UCL set up a Systems Biology centre which I was invited to join. It was all jargon from people with no intuitive grasp. "Systems Medicine" is to me another of those empty fads.

Some engineers will have the grasp. The ones who retrain in medicine perhaps unsurprisingly don't seem to (those who can, do; those who can't, retrain).

My impression is that very few people have the grasp needed to see intuitively why one model of a disease could fly and another would never fly. And yes, it has as much to do with time as the spatial patterns of biochemicals.

The truth is that we understand very little about disease mechanisms other than some simple concepts like "too much of this". Much of what is taught is wrong. The trouble is that even if you identify more subtle dynamic patterns, nobody understands them, so the received wisdom never changes.

My experience is that you have to work in groups of people with complementary neural skills, but I am not sure it matters much what formal training they come with.
Having worked on modelling systems (not biological systems), I think it's necessary to have people who understand the area as well as people who understand modelling. What we have found in modelling complex systems is that often no one has a good overall understanding of all the areas that interact (although they may have reasonable intuitions). A model then effectively animates what the different experts are expressing.

Just writing down a model can be useful, as it checks everyone is on the same page with a larger complex system. But the work we have done has looked at writing models that can then be executed (in our case as discrete event simulations) so that the interactions between different elements can play out. The way we've done this is to have a model that is executed, which uses a number of stochastic distributions to represent the environment that is not being modelled. You then run the model a large number of times (using Monte Carlo techniques) to understand the overall space.
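A minimal sketch of that pattern (nothing to do with any real system we modelled; the queue and the rates below are invented): a discrete-event style model where the unmodelled environment enters as random draws, run many times to map out the spread of outcomes.

```python
# Illustrative only: a single-server queue run as a Monte Carlo experiment.
import random

def run_once(rng, n_jobs=50):
    """One execution of the model; returns the mean waiting time."""
    clock, server_free_at, waits = 0.0, 0.0, []
    for _ in range(n_jobs):
        clock += rng.expovariate(1.0)                    # arrival: environment as a distribution
        start = max(clock, server_free_at)               # wait if the server is busy
        waits.append(start - clock)
        server_free_at = start + rng.expovariate(1.2)    # service time, also stochastic
    return sum(waits) / len(waits)

rng = random.Random(1)
results = sorted(run_once(rng) for _ in range(1000))     # Monte Carlo over many runs
print("median wait:", round(results[500], 2), "| 95th percentile:", round(results[950], 2))
```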

I think there are advantages in using simulation over sets of equations (which is common in many areas), in that it becomes easier to express concepts that are more natural to experts and you aren't constrained by the maths (say, of combining lots of probability density functions).

However, in the end modelling should be thought of as a tool to help people think and not necessarily believed (an economist I worked with kept making this point). Models are approximations and abstractions of the world; they may or may not be accurate. Where I think models can be useful is in making sure ideas are well expressed to everyone in the team (i.e. written down and communicated) and then in exploring the space of a solution.

For ME I'm convinced we need to be understanding it as a dynamic system, and this should affect the way experiments are done. If the system is basically of the form symptom levels = f(exertion over the last x days, .....), and if we measure biological material (blood or whatever) that is in some way correlated with symptoms, then it needs to be done in a way that takes account of levels of exertion (and other factors); see the sketch below. Whether a mathematical model can help to explore different candidate mechanisms I don't know.
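A hypothetical sketch of that last point. Everything here (the weighting kernel, the units, the number of days) is invented; it just illustrates why sampling without recording exertion history muddies the picture.

```python
# Illustrative only: symptom level as a lagged function of exertion.
import numpy as np

rng = np.random.default_rng(0)
exertion = rng.uniform(0, 1, size=60)                    # daily exertion, arbitrary units
kernel = np.array([0.1, 0.2, 0.4, 0.2, 0.1])             # assumed weighting of the last 5 days
symptoms = np.convolve(exertion, kernel, mode="valid")   # symptoms(t) = f(recent exertion)

# A "biomarker" sampled on random days without recording exertion looks noisy,
# even though the underlying relationship here is completely deterministic.
sample_days = rng.choice(len(symptoms), size=10, replace=False)
print(np.round(symptoms[sample_days], 2))
```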
 
An approach used was to intentionally try to make the software fail or behave incorrectly. People were encouraged to think about all the ways in which things might fail. Anyone could report issues on a public bug tracker.
More than that, people are paid for finding security-critical bugs (bug bounty programs). And lots of academic research tries to crack crypto, protocols, etc. It feels very different from the medical world, which seems very defensive and resistant to criticism of techniques - look at how PACE was handled, etc.
 
Sure, there are some fantastic examples too, but I just want to give a bit of a counter to the idea that it is an industry to copy. No field has got this right (although we can all learn things from other fields). Instead it’s all about the people and the teams and the cultures they build.
I'm a believer that different fields can learn from each other, and it's useful to understand different ways of working, ideas and concepts from different fields - then use them in your own. Sometimes this is a good approach to innovation.

I do think that one thing we have on this forum is people with different backgrounds who look at medical research and perhaps see it differently from those who are working in the field every day and perhaps take methodology as a given.
 
It feels very different from the medical world, which seems very defensive and resistant to criticism of techniques - look at how PACE was handled, etc.

Publishing works in ways that still resemble how things were done in the pre-internet era.

If I think of an open source project on GitHub and compare that to how medical research is done, medical research seems archaic, closed, and susceptible to loss of data, manipulation and lack of accountability.

Is there no one that can envision and build a better system?

Something like GitHub, but for documenting every single detail at every step of research: what was done, by whom, with all the non-sensitive data, who uploaded data and when, who altered data and for what reason, all the analyses that reference what data was used, etc.

(yes I know there are data repositories for research which perform some of these functions but they seemed to be more about conserving valuable data)
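As a very rough sketch of what such a record could look like (the field names and the hash-chaining scheme are just my guesses at a design, not any existing system), each action appends an entry that references the previous one, much like a git commit:

```python
# Hypothetical append-only provenance log; illustrative design only.
import hashlib
import json
from datetime import datetime, timezone

def add_entry(log, actor, action, dataset, reason):
    prev_hash = log[-1]["hash"] if log else ""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "actor": actor,
        "action": action,            # e.g. "uploaded", "altered", "analysed"
        "dataset": dataset,
        "reason": reason,
        "prev_hash": prev_hash,      # chains entries so later tampering is detectable
    }
    entry["hash"] = hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()
    log.append(entry)

log = []
add_entry(log, "alice", "uploaded", "cohort_a_bloods_v1", "initial deposit")
add_entry(log, "bob", "altered", "cohort_a_bloods_v1", "corrected unit conversion")
print(json.dumps(log, indent=2))
```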
 
Publishing works in ways that still resemble how things were done in the pre-internet era.

If I think of an open source project on GitHub and compare that to how medical research is done, medical research seems archaic, closed, and susceptible to loss of data, manipulation and lack of accountability.

Is there no one that can envision and build a better system?
Publishing has changed as well. These days most AI papers are published on arXiv well before they appear in a journal or conference. The field is fast moving. But also much of the research is done by companies (with or without academics), possibly due to the costs. It is maybe surprising how many companies open-source their models (Meta, Microsoft, Google, DeepSeek, etc.), which allows others to build on them.

In my field (security) there is a community of people who break systems and publish blogs, and this is how the field progresses.

Also huge numbers of resources on GitHub.

The problem, as I see it, is that with computer science you can just do it (given time, money and equipment), and there is a certain economic incentive to create systems, and hence companies that back these and get investment from the markets. If I need hardware I can rent it in various clouds.

Medical research doesn't seem to work this way. You need ethical approval, lab equipment, subjects etc. So the process is much more regulated.
 
I'm a believer that different fields can learn from each other, and it's useful to understand different ways of working, ideas and concepts from different fields - then use them in your own. Sometimes this is a good approach to innovation.
Absolutely agree and hope my comment along those lines made that clear. But I also think it’s important to not fall into traps of the grass being greener on the other side or of technical or methodological solutions being a magic solution to human problems.

More than that, people are paid for finding security-critical bugs (bug bounty programs). And lots of academic research tries to crack crypto, protocols, etc. It feels very different from the medical world, which seems very defensive and resistant to criticism of techniques - look at how PACE was handled, etc.
So, to challenge this a little and play devil's advocate, I want people not versed in the field to hear and understand that these approaches are not universal. There are plenty of parts of the software development world which have been and still are resistant to these approaches and have been as defensive and resistant to criticism as the medical world. Security through obscurity still exists. Systems released without adequate testing or which do not receive ongoing support for known flaws let alone robust security audits are rife. And the money spent lobbying to avoid regulation is huge.

AI, one of the fields with the most money right now, which could be doing the right thing, instead often has atrocious standards and many of the big names happily ignore best practices in the mad dash for something new and shiny.

And just to add to the confusion (which is deliberate by some companies in my mind) open weights models are not open source. There is a huge difference and significant implications. But I digress.

I do think there is a lot to be learned from the tech world; I just don’t want the view presented to be Panglossian.
 
Security through obscurity still exists. Systems released without adequate testing or which do not receive ongoing support for known flaws let alone robust security audits are rife. And the money spent lobbying to avoid regulation is huge.

It does still exist, although I think less than it used to, when it was common. That's partly been driven by government initiatives - but I'm probably thinking in terms of software that an enterprise would be happy to deploy.

AI, one of the fields with the most money right now, which could be doing the right thing, instead often has atrocious standards and many of the big names happily ignore best practices in the mad dash for something new and shiny.
AI is moving really fast, and doing the right thing is often ignored in favour of getting stuff out. There are also big issues with cloud-based AI and the collection of data.

And just to add to the confusion (which is deliberate by some companies in my mind) open weights models are not open source. There is a huge difference and significant implications. But I digress.
I get the point - the dev process isn't accessible to others. But models are fine-tunable by others (at relatively low cost).
 
I get the point - the dev process isn't accessible to others. But models are fine-tunable by others (at relatively low cost).
Yes, sorry, I know you understand the nuances, I suppose in the context of this discussion the difference seems important as we cannot see the source code, the training data. The methods are hidden even if the results are ‘open’. Which seems very much like the sort of closed science we are advocating against here?
 
Yes, sorry, I know you understand the nuances, I suppose in the context of this discussion the difference seems important as we cannot see the source code, the training data. The methods are hidden even if the results are ‘open’. Which seems very much like the sort of closed science we are advocating against here?

I think it's slightly different, as the artifact (model) is available and can be externally examined and tested. It's not just a paper that you can read with chosen results. With an open-weight model I can (and do) download it and test its capabilities (and the bad things it may help me do) - I can also modify it (via fine-tuning). I can examine its workings in detail as well. So although I have to trust how it was created, I can perform a lot of testing. There are also standard benchmarks for comparison which again I can run (as I may not trust published figures, given that random token selection would allow cherry-picking of results).
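For readers outside the field, a minimal sketch of what "download it and test it" can look like, assuming the Hugging Face transformers library is installed and using a small public checkpoint purely as a stand-in for whichever open-weights model you care about:

```python
# Illustrative only: load an open-weights checkpoint locally and probe it.
from transformers import pipeline

generate = pipeline("text-generation", model="gpt2")   # stand-in open-weights model

# Run the same prompt several times: with sampling the outputs vary, which is
# one reason published benchmark numbers can be cherry-picked.
prompt = "The main symptom reported by patients was"
for _ in range(3):
    out = generate(prompt, max_new_tokens=20, do_sample=True)
    print(out[0]["generated_text"])
```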

I'm not sure what the medical equivalent of that would be for, say, trial results. For a biological model it would be to be able to explore, test, poke and change a given model and see how well it matches other data.
 