The Courier Mail: Australian scientists prove CFS is real and have discovered a test for it

I don't agree with this

They are taking steps such as getting hold of every FDA-approved drug, getting anyone good they can on board, and so on.
That said, I agree we should have many teams working on many fronts; we don't know who will discover something we need, but without money, progress is glacial if not backwards. It's quite safe to say that money spent on ridiculousness is not helpful and can even be harmful. If we didn't have PACErs fighting us we would be better off.

I think that was kind of the point. OMF has got a good team, a great team even. But so do others, and ideally we would fund them all instead of getting angry when one of those other teams gets picked over the OMF.
 
I agree, but when dealing with very limited funds you achieve the best results by spending them on the best talent.
 
I agree, but when dealing with very limited funds you don't achieve the best results by spending them on second-tier researchers or on people who oppose progress.

I don't know enough about these Aussie researchers to make any claim about them, but in the recent past there was some anger about the NIH funding 3 research centers over the OMF's team, which is what Andy and I were both hinting at, if I'm correct about Andy. And while the OMF do look very good to me as a semi-informed outsider, it would be wrong to only consider them when it gets to funding; those 3 teams that did get chosen look to have very good programs as well.

Ideally, of course, they'd all get funded, and with a lot more than what was shoved out now.
 
As I said, I agree that we need to fund many teams; since we don't have a disease mechanism, we need to cast a wide net to find it. We also need to cast a wide net to find treatments quickly. But with so little money, and people determined to harm us, we need to tread as smartly as we can. That said, my view on this does not seem to be a popular one.
 
The problem I have with these announcements from this team is that I feel like they are raising false hopes among patients and their families. This is possibly the third 'big announcement' this year from them of the discovery of a diagnostic blood test (not to mention that they've found the cause of ME/CFS), yet absolutely nothing concrete has eventuated.

Each time it happens, local CFS groups are flooded with excited posts from less-informed patients. To make matters worse, this story was picked up in newspapers across the country, so my family have been contacted by excited friends and relatives who had seen it.

I felt very much like the Grinch when I had to dampen my mother's hopes and expectations yesterday. I'm still quite upset about it.
 
The problem I have with these announcements from this team is that I feel like they are raising false hopes among patients and their families.
That's what I feel, and also about anyone else who does the same.

There are people who are feeling, or have felt, suicidal because they can't see an end to what they are going through. To have researchers and the media coming out with hype that raises false hopes isn't helping. I know it's not always intentional, but it's still not helping.

ETA - I just want scientists to be measured - I respect Fluge and Mella for their approach, for example.
 
This is insane: we struggle to get money for people who are ethical and would do real research to help us, and these shysters get all the money they need and put out junk :emoji_face_palm:
Same thing with PACE/SMILE/LP etc.; they also seem to have lots of money to waste, money that could actually be used for real research by actual scientists :emoji_face_palm: :emoji_disappointed_relieved:

That is a disgraceful comment.

And I say that as somebody who agrees that, on the publicly available evidence so far, their studies are underpowered and their claims are premature.

:grumpy:
 
If the wasted money had gone to Dr Davis instead, we would be further along than we are today :emoji_face_palm:
Despite the problems with some of their studies, the Australian group is still producing much better research than Ron Davis's team. One reason (probably the main reason) that his team was not funded by the NIH grant was that his team does not publish their work. I wasn't aware of that before the NIH funding controversy, and was extremely shocked to hear it - it seriously undermines their credibility.

The failure to publish makes their claims impossible to assess. Hopefully they plan to start publishing in scientific journals very soon, but right now their claims are even worse than those from the PACE and other BPS groups. At least we can look at the psychobabble methodology and data to see the flaws, whereas all we've been getting from Davis so far is hype to generate more funding.
 
There are also problems with the systems around publishing in journals, and I can see advantages to just releasing methods and results as they become available for others to review/assess/comment upon. My understanding is that this is what they've been doing (although I've not been keeping an eye on it - that isn't the sort of thing I'd normally read). Has anyone been paying attention to the info they've released? Have I got it right about them releasing their methods and results?

Maybe that isn't the best tactic for CFS, when we're starting with such a poor understanding of things, particularly if one wants to attract funding from government bodies.

I have to admit to feeling less confident in OMF than I was, and it looks like their novel approach to releasing their work may not have worked out, but when they first talked about it I thought it sounded exciting, and possibly like a positive step for how science is done.
 
I know virtually nothing about any of this, but it seems sensible to me, if a little impractical, that various groups should pursue their own ideas/theories, regardless of the funding source.

No one appears to have any clear idea of what ME is, even in general terms, let alone specific ones, so regardless of who is right or who looks more promising, 99.9999% of all funding is going to be wasted on what are at best dead ends and at worst actively harmful.

Until a clue is had I can't see any way round this.

No matter how wasteful and time consuming it is.
 
There are also problems with the systems around publishing in journals, and I can see advantages to just releasing methods and results as they become available for others to review/assess/comment upon. My understanding is that this is what they've been doing (although I've not been keeping an eye on it - that isn't the sort of thing I'd normally read). Has anyone been paying attention to the info they've released? Have I got it right about them releasing their methods and results?
I'm not aware of them substantially documenting their methodology and such anywhere. Just statements made at conferences, in videos, etc., without the information necessary to substantiate their claims. This is why no one really knows what the nano-needle is or exactly how it works.
 

Oh... I thought the plan was that they'd be putting info on methods up on their website, along with their results. I really haven't been following their work though.

If they're just announcing things at conferences without backing stuff up anywhere, that is more of a problem.

There are such problems with the systems of academic journals that finding a way of circumventing them does sound appealing. But maybe they're another rubbish thing which is still less bad than the available alternative.
 
It's worth making a distinction between the researchers and the articles.

Research in a grad student setting is all about getting funding and, this is the important bit for us here, setting promising students on a path to a given field.
I'm sad that the headlines are impacting on Australian PwME. That's exhausting, if nothing else. And I personally am disappointed with the hype in the media in general, leading to jaded populations and an attitude that everything must have already been done (in technology it means that by the time you make an actual, world-changing breakthrough, everyone is already so familiar with the idea from science fiction that they think you made a weak attempt at something that already exists).
But it's great to think that these grad students (I'm including PhD students) are being encouraged so effectively into the field of our condition, and that future researchers are being encouraged to investigate PwME as a biomedical puzzle that has the prospect of being solved, or at least of getting somewhere.
There is a general understanding (or was when I was involved in such research groups - no, it wasn't medicine) that headlines are about selling newspapers, and maybe getting attention for grants, but that anyone who really wants to know about the actual research will read the published work, or at the very least an in-depth article in a reputable science journal.
ETA I'd be more scathing if I read overhyped rubbish in the study's own conclusion. Supposition and optimism, sure, but not an invalid conclusion. Research is wasted when the method was not rigorous, was executed badly, or the results ill-observed. A well-executed study is of use to future research in that field, no matter what the study's writer intended or concluded personally.
 
(That's why I'm here: I used to be able to read and interpret such data and synthesise ideas (i.e. from multiple sources and studies) to see the big picture and the new directions for investigation (including exactly what we need to prove and how). Now I often struggle with words and get tangled in the awareness of greater complexity without actually grasping it. I hope to share our brains so we can collectively nut through it and see what there is to see.)
 
Re: OMF. There's talent and experience in the Stanford team and their collaborators, and I'm expecting that useful science will come from them, which may or may not be the answers we all want to hear. I understand the desire to work without being slowed down by publishing, which really seems to come from the sense of urgency that Ron Davis has. It's good they're working on publishing now; it's time to put up or shut up, imo (I mean that nicely).

I too am really encouraged that some postgrads are interested in us; they should be, because we are interesting.
 
ETA I'd be more scathing if I read overhyped rubbish in the study's own conclusion. Supposition and optimism, sure, but not an invalid conclusion. Research is wasted when the method was not rigorous, was executed badly, or the results ill-observed.
Some study conclusions from the Australian group have been exactly that: "overhyped rubbish". One SNP study compared the raw number of alleles (not prevalence percentages) for some SNPs between patients and controls, and concluded there were differences. But since the number of patients and controls was quite a bit different (approximately 50 patients versus 35-40 controls, if I recall correctly), directly comparing the raw numbers was completely meaningless - the larger group will carry more copies of any allele even when the frequency is identical in both groups. They didn't really mention that, however, and still used the comparison to claim a significant difference.

Their other SNP studies weren't as shockingly bad, but didn't make statistical corrections to account for the high likelihood of a fluke false positive when comparing a large number of outcomes (different SNPs) between patients and controls. That's common practice in SNP studies, unfortunately, but that doesn't make it any more acceptable - the results are still almost completely useless, except perhaps for narrowing down what to look at properly in a later study.
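To make the point concrete, here's a minimal sketch of how such a comparison would normally be done: allele carriage expressed as a proportion within each group rather than as a raw count, with a correction for the number of SNPs tested. The SNP names, the counts, and the choice of Fisher's exact test with a Bonferroni adjustment are purely my own invented illustration, not anything taken from the actual studies.

```python
# Hypothetical illustration only: compare allele carriage between unequal-sized
# groups as per-group proportions, then adjust for testing several SNPs at once.
from scipy.stats import fisher_exact

# (carriers, non-carriers) per group; 50 patients vs 40 controls,
# so raw carrier counts alone are not comparable between the groups.
snps = {
    "SNP_A": {"patients": (30, 20), "controls": (24, 16)},  # 60% vs 60%
    "SNP_B": {"patients": (35, 15), "controls": (18, 22)},  # 70% vs 45%
    "SNP_C": {"patients": (28, 22), "controls": (20, 20)},  # 56% vs 50%
}

n_tests = len(snps)  # Bonferroni: penalise for the number of SNPs tested
for name, g in snps.items():
    table = [list(g["patients"]), list(g["controls"])]
    _, p = fisher_exact(table)  # compares proportions, not raw counts
    p_adj = min(p * n_tests, 1.0)
    pct_pat = 100 * g["patients"][0] / sum(g["patients"])
    pct_ctl = 100 * g["controls"][0] / sum(g["controls"])
    print(f"{name}: {pct_pat:.0f}% vs {pct_ctl:.0f}%  "
          f"p={p:.3f}  Bonferroni-adjusted p={p_adj:.3f}")
```

Note that for SNP_A the raw carrier counts differ (30 vs 24) only because there are more patients than controls; the proportions are identical, which is exactly the trap a raw-count comparison falls into.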

But my impression is that their non-SNP studies have used better methodology, so are more likely to be relevant. And most researchers are not competent when it comes to statistics or genetics, so the flaws might be due to a single source external to the core group of researchers, or due to honest mistakes. However, they do have a duty to understand what they're doing to at least a minimal extent, to avoid such fundamental errors.
 
They are taking steps such as getting hold of every FDA-approved drug, getting anyone good they can on board, and so on.

I don't actually think that is what good science is about. Most good science is done by hard-working people that nobody has ever heard of, with a commitment to a particular problem. 'Bringing in' people with names does not have a track record of being productive. To my mind, the real problem with ME/CFS is in trying to identify from the symptoms what the physiological problem really is. I doubt it is metabolic failure. Unless people know what question to ask they will not get the right answer.
 
I also do not really understand this stuff about not publishing. Publishing does not slow anything down. And it is easy to put out results in brief form in conference proceedings. I always used to put out data as soon as I thought we were sure it was meaningful. The delay from experiment to public knowledge was rarely more than three months. I do not see any reason why other people cannot do this. What I find unhelpful is the putting out of videos with oblique references to some findings without any controls or proper account of methods.
 
As far as Ron Davis' work goes, he was asked about publishing in his "Bedside Chat with Ben". According to the transcript, his reasons are:
Ben: Yeah, for sure. I think you have hit on a point actually that might be good for some clarification. Obviously with myself being on the forums, I see quite a lot of things being written down. Can you clarify your stance on publications for people, because I think sometimes it’s being confused somewhat about how publications, and OMF (Open Medicine Foundation), and the genome center work and what your aims are? Can you briefly clarify your stance on the publication process? What is perhaps wrong with it in general?

Ron: Well, we feel that the community needs the data that you collect as soon as possible, but what they also need is making sure that the data is in fact valid.

Ben: yes.

Ron: So, that is another process. You may have collected it, but if you are not sure that it is right and it is not necessarily because someone did something wrong. It has to do with uncovering something in the process that’s made you suspicious. Then you have to do a lot more work to make sure, that in fact, it is valid data. You do not want to release the data until you know that it is true.

Ben: yes.

Ron: That’s frustrating. Then you have the problem with publications. I think that what we see in the review process of papers and grants is very much like the experience with trolls on the internet. I basically think of them as a bunch of trolls, reviewers. There is something psychological. I have talked to Janet about this. When you are allowed to criticize and you are given the power to criticize all you want and nobody knows who you are, which you see on the internet all the time, people sometimes get incredibly nasty. Unnecessarily nasty. I think that is a problem with society in general.

Ben: It is that anonymity factor.

Ron: Yes, and I think it’s hiding behind, and you understand, you don’t want those individuals doing the review to be attacked. That’s why you keep them secretive. It seems to me that somebody else needs to overlook that and make sure that they are not being unnecessarily nasty. In fact to check that the criticisms are in fact valid.

Ben: yes.

Ron: That’s the secondary level, which is what the editor should do and the grant manager should be doing. And they do not do that. I think a review including an extremely negative personal attack should be thrown out. That will stop it.

Ben: Mhmm.

Ron: People will know that if I put something super harsh but that is really a personal attack then I am wasting my time.

Ben: yeah.

Ron: Anyway, I think we need some fixes on that. I am really worried about the young people in science. I have had enough experience. I’d say “Oh yeah, there is another negative, There’s another troll!” and kind of dismiss it. But the young people, sometimes you know, they do not know how to deal with that and they take it very personal that they are not a good scientist. That’s not good for anybody.

Ben: Mhmm.

Ron: There are a lot of very talented people there, it’s incredible. So this is a totally secondary thing, but it’s that people might like to know that the frustrations of actually doing the research. You have the frustrations of the actual research, and then the frustrations of the people looking over your shoulder.

Ben: Yeah, I understand. I mean what one point that comes up, that is kind of tied in with that, but secondary, is that some people have maybe, perhaps, been confused over the fact that they didn’t want to publish. That isn’t true is it? It is not about not wanting to publish. I mean you can tell me about that.

Ron: I certainly want to publish. It’s actually a little, you know, you are going to get heavy criticism so there is a little bit of intimidation there. Someone is going to find something wrong and then you are going to start arguing.

Ben: Mhmm.

Ron: But no, why would you do the research if you didn’t publish. The concern really is the fact is it really correct. And there is some stuff on our patient data that I am suspicious of. I don’t know what is wrong (with the patient data). I am just looking at the data. Is this really valid? People ask why we are not putting it up. It’s the gene expression stuff and I look at it compared to our trauma and it just doesn’t seem right. I think what we are going to have to do is go back. The problem is with the severely ill we really can’t sample those patients again. I am trying to come up with what can we do to see if in fact it is valid. We will have to do an extra experiment and I think it’s the processing that we chose to use to try to get better data. I think it may not have done it. I think it means we have to develop a different process to validate that data.


Tl;dr: RD says that he a) finds peer review a little intimidating, and b) is not sufficiently confident in his results yet.
 
What I find unhelpful is the putting out of videos with oblique references to some findings without any controls or proper account of methods.
I think it has to do with getting donations.
The OMF need to let people know that things are happening and that research is "finding things", but at the same time, Ron doesn't have enough to publish, by the sounds of it.

So the videos help keep patients interested enough to continue donating.
 