If they are sincere and serious about this, they will need to have a frank and awkward talk with the parent company and the editor-in-chief. I doubt they're at the point where they can even understand the connection yet.

Because this is like being a subsidiary of a major tobacco company and trying to raise awareness about the health hazards of smoking, without actually blaming smoking. Doesn't work like that.
As of Aug 22, ~250 million people have had a confirmed SARS-CoV-2 infection in the WHO European Region. Current evidence suggests that ~10 to 20% of this population develop Long COVID. A collaborative effort is imperative to expedite diagnostic & treatment development.
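For scale, that quoted range implies a very large absolute number of people - a trivial back-of-the-envelope calculation using only the figures in the statement above:

```python
# Implied Long COVID cases in the WHO European Region, using only the
# figures quoted above (~250 million confirmed infections, 10-20%).
confirmed_infections = 250_000_000
low_rate, high_rate = 0.10, 0.20

print(f"Implied cases: {confirmed_infections * low_rate:,.0f} "
      f"to {confirmed_infections * high_rate:,.0f}")
# -> Implied cases: 25,000,000 to 50,000,000
```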
 
Which won't happen because it would basically invalidate 95% of clinical psychology and all psychosomatic medicine.

I think it would be useful to have someone expert enough in actual clinical psychology do the maths on this and work out a way forward. The area existed well enough before BPS and EBM came along, and there was a research base, much of which still exists and can be applied (e.g. counselling) - and yes, it might need some deprogramming out of the workforce where it has been 'tainted'.

There will be loads of academics who can do the old stuff and aren't BPS (and unis are super-fast at turning round new master's courses in particular), plus loads of the counselling courses are run by charities.

I'd be keen to work out which bits have contributed to it being 'so', or more importantly what and who is left that could sort one or the other. Why hasn't scientific psychology managed to rise up and sort it this time?

Psychosomatic medicine has always needed to be invalidated - and it always has been, every few decades. It's got itself a real grip this time though, and has become very mixed up with beliefs about 'patients not looking after themselves' causing more work, etc. It feels like it needs to be done sooner rather than later; too many new areas (e.g. rehab) have now caught the message that 'less robust' might pay off.

The diagnostic centre idea seems a good place to move power away from both (and a way to train up a new generation of psychologists - there are loads who have accredited psych degrees). It would also be a good way of hiding a reverse ferret if they wanted to rejig clinical psych back towards diagnosing (which might be situational, so functional support gets funded again, e.g. CAB, advocacy, noise teams, housing etc.) and giving specific treatment, rather than putting everyone through the sausage machine and labelling them. There are lots of issues going on in that sector, where most inpatient provision is delivered by the private sector for the NHS etc.; getting more people who can actually diagnose, spot a misdiagnosis, and tell whether something is working or not is pretty key.

One good start might be allowing someone to record how many psychosomatic diagnoses under certain headings ended up being something else, and what that delay cost. Take the toy away from CCGs, or wherever it's being gatewayed, and find out the actual demand for certain services.

I'd hope, given recent FND articles (e.g. on CJD), that there are post-mortem reports on anyone who dies early following that diagnosis, if they are so interested in it - a new policy to that effect going forward might just stave off any 'too many' or 'scandal' before it happens, and I don't see why that shouldn't apply to all 'new' conditions.
 
Going through this is interesting:

"Essentially the same". Makes the changes to outcomes during the trial especially unethical and unjustified. This was their 4th try at a formulaic methodology. Although they did clear that up by specifying later on that it's because they preferred better results. It's a good thing that medical research is rigorous and accountable. Wessely tried to explain this as necessary course adjustments, even though by their own admission they used a formula they themselves applied at least 3 times and had already sold as is to NICE.

Costs are as follows:
  1. Research staff costs (I count 12, mostly nurses or data entry): £1.1M
  2. Overheads: £504K
  3. Equipment: £36K (most of which seems unnecessary; includes a too-small number of actimeters)
  4. Travel: £64K
  5. Additional costs, including additional travel: £218K
The NHS provided therapist costs for a total of £1.1M, so this would not count in their budget but probably is counted in the total cost.

I guess this was just the rough proposal and was revised, but it amounts to £1.92M in direct expenses (described as the total cost to the MRC), with an additional £1.1M covered by the NHS, for a rough total of £3.02M.
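A quick check of that arithmetic (just summing the line items as listed above; I may have misread some of them):

```python
# Sum the itemised direct costs above (in £M) and add the NHS-covered
# therapist costs, to check the rough totals quoted.
direct_costs = {
    "research staff": 1.100,
    "overheads": 0.504,
    "equipment": 0.036,
    "travel": 0.064,
    "additional costs": 0.218,
}
nhs_therapist_costs = 1.100

direct_total = sum(direct_costs.values())
print(f"Direct expenses: £{direct_total:.2f}M")                        # ~£1.92M
print(f"Including NHS:   £{direct_total + nhs_therapist_costs:.2f}M")  # ~£3.02M
```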

Is the £5M cost commonly cited actually accurate?

And because he has revised the history of his involvement:

I don't understand the wording here - what was Wessely "director" of? Ah, the CTU:


So Wessely was director of the Clinical Trials Unit, in addition to having co-authored the manual. A total passenger who was merely admiring how beautiful the cruise probably was (not that he was there, just imagining it must have been, I guess).

Hang on - at £1.1m for research staff costs, if it really was 12 staff that would average over £90,000 in cost per person. How many years did this run, for what was mostly data entry? Nurses get around £35k at best (pre-tax) plus pension or whatever other bits today.
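Putting rough numbers on that, using the £1.1M staff figure and the count of 12 from the costing above (and assuming, which may well be wrong, that the £1.1M is purely staff costs):

```python
# Rough average cost per research staff member implied by the figures above.
staff_cost_total = 1_100_000   # £1.1M research staff costs (from the costing)
staff_count = 12               # staff count as read from the same document

per_person = staff_cost_total / staff_count
print(f"Average cost per person: £{per_person:,.0f}")   # ~£92,000

# Against a ballpark nurse salary of ~£35k (before employer on-costs),
# that would correspond to roughly 2-3 years of employment per person.
ballpark_salary = 35_000
print(f"Implied years at £35k salary alone: {per_person / ballpark_salary:.1f}")
```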
 
They can see the problem when they don't believe in the treatments - even though the problem has nothing to do with the treatments, and everything to do with a process that allows any BS to be "shown" to "help". I've written many times about how surprised I am that the alternative medicine industry doesn't abuse this more than it already does; the formula is simple and cheap, and can only be countered by fixing the underlying issues with EBM. Which won't happen because it would basically invalidate 95% of clinical psychology and all psychosomatic medicine.

How about a trial where the leaders were the inventors of the model (so in this case it would be the inventors of homeopathy), running the "definitive" "independent" trial in which they showed that useless treatments "help" - by cheating, no need for quotes there.


[attached image]

It would be interesting to know how well the controls and blinding were implemented in practice in those trials, because this doesn't really tally with the fact that homeopathic treatments have no active ingredient in them, so any apparent success has to be down to bias creeping in somewhere. In which case there must be bias that escaped their controls and blinding - maybe deliberately so.
 
Could anyone direct me to the document received via FOI on the questionnaires used in the PACE trial?

Thanks in advance.

(The document/link is posted somewhere on the forum; it seems I accidentally deleted a draft post with the copied link when I posted a related question here: )

From the wording of [edit] some of the questionnaires used in the PACE trial, it appears that these weren't filled in by the participants themselves but by study staff in a face-to-face situation with the participant.

So study staff read the questions to the patients, and the staff fill in their answers.

I wonder how common that is/was in studies in general, and who fills in the questionnaires in such settings?

Does 'blinding of assessors' mean only those who actually analyze the data are blinded to the trial arms, or also those who sit together with the patients to fill in the answers?

Edit: So my question mainly is: Is it possible that therapists or other staff who also see the patients on other occasions actually fill in the questionnaires together with the patient? And could the investigators in this case still say that the assessors were blinded?
 
I have a vague memory of participants not filling in questions and the researchers filling them in on their behalf based on what they thought the answer would be. Does this ring a bell with anyone?
 
I have a vague memory of participants not filling in questions and the researchers filling them in on their behalf based on what they thought the answer would be. Does this ring a bell with anyone?
IIRC they used the last answer submitted, which is clearly making up data. This fact alone would normally be a fatal error in any serious professional setting. Inventing data is obviously not permitted in a serious process; in research it's blatant misconduct, even when it's a trivial amount.
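If that memory is right, it amounts to carrying the last submitted answer forward into the missing slot - a purely illustrative sketch below (not their actual code or data handling, just what the practice looks like):

```python
import pandas as pd

# Purely illustrative: what "use the last answer submitted" amounts to.
# Each row is one assessment point for a participant; None = no answer given.
responses = pd.DataFrame({
    "participant": [1, 1, 1],
    "week":        [0, 24, 52],
    "fatigue":     [28.0, 20.0, None],   # no answer submitted at week 52
})

# Carrying the last submitted answer forward fills the week-52 gap with the
# week-24 value, i.e. the missing data point is invented from an older one.
responses["fatigue_filled"] = responses.groupby("participant")["fatigue"].ffill()
print(responses)
```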

From memory, therapists filling in the questionnaires themselves is a common practice with IAPT; maybe that's what you're thinking of. Likely still happening.
 
Thanks everyone for replying, and @Pustekuchen for the link.

I'm not able right now to word a correction to my post quoted above, but I'll leave a 'snippets excerpt' here (Word document attached) and have uploaded the relevant snippets (p. 49).

If anyone felt up to doing a transcript of the relevant points from the snippets (only from "9. Assessments..."), that would be extremely helpful.
[Edit: done / see following post -- thank you @Pustekuchen ]

(I'm working on a submission to the IQWiG draft report and am not sure yet whether I'll be able to add some points that refer to those details, but I think it's useful anyway to have them available in text form.)

[attached images: PACE protocol, section 9 Assessments, p. 49]
 


The text recognition had some errors; I hope it's correct now.
Here is the transcript:
9. Assessments and Procedures

9.1 Schedule for follow-up


See above Figure 1, section 3.1.1, and Figure 7, section 10.2.2.

9.2 Assessments

A participant's guide to completing their questionnaires will be given verbally to the participant by the RN. The questionnaires should be completed without conferring with friends or relatives and all questions should be answered even if the participant feels them to be irrelevant.

All participants will be re-assessed in clinic. Those participants who cannot attend clinic will be offered home assessments (or failing this assessment by telephone or by post). Before second and consequent RN assessments, self-rated measures will be posted to the participant prior to the visit and checked for completion at assessment by the RN. If the participant fails to bring them to the visit they should complete them at that visit. If a participant becomes too tired or ill to continue with the assessment, they will be offered the opportunity to complete the assessment on another day, within the next seven days.

Because we do not think it practically possible for the RN to remain blind to treatment group allocation, we will not attempt to achieve this. All our primary and secondary outcomes are therefore either self-rated or objective in order to minimise observer bias. Participants who drop out of treatment will be assessed for outcomes as soon as possible, rather than waiting for the normal follow-up.

When the participant does not attend a research interview, the RN will send the self-rated questionnaires to the participant's home address, with a stamped addressed envelope. If questionnaires are not received back within a week, the RN will arrange to visit the participant at home and oversee completion of the questionnaires. If necessary, only the primary outcomes and the CGI[48] (to assess deterioration) should be the minimum completed.

9.2.1 Long term follow-up

Permission will be sought from the participant to be contacted annually for follow-up information regarding the participant's health and employment status. The participant will also be invited to remain in contact so that the results may be disseminated to them once published.
 
Thanks @Pustekuchen

So Research Nurses were not blinded to the trial arms and the investigators state that "all our primary and secondary outcomes are therefore either self-rated or objective in order to minimise observer bias."

But I think that is all very 'relative', because the Research Nurses seem to also have the role of observers, e.g. with the 6-minute walking test and other assessments they filled in [*], and they also could easily have influenced how the patients filled in the self-rated questionnaires:

"Those participants who cannot attend clinic will be offered home assessments (or failing this assessment by telephone or by post). Before second and consequent RN assessments, self-rated measures will be posted to the participant prior to the visit and checked for completion at assessment by the RN."

"When the participant does not attend a research interview, the RN will send the self-rated questionnaires to the participant's home address, with a stamped addressed envelope. If questionnaires are not received back within a week/ the RN will arrange to visit the participant at home and oversee completion of the questionnaires."


[*] Figure 9: 10.2.2 Table of research assessments by time points, PACE Protocol, p. 52-53
 
So Research Nurses were not blinded to the trial arms and the investigators state that "all our primary and secondary outcomes are therefore either self-rated or objective in order to minimise observer bias."
From the various discussions and bad excuses, I think that this was all dismissed because the data analysis was "blinded", as if it removes all the other biases that existed at every single step along the way. They don't speak of observer bias because apparently it's good enough that whoever fiddled with the fake biased numbers was not aware of whose fake numbers they were.

And as if it matters whether your data analysis is blinded when this is what it comes out with. It truly takes a blind person not to see that this is FUBAR.

[attached chart: PACE walking test results in perspective]
 