More PACE trial data released

Edit: Does this board have a "code" mode to preserve the spacing of a piece of text? I had this table nicely formatted, but then all the spaces got eaten.
I looked at the BBCode rules. Can you try (code)text(/code), replacing the round brackets with square brackets?
From: https://www.cartographersguild.com/misc.php?do=bbcode

e.g.
Code:
This is row one    one      two     three
Now try row two    four     five    six

EDIT: Oh well, it's meant to use a monospace font but it doesn't. It doesn't seem you can use both the font tag (e.g. for Courier font) and the code tag together.
 
I tried exactly the same thing using 'code' and Courier font, but 'code' seems to override the font, and 'code' does not automatically give a monospaced font, which is annoying.
 
The less encouraging news is that the mean difference values don't match the published table.

As MS would say, "read the paper!" In particular, the footnote to Table 3 explains that the comparisons shown are for the final adjusted models.

Mean difference is NOT the same as difference in means: the published values are estimates from the adjusted models, not simple subtractions of one group's mean from the other's.
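
To illustrate (with made-up data, not the PACE variables), here's a minimal Python sketch of why the two differ: the raw figure is one group's mean minus the other's, while the published figure is the group coefficient from a model that also adjusts for baseline, as the Table 3 footnote describes.
Code:
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 100
group = rng.integers(0, 2, n)            # 0 = comparator, 1 = treatment
baseline = rng.normal(50, 10, n)         # made-up baseline score
outcome = 5 * group + 0.6 * baseline + rng.normal(0, 8, n)

# Difference in means: subtract one group's mean from the other's.
raw_diff = outcome[group == 1].mean() - outcome[group == 0].mean()

# Mean difference as published: the group coefficient from a model
# that also adjusts for baseline (the "final adjusted model").
X = sm.add_constant(np.column_stack([group, baseline]))
adj_diff = sm.OLS(outcome, X).fit().params[1]

print(f"difference in means:      {raw_diff:.2f}")
print(f"adjusted mean difference: {adj_diff:.2f}")  # generally differs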

We already know that the original FOIA PACE data is OK. We are now entering very dangerous territory by putting a non-legit dataset out there.
If anyone has downloaded it and wants to look at it, I would suggest that they remove the former PACE data from it and separate out the secondary variables into separate sheets.
If they can't do this, I would delete it entirely. Otherwise there is a danger of coming across it at a later date, thinking it is OK, and sharing it.
We need to be really really careful about data contamination.
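
If it helps, here is a minimal pandas sketch of that clean-up (the filenames and column names are made up - substitute whatever is actually in the workbook):
Code:
import pandas as pd

combined = pd.read_excel("combined_pace.xlsx")         # hypothetical file

# Columns assumed to have come from the original 2016 FOIA release.
former_foia_cols = ["group", "sf36_52wk", "cfq_52wk"]  # hypothetical names
secondary = combined.drop(columns=former_foia_cols)

# One sheet per secondary variable, so no row-level linkage is implied.
with pd.ExcelWriter("secondary_only.xlsx") as writer:
    for col in secondary.columns:
        secondary[[col]].to_excel(writer, sheet_name=str(col)[:31], index=False)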
 
I do wonder if they set out to create uncertainty in the data provided, as well as trying to give the minimum amount - they didn't give the EQ-5D values, just the UK summary.
 
@Barry @Adrian @sTeamTraen

I found the solution to the formatting problem. Replace the round brackets with square brackets in the following example:

(FONT=Courier New)
(CODE=rich)
This is row one    one      two     three
Now try row two    four     five    six
(/CODE)
(/FONT)

Gives

Rich (BB code):

This is row one    one      two     three
Now try row two    four     five    six

EDIT: Now I know what to look for, I found it in the Xenforo BBCode help guide:
https://xenforo.com/community/help/bb-codes/
 
I'm trying to follow the conversation and I don't understand what the problem is. If the data has to be reprocessed, I can help with that.

The problem was that the data was reprocessed when it shouldn't have been.

It was assumed that the data would be in the same order as the first release. It very much wasn't, and it looks like QMUL have gone out of their way to make sure it wasn't.
Each variable was provided in separate files, with no indication of ordering or even trial group. That makes the data fairly useless, apart from at a cohort level.
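
To make it concrete, a tiny pandas sketch (filenames made up) of why that's a dead end for individual-level analysis:
Code:
import pandas as pd

wsas = pd.read_csv("wsas_52wk.csv")    # hypothetical one-variable file
eq5d = pd.read_csv("eq5d_52wk.csv")    # hypothetical one-variable file

# With a shared participant ID you could do a real join:
#     merged = wsas.merge(eq5d, on="participant_id")
# Without one, all you can do is pair rows by position, which is only
# valid if every file happens to be in the same (unstated) order:
merged = pd.concat([wsas, eq5d], axis=1)

# Cohort-level summaries are still fine, since they ignore row order:
print(wsas.mean(numeric_only=True), eq5d.mean(numeric_only=True))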

Maybe I should have mentioned that I used to work with large patient datasets for an international cohort study.
 
Doesn't every patient have a number assigned? If so, why didn't QMUL provide it?
 
Sounds like QMUL was either careless or was actively trying to prevent an accurate analysis of individual patient outcomes.
 
So it's lacking an ID between datasheets, basically scrambling individual data points for anything but the cohort level?

If so, definitely 100% deliberate. And impossible to prove without the original, which they will be able to keep confidential. Clever. Devious and immoral, but clever. It's a common fuck-your-legal-summons tactic: hand over unmarked boxes of scrambled evidence. It takes enormous labor to patch it all back together. It takes truly shameless people to do that, though.
 
And impossible to prove without the original

Well, actually it's not impossible. That's why I asked for checks earlier in this thread. The original data they provided in 2016 matched the summary data in the published studies.

If you assume the data are in ID order for both datasets (a reasonable assumption, but certainly not a given) and run checks against the summary data for the few secondary variables for which these were published, you don't get anything like what they published. You get a random selection of numbers around (close to) the cohort mean.

I did just that for the WSAS 52-week results, which show clear differences between APT/SMC and CBT/GET in White et al., 2011, but those differences vanish in the set they have provided (because the groups are all mixed up).
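
In pandas terms, the check looks something like this (filenames and column names are hypothetical):
Code:
import pandas as pd

foia_2016 = pd.read_csv("foia_2016.csv")   # original release, has "group"
wsas_new = pd.read_csv("wsas_52wk.csv")    # new release, no IDs or groups

# Pair rows by position - only valid if the ID-order assumption holds.
wsas_new["group"] = foia_2016["group"].values

# Compare per-arm means with the published 52-week figures.
print(wsas_new.groupby("group")["wsas_52wk"].mean())
# If the rows are scrambled, every arm's mean collapses towards the
# overall cohort mean instead of matching White et al. (2011).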

That's why it's dangerous to make assumptions about the data - and dangerous for *everyone* - because someone could (wrongly) accuse them of publishing false data on the basis of a combined dataset. And they know that. That may be why they did it, I don't know.

But it also shows a shocking disregard for the data, the patients who provided it, and the person/people requesting it. It reveals they really don't care about curating this data and looking after it. They should be doing everything they can to make sure that it is treated with respect by everyone, even if they don't like them - and that means making sure that it is provided and used with full transparency and instruction. That is what the Act is for.

After Alem's original request was won in August 2016, QMUL should have realised that it made no more sense to keep refusing these requests, and that the whole dataset should be prepared for release. They have absolutely nothing to hide in this data. All their errors are in their methods. And now they can add bad errors of judgment to that as well.
 
Is this verging on contempt of the FOI legislation and judgment, and therefore worth raising formally, depending on what processes are available?
Judges tend to be picky about the spirit of the law when it comes to following their judgments, and with good reason.

If it can be shown plainly, it could be a good way to get a favorable judgment that simply forces them to release everything they hold, as is, under independent supervision to make sure they don't try that again. It's bad faith piled upon bad faith.

They seem to forget that, despite their perception, we're not helpless idiots, and some of us have solid professional experience to show when they're full of crap.
 