Response received from Sarah Tyson.
How will this be used? Is a protocol for the analysis/stats available?
No, the stats plan has not been published; this isn't a trial. But we are using response rates, floor and ceiling effects, Cronbach's alpha and inter-item correlation to assess construct validity, including uni-dimensionality and redundancy; thematic content analysis for content validity; Spearman rank correlations for criterion-related validity; and analysis of variance for discriminant validity. We have not assessed test-retest reliability here, as the CAN-ME is not intended to be used repeatedly. In the other assessments where we have assessed this, we have used independent t-tests and analysis of variance; the mean and standard deviations of the differences to calculate the minimum detectable difference; and analysis of variance to assess sensitivity to change.
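For readers unfamiliar with these measures, here is a minimal sketch of two of the statistics mentioned above: Cronbach's alpha from an item-response matrix, and a minimum detectable difference taken as 1.96 times the standard deviation of test-retest differences. These are standard textbook formulations; the exact computations the toolkit team used are not published, so treat this purely as an illustration.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix.

    alpha = k/(k-1) * (1 - sum(item variances) / variance of total score)
    """
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)        # per-item sample variance
    total_var = items.sum(axis=1).var(ddof=1)    # variance of summed scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def minimum_detectable_difference(test: np.ndarray, retest: np.ndarray) -> float:
    """One common formulation: MDD = 1.96 x SD of the test-retest differences.

    Assumes paired scores from the same respondents on two occasions.
    """
    return 1.96 * np.std(retest - test, ddof=1)

# Illustration: two perfectly correlated items give alpha = 1.0,
# and identical test-retest shifts give an MDD of 0.
scores = np.array([[1, 2], [2, 3], [3, 4], [4, 5]], dtype=float)
print(cronbach_alpha(scores))
print(minimum_detectable_difference(np.array([1., 2., 3.]),
                                    np.array([2., 3., 4.])))
```

Note that alpha only indexes internal consistency; as the discussion below argues, it says nothing about whether the right constructs were measured in the first place.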
None of the assessments in the toolkit are tied to the professional background of the person administering them. They would be used by a consultant/doctor as much as by any other professional.
If you have a look at my replies to other comments, you will see that we have taken great care to make the toolkit relevant and accessible to people who are more severely affected. This is reflected in the high proportion of participants who are more severely affected.
This assessment does not refer to PEM; there is a separate assessment for that. The paper on the development of the assessment will be submitted for publication shortly.
In the analysis, we will be looking at how people's needs differ at different levels of severity.
Really?
They would be used by a consultant/doctor as much as any other professional.
That doesn't sound like the type of healthcare anyone would want for any other illness. And she hardly advances the argument she has in mind when she says this - how exactly can these be used to help someone's health?
And no, really, no proper scientific medicine would be layering these kinds of stats tools on top of each other - particularly when the input data has been so poorly checked and regulated. This hasn't even operated like an experiment where they draw in x number of people meeting y criteria and ask them to do z.
They've used the MEA to distribute extremely long surveys that many won't be able to complete without detriment to their accuracy, full of ambiguously written questions whose intended use has been hidden (a massive informed consent issue). Running so many stats over them looks like a fishing expedition across an apparently endless load of features that has been neither verified nor checked against the list in the new guideline (to ensure they are 'on task').
So we know neither the symptoms/aspects being measured, nor the weighting of each, nor who the people they've recruited are. Which makes the quoting of means and standard deviations laughable - what do they think those mean? Deviation from the middle of what 'group', on what 'factor'? You have to be very sure of at least one of those (either the group, or the factor and how it operates) for that deviation to mean something, because it is a check of e.g. the representativeness or range of the dependent factor.
The problem is that even if we did finally get some decent, normal, clever independent physicians who saw a pwme and, without bias, just looked into this with an open mind,
then somehow, rather than this sort of thing reflecting on those who designed it - and them empathising with us over the kind of thing we've had to put up with and weather all these years, and hence agreeing that a complete change of the specialism 'in charge' is necessary...
somehow it ends up reflecting on us, as if we must have done something to deserve it, or as if we had any power in asking for this.
So it is important for our reputation as pwme, and for how we are ever to be seen and respected by other staff and scientists going forward, that the MEA reject this and say 'this isn't ours [pwme's] as an invention',
nor is our condition this complicated. It may not be the most straightforward illness, but if you take out having to pander to a political compromise and the needs of staff, you are at least just talking about the actual illness, without having to include some 'it might also be a delusion' aspect etc.
What we need is to filter out other people's wishes and reframings - driven by their wanting to keep their jobs, resolve their own cognitive dissonance, or avoid having to set history straight.
Then we can just say it is a condition of PEM/PESE (and deterioration if we overdo it), along with x symptoms that we know are pertinent. It therefore comes in different severities, which have vastly different debility patterns - but these link along a spectrum, because environment and deterioration mean individuals can move along that spectrum, just as with other illnesses where you go through stages and get worse.
We need something that keeps the nuance and the parts that matter by severity, but strips out the political-compromise material. The worst of it is that they try to hide this and make us 'mysterious' by their workings: even if it creates more work, we can't strip out the old-school 'but it might be how they think' additions, because they've embedded them into all the other measures by using this weird method and strange 'psychometrics' instead of basic, straightforward data.
The most basic right we should all have, after everything we've been through as a group, is access to our own data - to be able to check its veracity, know its source, and see whether it has been used in the way expected when each answer was given, especially given how many errors and updates to the concepts there have been in the past. I can't see how this is useful or interrogable.