
Rating scales institutionalise a network of logical errors and conceptual problems in research practices, 2022, Uher

Discussion in 'Research methodology news and research' started by SNT Gatchaman, Dec 29, 2022.

  1. SNT Gatchaman

    SNT Gatchaman Senior Member (Voting Rights)

    Messages:
    4,445
    Location:
    Aotearoa New Zealand
    Rating scales institutionalise a network of logical errors and conceptual problems in research practices: A rigorous analysis showing ways to tackle psychology’s crises
    Jana Uher

    This article explores in-depth the metatheoretical and methodological foundations on which rating scales—by their very conception, design and application—are built and traces their historical origins. It brings together independent lines of critique from different scholars and disciplines to map out the problem landscape, which centres on the failed distinction between psychology’s study phenomena (e.g., experiences, everyday constructs) and the means of their exploration (e.g., terms, data, scientific constructs)—psychologists’ cardinal error.

    Rigorous analyses reveal a dense network of 12 complexes of problematic concepts, misconceived assumptions and fallacies that support each other, making them difficult to identify and recognise for those (unwittingly) relying on them (e.g., various forms of reductionism, logical errors of operationalism, constructification, naïve use of language, quantificationism, statisticism, result-based data generation, misconceived nomotheticism).

    Through the popularity of rating scales for efficient quantitative data generation, uncritically interpreted as psychological measurement, these problems have become institutionalised in a wide range of research practices and perpetuate psychology’s crises (e.g., replication, confidence, validation, generalizability).

    The article provides an in-depth understanding that is needed to get to the root of these problems, which preclude not just measurement but also the scientific exploration of psychology’s study phenomena and thus its development as a science. From each of the 12 problem complexes, specific theoretical concepts, methodologies and methods are derived, as well as key directions of development.

    The analyses—based on three central axioms for transdisciplinary research on individuals, (1) complexity, (2) complementarity and (3) anthropogenicity—highlight that psychologists must (further) develop an explicit metatheory and unambiguous terminology as well as concepts and theories that conceive individuals as living beings, open self-organising systems with complementary phenomena and dynamic interrelations across their multi-layered systemic contexts—thus, theories not simply of elemental properties and structures but of processes, relations, dynamicity, subjectivity, emergence, catalysis and transformation.

    Philosophical and theoretical foundations of approaches suited for exploring these phenomena must be developed together with methods of data generation and methods of data analysis that are appropriately adapted to the peculiarities of psychologists’ study phenomena (e.g., intra-individual variation, momentariness, contextuality). Psychology can profit greatly from its unique position at the intersection of many other disciplines and can learn from their advancements to develop research practices that are suited to tackle its crises holistically.

    Link | PDF
     
  2. SNT Gatchaman

    SNT Gatchaman Senior Member (Voting Rights)

    Messages:
    4,445
    Location:
    Aotearoa New Zealand
    It's a long paper.

     
  3. Jonathan Edwards

    Jonathan Edwards Senior Member (Voting Rights)

    Messages:
    13,504
    Location:
    London, UK
    We're up shit creek so let's develop a meta theory of paddles - could pass the time.
     
    CRG, Simbindi, Hutan and 6 others like this.
  4. Trish

    Trish Moderator Staff Member

    Messages:
    52,310
    Location:
    UK
    I've only read the abstract and the bit quoted in the second post.

    My solution doesn't require meta-anything.

    If something's not working, scrap it.

    Psychology has spent 50+ years trying to pretend to be a science with numerical data that is amenable to statistical analysis, and drawing all sorts of erroneous conclusions that affect real people's lives, as we've experienced to our cost.

    It wasn't too bad back in the days before computers - when realistically all you could do was ask a few questions, draw some graphs by hand and do a simple statistical test on a single set of numbers. At least everyone could see what was being done, and mostly probably conclude that human thoughts and actions are too complicated to be turned into single numbers.

    Now, with the huge expansion of psychology as a 'science' where anyone can make up a set of questions and allocate scores to the answers that suit their prejudices, 'validate' them against someone else's set of questions, and then use social media to collect vast quantities of data from willing participants, stuff it in a stats package that spits out hundreds of results they don't understand, anyone can pretend to be doing science.

    I say - junk all psych questionnaires.
     
    CRG, Michelle, Creekside and 11 others like this.
  5. Hutan

    Hutan Moderator Staff Member

    Messages:
    26,921
    Location:
    Aotearoa New Zealand
    The author seems to be thinking along the same lines as us.

    I have no idea what at least one of those problems is, but I'm glad that the author has identified them and has bothered to write a paper suggesting that the field of psychology has major problems.


    The language might be a bit waffly, but they seem to be recognising the problems we have talked about here - things like how a 'do you still enjoy the activities you used to do?' question doesn't work to diagnose depression when someone has a disabling illness. They seem to be calling for sensible context-driven thinking.


    I'm not sure that I can muster the enthusiasm to read the paper, but the philosophical and theoretical foundations the author talks about might include finding objective outcomes that have meaning for people when studying psychological interventions. I think this paper could be a useful reference when talking about the problems of specific BPS papers. Maybe the '12 complexes of problematic concepts, misconceived assumptions and fallacies' could serve as a checklist to tick off.
     
  6. SNT Gatchaman

    SNT Gatchaman Senior Member (Voting Rights)

    Messages:
    4,445
    Location:
    Aotearoa New Zealand
    The paper itself is jargon-heavy (natch) but discusses the failings within a framework of 12 concepts.

    The 12 sections are —

    1. Psychologists’ own role in their research: Unintended influences
    2. Beliefs in researchers’ objectivity: Illusions of scholarly distance
    3. Mistaken dualistic views: Individuals as closed systems
    4. Lack of definition and theoretical distinction of study phenomena: Conceptual conflations and intersubjective confusions
    5. Reductionism: Category mistakes, atomistic fallacy and decontextualisation
    6. Operationalism: Logical errors and impeded theory development
    7. Constructification: Studying constructs without also studying their intended referents
    8. Naïve use of language-based methods: Reification of abstractions and studying merely linguistic propositions
    9. Variable-based psychology and data-driven approaches: Overlooking the semiotic nature of ‘data’
    10. Quantificationism: Numeralisation instead of measurement
    11. Statisticism: Result-based data generation, methodomorphism and pragmatic quantification instead of measurement
    12. Nomotheticism: Sociological/ergodic fallacy and primacy of sample-based over case-by-case based nomothetic approaches
     
  7. Jonathan Edwards

    Jonathan Edwards Senior Member (Voting Rights)

    Messages:
    13,504
    Location:
    London, UK
    [image: waffle]
    There's one word for all twelve of those. It goes with maple syrup.
     
  8. Trish

    Trish Moderator Staff Member

    Messages:
    52,310
    Location:
    UK
    I think hidden in all that jargon there is a core message that is sound, namely that psychology based on questionnaire data has gone horribly wrong. Maybe the author thinks the only way to reach the psychologists trapped in a mire of obfuscation, data mishandling and pretentious waffle is to waffle back even harder at them.
     
  9. FMMM1

    FMMM1 Senior Member (Voting Rights)

    Messages:
    2,645
    Trish got there first --- why don't they just look at Brian Hughes' blogs and then design experiments that are (relatively) sound --- I'm not noted for brevity, but they could have encapsulated it in a paragraph, or even asked Brian for permission to use that cartoon ---
    "apart from the lack of blinding, the subjective outcome indicators the ------ it's not that bad a study".
     
  10. Hutan

    Hutan Moderator Staff Member

    Messages:
    26,921
    Location:
    Aotearoa New Zealand
    Thanks for venturing in so others didn't have to.

    :rofl: I think that is saying that Error 12 is that generalisations are made at population level and then it is assumed that such generalisations always and fully apply at the individual level.
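
    A toy simulation (my own construction, nothing from the paper) shows how that fallacy can mislead - a relationship that holds across a population can be reversed within each individual:

```python
# Hypothetical illustration of the ergodic fallacy: between-person and
# within-person relationships between activity and fatigue point in
# opposite directions in the same dataset.
import numpy as np

rng = np.random.default_rng(0)
n_people, n_days = 200, 50

# Between people: those who are generally more active report less fatigue.
activity_trait = rng.normal(50, 10, n_people)
fatigue_trait = 100 - activity_trait + rng.normal(0, 3, n_people)

# Within a person: an unusually active day pushes fatigue up.
day_activity = activity_trait[:, None] + rng.normal(0, 5, (n_people, n_days))
day_fatigue = (fatigue_trait[:, None]
               + 0.8 * (day_activity - activity_trait[:, None])
               + rng.normal(0, 2, (n_people, n_days)))

between = np.corrcoef(activity_trait, fatigue_trait)[0, 1]
within = np.mean([np.corrcoef(day_activity[i], day_fatigue[i])[0, 1]
                  for i in range(n_people)])
print(f"between-person correlation: {between:+.2f}")        # roughly -0.96
print(f"average within-person correlation: {within:+.2f}")  # roughly +0.89
```

    A sample-level finding ('more active people are less fatigued') would be exactly the wrong advice for any individual here.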


    Having skimmed some of the paper, I now must go and like the post with the waffle picture. I think the author might have been more effective if they had given some examples of each of the problems they wanted to talk about.
     
    Last edited: Dec 30, 2022
  11. rvallee

    rvallee Senior Member (Voting Rights)

    Messages:
    12,458
    Location:
    Canada
    Finally some people willing to say it. The reason psychology has adopted this is that it gives researchers the answers they want; it's that simple. And for that reason alone it's invalid. If you're not measuring what you think you're measuring - still more if you're not actually measuring anything and are instead using ratings - then you aren't doing science. Period. There is no scientific hack that can put science back into fake numbers.

    It would have been bad enough on its own, but the application of this to medicine has been even more catastrophic, not just for the massive harm done, but for stagnating the foundations of medicine. The errors are all conceptual: they carry the fatal flaw of being potentially completely invalid, which makes everything downstream - and when applied to concepts, that is literally everything - completely invalid too.

    The error here is not one flawed concept, it's that the entire discipline is OK with conceptual errors, and in fact escalates commitment to them as more of its work inevitably has to be considered invalid. Point #1 is basically most of it: the problem of wanting something to be true more than actually making sure it is.

    The fact that this is the present and future of medicine makes reform urgent. It will only get worse, reinforcing the politics and bickering that have stifled medical progress so completely that the field ended up betting its entire future on flawed magical thinking.

    Numeralisation is a good description of this approach of doing fuzzy math on feelings. It creates the illusion that there are valid numbers to fiddle with, when it actually makes the whole thing even less objective. Having read this material for years - the papers, the studies, the questionnaires - I find that none of what they claim to represent means anything to me; they are useless.

    But extending this means invalidating most of "evidence-based medicine", and certainly everything BPS. I suspect medicine will be far more zealous in this regard than psychology, and will probably keep this error going even after psychology finally grows up about it. So much embarrassment. All to the good - the best outcome for patients - but horrible for the giant egos who made catastrophic errors of judgment because everyone else was making them, never thinking about the consequences.
     
    Creekside, Trish, Hutan and 4 others like this.
  12. NelliePledge

    NelliePledge Moderator Staff Member

    Messages:
    13,277
    Location:
    UK West Midlands
    Bit of light reading for T Chalder et al.
     
  13. SNT Gatchaman

    SNT Gatchaman Senior Member (Voting Rights)

    Messages:
    4,445
    Location:
    Aotearoa New Zealand
    Title might need to be changed

    RASCAL ERMINES - RAting SCAles institutionalise a network of Logical ERrors and conceptual probleMs IN rEsearch practiceS
     
    Woolie, Michelle, Trish and 5 others like this.
  14. Woolie

    Woolie Senior Member

    Messages:
    2,918
    Thanks, @SNT Gatchaman, I actually read the paper! Some interesting thoughts if you can wade through the jargon - not entirely new, but there are some nice references to other stuff in there that I've made a note of.

    The waffle is the philosophy type, not the psychology type. I think the author may have a philosophy background (or at the very least has ambitions to be a philosopher).

    Another reason the writing style put me off is that it made use of bald, unsubstantiated claims without supporting argument or evidence, e.g. "Psychology’s core constructs (e.g., mind, behaviour, actions) are poorly defined; common definitions are discordant, ambiguous, overlapping and circular". Is this claim true just because the author says it is? Or because it's a known fact and everyone agrees on it? Or because bald negative statements don't require any further justification - they are somehow true by definition?

    On the plus side, there were nice reminders that self-report ratings are likely to be influenced by a number of spurious factors:

    * Interpreting the item/question. Some previous researchers have emphasised that responding to an item isn't done in a vacuum, but involves building a model of what the researcher might mean by the item/question. The respondent must also decide on the meaning of each of the key words (e.g. what do they mean by "worry"?)

    * Interpreting the rating scale labels. Deciding which actual rating to choose involves interpreting what the terms mean relative to some internal standard the person has. Some descriptors - such as often/rarely - involve comparing event frequency over a time period (and can be subject to recall biases, see below). Some descriptors involve comparing oneself to some model of what the person thinks other people do or feel (e.g. rating the severity of fatigue). Although it's not mentioned in the paper, these types of ratings can be vulnerable to recalibration bias (when you take part in a treatment that encourages you to see your pain as more common and widespread than you previously thought).

    * Timeframe and current context. On scales that ask people to describe events that occurred over the past few minutes (e.g. pain ratings), we might see strong context effects - events occurring just prior to or during that period might strongly influence ratings. At the other end of the spectrum, scales that ask about the last few months require accurate recall of both confirming and disconfirming instances and an evaluation of their relative frequency. Most cognitive psychologists agree that memory doesn't really work in this way - when we look back over a period, we remember only salient or unusual events. Plus our current situation can massively bias our recall of the past - if a person is currently in pain they may recall many previous similar experiences, but if a person is feeling well at the time of the survey, they might recall far fewer negative experiences.

    This doesn't even touch on the bigger issues of expectation and the way interventions can shape rater behaviour.

    I thought a lot of the rest of the paper was not helpful in clarifying the real issues. The "cardinal error" thing just seemed to be phrased at a level that was too general for it to be held to account for any of its claims. While "personality" is an everyday concept, and so is "intelligence", many of the things we study in psychology are not at all related to everyday folk-psychological concepts, and are not attempting to address those kinds of concepts in any way.
     
    CRG, Lilas, Pustekuchen and 6 others like this.
  15. Woolie

    Woolie Senior Member

    Messages:
    2,918
    PS On re-reading, this point seemed interesting (I've edited out some of the jargon):
    After wading through the jargon, I came out with the point that some decent proportion of the variation we measure in rating scales might not be genuine variation in the experiences or behaviours of the person completing the scale; it might be variation in the way the person interprets the item.
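
    As a rough sketch of that (my own numbers, not the paper's), suppose interpretation differences vary about as much across respondents as the experience itself:

```python
# Hypothetical decomposition of rating-scale variance: true experience,
# item interpretation and momentary noise all end up in the same score.
import numpy as np

rng = np.random.default_rng(1)
n = 1000

true_experience = rng.normal(0, 1.0, n)  # what the scale is meant to capture
interpretation = rng.normal(0, 1.0, n)   # how each respondent reads the item
momentary_noise = rng.normal(0, 0.5, n)  # mood, context, response style

rating = true_experience + interpretation + momentary_noise

# The observed scores can't tell these apart; estimate the genuine share.
share = np.var(true_experience) / np.var(rating)
print(f"share of rating variance from the target phenomenon: {share:.0%}")
# roughly 44% under these assumptions - most of the 'signal' isn't signal.
```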

    This point might be picking up on something interesting too:
    So for example (and this paper could do with a whole lot of examples), the popular five-factor model of personality might have arisen, not because there are five big factors that account for much of the variation in personality, but because the rating questions/items we have thought to ask, and that meet the statistical criteria we require for inclusion, happen to fall into five broad categories.

    Another interesting issue that isn't talked about here is that the choice of questions is not theoretically neutral - we pick questions that we think represent aspects of personality that are real, then we pare them down to those that generate good statistical curves, and then we analyse their intercorrelations. But the factors that emerge will depend heavily on the assumptions we started with.
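
    Here is a toy demonstration of that point (my construction, not the author's): write items around only two themes, and the analysis dutifully 'discovers' two factors, because that is what the item pool contains:

```python
# Hypothetical item pool built from exactly two underlying themes; the
# factor structure that 'emerges' simply mirrors how the items were written.
import numpy as np

rng = np.random.default_rng(2)
n = 2000

trait_a = rng.normal(size=n)  # the only two themes we thought to ask about
trait_b = rng.normal(size=n)

def items(trait, k):
    """k questionnaire items, each a noisy reading of one theme."""
    return trait[:, None] + rng.normal(0, 0.8, (n, k))

data = np.hstack([items(trait_a, 6), items(trait_b, 6)])

# Kaiser criterion: count eigenvalues of the correlation matrix above 1.
eigvals = np.linalg.eigvalsh(np.corrcoef(data, rowvar=False))[::-1]
print("largest eigenvalues:", np.round(eigvals[:4], 2))
print("factors 'discovered':", int((eigvals > 1).sum()))  # 2, by construction
```

    Add items around a third theme and a third factor appears; the data never get the chance to object to our starting assumptions.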
     
    CRG, Hutan, Sean and 5 others like this.
