How to read a paper involving artificial intelligence (AI), 2025, Dijkstra et al

Discussion in 'Other health news and research' started by hotblack, Apr 21, 2025.

  1. hotblack

    hotblack Senior Member (Voting Rights)

    Messages:
    636
    Location:
    UK
    How to read a paper involving artificial intelligence (AI)

    Paul Dijkstra, Trisha Greenhalgh, Yosra Magdi Mekki, Jessica Morley

    Abstract
    This paper guides readers through the critical appraisal of a paper that includes the use of artificial intelligence (AI) in clinical settings for healthcare delivery. A brief introduction to the different types of AI used in healthcare is given, along with some ethical principles to guide the introduction of AI systems into healthcare. Existing publication guidelines for AI studies are highlighted. Ten preliminary questions to ask about a paper describing an AI based decision support algorithm are suggested.

    Link (BMJ Medicine)
    https://doi.org/10.1136/bmjmed-2025-001394
     
    alktipping and Hutan like this.
  2. Utsikt

    Utsikt Senior Member (Voting Rights)

    Messages:
    2,522
    Location:
    Norway
    I almost stopped reading after their definition of AGI.

    They also talk about how training data might be biased, but bias in AI refers to the difference between the model and the data, and not the difference between the data and the real (or ideal) world.

    There is also no direct mention of Responsible AI (RAI) or Explainable AI (XAI).
     
  3. Hutan

    Hutan Moderator Staff Member

    Messages:
    32,222
    Location:
    Aotearoa New Zealand
    Box 4
    World Health Organization's (WHO) six ethical principles for use of artificial intelligence in healthcare
    Summarised from WHO guidance on ethics and governance of AI in healthcare [25]

    1. Protecting human autonomy. Humans should remain in control of medical decisions, and people should understand how artificial intelligence (AI) is used in their care (including how their privacy and confidentiality are protected).

    2. Promoting human wellbeing and safety, and the public interest. AI should not harm people. The designers of AI technology should comply with regulatory requirements for safety, accuracy, and efficacy.

    3. Ensuring transparency, explainability, and intelligibility. AI technology should be understandable to developers, healthcare professionals, patients, users, and regulators "according to the capacity of those to whom they are explained".

    4. Fostering responsibility and accountability. Patients and clinicians should evaluate the development and deployment of AI technologies. This approach should include mechanisms for questioning and redress for individuals and groups that are adversely affected by decisions based on algorithms.

    5. Ensuring inclusiveness and equity. AI for health should be designed "to encourage the widest possible appropriate, equitable use and access, irrespective of age, sex, gender, income, race, ethnic group, sexual orientation, ability, or other characteristics protected under human rights codes". AI technologies should not encode biases to the disadvantage of identifiable groups (especially already minoritised groups; ie, fairness, box 3).

    6. Promoting AI that is responsive and sustainable. All AI role players (designers, developers, and users) should "continuously, systematically, and transparently" assess AI applications during actual use. Two aspects are important for sustainable AI systems: firstly, their environmental consequences should be minimal; and secondly, their effect on the workplace, including workplace disruptions, training of healthcare workers, and potential job losses should be dealt with by governments and companies.
     
  4. Creekside

    Creekside Senior Member (Voting Rights)

    Messages:
    1,492
    Principle 3 seems kind of hard to follow, since we don't know how an AI makes the decisions it does. We don't know how humans do this either. I expect human doctors occasionally base decisions on information they remember incorrectly, or on studies whose value they misjudged. I hope they judge AIs against human levels of accuracy, rather than against some standard that humans can't reach either.

    I also hope they throw out principle 1 once AIs reach the point of making fewer errors, and causing less patient harm, than humans do.
     
    hotblack and Peter Trewhitt like this.
  5. Utsikt

    Utsikt Senior Member (Voting Rights)

    Messages:
    2,522
    Location:
    Norway
    Principle 3 is why we should not roll out AI on a massive scale like we do today.

    Principle 1 can never go; we can’t abdicate responsibility for our own lives.
     
    Peter Trewhitt likes this.
  6. Adrian

    Adrian Administrator Staff Member

    Messages:
    6,934
    Location:
    UK
    I've always read bias in AI as being between the model and the real world (the data is just used to train the model). There is work on fairness testing which, if I remember correctly, groups data according to metadata and checks that the overall results are the same across different splits of the data, so it can be looked at in terms of how well the data fits the real world.

    I'm not sure how this applies to LLM-type models and data, where it is harder to assess results (some use a larger model to assess a smaller model's outputs), and there is a stochastic element so the results differ (quite a lot, depending on temperature). I also find with LLMs that the result is very dependent on the prompt (with or without RAG-type context).
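    Roughly, the kind of fairness check I have in mind looks something like this. It's only a toy sketch (the column names and the "sex" attribute are made up for illustration, and a real audit would use several metrics), but it shows the idea of splitting results by metadata and comparing the groups:

    ```python
    # Toy sketch of a group fairness check: split predictions by a metadata
    # attribute and compare error rates across the groups. All names and
    # numbers here are illustrative, not taken from the paper.
    import pandas as pd

    def error_rates_by_group(df: pd.DataFrame, group_col: str) -> pd.Series:
        """Fraction of incorrect predictions within each metadata group."""
        return (df["y_true"] != df["y_pred"]).groupby(df[group_col]).mean()

    # A model that misses more often for one group than for the other.
    df = pd.DataFrame({
        "sex":    ["F", "F", "F", "F", "M", "M", "M", "M"],
        "y_true": [1,   0,   1,   1,   1,   0,   0,   1],
        "y_pred": [0,   0,   0,   1,   1,   0,   0,   1],
    })

    rates = error_rates_by_group(df, "sex")
    print(rates)                                   # F: 0.50, M: 0.00
    print("max gap:", rates.max() - rates.min())   # a large gap flags a problem
    ```

    Of course this only tells you whether the model treats the groups differently relative to the labels it was given; if the labels or the sampling are themselves skewed against a group, a check like this can still come back looking fine.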
     
    Peter Trewhitt likes this.
  7. Utsikt

    Utsikt Senior Member (Voting Rights)

    Messages:
    2,522
    Location:
    Norway
    The issue with defining bias in AI as a deviation from «the real world» is that it doesn’t tell you anything about the source of the bias. It might not even have anything to do with the model at all. It’s like saying that a rifle has a bias if you missed the target, even though it might have happened for a number of reasons completely unrelated to the rifle.

    You’ll also have a very hard time defining «the real world».

    There are many sources of bias and it has a different meaning in different fields, but as far as I know, in AI, bias refers to the learning bias: the difference between the training data and the model. All of the other types of bias are not directly related to the inner workings of the model, so they are not «AI bias» per se. They are definitely relevant, though!

    But the point I did a terrible job of making is that an introduction to AI really should mention the types of bias that are more specific to AI, and not just mention bias in relation to the data that is used for training or input. It would be a bit like warning people that a lathe can electrocute you if it isn’t grounded properly, while never telling them that the massive spinning bit can rip the flesh off your bones.
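    To make the distinction concrete, here is a rough toy example (everything in it is made up; the point is only that the two kinds of bias are measured against different things):

    ```python
    # Toy illustration: "learning bias" is the systematic gap between a model
    # and its own training data, and it exists even if that data were a
    # perfect picture of the world. All numbers here are invented.
    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.uniform(-1, 1, 1000)
    y_train = x**2 + rng.normal(0, 0.05, x.size)   # training data: a curved relationship

    # A straight-line model is too simple for this data; the mismatch that
    # remains no matter how much data we collect is the learning (model) bias.
    slope, intercept = np.polyfit(x, y_train, 1)
    y_hat = slope * x + intercept
    print("learning bias (model vs its training data):",
          round(float(np.mean((y_hat - y_train) ** 2)), 3))

    # Data bias (training data vs the real world) is a separate question:
    # if x had only ever been sampled from one subgroup of patients, nothing
    # computed above would reveal it.
    ```

    The second kind of problem never shows up in any number you can compute from the model and its training data alone, which is why an introduction really ought to name both.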
     
    Peter Trewhitt likes this.
  8. rvallee

    rvallee Senior Member (Voting Rights)

    Messages:
    14,568
    Location:
    Canada
    Bit funny how literally none of those things apply to traditional health care when it comes to us. The idea is basically that AI should be so far above humans in terms of alignment with human values that it becomes unrecognizable. Health care is not actually designed with those values in mind. It sometimes manages to be, generically or by accident, and most professionals are definitely trying their best, but overall everything falls short of it.

    And it may actually be a more realistic possibility that machines could be better aligned with values that humans love to espouse but rarely ever actually follow. The idea here, though, is that humans already do that and that it will be very hard for machines to do, when in reality the main reason it will be hard for machines is the very human nature that will train them. The problem isn't with machines, it's with the worst humans.

    Although I guess #1 is already a thing, humans are in control of everything. Unfortunately those humans are not us, we have no control over anything that happens to us. Other people, strangers, do, with zero meaningful accountability. And it's truly awful. So awful in fact that with the right prompts, most LLMs already do better at it. Really, they do. Because it's easy, and simple. But human nature gets in the way.

    I've accepted the idea that natural humans will never live up to human ideals. We are too imperfect, too weird. Either humanity evolves beyond biology, or we'll destroy ourselves anyway. We evolved as animals, and we remain so. Barbarians in clothes who can sometimes count accurately and do other things.
     
    Creekside, Hutan and Peter Trewhitt like this.
  9. Adrian

    Adrian Administrator Staff Member

    Messages:
    6,934
    Location:
    UK
    I agree - the paper didn't look like a good paper to me.
    It should also talk about ways to avoid and test for bias.
     
    Hutan, Utsikt and Peter Trewhitt like this.
