Achievements of Artificial Intelligence (AI)

Discussion in 'Other health news and research' started by ME/CFS Skeptic, Dec 16, 2023.

  1. ME/CFS Skeptic

    ME/CFS Skeptic Senior Member (Voting Rights)

    Messages:
    4,337
    Location:
    Belgium
    I thought it might be useful to create a thread to organize news on what Artificial Intelligence is able to achieve. Its applications might be useful for ME/CFS advocacy and research.
     
  2. ME/CFS Skeptic

    ME/CFS Skeptic Senior Member (Voting Rights)

    Messages:
    4,337
    Location:
    Belgium
    Noticed this recent Nature article where a large language model was able to help solve, or improve the best known solutions to, math problems.

    Mathematical discoveries from program search with large language models
    Abstract
    Large Language Models (LLMs) have demonstrated tremendous capabilities in solving complex tasks, from quantitative reasoning to understanding natural language. However, LLMs sometimes suffer from confabulations (or hallucinations) which can result in them making plausible but incorrect statements [1,2]. This hinders the use of current large models in scientific discovery. Here we introduce FunSearch (short for searching in the function space), an evolutionary procedure based on pairing a pre-trained LLM with a systematic evaluator. We demonstrate the effectiveness of this approach to surpass the best known results in important problems, pushing the boundary of existing LLM-based approaches [3]. Applying FunSearch to a central problem in extremal combinatorics — the cap set problem — we discover new constructions of large cap sets going beyond the best known ones, both in finite dimensional and asymptotic cases. This represents the first discoveries made for established open problems using LLMs. We showcase the generality of FunSearch by applying it to an algorithmic problem, online bin packing, finding new heuristics that improve upon widely used baselines. In contrast to most computer search approaches, FunSearch searches for programs that describe how to solve a problem, rather than what the solution is. Beyond being an effective and scalable strategy, discovered programs tend to be more interpretable than raw solutions, enabling feedback loops between domain experts and FunSearch, and the deployment of such programs in real-world applications.

    Link: https://www.nature.com/articles/s41586-023-06924-6
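
    For a rough feel of the mechanism the abstract describes (an LLM proposes variations of candidate programs, a systematic evaluator scores them, and the best candidates are evolved further), here is a toy sketch. The llm_propose and evaluate stubs are hypothetical stand-ins, not the paper's actual code:

    ```python
    # Toy sketch of an evolutionary program search paired with an evaluator.
    # Assumptions: llm_propose and evaluate are placeholders; a real system
    # would call an LLM and run each candidate program on the target problem.
    import random

    def llm_propose(parent: str) -> str:
        """Stand-in for an LLM call that rewrites a candidate program."""
        return parent + f"  # variant {random.randint(0, 9999)}"

    def evaluate(program: str) -> float:
        """Stand-in for the systematic evaluator: run the candidate on the
        target problem (cap sets, bin packing, ...) and return a score."""
        return random.random()

    def search(seed: str, rounds: int = 200, pool_size: int = 10):
        pool = [(evaluate(seed), seed)]
        for _ in range(rounds):
            # Pick a strong parent: the best of a small random sample.
            _, parent = max(random.sample(pool, k=min(3, len(pool))))
            # Ask the "LLM" for a variation, score it, keep the best pool.
            child = llm_propose(parent)
            pool.append((evaluate(child), child))
            pool = sorted(pool, reverse=True)[:pool_size]
        return pool[0]

    best_score, best_program = search("def heuristic(x):\n    return x")
    print(best_score)
    ```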
     
  3. ME/CFS Skeptic

    ME/CFS Skeptic Senior Member (Voting Rights)

    Messages:
    4,337
    Location:
    Belgium
  4. Sly Saint

    Sly Saint Senior Member (Voting Rights)

    Messages:
    10,240
    Location:
    UK
    Terminally ill man 'cured' of immune illness by AI technology
     
    mariovitali, Fero, oldtimer and 2 others like this.
  5. Sean

    Sean Moderator Staff Member

    Messages:
    8,870
    Location:
    Australia
    This demonstrates the one thing I think AI is good for: big improvements in efficiency at dealing with large datasets.

    But I seriously doubt that it is going to deliver amazing new conceptual insights, of itself. Seen no evidence for that thus far.
     
  6. Utsikt

    Utsikt Senior Member (Voting Rights)

    Messages:
    2,557
    Location:
    Norway
    I don’t have the capacity to explain fully now, but the field of Explainable AI (XAI) would probably be the best bet when it comes to insight. In short, XAI tries to find out what the model has learned or how it made its choice.

    In the iMCD example, you would use different techniques to try and figure out why it landed at adalimumab.

    I don’t think we’re able to ask the model directly yet, because you usually just end up with a model that’s good at rationalising after the fact.

    The major challenge is that most of the models in use today are subsymbolic, meaning that everything in the model is represented as millions, billions or trillions of values (often between 0 and 1), so we don’t get anywhere by just looking under the hood, so to speak.

    IBM has an intro to XAI here: https://www.ibm.com/think/topics/explainable-ai
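
    As a toy illustration of one common XAI technique, here is permutation importance on synthetic data (this is only a sketch, not how any real clinical model was probed):

    ```python
    # Permutation importance: shuffle one feature at a time and measure how
    # much the model's score drops. A big drop means the model relies on
    # that feature. All data is synthetic; feature names are made up.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 4))     # pretend lab values
    y = (X[:, 0] > 0.5).astype(int)   # outcome secretly driven by feature 0

    model = RandomForestClassifier(random_state=0).fit(X, y)

    result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
    for name, imp in zip(["feat_0", "feat_1", "feat_2", "feat_3"],
                         result.importances_mean):
        print(f"{name}: {imp:.3f}")
    ```

    In the iMCD example, the features the model leans on most would be the first clues to why it landed at adalimumab.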
     
    Sean, Peter Trewhitt and butter. like this.
  7. butter.

    butter. Senior Member (Voting Rights)

    Messages:
    292
    Identifying and then successfully using drug X for disease Z is a conceptual insight.

    Much of what humans call 'reasoning' is 'guessing' based on necessarily flawed and incomplete data anyway?
     
    Last edited: Feb 7, 2025
    Sean and Peter Trewhitt like this.
  8. butter.

    butter. Senior Member (Voting Rights)

    Messages:
    292
    There are some quite serious people like Wolfram who think it's essentially impossible to explain and understand AIs' 'reasoning.'
     
    Sean and Peter Trewhitt like this.
  9. Utsikt

    Utsikt Senior Member (Voting Rights)

    Messages:
    2,557
    Location:
    Norway
    We don’t know if it’s impossible or not. So I tend to not take extreme opinions very seriously.

    Also, there are proxies for «reasoning» that could be meaningful even if we’re unable to get at the pure reasoning, if that’s even a thing.
     
  10. rvallee

    rvallee Senior Member (Voting Rights)

    Messages:
    14,572
    Location:
    Canada
    When we consider the level of reasoning coming out of medicine whenever they discuss issues like ours, which is sometimes below kindergarten level, I think people massively overestimate how important reasoning really is here, especially when most breakthroughs are the product of brute force with a chance factor.

    It takes some, but the level of reasoning that the average human, even the average scientist, is capable of is massively lower than what most people imagine is needed. A lot of it is just creative insight with a bit of an obsessive streak, and that's something AIs are already capable of. All it takes is being thorough and being able to accept when a hunch doesn't pan out, something humans massively struggle with and which has basically blocked all progress for us.

    Frankly, it takes very little reasoning to be comparable to what the average researcher can do, and that's before you factor in being able to work for millions of years of subjective-equivalent time. And AIs will be better at this by year's end anyway.
     
    alktipping and Peter Trewhitt like this.
  11. Utsikt

    Utsikt Senior Member (Voting Rights)

    Messages:
    2,557
    Location:
    Norway
    @rvallee the problem isn’t the level of reasoning, but being able to understand why an AI-model gives any given output.

    This is important to avoid e.g. discrimination.
     
    LJord and Peter Trewhitt like this.
  12. Jonathan Edwards

    Jonathan Edwards Senior Member (Voting Rights)

    Messages:
    17,070
    Location:
    London, UK
    This is a complete joke. I remember a departmental journal club around 1995, before adalimumab was licensed or even had that name, where we discussed a case of Castleman's and the known fact that it was associated with high IL-6 levels.

    Adalimumab is an IL-6 inhibitor. Those of us who were aware of its development immediately saw that it would be likely to become a treatment for Castleman's (if it turned out to work, that is).

    So knowing that adalimumab would be a good bet to treat this illness was just part of common knowledge in a clinical immunology department thirty years ago.
     
    Amw66, Wits_End, alktipping and 4 others like this.
  13. rvallee

    rvallee Senior Member (Voting Rights)

    Messages:
    14,572
    Location:
    Canada
    Sure. But that’s an odd standard, given that we don’t know what leads humans to make decisions, and we can rarely explain them either.

    All I'm saying is that the level of reasoning needed isn't as high as people think it is. Lots of non-reasoning people have stumbled onto something without ever being able to explain much about it. Human intelligence is a social construct that works at scale through brute force.

    The trope of the lone genius making smart insights through strong reasoning is the rare exception, one that almost never happens. A thousand scientists of average intelligence will, in most cases, completely outperform the smartest people to have ever existed, as long as they work systematically. But it won't be a thousand, or a million; it will be billions. It will work very well.
     
    Yann04, alktipping and Peter Trewhitt like this.
  14. Utsikt

    Utsikt Senior Member (Voting Rights)

    Messages:
    2,557
    Location:
    Norway
    I believe you’ve misunderstood what the reasoning is going to be used for.

    Example 1:
    When it comes to many use cases for AI, there are explicit laws and guidelines that require the decision-maker to use, or avoid using, certain information. One example is discrimination based on gender.

    Of course humans are not perfect, and of course some humans will unconsciously discriminate based on gender. But with AI, we can check the model before we use it. We can both run simulations to spot patterns of discrimination and look ‘under the hood’ to see whether it uses gender (or proxies for gender) as a factor in its recommendations.

    And because we’re (partially) able to check the models’ ‘reasoning’, we should. Otherwise, we would indirectly accept avoidable discrimination. We already do that with humans today, but AI has the power to make it infinitely worse if we don’t get ahead of the problem.
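
    As a toy sketch of what such a check could look like (the data, the column names and the ‘approved’ model output are all made up):

    ```python
    # Two simple pre-deployment checks on a model's outputs, using pandas.
    # "approved" stands in for the model's decision on a held-out test set;
    # everything here is fabricated for illustration.
    import pandas as pd

    df = pd.DataFrame({
        "gender":   ["F", "M"] * 500,
        "income":   [30_000 + (i * 97) % 70_000 for i in range(1000)],
        "approved": [(i % 3) != 0 for i in range(1000)],
    })

    # 1) Outcome parity: do the model's decision rates differ by gender?
    print(df.groupby("gender")["approved"].mean())

    # 2) Proxy check: does an input feature correlate with gender? A strong
    #    correlation lets the model use it as a stand-in for gender even if
    #    gender itself is withheld from the inputs.
    df["is_f"] = (df["gender"] == "F").astype(int)
    print(df[["income", "is_f"]].corr())
    ```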

    Example 2:
    In some use cases, AI models are already better than humans. Chess is a good example. If humans want to improve their ability to play chess based on AI, they can try to 1) mimic the AI, 2) deduce the AI’s methods from how it plays, or 3) check ‘under the hood’ and try to see if it has learned something we don’t already know.

    The third option is the exciting one.

    Because what happens if you do the same to an AI that detects breast cancer? Maybe it will point us to some unknown connection between seemingly unrelated concepts? That might be a starting point for an investigation into new mechanisms that we can screen for or target with treatments.

    Or you can look at the work of @mariovitali - discovering themes and findings years ahead of science. If we could ‘look under the hood’ of his models, we might be able to spot even more patterns. It might give us some clues about why some things are related, and not just that they are related.

    This is all speculative and there are no guarantees of finding any gold. But the chance of being able to stumble upon some nuggets is well worth the effort.
     
    mariovitali likes this.
