Achievements of Artificial Intelligence (AI)

Noticed this recent Nature article where a large language model was able to help solve or optimize solutions to math problems.

Mathematical discoveries from program search with large language models
Abstract
Large Language Models (LLMs) have demonstrated tremendous capabilities in solving complex tasks, from quantitative reasoning to understanding natural language. However, LLMs sometimes suffer from confabulations (or hallucinations) which can result in them making plausible but incorrect statements [1,2]. This hinders the use of current large models in scientific discovery. Here we introduce FunSearch (short for searching in the function space), an evolutionary procedure based on pairing a pre-trained LLM with a systematic evaluator. We demonstrate the effectiveness of this approach to surpass the best known results in important problems, pushing the boundary of existing LLM-based approaches [3]. Applying FunSearch to a central problem in extremal combinatorics — the cap set problem — we discover new constructions of large cap sets going beyond the best known ones, both in finite dimensional and asymptotic cases. This represents the first discoveries made for established open problems using LLMs. We showcase the generality of FunSearch by applying it to an algorithmic problem, online bin packing, finding new heuristics that improve upon widely used baselines. In contrast to most computer search approaches, FunSearch searches for programs that describe how to solve a problem, rather than what the solution is. Beyond being an effective and scalable strategy, discovered programs tend to be more interpretable than raw solutions, enabling feedback loops between domain experts and FunSearch, and the deployment of such programs in real-world applications.

Link: https://www.nature.com/articles/s41586-023-06924-6
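
The core loop is simpler than it sounds. Below is a minimal, hypothetical sketch of the evolve-and-evaluate idea the abstract describes: an LLM proposes candidate programs, a systematic evaluator scores them, and the best-scoring programs seed the next round. The function names and stub implementations are mine, not DeepMind's actual FunSearch code.

```python
# Hypothetical sketch of a FunSearch-style loop (not the published code):
# an LLM proposes candidate programs, a deterministic evaluator scores them,
# and the best-scoring programs seed the next round of proposals.
import random

def llm_propose(parents: list[str]) -> str:
    """Ask a code LLM for a new program, prompted with high-scoring parents.
    Stubbed here; in practice this is a call to a pre-trained model."""
    return random.choice(parents) + "\n# (mutated variant)"

def evaluate(program: str) -> float:
    """Systematic evaluator: run the program on the target problem and return
    a score (e.g. cap set size, or bin-packing efficiency). Stubbed with noise."""
    return random.random()

def fun_search(seed: str, rounds: int = 100, pool_size: int = 10) -> tuple[float, str]:
    pool = [(evaluate(seed), seed)]
    for _ in range(rounds):
        parents = [prog for _, prog in sorted(pool, reverse=True)[:pool_size]]
        candidate = llm_propose(parents)
        pool.append((evaluate(candidate), candidate))
    return max(pool)  # best (score, program) pair found

best_score, best_program = fun_search("def heuristic(item, bins):\n    return 0")
print(best_score)
```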
 
Terminally ill man 'cured' of immune illness by AI technology
A terminally ill patient about to enter a hospice is in remission after AI found him a life-saving drug.

Many people with serious illnesses, from cancer to heart failure, survive for years on various treatments before all available drugs stop working and they face a death sentence.

Artificial intelligence can help by rapidly searching through thousands of existing drugs for unexpected ones which might work.

The New England Journal of Medicine has now reported the case of a man with a rare immune condition whose life was saved by the technology.

The patient, who is remaining anonymous, has idiopathic multicentric Castleman's disease (iMCD), which has an especially poor survival rate and few treatment options.

But an AI tool searched through 4,000 existing medications, discovering that adalimumab - a monoclonal antibody used for conditions ranging from arthritis to Crohn's disease - could work.

Dr David Fajgenbaum, senior author of the published study on the breakthrough, from the University of Pennsylvania, said: 'The patient in this study was entering hospice care, but now he is almost two years into remission.

'This is remarkable not just for this patient and iMCD, but for the implications it has for the use of machine learning to find treatments for even more conditions.'
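
The article doesn't say how the tool ranks candidates, but the general shape of this kind of repurposing screen is easy to picture: score every approved drug against the target disease with a trained model and review the top of the list. The sketch below is purely illustrative; the scoring function and drug list are placeholders, not the Penn team's actual system.

```python
# Hypothetical repurposing screen: rank a library of existing drugs by a
# model's predicted relevance to one disease, then review the top hits.
# The scoring function is a stand-in, not the actual published model.
def predicted_relevance(drug: str, disease: str) -> float:
    """Placeholder for a trained model's score (higher = more promising)."""
    return (hash((drug, disease)) % 1000) / 1000.0

drug_library = ["adalimumab", "sirolimus", "rituximab", "tocilizumab"]  # ~4,000 in the real screen
disease = "idiopathic multicentric Castleman disease"

ranked = sorted(drug_library, key=lambda d: predicted_relevance(d, disease), reverse=True)
for drug in ranked:
    print(f"{drug}: {predicted_relevance(drug, disease):.3f}")
```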
 
This demonstrates the one thing I think AI is good for, big improvements in efficiency at dealing with large datasets.

But I seriously doubt that it is going to deliver amazing new conceptual insights, of itself. Seen no evidence for that thus far.
 
This demonstrates the one thing I think AI is good for, big improvements in efficiency at dealing with large datasets.

But I seriously doubt that it is going to deliver amazing new conceptual insights, of itself. Seen no evidence for that thus far.
I don’t have the capacity to explain fully now, but the field of Explainable AI (XAI) would probably be the best bet when it comes to insight. In short, XAI tries to find out what the model has learned or how it made its choice.

In the iMCD example, you would use different techniques to try and figure out why it landed at adalimumab.

I don’t think we’re able to ask the model directly yet, because you usually just end up with a model that’s good at rationalising after the fact.

The major challenge is that most of the models in use today are subsymbolic, meaning that everything in the model is represented as millions, billions or trillions of values (often between 0 and 1), so we don’t get anywhere by just looking under the hood, so to speak.

IBM has an intro to XAI here: https://www.ibm.com/think/topics/explainable-ai
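
To make that concrete, one of the simpler XAI techniques is permutation importance: shuffle each input feature in turn and see how much the model's performance drops. Here is a generic toy example using scikit-learn; the data and model are synthetic and have nothing to do with the actual iMCD pipeline.

```python
# Toy demo of one XAI technique, permutation importance: shuffle each input
# feature and measure how much the model's score drops. Synthetic data only;
# this is not the iMCD drug-matching pipeline.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=500, n_features=8, n_informative=3, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, importance in enumerate(result.importances_mean):
    print(f"feature_{i}: {importance:.3f}")  # larger = model leans on it more
```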
 
This demonstrates the one thing I think AI is good for, big improvements in efficiency at dealing with large datasets.

But I seriously doubt that it is going to deliver amazing new conceptual insights, of itself. Seen no evidence for that thus far.

Identifying and then successfully using drug X for disease Z is a conceptual insight.

Isn't much of what humans call 'reasoning' just 'guessing' based on necessarily flawed and incomplete data anyway?
 
I don’t have the capacity to explain fully now, but the field of Explainable AI (XAI) would probably be the best bet when it comes to insight. In short, XAI tries to find out what the model has learned or how it made its choice.

In the iMCD example, you would use different techniques to try and figure out why it landed at adalimumab.

I don’t think we’re able to ask the model directly yet, because you usually just end up with a model that’s good at rationalising after the fact.

The major challenge is that most of the models in use today are subsymbolic, meaning that everything in the model is represented as millions, billions or trillions of values (often between 0 and 1), so we don’t get anywhere by just looking under the hood, so to speak.

IBM has an intro to XAI here: https://www.ibm.com/think/topics/explainable-ai

There are some quite serious people like Wolfram who think it's essentially impossible to explain and understand AIs' 'reasoning.'
 
There are some quite serious people like Wolfram who think it's essentially impossible to explain and understand AIs' 'reasoning.'
We don’t know if it’s impossible or not. So I tend to not take extreme opinions very seriously.

Also, there are proxies for «reasoning» that could be meaningful even if we’re unable to get at the pure reasoning, if that’s even a thing.
 
When we consider the level of reasoning coming out of medicine whenever it discusses conditions like ours, which is sometimes below kindergarten level, I think people massively overestimate how important reasoning really is here, especially when most breakthroughs are the product of brute force combined with chance.

It takes some reasoning, but the level that the average human, even the average scientist, is capable of is massively lower than what most people imagine is needed. A lot of it is just creative insight with a bit of an obsessive streak, and that's something AIs are already capable of. All it takes is being thorough and being able to accept when a hunch doesn't pan out, something humans massively struggle with and that has basically blocked all progress for us.

Frankly, it takes very little reasoning to be comparable to what the average researcher can do, and that's before you factor in being able to work for millions of years of subjective-equivalent time. And AIs will be better at this by year's end anyway.
 
Terminally ill man 'cured' of immune illness by AI technology

This is a complete joke. I remember a departmental journal club around 1995, before adalimumab was licensed or even had that name, where we discussed a case of Castleman's and the known fact that it was associated with high IL-6 levels.

Adalimumab is an IL-6 inhibitor. Those of us who were aware of its development immediately saw that it would be likely to become a treatment for Castleman's (if it turned out to work, that is).

So knowing that adalimumab would be a good bet to treat this illness was just part of common knowledge in a clinical immunology department thirty years ago.
 
@rvallee the problem isn’t the level of reasoning, but being able to understand why an AI model gives any given output.

This is important to avoid e.g. discrimination.
Sure. But that's an odd standard, given that we don't know what leads humans to make decisions either, and can rarely explain them after the fact.

All I'm saying is that the level of reasoning needed isn't as high as people think it is. Lots of non-reasoning people have stumbled onto something without ever being able to explain much about it. Human intelligence is a social construct that works at scale through brute force.

The trope of the lone genius making smart insights through strong reasoning is the rare exception, one that almost never happens. In most cases, a thousand scientists of average intelligence will completely outperform the smartest people ever to have existed, as long as they work systematically. But it won't be a thousand, or a million; it will be billions. It will work very well.
 
Sure. But that's an odd standard, given that we don't know what leads humans to make decisions either, and can rarely explain them after the fact.
I believe you’ve misunderstood what the reasoning is going to be used for.

Example 1:
When it comes to many use cases for AI, there are explicit laws and guidelines that require the decision maker to use or avoid using certain information. One example is discrimination based on gender.

Of course humans are not perfect, and of course there will be humans who unconsciously discriminate based on gender. But with AI, we can check the model before we use it. We can both run simulations to spot patterns of discrimination and look ‘under the hood’ to see if it uses gender (or proxies for gender) as a factor when making its recommendation (a toy version of such a check is sketched below).

And because we’re (partially) able to check the models’ ‘reasoning’, we should. Otherwise, we would indirectly accept avoidable discrimination. We already do that with humans today, but AI has the power to make it infinitely worse if we don’t get ahead of the problem.
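
As a concrete (and entirely toy) example of the simulation side, one common check is demographic parity: compare the model's rate of positive decisions across groups. The data and group labels below are invented purely for illustration.

```python
# Toy fairness check: demographic parity, i.e. compare the model's rate of
# positive decisions between groups. All data here is invented.
import numpy as np

def demographic_parity_gap(decisions: np.ndarray, group: np.ndarray) -> float:
    """Absolute difference in positive-decision rate between two groups."""
    return abs(decisions[group == "A"].mean() - decisions[group == "B"].mean())

decisions = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])                 # 1 = approve
group = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])  # group labels
print(f"demographic parity gap: {demographic_parity_gap(decisions, group):.2f}")
# a large gap suggests the model (or its training data) needs closer scrutiny
```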

Example 2:
In some use cases, AI models are already better than humans. Chess is a good example. If humans want to improve their ability to play chess based on AI, they can try to 1) mimic the AI, 2) deduce the AI’s methods from how it plays, or 3) check ‘under the hood’ and try to see if it has learned something we don’t already know.

The third option is the exciting one.

Because what happens if you do the same to an AI that detects breast cancer? Maybe it will point us to some unknown connection between seemingly unrelated concepts. That might be a starting point for an investigation into new mechanisms that we can screen for or target with treatments.
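
One simple way to ‘check under the hood’ is to train a small, human-readable surrogate model that mimics the black box’s predictions and then read off which inputs it relies on; surprising features become leads for investigation. Everything in the sketch below (data, feature names) is invented.

```python
# Toy "under the hood" inspection: fit a small, readable decision tree that
# mimics a black-box model's predictions, then read off which inputs it uses.
# Data and feature names are invented for illustration.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=1000, n_features=6, n_informative=3, random_state=1)
black_box = GradientBoostingClassifier(random_state=1).fit(X, y)

# Train the surrogate on the black box's own outputs, not the true labels
surrogate = DecisionTreeClassifier(max_depth=3, random_state=1).fit(X, black_box.predict(X))
feature_names = [f"measurement_{i}" for i in range(6)]   # hypothetical inputs
print(export_text(surrogate, feature_names=feature_names))
```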

Or you can look at the work of @mariovitali - discovering themes and findings years ahead of science. If we could ‘look under the hood’ of his models, we might be able to spot even more patterns. It might give us some clues about why some things are related, and not just that they are related.

This is all speculative and there are no guarantees of finding any gold. But the chance of being able to stumble upon some nuggets is well worth the effort.
 