I got fooled by AI-for-science hype—here's what it taught me. Nick McGreivy 2025

Discussion in 'Research methodology news and research' started by Murph, May 20, 2025 at 12:11 AM.

  1. Murph

    Murph Senior Member (Voting Rights)

    Messages:
    270
    https://www.understandingai.org/p/i-got-fooled-by-ai-for-science-hypeheres

    I got fooled by AI-for-science hype—here's what it taught me
    I used AI in my plasma physics research and it didn’t go the way I expected.
    Nick McGreivy
    May 19, 2025

    In 2018, as a second-year PhD student at Princeton studying plasma physics, I decided to switch my research focus to machine learning. I didn’t yet have a specific research project in mind, but I thought I could make a bigger impact by using AI to accelerate physics research. (I was also, quite frankly, motivated by the high salaries in AI.)

    I eventually chose to study what AI pioneer Yann LeCun later described as a “pretty hot topic, indeed”: using AI to solve partial differential equations (PDEs). But as I tried to build on what I thought were impressive results, I found that AI methods performed much worse than advertised.

    At first, I tried applying a widely-cited AI method called PINN to some fairly simple PDEs, but found it to be unexpectedly brittle. Later, though dozens of papers had claimed that AI methods could solve PDEs faster than standard numerical methods—in some cases as much as a million times faster—I discovered that a large majority of these comparisons were unfair. When I compared these AI methods on equal footing to state-of-the-art numerical methods, whatever narrowly defined advantage AI had usually disappeared.
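    For context on the method named above: a PINN (physics-informed neural network) trains a neural network u(x) to satisfy a differential equation directly, by minimizing the PDE residual at sampled collocation points plus a penalty for the boundary conditions. The sketch below is a minimal illustration of that idea in JAX on a toy 1D Poisson problem; it is not the article's setup, and the network width, learning rate, and number of training steps are arbitrary choices made only for illustration.

        import jax
        import jax.numpy as jnp

        # Tiny MLP u(params, x): scalar in, scalar out.
        def init_params(key, widths=(1, 32, 32, 1)):
            params = []
            for n_in, n_out in zip(widths[:-1], widths[1:]):
                key, sub = jax.random.split(key)
                w = jax.random.normal(sub, (n_in, n_out)) / jnp.sqrt(n_in)
                params.append((w, jnp.zeros(n_out)))
            return params

        def u(params, x):
            h = jnp.array([x])
            for w, b in params[:-1]:
                h = jnp.tanh(h @ w + b)
            w, b = params[-1]
            return (h @ w + b)[0]

        # Toy PDE: u''(x) = f(x) on [0, 1] with u(0) = u(1) = 0.
        # Manufactured right-hand side so the exact solution is sin(pi x).
        def f(x):
            return -jnp.pi ** 2 * jnp.sin(jnp.pi * x)

        # Second derivative of the network output with respect to x.
        def u_xx(params, x):
            return jax.grad(jax.grad(u, argnums=1), argnums=1)(params, x)

        def loss(params, xs):
            # PDE residual at interior collocation points...
            residual = jax.vmap(lambda x: u_xx(params, x) - f(x))(xs)
            # ...plus a penalty enforcing the boundary conditions.
            bc = u(params, 0.0) ** 2 + u(params, 1.0) ** 2
            return jnp.mean(residual ** 2) + bc

        @jax.jit
        def step(params, xs, lr=1e-3):
            grads = jax.grad(loss)(params, xs)
            return jax.tree_util.tree_map(lambda p, g: p - lr * g, params, grads)

        key = jax.random.PRNGKey(0)
        params = init_params(key)
        xs = jnp.linspace(0.0, 1.0, 64)  # collocation points
        for _ in range(5000):
            params = step(params, xs)

        print("u(0.5) =", u(params, 0.5), "; exact solution sin(pi/2) = 1.0")

    Plain gradient descent is used here only to keep the example self-contained; published PINN work typically uses Adam or L-BFGS and considerable tuning, and that sensitivity to setup is part of what the author means by "unexpectedly brittle."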

    This experience has led me to question the idea that AI is poised to “accelerate” or even “revolutionize” science. Are we really about to enter what DeepMind calls “a new golden age of AI-enabled scientific discovery,” or has the overall potential of AI in science been exaggerated—much like it was in my subfield?

    Many others have identified similar issues. For example, in 2023 DeepMind claimed to have discovered 2.2 million crystal structures, representing “an order-of-magnitude expansion in stable materials known to humanity.” But when materials scientists analyzed these compounds, they found it was “mostly junk” and “respectfully” suggested that the paper “does not report any new materials.”

    story continues at link: https://www.understandingai.org/p/i-got-fooled-by-ai-for-science-hypeheres
     
  2. Murph

    Murph Senior Member (Voting Rights)

    Messages:
    270
    I'm sharing this piece because I suspect some researchers are getting excited about AI. I don't think it's anywhere near being generally useful yet. Of course there may be tasks where it can be deployed to real effect; in my own line of work, for example, it is incredibly useful for transcribing audio.

    But one analogy I've heard is that AI is like the microwave: it plays a role in the kitchen and is great at some things. But if you try to use it to cook the whole dinner, you're going to have a bad time.
     
    hotblack, oldtimer, jnmaciuch and 2 others like this.
  3. jnmaciuch

    jnmaciuch Senior Member (Voting Rights)

    Messages:
    728
    Location:
    USA
    This has been my experience trying (and failing) to use it for research. It’s good for tasks where the sheer amount of data to sort through is unreasonable for one person and you aren’t relying on the accuracy of the results. Which only happens to cover vanishingly few tasks in my work. It doesn’t even save me time writing code unless I’m using a language where I’m an absolute beginner.
     
    hotblack, Hoopoe, oldtimer and 2 others like this.
  4. hotblack

    hotblack Senior Member (Voting Rights)

    Messages:
    766
    Location:
    UK
    I think that's a really good analogy. It's a tool, or rather a range of tools, that when used for the right things by someone who knows what they're doing can be great. But it is not a magic wand.
    I know some people still working in tech and doing lots of interesting things with AI/ML, but they are also having to deal with people's unrealistic expectations, both positive and negative.
     