A new "reasoning" framework for ME/CFS research

Discussion in 'General ME/CFS news' started by mariovitali, May 13, 2025.

  1. EndME

    EndME Senior Member (Voting Rights)

    Messages:
    1,622
    To be honest I don't understand the reference to the UAT. The UAT is a general existence result rather than the opposite, but yes, it certainly allows for explosion of weights. If I remember correctly, which might be wrong, some interest at the time went towards conditions on the typically non-linear part (a polynomial won't do if you want the function space you approximate to be large, say continuous functions, but it was probably not instantly clear what would be sufficient), and I'm sure other insights have been gained as well. There certainly are explicit error estimates and convergence results depending on the setup, but when it comes to application it is probably slightly more similar to whatever Max Zorn spent his time working on. There are plenty of computational algorithms to work with if you just want an existence result for approximating continuous functions (or more general objects), and from what I remember different UATs typically boil down to exactly those kinds of things (either Stone-Weierstraß-like results, or, in the case of ReLU which I think was the original case or perhaps it was sigmoid, things are slightly different but still rather standard).

    Depends on what you mean. I think Stockfish now also uses some form of ML methods, but even before that I suspect it would have been more than capable of developing new strategies without them; at least that was my extremely limited understanding. But yes, it seems ML changed quite a bit in that field.

    But as mentioned, that is not what I mean by inventing, since it is an issue that can be resolved entirely without having to make "jumps", and that is, at least I think for the most part, where things start to get really interesting. I also don't have examples on hand where such "jumps" have been made, and of course arguing about what constitutes a "jump" is a whole debate in itself; quite possibly none currently exist. But I find it plausible that you'll see some "jumps" at some point, perhaps once things do become sophisticated, even if those "jumps" are very different in nature to the ones humans make, and even if the von Neumann quote might read differently.
     
    Last edited: May 15, 2025
    RedFox likes this.
  2. EndME

    EndME Senior Member (Voting Rights)

    Messages:
    1,622
    I don't think the general idea is odd, given that computational systems biology is a field of study. But the problem is more that computational neuroscientists have been trying to do this sort of thing for just one organ, and they still seem unable to get fundamentally further than understanding the giant squid axon, at least on a level that would allow for predictions. And that doesn't even touch the problem of modelling all the quantum physical processes for which we don't even have a model yet.
     
    Kitty and Trish like this.
  3. jnmaciuch

    jnmaciuch Senior Member (Voting Rights)

    Messages:
    774
    Location:
    USA
    I apologize for what will probably be too blunt of an answer, I’m running low on energy for refining my language this morning. I hope nothing is taken as a personal attack.

    To answer your question, that proposal is an extremely generic word salad. Unfortunately, I’ve had the displeasure of reading many proposals by supposed experts who churn out something similar without any help from AI: “we plan to use [shiny new technology] to generate big data and mine for insights into [choose any disease]” is about as common as you can get.

    Is it possible that you’ll find differences in glycosylation patterns between ME/CFS and healthy controls? Almost certainly. Just like you’re almost certain to find something different in a metabolomics study, a sleep study, a gut microbiome study, and a million other things. There’s no reason to believe it’ll bring us any closer to understanding the pathological mechanism though. Sure, all those data points might be a useful “clue” towards the actual pathological mechanism even if they ultimately don’t provide any direct actionable knowledge.

    But this brings me back to my main point: AI is offering nothing new that a below-average grad student isn’t already putting together after 20 minutes of googling. We already have people busying themselves with checking under every rock—the chances that AI will point us towards a particularly important rock [edit: on scant evidence] are exactly the same as the chances of regular researchers doing that.

    And they’re the ones who will ultimately be doing the follow-up anyway. They quite simply have no use for a mountain of additional vague ideas. They have it covered, believe me.

    Again, I apologize for the bluntness of my point, but I feel that it is an important one to make: the AI output here (and in many other places I have observed it) looks impressive only if you have little semantic understanding of the word salad it’s putting together. I do not find a tool that generates even better-than-average word salad useful on the off chance that it’ll spit out something important I didn’t think of first. If it does do that, it’ll be hidden in a pile of equally unimportant-seeming paragraphs that I would have to waste my time digging through when I could just as easily come up with 20 more well grounded, precise, and testable hypotheses in that same time.

    I think a better use of researchers’ time is using their human brains to chase down leads that are already quite obvious after reading the literature.
     
    Last edited: May 15, 2025
    RedFox, Kitty, voner and 2 others like this.
  4. jnmaciuch

    jnmaciuch Senior Member (Voting Rights)

    Messages:
    774
    Location:
    USA
    Yes, in agreement, @Creekside, several research groups are quite ahead of you on that. More comprehensive in silico models are becoming all the rage. I think the problem is simply that modeling the whole human body is a computational task of unimaginable complexity, even to get the most basic incomplete approximation. We’re working towards it, certainly, but it’s more of a pipe dream than anything that can be used for good science at this point.
     
    Last edited: May 15, 2025
    RedFox, Kitty, Trish and 1 other person like this.
  5. Utsikt

    Utsikt Senior Member (Voting Rights)

    Messages:
    3,052
    Location:
    Norway
    It’s mostly about the assumptions that have to be fulfilled for NNs to be able to solve a problem: there has to be a connection between the data and whatever you want the model to explain from that data.

    If there is no connection, it can’t be explained with that data. At least that’s my basic understanding of it.

    And obviously, making a model that can explain something with very obscure data with very loose connections might not bring us any closer to understanding what’s actually going on. But that’s another issue.
    That there are too many possible permutations, so chess can’t be solved by brute force with current computing power.
    I’m pretty sure new strategies were developed as a result of the introduction of AI into chess bots that had not been found with previous methods. Some even go completely against conventional wisdom, but they still work somehow.
    I’m not sure I understand what jump means here.
     
    RedFox and Kitty like this.
  6. EndME

    EndME Senior Member (Voting Rights)

    Messages:
    1,622
    The UAT is not a result of the form "this and this has to apply for NNs to work". It is rather exactly the opposite. Instead of giving you conditions on when things work, it simply tells you that there always exists at least one thing that does work, but that thing might depend on the nature of the object you're looking for. It tells you: no matter what your data looks like, you can find a NN to approximate the function underlying said data. The only assumptions are that the function is continuous and some conditions on what type of NN you allow, but those things can all be adapted. Naturally, for a given NN you can always construct a function that cannot be approximated well, but that is a different statement. My understanding is that the UAT is not very surprising; rather, it is to be expected. The perhaps surprising thing is that in many cases (for example inverse problems in imaging) things often work quite well even once the NN has been "fixed" (this, however, requires a rather different architecture than the one used in the original UAT).

    That is, it is an existence result. It roughly means that given an arbitrary function there exists a NN with one hidden layer to approximate said function. In short it says "no matter how complex the pattern you are looking for is, there exists a NN that will get you there for this one specific pattern", so a different pattern might require a different NN and so forth (but there are some bounds and convergence results depending on the specific setup). And simply because it tells you there exists one thing that gets the job done, it does not rule out that other things exist that get the job done much better.
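    For concreteness, here is a minimal sketch of that classical single-hidden-layer statement (Cybenko-style, with a sigmoidal activation σ; the exact assumptions differ between versions of the theorem):

```latex
% Classical universal approximation theorem (one hidden layer, sigmoidal activation \sigma):
% for every continuous target f and every tolerance \varepsilon there exists SOME finite
% network within \varepsilon of f on the whole cube -- nothing is said about how large it is.
\[
\forall f \in C([0,1]^n),\ \forall \varepsilon > 0,\ \exists N \in \mathbb{N},\
\alpha_i, b_i \in \mathbb{R},\ w_i \in \mathbb{R}^n :\quad
\sup_{x \in [0,1]^n} \Big| f(x) - \sum_{i=1}^{N} \alpha_i\, \sigma\big(w_i^{\top} x + b_i\big) \Big| < \varepsilon .
\]
```

    Note that N, the weights and the biases are all allowed to depend on f and ε; the statement puts no bound on them, which is exactly the "existence only" point.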

    A priori that tells you nothing about how fluctuations in the data affect the choice of network or how well your given network still performs. It tells you nothing about whether a different network architecture gets the job done better, whether there exist millions of networks that all do the job, or whether there are better architectures that outperform the pre-specified architecture in the UAT.

    All of this is pretty much orthogonal to what @mariovitali is doing, because he is given a fixed NN, whose architecture is not even publicly known, and is trying to approximate various different functions as well as possible, knowing that the function he wants to approximate in most cases does not in fact match the data, precisely because that data is noisy. So you just end up approximating functions you don't want to approximate, even though there exists something that would have approximated precisely the things you wanted to approximate! In this case you can be a priori certain that you can construct a function that this NN cannot approximate, but you also have no idea what the function you're actually looking to approximate looks like.

    In short: noisy data is a whole different can of worms that the UAT does not deal with, because in the UAT you assume that the function you are given is precisely the function you want to approximate, rather than asking questions about stability (and we already know that the NNs of the original UAT are not stable in a certain sense).
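    To make the noisy-data point concrete, here is a small self-contained toy sketch (a hypothetical setup, not anyone's actual pipeline or data): a one-hidden-layer network is fitted to noisy samples of a known function, and the only quantity the training ever sees is the distance to the noisy targets, never the distance to the underlying function.

```python
# Toy illustration: fit a one-hidden-layer tanh network to noisy samples of a
# known function. The loss only ever refers to the noisy targets, so a small
# training error says nothing by itself about closeness to the true function.
import numpy as np

rng = np.random.default_rng(0)

def f_true(x):
    return np.sin(2 * np.pi * x)          # function we would like to recover

n = 40
x = rng.uniform(0.0, 1.0, size=(n, 1))
y_noisy = f_true(x) + rng.normal(scale=0.3, size=x.shape)

# One hidden layer, trained by full-batch gradient descent on squared error.
h_dim = 200
W1 = rng.normal(scale=1.0, size=(1, h_dim)); b1 = np.zeros(h_dim)
W2 = rng.normal(scale=0.1, size=(h_dim, 1)); b2 = np.zeros(1)

lr = 0.05
for _ in range(5000):
    h = np.tanh(x @ W1 + b1)
    y_hat = h @ W2 + b2
    err = y_hat - y_noisy                  # residual against the *noisy* targets
    gW2 = h.T @ err / n; gb2 = err.mean(axis=0)
    gh = (err @ W2.T) * (1 - h ** 2)
    gW1 = x.T @ gh / n; gb1 = gh.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

# Compare the fit against the noisy targets and against the true function.
x_grid = np.linspace(0, 1, 200).reshape(-1, 1)
y_grid = np.tanh(x_grid @ W1 + b1) @ W2 + b2
fit_train = np.tanh(x @ W1 + b1) @ W2 + b2
print("MSE vs noisy targets:", float(np.mean((fit_train - y_noisy) ** 2)))
print("MSE vs true function:", float(np.mean((y_grid - f_true(x_grid)) ** 2)))
```

    Whether the second number ends up large or small depends on the noise level and the training choices, but it is not the quantity being optimised, which is the point about noisy data above.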

    Yes of course, nobody is denying that. I don't understand much of chess, nor what happened, but it seemed that aggressive positional advantages and lines were rather underestimated by humans as well as by brute-force engines such as Stockfish, which were seemingly trained on the value of pieces.

    You are not alone in that. Neither do I. Perhaps sometime I will, but rather likely I will never be able to make the "jump" to get there. But it is clear that it is a form of abstraction different to the one involved in playing a better game of chess once you know the rules, and perhaps more similar to coming up with the rules of chess after only having watched one game, but still a bit more involved and different from that. So, in some sense, a deeper understanding of something that does not simply follow from a straightforward, albeit lengthy, computation. Perhaps it is more comparable to coming up with and proving the UAT because you see a use for it, after only having been told how Giuseppe Peano counts his sheep at night, which of course von Neumann would be able to do, despite possibly having only understood what an abstraction looks like, which was sufficient for him to be able to make completely different abstractions elsewhere.
     
    Last edited: May 15, 2025
    jnmaciuch likes this.
  7. Creekside

    Creekside Senior Member (Voting Rights)

    Messages:
    1,567
    That's what I expected. However, I was thinking of it more as a tool for finding out where the model is failing. For example, model a cell or organ, apply known drug interactions, and see where the model fails. Probably still too complex a model to build yet.
     
  8. Snow Leopard

    Snow Leopard Senior Member (Voting Rights)

    Messages:
    4,046
    Location:
    Australia
    My issue is that research projects need to be specific. The blood/PBMC glycome might not yield answers even if the glycome is the problem in other tissue. My family has had several cases of GBS and immune thrombocytopenia, which are technically 'glycome' related as both involve attack on sialic acid-containing glycolipids or glycoproteins. I am interested in the glycome of the muscle nerves themselves. Maybe in the brain too, but that requires postmortem donors...

    Gangliosides and N-glycosylation can modify surface ion potentials and hence alter the excitability of TRP and voltage-gated ion channels.

    https://www.sciencedirect.com/science/article/abs/pii/S0006899304009539
    https://pmc.ncbi.nlm.nih.gov/articles/PMC4476505/

    But to repeat, the focus needs to be on the brain or muscle nerves, not the blood. The alteration to the glycome could be a result of recovery from damage to specific tissue rather than any broad body-wide problem (the latter is quite unlikely).
     
    EndME and Trish like this.
  9. mariovitali

    mariovitali Senior Member (Voting Rights)

    Messages:
    575
    @Snow Leopard @jnmaciuch @Jonathan Edwards, please have a look below. @Hutan @Simon M, you may be interested as well. The reasoning engine is o3. Note that the engine found the upcoming study by the Austrian University (...):

    Screenshot 2025-05-16 at 13.31.50.png

    and

    Screenshot 2025-05-16 at 13.32.05.png

    and finally:

    Screenshot 2025-05-16 at 13.32.15.png
    EDIT: The part where it writes "B-cell-receptor repertoire sequencing in ME/CFS notes an increased frequency of acquired N-glycosylation sites in variable regions—another hint that B-cell/antibody glycosylation biology is perturbed" is incorrect. The linked study did not find any differences in N-linked glycosylation. Link to study: https://www.frontiersin.org/journals/immunology/articles/10.3389/fimmu.2025.1489312/full
     
    Last edited: May 16, 2025
    Hutan and RedFox like this.
  10. Snow Leopard

    Snow Leopard Senior Member (Voting Rights)

    Messages:
    4,046
    Location:
    Australia
    I didn't find this (output hypothesis) very compelling before, and I already considered it years ago (IgG N-glycans). This is a general modification of IgG activity at best; it cannot explain any specific disease.

    The secondary targets, with the exception of muscle biopsy, are probably a waste of time; I simply don't think PBMCs are the place to look.

    O-GlcNAcylation plays a key role in Schwann cell remyelination after nerve injury and is up my alley again (with a focus on nerves), e.g.
    "O-GlcNAcylation is crucial for sympathetic neuron development, maintenance, functionality and contributes to peripheral neuropathy"
    https://www.frontiersin.org/journals/neuroscience/articles/10.3389/fnins.2023.1137847/full
     
    Hutan and RedFox like this.
