Artificial intelligence in medicine and science

Probably very little, because even if AI were perfect, the editors would still be in charge.
Given how much incompetence, bias and corruption it will expose, if anything it will show the necessity of moving away from human gatekeepers, with all their biases, their friends in high places, and how cheaply they can be bought.

We've seen in recent years how even the flagships of medical science, the big journals, have little value of their own beyond their unearned reputation for having value. In particular, the entire idea of self-correction has been effectively disabled.

It will be exciting, for sure. A lot of people will be howling mad. What comes after will be unrecognizable, but what exists today is not sustainable, and will be swamped by AI-produced science anyway.

A lot of us will be able to say "we told you so", but things will be so chaotic that it will likely get lost in the mix. Everything depends on who controls the AIs, or whether they can be controlled at all. And as much as we need a stable society to function, the current stability is so stagnant that it holds no value for any of us hoping to return to a normal life.
 
"Project OSSAS: Custom LLMs to process 100 Million Research Papers"
November 11, 2025

"Today, we’re introducing Project OSSAS in collaboration with LAION and Wynd Labs. Project OSSAS is a large-scale open-science initiative to make the world’s scientific knowledge accessible through structured, AI-generated summaries of research papers. Built on the foundation of Project Alexandria, OSSAS uses custom-trained Large Language Models, and idle compute sourced from tens of thousands of computers around the world to process scientific research papers into a standardized format. This machine-readable format can be explored, searched, and linked across scientific disciplines."
 
Project OSSAS is a large-scale open-science initiative to make the world’s scientific knowledge accessible through structured, AI-generated summaries of research papers
If they can do this with reasonable accuracy it sounds promising. The metadata could be interesting, and the scale of the project sounds impressive. They seem to be focusing on relationships between papers/embeddings, but I’m unsure whether this would translate into the granularity of some of the concepts we’d be interested in looking at.

Even just being able to run the models locally could be useful on individual papers, though 12B and 14B parameters is a bit beyond my meagre Raspberry Pi! I’m sure quantised GGUF or MLX packages will be available soon.
https://huggingface.co/inference-net/Aella-Nemotron-12B
https://huggingface.co/inference-net/Aella-Qwen3-14B
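
In the meantime, for anyone with the hardware to try them unquantised, here's a minimal sketch of loading one of them through the standard Hugging Face transformers API. The prompt wording is my own guess, not anything from their model cards, and I'm assuming the models ship with a chat template:

```python
# Minimal sketch: load one of the Aella models with the standard
# Hugging Face transformers API and ask it to summarise an abstract.
# Needs a decent GPU (and the accelerate package for device_map="auto").
# The prompt below is my own guess, not the official OSSAS template.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "inference-net/Aella-Qwen3-14B"  # or "inference-net/Aella-Nemotron-12B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

abstract = "..."  # paste a paper abstract here

messages = [
    {"role": "user",
     "content": "Summarise the key claims of this abstract:\n\n" + abstract}
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=512)
# Decode only the newly generated tokens, not the prompt.
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```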
 
There's a website to explore a subset of the papers summarized so far: https://laion.inference.net/embeddings

I typed in "chronic fatigue" and clicked on one at random out of 8 results, and it happened to be authored by @DMissa. I don't see a way to share the link to the summary directly, but it's for this paper: Dysregulated Provision of Oxidisable Substrates to the Mitochondria in ME/CFS Lymphoblasts, 2021, Missailidis et al

But the title on the website seems different. I don't know if this is a paraphrase by the AI or an alternate title for the paper: "Dysregulated Fuel Source Preference in Myalgic Encephalomyelitis/Chronic Fatigue Syndrome Lymphoblasts: Combined Transcriptomics and Proteomics Reveal Elevated Pentose Phosphate Pathway, Fatty Acid β-Oxidation, and Amino Acid Catabolism". I can't find a paper with this title anywhere, but the data seem to match.

Here is one of six claims from the summary, which from a quick check seems to match the numbers in the paper:
Details: Glycolytic enzyme expression is unchanged at the protein level in ME/CFS lymphoblasts, while PPP enzymes are upregulated (mean +20 ± 9%; p = 0.034) with G6PD elevated by 43 ± 10% (p = 5.5 × 10^−4).

Supporting Evidence: Figure 3B shows no significant differences in glycolytic enzyme levels (16 enzymes; binomial and t-tests non-significant). Figure 4A shows PPP enzymes upregulated (mean +20 ± 9%; t-test p = 0.034). Figure 4C shows G6PD elevated (p = 5.5 × 10^−4).

Implications: Supports a shift away from glycolysis toward PPP to supply TCA cycle substrates and NADPH, consistent with compensating for inefficient ATP synthesis.

A claim from another paper that was summarized, this time actually using the real title as I see it in the journal: Transforming growth factor beta (TGF-β) in adolescent chronic fatigue syndrome (2017) Wyller et al.
Details: Plasma levels of TGF-β1, TGF-β2, and TGF-β3 are not elevated in adolescents with CFS compared to healthy controls.

Supporting Evidence: Independent sample comparisons showed no differences across all three isoforms (Table 2; Additional file 2: Table S1). Subgroup analyses by Fukuda and Canada 2003 criteria also showed no differences.

Implications: Systemic TGF-β levels are unlikely to serve as a biomarker distinguishing adolescent CFS from healthy controls; focus should shift to neuroendocrine–immune coupling mechanisms.
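
The announcement only says the summaries are in a "standardized, machine-readable format" and doesn't publish the schema, but the claim fields the website shows are very uniform (Details / Supporting Evidence / Implications). Purely as a guess at what one record might look like if pulled into Python:

```python
# Speculative sketch only: field names mirror what the website displays
# for each claim; the actual OSSAS schema may differ. The claim text is
# abridged from the Wyller et al. 2017 example above.
from dataclasses import dataclass, field

@dataclass
class Claim:
    details: str
    supporting_evidence: str
    implications: str

@dataclass
class PaperSummary:
    title: str
    year: int
    authors: str
    claims: list[Claim] = field(default_factory=list)

wyller_2017 = PaperSummary(
    title="Transforming growth factor beta (TGF-β) in adolescent chronic fatigue syndrome",
    year=2017,
    authors="Wyller et al.",
    claims=[
        Claim(
            details="Plasma TGF-β1, TGF-β2 and TGF-β3 are not elevated in "
                    "adolescents with CFS compared to healthy controls.",
            supporting_evidence="No differences across all three isoforms "
                                "(Table 2; Additional file 2: Table S1).",
            implications="Systemic TGF-β is unlikely to serve as a biomarker "
                         "for adolescent CFS.",
        )
    ],
)
```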

---

To search, go to the linked website, type something into the search bar, then, while pressing Ctrl, click on a dot that appears in the main area. (Or just tap on a dot on mobile.)
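
For anyone curious what that dot map is doing under the hood: the general approach with an embeddings explorer like this (not necessarily their exact pipeline) is to embed every summary as a vector and return the nearest neighbours to your query. A rough sketch with the sentence-transformers library and an illustrative model choice:

```python
# Rough sketch of embedding-based search over paper summaries.
# The model and similarity measure are illustrative choices, not
# necessarily what the OSSAS/LAION site actually uses.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

summaries = [
    "Dysregulated provision of oxidisable substrates to the mitochondria in ME/CFS lymphoblasts.",
    "Plasma TGF-β isoforms are not elevated in adolescent chronic fatigue syndrome.",
    "Elevated pentose phosphate pathway enzymes in ME/CFS lymphoblasts.",
]

# Embed the corpus once, then embed each query as it arrives.
corpus_embeddings = model.encode(summaries, convert_to_tensor=True)
query_embedding = model.encode("chronic fatigue", convert_to_tensor=True)

# Cosine similarity ranks the summaries against the query.
hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=3)[0]
for hit in hits:
    print(f"{hit['score']:.3f}  {summaries[hit['corpus_id']]}")
```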
 
Thanks @forestglip. I had tried that website earlier; it's a bit heavy going on my iPad and also didn’t seem to have much detail. It turns out it was giving me the mobile view, though, and rotating my screen gives a more advanced desktop view with search!
 
Marcus Olang' writes: I'm Kenyan. I Don't Write Like ChatGPT. ChatGPT Writes Like Me.

My writing does share some DNA with the output of a large language model. We both have a tendency towards structured, balanced sentences. We both have a fondness for transitional phrases to ensure the logical flow is never in doubt. We both deploy the occasional (and now apparently incriminating) hyphen or semi-colon or em-dash to connect related thoughts with a touch more elegance than a simple full stop.

Or, more accurately, it writes like the millions of us who were pushed through a very particular educational and societal pipeline, a pipeline deliberately designed to sandpaper away ambiguity, and forge our thoughts into a very specific, very formal, and very impressive shape.

There’s a growing community (cult?) of self-proclaimed AI detectives, who have designed and detailed what they consider tells, and armed their followers with a checklist of robotic tells. Does a piece of text use words like ‘furthermore’, ‘moreover’, ‘consequently’, ‘otherwise’ or ‘thusly’? Does it build its arguments using perfectly parallel structures, such as the classic “It is not only X, but also Y”? Does it arrange its key points into neat, logical triplets for maximum rhetorical impact?

To these detectives of digital inauthenticity, I say: Friend, welcome to a typical Tuesday in a Kenyan classroom, boardroom, or intra-office Teams chat. The very things you identify as the fingerprints of the machine are, in fact, the fossil records of our education.

The ability to speak and write this formal, "correct" English separated the haves from the have-nots. It was the key that unlocked the doors to university, to a corporate job, to a life beyond the village. The educational system, therefore, doubled down on teaching it, preserving it in an almost perfect state, like a museum piece.

And right there is the punchline to this long, historical joke. An “AI”, a large language model, is trained on a vast corpus of text that is overwhelmingly formal. It learns from books published over the last two centuries. It learns from academic papers, from encyclopaedias, from legal documents, from the entire archive of structured human knowledge. It learns to associate intelligence and authority with grammatical precision and logical structure.

The machine, in its quest to sound authoritative, ended up sounding like a KCPE graduate who scored an 'A' in English Composition. It accidentally replicated the linguistic ghost of the British Empire.
 