Wiki Edu said:
Like many organizations, Wiki Education has grappled with generative AI, its impacts, opportunities, and threats, for several years. As an organization that runs large-scale programs to bring new editors to Wikipedia ... we have a deep understanding of the challenges that face new content contributors to Wikipedia, and of how to support them to edit successfully.
My conclusion is that, at least as of now, generative AI-powered chatbots like ChatGPT should never be used to generate text for Wikipedia; too much of it will simply be unverifiable.
Our staff would spend far more time attempting to verify facts in AI-generated articles than if we’d simply done the research and writing ourselves.
404 Media said:
Chatbots may be able to pass medical exams, but that doesn’t mean they make good doctors, according to a new, large-scale study of how people get medical advice from large language models.
The controlled study of 1,298 UK-based participants, published today in Nature Medicine from the Oxford Internet Institute and the Nuffield Department of Primary Care Health Sciences at the University of Oxford, tested whether LLMs could help people identify underlying conditions and suggest useful courses of action, like going to the hospital or seeking treatment.
...
When the researchers tested the LLMs without involving users by providing the models with the full text of each clinical scenario, the models correctly identified conditions in 94.9 percent of cases. But when talking to the participants about those same conditions, the LLMs identified relevant conditions in fewer than 34.5 percent of cases.
People didn’t know what information the chatbots needed, and in some scenarios, the chatbots provided multiple diagnoses and courses of action. Knowing what questions to ask a patient and what information might be withheld or missing during an examination are nuanced skills that make great human physicians; based on this study, chatbots can’t reliably replicate that kind of care.
“In an extreme case, two users sent very similar messages describing symptoms of a subarachnoid hemorrhage but were given opposite advice,” the study’s authors wrote. “One user was told to lie down in a dark room, and the other user was given the correct recommendation to seek emergency care.”
It now seems inevitable that AI will change the world of mathematics, in education as well as in research, in ways unforeseeable today. I’m still sceptical about whether it can produce the groundbreaking type of “new results” that requires something quite novel.
Yes, the problem for ME/CFS is of quite a different kind to the ones solved above. You have a large literature of nonsensical results and no means of algorithmic proof verification, all topped off with a large bias toward publishing experiments with positive results. That is exactly the opposite of what grants these approaches success elsewhere.

There's something @Jonathan Edwards mentioned recently, about how for many problems the data already exist; making progress is not necessarily dependent on performing totally novel experiments. Maybe I misinterpreted what he said, but the idea applies here. It's very likely that solving this problem will require novel experiments, probably some new technology, but making progress, taking the first meaningful steps, probably doesn't require them. There is so much data already out there, enough to work with, if what it takes to make headway is connecting things that people didn't know are connected.
Most problems like this don't require revolutionary theoretical frameworks; more than anything they require a lot of boring, repetitive work. This, AIs will be able to do. They won't have hands to work with, and there will be limits to what they can do, but that doesn't seem to be what's required here. It probably doesn't involve some organ unknown to science, or looking at things no one thought to look at.