rvallee
Senior Member (Voting Rights)
What I see mostly is people who don't think that AI will improve enough, and who judge its final value entirely on what they've seen so far. And I don't mean here; this is what's happening in general.

I agree with the concerns but also recognise the benefits. Do people think AI really carries more risk than any other "tool" that has ever been created?
From that perspective, it kind of makes sense. Ironically, it's the same mistake psychosomatizers make: every model is coherent with itself when evaluated on what the model itself says. Current AI is still too limited. It has massively improved since the first loudly publicized blunders, but those blunders created a sort of checkpoint past which most people can't imagine it getting any better, even though it already has. And since it's currently unable to fully replace people, the reasoning goes, it never will be able to. Fallacious, but internally consistent.
All of those issues go away the second a medical AI is as good as an average physician. At that point, medical AI will already be 100x better than the current system, because availability is a major component of performance, and instant 24/7 availability isn't just transformative, it's revolutionary. That is also roughly the point at which AI is probably capable of replacing most non-manual jobs.
But it's not ready yet, therefore it will never get there. A comically famous mistake, and ironically not so different from "we don't know what causes ME/CFS, therefore we will never find anything, and that means it doesn't actually exist". It's definitely funny how humans can be.