Why Chronic Illness Patients Feel Safer Talking to AI Than to Doctors

I agree with the concerns but also recognise the benefits. Do people think AI really carries more risk than any other "tool" that has ever been created? (Genuinely wondering.) From the humble stick, which can be used to reach something out of range, to poke someone's eye out, or even to commit homicide, it seems that every new tool we invent as humans will be used by some percentage of people in a harmful way.

Engaging with human counsellors and therapists has likewise caused harm to some people. Engaging with the officially sanctioned medical system has caused harm widely throughout history, and I don't just mean with regard to "illegal" illnesses that haven't been officially "medically sanctioned". Medical errors happen, resulting in death, injury, or harm. I've certainly been harmed by official medicine, and I have a friend who died due to medical error. In both of those cases there has certainly been no accountability or responsibility.

Having worked in healthcare in the past, I saw a number of examples where no accountability or responsibility was taken when it should have been, even at the level of the statutory organisations that investigated and had a duty to do so.

For all the exceptional cases we hear of people misusing AI, there are also many people we don't hear about who were helped. I do think education is needed about what AI in its current form actually is, its huge limitations, how it can hallucinate information, and so on. But it's hard to see the world going back to not using it at all.
 
I agree with the concerns but also recognise the benefits. Do people think AI really carries more risk than any other "tool" that has ever been created? (Genuinely wondering.)
In some ways, yes, I think there is more risk than with previous tools, because we've never before had tools we can speak to in the way we're used to speaking to other humans. That was a unique characteristic of human interaction, and it no longer is. Regardless of any other capability, that is a huge change with huge ramifications. It's where some of the downsides we already see come from, but also where some of the upsides are enabled.

For all the exceptional cases we hear of people misusing AI, there are also many people we don't hear about who were helped. I do think education is needed about what AI in its current form actually is, its huge limitations, how it can hallucinate information, and so on. But it's hard to see the world going back to not using it at all.
Agreed. We're not going back. I wish a few companies and people hadn't pushed this technology in the ways they have; some of the problems and pushback are entirely due to them. It's made the job of educating people and discussing the risks and rewards that much harder. There was an alternative path that more responsible people in the field were taking.

There are some great uses, and there will be more. But there are also areas I think we will, over time, want to keep human through choice or preference as much as anything, as well as areas for which today's tools are definitely not suitable or appropriate, and for which the tools of the future likely won't be for a long time.

Of course, anybody who says they know where things are heading is wrong, me included. If you'd given me what we have now a decade ago, I would have been surprised and would have made all sorts of incorrect assumptions. People in the field whom I respect, and have spoken with or just listened to, say the same.
 
It's off the topic of this thread, but my biggest concern with AI is how it is and will be used by authoritarian states to control and monitor citizens (and non-citizens), and how it is being and will be used in war. I don't feel optimistic about that.
 