StellariaGraminea
Senior Member (Voting Rights)
I agree with the concerns but also recognise benefits. Do people think AI really carries more risk than any other "tool" that has ever been created? (Genuinely wondering.) From the humble stick, which can be used to reach something out of grasp, to poke someone's eye out, or even possibly to commit homicide, it seems that every new tool we humans invent will be used by some percentage of people in a harmful way.
Engaging with human counsellors and therapists has likewise caused harm to some people. Engaging with the officially sanctioned medical system has widely caused harm through history, and I don't mean just as regards "illegal" illnesses that haven't been officially "medically sanctioned". Medical errors happen, and they result in death, injury or harm. I've certainly been harmed by official medicine, and I have a friend who died due to medical error. In those latter two cases there has certainly been no accountability or responsibility.
Having worked in healthcare in the past, I saw a number of examples where no accountability or responsibility was taken on occasions when it should have been, even at the level of the statutory organisations that investigated and had a responsibility to do so.
For all the exceptional cases we hear of people misusing AI, there are also many people we don't hear about who were helped. I do think education is needed about what AI in its current form actually is, its huge limits, how it can hallucinate information, and so on. But it's hard to see the world going back to not using it at all.