When it comes to change, I may not be entirely comfortable with the effort, but I at least recognize it has to happen. I also take to heart the following mantras:
All progress involves change, but not all change is progress.
Is it really progress if a cannibal uses a knife and fork?
Our current rush to embrace AI might have severe moral and ethical consequences that require human introspection.
Telling the truth can be inconvenient, particularly when an agenda has to be accomplished (paging George Santos). But what if the lack of veracity comes from a machine rather than some cash-addled, morally bankrupt political system (yeah, I’m looking at you, Republicans and Democrats)? Case in point: we’re now seeing errors pop up when AI meant to inform is, in fact, misinforming. See the following:
Wrong information is worse than no information. Erroneous information can drive decisions toward wrong (potentially catastrophic) outcomes, whereas a lack of information forces delays and further research.
It would appear that AI-generated “facts” may need to be fact-checked themselves. While AI systems aren’t necessarily trying to be disingenuous, that doesn’t mean they are always accurate. I see a future need for fact-checking and verification of AI-generated results — something information providers might want to consider before zeroing out their budget for content writers.
There is also a moral and ethical issue with reliance on AI, particularly regarding health and the need for full disclosure.
Using AI to respond to users without full disclosure could be (and perhaps should be) considered a lack of ethics.
So ultimately, we want accurate, fact-based information that we can act upon to make decisions that take us where we want to go — and hopefully not into a disaster:
“Be careful about reading health books. You may die of a misprint.”
— Mark Twain