Where did that lake come from?

When it comes to change, I may not be thoroughly comfortable with the effort involved, but I at least recognize that it has to happen. Still, I take to heart the following mantras:

All progress involves change, but not all change is progress.

Likewise:

Is it really progress if a cannibal uses a knife and fork?

Our current rush to embrace AI might have severe moral and ethical consequences that require human introspection.

Telling the truth can be inconvenient, particularly when an agenda has to be accomplished (paging George Santos). But what if the lack of veracity comes from a machine and not some cash-addled, morally bankrupt political system (yeah, I’m looking at you, Republicans and Democrats)? Case in point: we’re now seeing errors pop up from AI that is meant to inform but is, in fact, misinforming. See the following:

https://www.vice.com/en/article/wxnaem/stack-overflow-bans-chatgpt-for-constantly-giving-wrong-answers

Wrong information is worse than no information. Erroneous information can drive decisions toward wrong (potentially catastrophic) outcomes, whereas a lack of information merely forces delays and further research.

https://www.vice.com/en/article/bvmep3/cnet-defends-use-of-ai-blogger-after-embarrassing-163-word-correction-humans-make-mistakes-too

It would appear that AI-generated “facts” might need to be fact-checked themselves. While AI systems aren’t necessarily trying to be disingenuous, that doesn’t mean they are always accurate. I see a future need for fact-checking/verification of AI-generated results. Something information providers might want to consider before zeroing out their budget for content writers.

There is also a moral and ethical issue with reliance on AI, particularly regarding health and the need for full disclosure.

https://www.insidehook.com/daily_brief/health-and-fitness/koko-mental-health-artificial-intelligence

Using AI to respond to users without full disclosure could be (and perhaps should be) considered unethical.

So ultimately, we want to read accurate, factually based information that we can act upon to make decisions that will take us where we want to go. And hopefully not into a disaster:

https://theweek.com/articles/464674/8-drivers-who-blindly-followed-gps-into-disaster

‘Be careful about reading health books. You may die of a misprint.’
— Mark Twain

It’s Artificial, but is it Intelligent?

The rage du jour in writing is the emergence of AI (“artificial intelligence”) as a tool for writers and automated systems to create written content that blurs the line between what is human and what is not. That may not be the intent, but we’re headed toward a future where what you read and what you see (in the media – news, movies, etc.) is created by automated agents.

The basic premise is that systems are now to the point where they can “understand” input queries in a “human-like” manner and, drawing on a corpus of embedded (or fed-in) information, produce a response that is also “human-like” in terms of language usage and syntax. This has implications far beyond the initial intent of the effort (facilitating helpful automated responses): school-age kids can now hop into a browser and, in 30 seconds, get a relatively decent essay about the Declaration of Independence.
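
To make that premise concrete, here is a minimal sketch of what “feeding a query to one of these systems” looks like under the hood, using OpenAI’s Python library. The model name, the prompt, and the environment-variable API key are my own illustrative assumptions, not anything prescribed by the sites linked below.

    # A minimal sketch, assuming the "openai" Python package is installed
    # and an API key is available in the OPENAI_API_KEY environment variable.
    from openai import OpenAI

    client = OpenAI()  # picks up OPENAI_API_KEY automatically

    # Ask the model for the kind of essay a student might request.
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # illustrative model choice
        messages=[
            {"role": "user",
             "content": "Write a short essay about the Declaration of Independence."}
        ],
    )

    # The "human-like" reply comes back as plain text.
    print(response.choices[0].message.content)

That is the whole trick from the user’s side: a plain-language question in, a plain-language answer out, with no indication of how reliable the answer actually is.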

Beyond a world where no one (school-age or not) needs to put any real thought into creating crappy first drafts, the implications for writers are only beginning to reveal themselves. To that end, I will be doing my own experiment with AI technology, specifically a resource called Sudowrite, which you can try for free at https://www.sudowrite.com/. In particular, I’m curious to try out its “wormhole” capability, wherein it takes the gist of what you have written and “extends” it through the dynamic generation of content. The basic idea is that you’re puttering along with a story (not plotted out, it seems), then you hit the wall of (mythical) “writer’s block,” so all you have to do is tell the AI to continue the story, and voila, your story’s plotline is extended beyond your mental pothole. More on that later.

If you want to follow up in detail on what I’m talking about, I suggest checking out some websites on the subject. They go into far more depth than I could as a lowly writer and IT specialist.

Look at OpenAI, the organization behind ChatGPT, to get an overview of the effort:

https://openai.com/

This Washington Post article (warning: paywall) is an excellent starting point, discussing ChatGPT and dialog systems.

https://www.washingtonpost.com/technology/2022/12/06/what-is-chatgpt-ai/

If you’re not fond of paywalled content, or you have used up your free articles, check out CNET’s “Why Everyone’s Obsessed with ChatGPT, a Mind-Blowing AI Chatbot”:

https://www.cnet.com/tech/computing/why-everyones-obsessed-with-chatgpt-a-mind-blowing-ai-chatbot/

Lastly, look at this article from the Guardian on the implications for academia. It boggles the mind:

https://www.theguardian.com/technology/2022/dec/04/ai-bot-chatgpt-stuns-academics-with-essay-writing-skills-and-usability