Looking For Moral Being Attachments

A ChatGPT logo is seen in West Chester, Pa., on Dec. 6, 2023. (Matt Rourke / AP Photo)
As the recent stench of war grew stronger, I noticed once again how much we love our machines, be they bunker-busting or surgical, life-saving or high-earning, analog, digital or artificially intelligent. But what happens when our doodads and thingamajigs act human … you know … err?
To wit: Last week in this space, the modern marvel autocorrect changed one letter in one word (“defund” to “defend”) in one sentence, in one paragraph of an entire 750-word commentary. The “correction” altered the entire meaning of the piece. But those were simply the details.
More pressing appears to be that we’re moving from machines being able to help us express our thoughts and ideas to machines making them for us, intentionally or not. All in the name of technological progress.
Common Wizardry
Such wizardry as autocorrect, ChatGPT or any of a number of AI tools is commonplace today, whether to construct a good declarative sentence or to plan and execute a bombing mission 7,000 miles away. I’m only speculating on the latter, but surely AI played some role when the U.S. bombed Iranian nuclear facilities. Perhaps AI will also be used to resolve with accuracy the debate between “obliterate” and “two months,” a chasm both linguistic and political.
We can stop already with that annoying longing for the good old days. The AI genie is long out of the bottle and is not going back in. While government, business, medicine, manufacturing, agriculture and science race to capitalize on its wonders — which are legion and spectacular — we should also know and embrace its limitations.
Please don’t confuse me with the Luddite down the block. I have for some time embraced technology, even making peace as a former professor of academic writing with the untimely death of the college essay at the impressive hands of large language models such as the aforementioned ChatGPT.
You remember ChatGPT: Feed some information into it, such as “Write 600 words making the case for slavery reparations.” Click once and within 30 seconds, you have yourself a serviceable essay, probably in the realm of “Cs get degrees.” A second try might ask it to provide two citations. Which it will. (C+?) Try it. Perhaps you’ll want to retake Comp 101 from freshman year.
But, as I’ve written before, what is missing in these essays is a sense of humanity, not simply in language but also in tone, word choice and, if you will, a measure of the writer’s passion for the subject, none of which, to date, ChatGPT can provide. A productive bot may be many things. A moral being is not one of them.
Our health can be affected, too. A critical reading of HHS Secretary Robert F. Kennedy Jr.’s “Make America Healthy Again” uncovered serious flaws, including citations of research that did not exist. Whether or not AI was used to generate the report remains unclear. This much is clear: It makes little difference whether it was the bots or the humans. The information was wrong.
Often Right
AI gets it right most of the time. The question we have to ask ourselves is whether most of the time is good enough. Is it better than a human most of the time? Worse? I’m intolerant of one incorrect word out of 750. What would be our tolerance level in war, surgery, a banking transaction or an immigration hearing?
These AI failures, including the fabrications known as hallucinations, have led to PR disasters for American businesses from McDonald’s to Sports Illustrated to Amazon. Mickey D’s had to scuttle its foray into AI ordering after a video went viral of the AI “crew member” ringing up an order of 260 Chicken McNuggets while the customers begged it to stop. SI somehow gave bylines to AI-generated authors, and Amazon scrapped a recruitment tool that somehow only recruited men.
Pardon my cherry picking. Regular readers here know that of all the modern barriers between us and a better future, access to the truth is paramount. AI can be a powerful force in that kit of democracy’s tools. AI and someone accountable for it.
Had last week’s autocorrect snafu gone unedited, the republic would have survived. But as we edge forward (or backward) with ceasefires and bombings and the potential for a wider war, we should ensure that all the AI and technological wizardry at our disposal has a capable human, a moral agent, attached.
To read more about AI and hallucinations, Google them. An AI bot will tell you all you need to know.
This story was published by Nebraska Examiner, an editorially independent newsroom providing a hard-hitting, daily flow of news. Read the original article: https://nebraskaexaminer.com/2025/06/30/looking-for-moral-being-attachments/
Opinions expressed by columnists in The Daily Record are not necessarily those of its management or staff, and do not constitute an endorsement or recommendation. Any errors or omissions should be called to our attention so that they may be corrected. Contact us at news@omahadailyrecord.com.