As if newspaper columnists and other journalists needed something else to freak people out about this year, along came ChatGPT. Released by its nonprofit creator, OpenAI, in late November, it lets people have things written to order.
Most media praised its writing abilities while expressing fears about just that.
University professors, teachers and other highly verbal people are worried, according to the reports. They are discussing changes to writing-assignment protocols to avoid getting AI papers instead of student-written ones. Some are asking students to handwrite (print?) papers in class.
Reporters and columnists are presenting AI-generated examples, trying, it seems, to convince us that ChatGPT is going to replace humans as thinkers and writers. The media folks have either read and watched a lot of science fiction, or they think we have.
Deep thinkers say they fear AI writing could be programmed to change our politics and minds.
Maybe the fearmongers trust Elon Musk, who once predicted that AI would take over the world by 2025. Just recently at a conference in Dubai, commenting on ChatGPT, he called AI “one of the biggest risks to the future of civilization.”
For the moment, the pundits seem to have forgotten climate change. Could ChatGPT have written to them that the environmental crisis went away?
For Valentine’s Day, the New York Times asked readers to choose a mood, a person, and a famous poet and have ChatGPT draft a Valentine to the person. All I can say is, with all due respect to Paul Simon: This AI provides the 51st way.
Thank goodness Rihanna will never get the “wistful” Rumi-inspired poem ChatGPT wrote for me to send:
“My darling Rihanna, / In my heart your beauty lies / My love for you remains strong / Yet I must live with these sighs / Your love I can’t touch, caress / Still I cherish a blazing light / That will remain as an inner warmth / Until you are here by my side”
Lucky Rihanna. Poor readers. Poor Rumi. I apologize.
[Note: I originally wrote, “Rumi must be rolling over in this grave” here. But I looked it up. Turns out he is buried close to where the earthquake hit Turkey earlier this month. I changed what I wrote. Would ChatGPT have done that? I doubt it.]
I have read essays in the New York Times written by both schoolkids and ChatGPT responding to the same assignment. It was easy to tell which was which. Eighth graders don’t use the subjunctive mood, let alone use it correctly.
The Boston Globe praised ChatGPT for its handling of a fictional Westminster Dog Show that featured robotic dogs rather than real ones. The Globe apparently did not register that what it got was not a news story but a piece of fiction. ChatGPT also had the Boston Dynamics robot dogs performing tricks, which doesn’t happen at Westminster.
From examples I’ve seen, ChatGPT is OK at analysis and other intellectual writing, if frequently boring. Though it’s good at grammar, its tone tends to be either frivolous or stiff. Its writing often sounds like something I’ve read before, even though I haven’t, not exactly.
Professors and teachers who think AI writing is so good that it’s dangerous to society should consider assigning more creative, contemporary writing projects.
It’s no accident that AI is useful for writing sports and fitness stories; some media outlets are already using it for that. Any game write-up is like Mad Libs for AI: fill in the blanks with the scores and basic events. Fitness stories already abound for AI to learn, absorb and pretty much repeat.
Any assignment that needs very recent information or needs quotes from a person who has never been quoted on the topic before will be tough for ChatGPT to complete. In short, right now ChatGPT won’t work for most news stories or many other practical tasks, like writing minutes of a meeting.
Everyone (except freelance columnists and reporters desperate for an assignment?) should calm down about the end of the world. The most ChatGPT could do is end a Valentine relationship.

Sandy Storey is the Publisher Emeritus of the Jamaica Plain Gazette.