Happy First Birthday, ChatGPT

by Amanda Goldrick-Jones

[Image: a drawing of a robot holding an open book as hundreds of books tumble off surrounding shelves onto the floor. Image by Terri Paju from Pixabay]

Last winter, ChatGPT burst onto the scene and exploded editors’ brains. A year later, what do we know? How does ChatGPT (and its proliferating cousins) affect editors’ workflow, roles, and relationships with writers?

I wanted to learn more from editors who have spent serious time with Artificial Intelligence (AI) tools, so that’s exactly what I did over the past few months. In this post, I share some highlights of what I’ve learned about the capabilities, limitations, and risks of AI for editors.

Capabilities (features, not bugs)

• Summarizing: According to Perrin Lindelauf (Vancouver-based editor and tech enthusiast), text-trained AI tools, like ChatGPT, do a good job of summarizing and structuring information. This fall, Editors Toronto hosted “Beyond ChatGPT: Actually Useful AI for Writers & Editors,” in which Braveen Kumar also pointed out the tool’s usefulness for summarizing verified information. My experience confirms these claims: I asked ChatGPT to summarize a paragraph in a published Terms of Use agreement using simpler language, and the result was a text much clearer than the original.

Perrin demonstrated the critical point that ChatGPT works best with clear, carefully constructed prompts (here are some other examples). Still, because ChatGPT learns from anything you submit, some editors have expressed unease about allowing it to summarize a client’s writing. 

• Voice control (not meant to be scary): Perrin and Braveen both noted how the tool can help editors maintain a consistent tone and style, including working with or on style guides.

• Grammar and punctuation: ChatGPT not only produces grammatical prose but can also correct or improve existing text. With some caveats, this is a time-saving affordance that Perrin and Braveen also acknowledge.

• Boilerplate: Many professionals are excited by AI’s ability to generate human-sounding emails or low-level text. Of course, this comes with a strong caveat: no text is too low-level to skip verification; everything must pass the test of truth. As Samantha Enslen of Dragonfly Editorial explained in a web presentation for ACES this summer, AI can “write everything generic, untrue, outdated, unreliable.” In other words, be on your guard at all times. (Did you hear the one about the lawyer who used ChatGPT to prepare a filing?) One more caveat: according to Axios, ChatGPT-generated phishing emails are becoming “scary good.”

Limitations (bugs, not features) 

ChatGPT’s limitations are generally well known, but some become clearer with experimentation. Here are three snapshots:

• Limitation 1: ChatGPT is still stuck in a time warp. You will sometimes receive a message like the following:

“I'm sorry, but I do not have access to real-time information or the ability to browse the internet for current events.” 

• Limitation 2: AI is subject to “hallucinations” (a nice word for errors or fabrications). ChatGPT itself warns users that it can make mistakes and tells them explicitly to verify important information. When it doesn’t know the answer, it sometimes makes things up: fictional information that can look plausible. In July, I asked GPT-4 to convert citation entries from APA style, which uses only initials for first names, to Chicago’s author-date style, which requires full names. What it generated looked convincing, but when I checked, I found plenty of hallucinated first names. Nor could GPT-4 reproduce the information with 100 percent correct Chicago formatting; I had to add italics to the journal titles and replace hyphens with en dashes. So much for GPT-4 removing all the grunt work, but score one for the human editor.

Even these small examples show that with AI, fact-checking is critical. Ironically, for a tool that’s touted for efficiency, it could take more time to check AI-generated citations than to edit a writer’s reference list. It’s evident that tools like ChatGPT aren’t yet up to performing these kinds of tasks reliably. On the other hand, as Blue Garret blogger Kirsten Tate describes, AI tools like ChatGPT and Bing can fact-check certain kinds of information accurately and save you time. Though, as Tate cautions, handle with care.

• Limitation 3: Many editors I talked with expressed concern about unscrupulous writers self-publishing AI-generated books or submitting robo-prose to specialist magazines. Indeed, soon after ChatGPT appeared, the sci-fi magazine Clarkesworld closed submissions after getting a flood of AI-generated short stories from opportunists hoping for quick cash. Clearly, AI wasn’t good enough back in February 2023 to fool sharp-eyed editors.

To test this, last March I asked GPT-4 to write a short horror story about a woman who sees the shadow of her dead husband. I left it to the AI to decide whether the apparition was benign or dangerous. Here is an excerpt:

“It was a Tuesday afternoon when Marianne first noticed the dark shape. At first, she thought it was a trick of the light or a figment of her imagination, but there it was—a tall, inky silhouette that seemed to have Peter's outline. Marianne could almost make out the shape of his favorite fedora, slightly tilted to the side. Fear and longing wrestled within her, but before she could approach, the figure disappeared.”

When I asked GPT-4 to regenerate the story, it offered this:

“A few weeks had passed since Albert's death, and Martha tried to carry on. She busied herself with gardening and reading, attempting to fill the void left in her life. But the house had other plans. Late one evening, as Martha was preparing for bed, she saw a dark shape in the hallway.”

I invited two fiction writers to critique both stories (I disclosed that AI had generated them, so admittedly there may be some bias). Both writers found the prose riddled with clichés—a pastiche of overused ghost story tropes with a strong dose of sentimentality. I returned to the same prompt shortly before Halloween and asked ChatGPT to try this story again, but it came up with similar results. 

It might be reasonable to conclude that serious writers who respect their readership have little to fear from AI. Yet writers’ jobs are at risk, so responsible AI use is critical for them as well as for editors. Large-language-model (LLM) AI tools are trained by scraping original writing and artwork from the web. In the US, authors have filed lawsuits against AI companies, and several Canadian writers claim that AI used a data set from their publications with “no compensation and no royalties for the use of copyrighted works.” So, there are good reasons to hesitate before uploading our own or our clients’ original writing to ChatGPT, Bing, or Bard. That said, ChatGPT offers the option of not allowing chats to be used as training data, though you have to dig around in your account settings to find it.

Promisingly, more publishers are joining the “responsible use of AI” club. As of September, Amazon’s Kindle Direct Publishing requires ebook writers to disclose whether any content is AI-generated. The academic publisher Taylor & Francis forbids listing AI as an author and requires authors to document their uses of AI in preparing their work. However, how far we can trust claims of responsible AI use remains an open question.

One year after ChatGPT’s noisy birth, I’ve learned this: (1) we can’t ignore AI or hope it will go away; (2) we need to understand as much as possible about what AI tools can, can’t, and shouldn’t do for our profession; and (3) we should show how and why AI can’t yet (and may never) replace the responsible oversight, dependable judgement, or creativity that editors offer. Most of all, AI requires constant vigilance. We owe it to our clients to be aware of AI’s expanding affordances and its vexatious, even risky, limitations.


Amanda Goldrick-Jones, a former post-secondary writing instructor living in Victoria, British Columbia, has long been interested in technology-mediated communication. A graduate of SFU’s Editing Certificate program, she specializes in academic and business editing.

This article was copy edited by Natalia Iwanek (she/they), a freelance copy editor, stylistic editor, and proofreader who works with a variety of clients on a diverse portfolio of projects.

One thought on “Happy First Birthday, ChatGPT”

  1. The AI conversation is fascinating. A dear friend of mine who is a librarian compares AI to the advent of the word processor – it won’t replace original work, but can take some of the tedium out of creative work. It will be interesting to see how it plays out over the next few years.

