The Medium Is the Message (And So Are We!)

by Andrew Park

“The new media and technologies by which we amplify and extend ourselves constitute huge collective surgery carried out on the social body with complete disregard for antiseptics.”

Marshall McLuhan, Understanding Media: The Extensions of Man (1964)

A while back, I wrote a blog post about large language models (LLMs), advertised it on LinkedIn, and at least one person read it (just call me “the anti-influencer”).

[Image: an illuminated human brain. Photo by randa marzouk on Unsplash.]

Anyone reading that post would quickly conclude that I harbour a great hatred for LLMs in general, and ChatGPT in particular. Although I articulated my reasons well enough, I opened myself to an obvious criticism: since I’d never tried ChatGPT myself, how could I understand it?

My counter-argument is that I don’t need to practise vivisection on kittens to understand that it’s a bad idea. And while fiddling around with ChatGPT wouldn’t involve animal cruelty, the (so far) negative effects of LLMs on society have given me similar ethical objections to using them.

Because when it comes to ChatGPT and LLMs, “the medium is the message.” And when we fool around with LLMs, organize lessons around them, or flood literary journals with crappy LLM-generated stories, we become co-opted into that message.

The message still needs translation

When Marshall McLuhan wrote his famous aphorism in 1964, he was signalling that the mechanics of a technology, and even its products, are incidental to its effects on society at large.

Thus, the “message” of railways changed the scale of cities and the way humans thought about distance. The real “message” of the printing press was not words and sentences or even arguments, but the homogenization of language, acceleration of communications, and fomentation of revolution.

Read today, McLuhan’s 60-year-old words seem both prescient and as relevant as ever in the age of smartphones and artificial intelligence (AI). In particular, he had a lot to say about the numbing habituation of humans to advanced technologies.

However, if ChatGPT is the message, we still need to translate it into words we can understand, share, and debate. Here are a few words that I believe capture the essence of its influence so far.

Totalitarianism. Many AI creators and pundits believe that AI is the solution to everything from curing cancer to solving climate change. This belief is a classic example of mistaking everything for a nail when your only tool is a hammer. A corollary to this conceit is the ideological claim that AI is the only way forward, and that AI entrepreneurs are the ones to bring about this future. This claim represents a totalitarian impulse, and even tech insiders have issued warnings about the dangers of such unwarranted techno-optimism.

Technological inevitability. This is the idea that “if we don’t develop it, someone else with lower ethical standards will do so.” We’ve seen this movie before – with atomic weapons, human cloning, Google Glass – and there’s nothing inevitable about it. Laws and policies (cloning) and market realities (Glass) can prevent or modify technological developments, and the idea that “someone else” will do something is never a valid argument for doing it.

Few winners, many losers? Two years of ChatGPT is long enough to see who wins and who loses as a result of OpenAI releasing this tech into the wild. The obvious winners are the tech billionaires whose primary objective is to make boatloads of cash. The obvious losers are the creative classes, who were already under pressure from book piracy, Spotify, and now – thank you, AI – out-and-out thievery.

Existing in limbo between these extremes are the plagiarists, the chancers, the lazy, and the desperate. These are the ones flooding Amazon with AI-generated rip-offs of Atwood, Pullman, and many others. These are the undergraduates (in a course on ethics and technology!) who, en masse, used ChatGPT to write short introductions about themselves and their expectations for the course.

The shocked professor, Megan Fritts (@freganmitts), later tweeted, “Students aren’t just using this stuff as a ‘problem-solving tool’ or whatever BS people spout, they’re using it to forget how to talk.”

The good professor’s comments segue nicely to Dehumanization. Calling LLMs and other AI products “artificial intelligence” is both a lazy misnomer and a cynical deception. They are certainly artificial, but they are not intelligent in any sense that a neuroscientist would recognize. They are merely statistical machines using patterns in text to predict the next words in a sequence. They do not, and cannot, understand what they write: they are stochastic parrots.
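For readers who’d like to see what “predicting the next words in a sequence” actually means, here is a deliberately crude sketch: a toy bigram model in Python that counts which word tends to follow which in a scrap of text, then strings words together from those counts. This is emphatically not ChatGPT’s architecture (real LLMs use transformer networks trained on billions of words), and the toy corpus and function names below are mine, purely for illustration. But it does show the underlying principle of the stochastic parrot: pattern-matching without a shred of understanding.

# Toy illustration of next-word prediction: count which word follows which,
# then sample a likely continuation. No meaning is involved at any point.
from collections import Counter, defaultdict
import random

corpus = "the cat sat on the mat and the cat slept on the mat".split()

# Tally how often each word follows each other word.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word):
    # Sample a next word weighted purely by observed frequency.
    candidates = following.get(word)
    if not candidates:
        return None
    words, counts = zip(*candidates.items())
    return random.choices(words, weights=counts, k=1)[0]

# Generate a short "sentence" starting from "the".
word, output = "the", ["the"]
for _ in range(6):
    word = predict_next(word)
    if word is None:
        break
    output.append(word)

print(" ".join(output))  # e.g. "the cat slept on the mat and"

Scale that up by a few hundred billion parameters and a mountain of scraped text, and you get something that sounds fluent. What you do not get is comprehension.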

However, AI promoters – whether naively or deliberately – have consistently conflated artificial “intelligence” with human intelligence. After all, if humans are just stochastic parrots, maybe LLMs differ from us by degree rather than in kind.

Linguist Emily Bender pushes back against this dangerous tendency in an article for New York Magazine’s Intelligencer: “People want to believe so badly that these language models are actually intelligent that they’re willing to take themselves as a point of reference and devalue that to match what the language model can do.”

Tech bros are the message too. Ad hominem arguments are usually weak, but you can’t understand the present ChatGPT moment without looking at how the Silicon Valley crowd thinks. Sam Altman, CEO of OpenAI, is a firm believer in the singularity, the hypothetically inevitable merger of human and machine. AI researcher Blake Lemoine was fired by Google after claiming that the company’s LaMDA LLM had actually achieved sentience. Many tech bros also believe the transhumanist fantasy that human consciousness could be transferred out of our bodily “meat sacks” into a disembodied technosphere.

These sci-fi-inspired fantasies demonstrate that LLM creators have been absorbed into the McLuhanesque message of their own creations.

More than this, many tech bros subscribe to a profoundly false, strangely disembodied view of intelligence. They’ve forgotten that brains are connected to bodies. Real living beings embody experience as they move through the material world. They process and interpret signals and stimuli received through their physical senses. Recent research demonstrates that events within our physical bodies can affect our moods and thought patterns. So, that gut feeling you have may actually come from your gut!

Conclusions

As for ChatGPT’s effects on the editing community, there’s more speculation than data. PerfectIt CEO Daniel Heuman wrote a fairly balanced blog post on this subject. He predicts that AI will never replace the author’s voice, that editors will embrace LLMs, and that editors will be in greater demand than ever. Amanda Goldrick-Jones makes similar points in a BoldFace blog post: ChatGPT may be able to “generate 300 words of error-free text,” but it won’t replace the understanding and empathy of a real editor. She also repeats the maxim that “ChatGPT is just one tool in an editor’s toolbox.”

It’ll come as no surprise that I respectfully disagree. The text generated by ChatGPT is not the message. It’s the nature of the algorithm, the mentality of its makers, and the effects on society that constitute the real message.

Mediocre and hallucination-prone as it is, ChatGPT is just a waystation on the road to more sophisticated and disruptive LLMs. Like the Borg in Star Trek, AI promoters and fatalistic collaborators want you to think that resistance is futile.

My view is that we can and should resist the continued penetration of these algorithms into our creative lives, education systems, and information streams. We should refuse to become part of their message. That’s the way I’ll continue to roll.

To misquote that vengeful curmudgeon, Captain Ahab from Herman Melville’s Moby Dick: “From Hell’s heart I stab at thee, thou all-destroying but unconquering ChatGPT.”


Andrew Park spent 20 years as an ecology professor before deciding he never wanted to grade another term paper. He now edits and writes both fiction and non-fiction from the Gatineau Hills. You can find him at wordfishereditorial.com.

This article was copy edited by Alex Benarzi, a freelance editor working in Calgary. Alex edits fiction, academic papers, and blog posts.
