With the progression of technology and a little help from the sci-fi genre, it has become a part of our wider culture to anticipate profound changes to the way we work and live within our own lifetimes. With the development of large language models (including OpenAI’s ChatGPT, Google’s Bard, Meta’s Llama, Microsoft’s Orca and many more), we are presented with a sea change in the ways we navigate creative and logical tasks in all aspects of daily life.
Despite the widespread applicability of these models, it’s hard to argue that any other profession is as directly impacted as copywriting. Before we can discuss the impacts on user experience in this field, we should share a quick explanation of what these models are. The following has been borrowed from an outside source:
“A Large Language Model (“LLM”) is an AI software programme designed to generate coherent text responses to user queries, or “prompts” … An LLM “learned” language by being exposed – through “training” – to “massive amounts of text from various sources,” such as code, webpages, and books, in 20 languages … LLMs develop emergent capabilities to use the building blocks of language in extraordinary ways, including to “generate creative text, solve mathematical theorems, answer reading comprehension questions and more”.
The reader may have expected me to prompt an LLM like ChatGPT to write the above explanation. However, to demonstrate a different point, I actually chose to pull this quote from a recent court filing written by Meta’s lawyers. (1) There are many concerns over copyright and plagiarism when it comes to this budding technology, and multiple lawsuits are currently being brought against the creators of LLMs. These lawsuits argue that the libraries of text used to train LLMs include the works of many authors whose permission was never sought.
Meta and other developers of LLMs argue that the training of these AIs falls under ‘fair use’ terms, as the works themselves are never copied or stored. The training instead extracts information about how words tend to follow other words, and the model uses this to ‘learn’ how to sequence language. The ultimate effect is that instead of conceptualizing and writing out whole paragraphs ahead of time, an LLM only ever thinks about the next word it is writing, making up each sentence as it goes along.
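For readers curious what ‘making up each sentence as it goes along’ actually looks like, here is a minimal sketch in Python of that next-word idea. The word-frequency table is invented purely for illustration; a real LLM learns billions of far subtler statistical patterns from its training text:

```python
import random

# Toy 'model': for each word, how often various words followed it.
# These counts are made up for illustration only.
next_word_counts = {
    "the": {"cat": 3, "dog": 2, "copywriter": 1},
    "cat": {"sat": 4, "slept": 2},
    "dog": {"barked": 3, "slept": 1},
    "copywriter": {"wrote": 5},
    "sat": {"quietly.": 1},
    "slept": {"soundly.": 1},
    "barked": {"loudly.": 1},
    "wrote": {"fluently.": 1},
}

def generate(start: str, max_words: int = 10) -> str:
    """Build a sentence one word at a time, choosing each next word
    in proportion to how often it followed the previous one."""
    words = [start]
    while len(words) < max_words:
        options = next_word_counts.get(words[-1])
        if not options:  # no known continuation, so stop
            break
        choices, weights = zip(*options.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))  # e.g. "the cat sat quietly."
```

Notice that the sketch never plans a sentence in advance: each word is chosen by looking only at the word before it. Scale that idea up enormously and you get fluent prose, but from a system with no built-in sense of whether what it says is true.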
This next-word approach sometimes results in the AI dreaming up falsehoods, often referred to as ‘hallucinations’: confident statements of fact that are simply not true. The issue is compounded by the training data itself, which is scraped from countless online sources, including blog posts, articles, reviews, product descriptions and many other forms of work produced by copywriters. There is no telling how accurate much of the data going into the AI’s training really is, and so plenty of inaccuracies can be baked in along the way.
This highlights the potential problems around the accuracy of LLMs, as well as a new kind of plagiarism that arises in their use. The work may not be specifically derivative of any one human author, but if someone takes the output of Google’s Bard, for example, and publishes it as their own without disclosure, there is a clear breach of transparency and authenticity. Multiple controversies of this kind have already accompanied the public rollout of this technology.
A predictable consequence is students letting an AI do their homework for them, only to be caught out when they fail to edit out the artifacts that give away the AI-generated origin of the work. And it’s not just students; professionals are also tempted. An infamous case of inappropriate AI use occurred when a lawyer used ChatGPT to write a court filing he then submitted. (2) He was caught out because the document cited several court cases that didn’t actually exist: a drastic example of the kind of trouble AI ‘hallucinations’ can lead to when the user isn’t paying enough attention.
Fortunately for copywriters, the stakes are not usually so high as to concern the law. But for those who do not wish to cross ethical boundaries involving transparency, authenticity and even journalistic integrity, it is important to play fact-checker and editor while offloading the task of writing itself. Diligence is required to turn out work that isn’t just of acceptable quality, but is also original.
We’ve grown accustomed to computers, algorithms and AI playing a profound role in our professional and personal lives, as they often offer us a path of least resistance. The experience is so streamlined and invisible that we barely acknowledge the spell-check and grammar-check functions in our word-processing software, nor do we consciously appreciate the suggestions a web browser or search engine generates as we type. We barely even notice that we’re assisted by AI, in the form of predictive text, every time we write a message on our phone.
Interestingly, it’s usually only when these AIs make a mistake that we notice their involvement. Auto-correct has become widely maligned for its tendency to change the words and spellings we use when messaging friends and family. Because it changes the words we type in real time, sometimes causing us to accidentally send typos, we tend to catch the error right away: it’s easy to spot when our thinking clashes with the AI’s in the moment. But when we ask an AI to write out an entire message for us, we offload nearly the whole thought process to it, and so we more readily accept any clash between our intent and the words returned to us.
While we continue to come to grips with a technology this new and impressive, the shock of AI making great strides and threatening our relevance is being replaced by the shock of AI making mistakes and impeding our workflow. This kind of technology has been gradually normalized in recent years, with services such as Grammarly taking on a much greater presence in how we write everything from essays to emails. Younger generations growing up with it may be far more likely to take it for granted and bypass ever learning how to turn out work without the aid of AI. It’s tough to predict how the workloads will be shared between AI and copywriters in the future.
The evolving dynamics between human creativity and AI assistance necessitate a delicate balance. As AI becomes increasingly integrated into daily workflows, the challenge lies in maintaining originality while leveraging the efficiency of LLMs. The future distribution of workload between AI and copywriters remains uncertain, underscoring the ongoing journey to harmonise these elements in shaping the landscape of creative industries.
PS – ChatGPT wrote the last paragraph; it seemed a fitting way to conclude.
References:
(1) https://cdn.theatlantic.com/media/files/meta_response_silverman.pdf (page 7)
(2) https://shorturl.at/gyAF6 (Forbes.com)
An article by: Madeleine Doporto, Beyon Sr. Copywriting Manager