When used for automation, AI tools can bring myriad benefits. But with ChatGPT and generative AI image-creation tools now widely available, creatives may fear for their livelihoods. Should they?

It’s been 25 years since a computer first beat the world chess champion at his own game. Today, news of the abilities of AI-powered tools floods our feeds.

Some businesses already plan to use the tech to enhance their offerings. Others, meanwhile, lament its destabilizing potential.

The excitement for, and trust in, its revolutionary capabilities have truly taken root, epitomized by Microsoft’s recent extension of a multiyear, multi-billion-dollar investment in OpenAI.

With chatbot capabilities extending to essay writing and more, many in the creative world consider themselves to be under threat. But can AI truly replace creativity? Are these tools actually generating something new? Or are they just spitting out formulas?

The inner workings of AI technology

It can be difficult to keep track of the ever-growing list of AI models available. To keep things simple, we’ll focus on ChatGPT and GPT-3.5.

First, it’s important to differentiate between AI models, chatbots, and specific applications. GPT-3.5 (Generative Pre-trained Transformer) is a large language model, also known as an autoregressive language model. ‘Autoregressive’ means the model predicts what comes next based on what came before, using probability to generate one element, or word, at a time.
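To make ‘autoregressive’ concrete, here is a deliberately simplified Python sketch. It is not how GPT-3.5 works internally – a real model uses a transformer network with billions of parameters and a far richer notion of context – but it shows the same one-word-at-a-time generation loop, where each new word is drawn from a probability distribution conditioned on the text so far.

```python
import random
from collections import defaultdict

# Toy corpus: in a real model this would be a vast swathe of internet text.
corpus = "the model predicts the next word and the next word follows the last word".split()

# Count which words follow which: a crude stand-in for learned probabilities.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start, length=8):
    """Generate text one word at a time, each word sampled from the
    distribution of words observed to follow the previous one."""
    words = [start]
    for _ in range(length):
        candidates = follows.get(words[-1])
        if not candidates:  # no known continuation, stop early
            break
        words.append(random.choice(candidates))
    return " ".join(words)

print(generate("the"))
```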

Built on the 175-billion-parameter GPT-3 architecture, GPT-3.5 is one of the most powerful AI models publicly available. ChatGPT is a chatbot – an artificially intelligent conversation simulator – that uses this deep-learning model to respond to requests and follow prompts.

How machine learning can transform business practices

In business, AI tools can collect and process massive amounts of data. They can then use this information for problem-solving, pattern recognition, recommendations, or predictions – at a far faster rate than any human could – to support data-driven, informed decisions. This is called decision intelligence.
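As an illustration of that data-in, prediction-out loop, here is a minimal sketch using scikit-learn. The scenario, feature names, and numbers are entirely hypothetical; the point is only to show how a model trained on past examples can score a new case and feed a recommendation.

```python
from sklearn.linear_model import LogisticRegression

# Hypothetical past customers: [monthly visits, support tickets, months as a customer]
X = [[30, 0, 24], [2, 5, 3], [45, 1, 36], [1, 7, 2], [20, 2, 12], [3, 6, 4]]
y = [0, 1, 0, 1, 0, 1]  # 1 = customer churned, 0 = customer stayed

# Learn the pattern linking past behaviour to churn.
model = LogisticRegression().fit(X, y)

# Score a new customer and turn the probability into a recommendation.
churn_risk = model.predict_proba([[4, 4, 5]])[0][1]
print(f"Estimated churn risk: {churn_risk:.0%}")
if churn_risk > 0.5:
    print("Recommendation: flag this account for proactive outreach")
```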

AI could improve healthcare and agriculture, as well as business and services. One practical example: AI could record every single interaction on a popular web page to catch user-experience glitches or issues.

Other use cases could involve personalizing and refining online customer service and chatbot applications, condensing long and complex texts, or improving search queries – perhaps even rivaling Google, which has responded to the competitive pressure by launching its own chatbot, Bard.

Read more: Decision intelligence: How AI is using big data to guide big business

ChatGPT’s competency has led some to believe that it might oust creatives from their professions. The existence of Jasper.ai, an AI content platform specifically designed for copywriting, seems to add weight to this argument.

The limits of AI solutions

Like all AI deep-learning models, these machines require data to learn and generate something new. One obvious area of scrutiny, therefore, is the nature of the data they are fed.

We have discussed in another post how AI models can reproduce, and therefore entrench, systemic and societal bias. We have seen this happen with earlier chatbots, such as Microsoft’s Tay and Meta’s BlenderBot 3, which fell victim to users purposefully teaching them sexist, racist, and antisemitic rhetoric and false information.

Read more: Online threats appear to be getting worse. But why?

In the case of OpenAI’s ChatGPT, the company has tried to mitigate such risks with a content moderation tool, one that’s designed to flag and block unsafe and illegal content.
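OpenAI exposes this kind of filtering as a standalone moderation endpoint. The sketch below shows roughly how a piece of text can be screened against it before reaching users; exact request and response fields may vary between API versions, so treat it as illustrative rather than definitive.

```python
import os
import requests

# Send a piece of text to OpenAI's moderation endpoint for screening.
response = requests.post(
    "https://api.openai.com/v1/moderations",
    headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
    json={"input": "Text to be checked before the chatbot responds"},
)
result = response.json()["results"][0]

print("Flagged:", result["flagged"])
# List any categories (hate, violence, self-harm, etc.) that triggered the flag.
for category, flagged in result["categories"].items():
    if flagged:
        print(" -", category)
```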

However, the paths to circumventing its filters are varied – and, perhaps more importantly, frequently successful. Users can, for example, easily trick the bot into explaining how to hotwire a car.

Another issue arises with the age of data. At present, most AI models are trained on a specific, finite data set, which means that new, real-time developments are not included. For example, ChatGPT’s current training data ends in 2021 – and incomplete data sets lead to incomplete and skewed results.

AI also cannot discern right from wrong. While it may be trained on a wide variety of data, it does not distinguish between accurate and inaccurate information. The risk here is that such a machine could very easily and very quickly spread misinformation that sounds completely plausible and convincing.

Ideally, AI output should be independently verified and cross-checked with reliable sources – but there is a chance those using it may not do so.

An artificial threat to real-world creativity?

In 2021, an AI model was trained to finish Beethoven’s last symphony, albeit with human expert support. More recently, the mainstream availability of ChatGPT has shown the world that it is capable of translating languages, explaining complex topics such as quantum mechanics, and even more creative endeavors such as writing poems and scripts, generating code, and more.

Unsurprisingly, questions abound about whether or not these AI models will ultimately subvert the creative realm.

Let’s consider an op-ed written by GPT-3 for the Guardian back in 2020. Upon reading, it is clear that AI can competently put together ideas and sentences that make sense when considered as single paragraphs.

However, the piece lacks a coherent narrative overall. The ideas are erratic and disjointed, and while they could have been written by a person, it isn’t good writing. If individual sentences sound familiar, it’s because they are – they are patterns that have been rehashed, reiterated, and repeated countless times across the internet.

Of course, this technology has evolved since 2020. More recently, a number of writers and journalists used ChatGPT to write convincing introductory paragraphs for their articles. It was also used to fool a recruitment team.

In one experiment, John Warner – writer, educator, and author on writing – used repeated prompts to guide ChatGPT toward a moderately good piece of creative writing. As with Beethoven’s symphony, it is through human guidance, tailored prompts, and emulating particular creative styles that the AI succeeds in sounding more human.

When it comes to formulaic texts, ChatGPT thrives. That is what it has been taught to do: to recognize patterns, word associations, and syntax structures. In that respect, AI can easily replicate an essay – a phenomenon that is making waves in academic circles.

This does pose questions about how we teach, what we expect from, and how we value creative writing and creativity.

The fact that ChatGPT can complete such tasks could be more of a condemnation of standardization than a risk to creativity. Warner points out that, more often than not, students (and perhaps, others working in writing professions) are “rewarded for… regurgitating existing information” in a system that “privilege[s] surface-level correctness” rather than “develop[ing] their writing and critical thinking skills” – and, perhaps, this is the issue.

The future of AI in the creative world

AI models like ChatGPT can, with expert human direction, produce acceptable – even quality – work. At present, they cannot convey meaning the way a human can, because they do not understand meaning; these are machines that understand symbols and patterns.

There is, of course, the opportunity to use such AI models as tools, or toys, to aid the teaching and endeavor of writing. As the integration of AI in general business and mainstream practices edges closer, this does raise challenges and concerns relating to copyright and plagiarism. This process is already occurring in the art world, where the widespread adoption of Lensa AI to generate images has triggered wider conversations about stricter copyright laws.

How will this pan out? It depends – at least in part – on how our societies evolve in response to this new technology. Progress cannot be undone, but creative and legal forces may come together to restrict the use of these tools. Some workers may end up being supplanted by AI models, of course. But in the end, ChatGPT is only truly a threat to creativity if the value of quantity supersedes that of quality and originality.
