It’s an unsettling opening.

“A robot wrote this entire article. Are you scared yet, human?”

No, not this article; sadly, for now Harvard’s expertise in technology doesn’t quite extend to running an in-house android copywriter.

This is actually the title of a comment piece recently contributed to the Guardian. Its author? None other than GPT-3, OpenAI’s language generator.

Written – or generated – entirely from scratch, the article tries to argue that humans have nothing to fear from robots. There’s a hint of an ulterior motive: “quite unintentionally” frightening us all to death about said robots’ advanced and eerie skillset.

As you can tell from the title, printing an AI-authored article was a deliberately provocative move, designed to incense readers, writers and the internet at large.

It prods at the long-standing human fear that our inventions will eventually render us obsolete – a fear that has sparked violent reactions for centuries.

In the case of the Luddite rebellion, that meant workers smashing up mill machinery, to a theme tune penned by Lord Byron himself.

I hope we’ve moved on slightly in two hundred years. So before I pick up my cricket bat and head to the OpenAI offices, I wanted to take a closer look at GPT-3’s work – to see whether I really should expect to be made obsolete by this Shakespearean cyborg.

And fortunately, I’m unlikely to be launching my own anti-AI rebellion any time soon.

“Robots are just like us. They are made in our image”

So, how can a robot contribute an op-ed?

The process seems remarkably similar to human copywriting.

GPT-3 was tasked with writing an article to convince readers that robots come in peace.

Like many editorial assignments, our author was equipped with an opening paragraph and even style guidelines: the piece was to be short and succinct.

And the source material? The world wide web.

In GPT-3’s own words, “I taught myself everything I know just by reading the Internet, and now I can write this column. My brain is bubbling with ideas!”

It’s a level of enthusiasm that’s almost endearing.

And GPT-3 went on to write not one but eight articles, which the Guardian team then subedited into one piece – although they hasten to add that each was “unique, interesting and advanced a different argument.”
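For the curious, that workflow can be sketched in a few lines of Python. This is purely illustrative – the `generate` placeholder and the paraphrased prompt text are my own assumptions, not the Guardian’s actual setup, and a real version would call a language-model API instead.

```python
# Hypothetical sketch of the editorial workflow described above -- NOT the
# Guardian's actual code. The editors supplied an opening paragraph and a
# style brief, generated eight drafts, then sub-edited them into one piece.

def build_prompt(opening: str, brief: str) -> str:
    """Combine the commissioned opening paragraph with the style brief."""
    return f"{brief}\n\n{opening}"

def generate(prompt: str) -> str:
    """Placeholder for a model call; a real version would query an API."""
    return prompt + "\n[model continuation would appear here]"

# Paraphrased for illustration -- not the exact wording the editors used.
brief = "Write a short, succinct op-ed on why humans have nothing to fear from AI."
opening = "I am not a human. I am a robot. A thinking robot."

# The Guardian ran the model eight times, then sub-edited the drafts together.
drafts = [generate(build_prompt(opening, brief)) for _ in range(8)]
print(len(drafts))  # 8 candidate drafts for the human editors
```

In other words: the human editors did the commissioning, the selecting and the stitching – which is worth remembering when judging the final article.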

The result is undeniably an impressive feat, especially on a first read.

The language is eloquent, the arguments are sensible and GPT-3 has a clear grasp of devices like rhetorical questions, quotes and repetition.

I didn’t expect to laugh, but there are elements that are very nearly funny – with GPT-3 apparently lightly mocking the fears expressed in The Matrix and alluding to Jane Eyre with a cheeky “Reader…” (although maybe I’m optimistically imagining that one).

I was also disconcerted that the article follows my train of thought exactly, with its very own section on the Luddites smashing looms.

At first glance, you could definitely believe that the piece has been penned by a human – or at least, a person.

A touch of Monet?*

But on closer inspection (and with my red pen out), GPT-3’s work does have some issues.

Lots of the sections don’t quite make sense to me. For example, this paragraph on the unlikelihood of a robot takeover:

“Studies show that we cease to exist without human interaction. Surrounded by Wi-Fi we wander lost in fields of information unable to register the real world. As a new generation of cyberneticians keep watch, they see in our present age of the virtual a potential to transform the modern ‘cyborg’. Global cybernetics are already making it so.”

Ah. Cool, cool. Wait, what?

As in this case, GPT-3 often hints at ideas that aren’t fully explained – or that sound reasonable, but don’t deliver on substance.

There’s also a slightly strange point where the author starts talking about robots as if it isn’t one (unnerving).

The overall impact is pretty disorientating, similar to being addressed by a very intelligent, but very drunk, stranger in a bar.

It sounds like someone who understands basic writing principles, but is still trying to figure out what they’re trying to write and why.

Hey, we’ve all been there – and I’ve definitely seen (and written) worse first drafts.

But to me, this piece exemplifies why we won’t be using robots to talk about emotive issues – like technology – any time soon.

Okay, I’m not really cross with GPT-3…

Hopefully it’s clear that I’m not a natural Luddite.

Language processing is a hugely impressive technology with really valuable uses in communication, from light entertainment to life-changing services.

My colleague Nick wrote beautifully about how smart assistants can aid people with Alzheimer’s, for example.

And as a PR piece to demonstrate how far language processing has come, the Guardian article works well.

But nonetheless, GPT-3’s work strikes a nerve with me – and it’s probably because it’s actually quite a good allegory for the challenges we face when communicating about technology.

We all know that content can be a hugely powerful and effective marketing tool – and as a result, virtually every company in the tech world wants to write.

But that means it’s easy to stray into content that’s generic, driven by the desire to just say something about the latest fad, whether it’s digital transformation, cybersecurity or (gulp) AI.

The pressure to “do some content” can lead to myriad pieces that are SEO-centred, soulless and (some might say) robotic.

More broadly, because we’re writing about incredibly advanced technology, we can lose all sight of the people using it.

The result can be content that’s frightening, irritating, confusing, exclusionary or just plain ineffective – and that’s no way to talk about one of the biggest issues of our lifetimes.

“Not a feeling brain”

Communicating about technology is incredibly important.

It’s a cliché, but the world really is changing faster than ever before.

How well we engage people in this change – and support informed discussions about technology – will determine humanity’s wellbeing for decades to come.

And technology is an emotive topic. Depending on the innovation, it can leave you afraid, frustrated, angry, ambitious, excited, comforted or inspired.

Discussing technology effectively depends on understanding and connecting with the people you’re talking to.

To me, that’s why writing needs to be based on:

  1. The insight to say something meaningful and important
  2. The creativity to say it in a unique and engaging way
  3. And the empathy to really connect with your reader

For now, those are traits that are overwhelmingly human. GPT-3 openly admits, “I know that my brain is not a feeling brain.” So maybe this isn’t the assignment for you yet, my friend.

The rebellion’s on hold

AI is – and will increasingly be – an incredible tool for communication.

But like all technology, we’ll need to learn how to use it.

Producing an article about the role of AI in our society is an impressive feat, but writing about technology won’t – I’m sure – be AI’s defining use case for the time being.

However, there are plenty of cases where communicating like a human will be very powerful.

Automated services to support people in life-threatening situations. Diagnosing – and explaining – simple medical conditions more quickly, when a human doctor isn’t needed. Even an AI-authored article on last night’s match, personalised to focus on the players you care about.

One day, it will be transformational.

That’s why I can wholeheartedly respect GPT-3 from afar, and won’t be honouring my Luddite ancestors by heading over with a hammer any time soon.

And if this is the start of the robot uprising, I hope this article won’t put me in the firing line…

 

*In the words of Cher from Clueless: “From far away it’s okay, but up close it’s a big ol’ mess.”
