I’ve just had a lightbulb moment, and it’s made me inappropriately excited. I think I’ve hit on a new way to describe the hottest topic of the season - Artificial Intelligence.
A lot of my life has been spent taking complex subjects and explaining them simply, telling an interesting story about them, and concentrating on the parts that particular audiences need to know, while dispensing with the unnecessary detail.
And AI is by far the main thing I’m asked to talk about on-stage these days.
The hype is huge (have you noticed?), and I don’t know about you, but my social feeds are full of top-ten lists of how my business can go Full Unicorn simply by using a few AI tools.
It’s all around us, it’s going to save the world, it’s going to destroy the world, it’s coming for our jobs, it’s going to be our best friend (delete as applicable).
The media is of course complicit in all this - any article about AI is usually accompanied by a picture of the Terminator, or that white humanoid robot reading a newspaper. Neither of which is helpful to anyone but the AI companies whose bank accounts rely on the world, and investors, believing AI is close to achieving sentience.
This is where I feel I can help. When I speak about AI, I make it my mission to point out that it’s not human. It’s nowhere near.
Yes, I get excited about the possibilities, but I also think it’s vital to help audiences understand the limitations of AI. What it can do, but also what it can’t. Where it should be used, and where it shouldn’t. Y’know, before you invest millions of pounds in something that turns out to not be the panacea you were led to believe.
And that’s where my new description of AI will hopefully make a few good points.
Spot The Fuggler
I still haven’t decided whether this new idea will replace my already proven, slightly bonkers illustration of AI that seems to go down well with audiences. That one’s called Spot the Fuggler.
An audience of cancer specialists at a medical conference was the most recent to play this “mini gameshow”, which always tends to get a good reception, partly because it’s the last thing you expect after a day of (undoubtedly fascinating, but nevertheless) one-way talks and slideshows, but also partly because it’s been called “the best illustration of machine learning I’ve ever seen”. No, not by my mum, by someone who knows an AI thing or two.
I really believe that having a smidgen of understanding of how AI actually works helps to demystify the whole subject.
In fact two days after I played Spot the Fuggler at a conference full of BBC Commissioners last year, I was asked to front the BBC’s Understand:Tech & AI podcast series, five episodes of which were a really satisfying deep dive into AI. So it certainly seems to do the job. I’ve linked above to Episode 6, which is as good a place to start as any. That kind of numbering worked for George Lucas, after all.
So at the time of writing, I don’t yet know if my new idea (which I will come onto - this isn’t an empty tease) should replace, or sit alongside, the Fugglers.
For me, my explanations, on-stage, on-screen, or on-mic, revolve around the deanthropomorphisation of AI (if it’s too long for a Scrabble board, it’s probably too long for general use, but hey, it felt good).
Because we now have AI that can chat to us and create beautiful images, and conversations and art are “human” things, it’s natural to assume that a Large Language Model’s “brain” might work in a similar way to ours. That it might be able to reason in a similar way, that it might even have a similar sense of values, the same kind of “common sense”.
This, I think, is dangerous.
AI is not human at all. Actually, it’s just maths.
Cat Stats
I was on an episode of BBC Newscast a while back, and the other guest quoted sci-fi writer Ted Chiang’s answer to the question “What is Artificial Intelligence?”: “A poor choice of words in 1954.” If we’d called it applied statistics, he continued, we’d all be able to understand what’s going on much better.
I’ll leave the details for another time, but the way machines learn to do these human-sounding tasks, like having a chat, creating a picture, or identifying faces in photos, is the same way they’ve learned to do many other complicated tasks. By swallowing absolutely tons of training data, and processing it at incredibly high speeds.
A child can learn to recognise a cat with relative ease. Show them a brown fluffy one and they will fairly quickly point at a black shorthair one and call it a cat too.
But to a computer, every cat looks different. In fact, the same cat from different angles looks different too. It has to be shown millions and millions of examples, before its neural net starts to make connections and find commonalities between all the images. There’s no human-like understanding going on inside the machine.
Instead, it’s just building up the statistics that will make it say “cat” instead of “not cat”.
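If you fancy seeing just how unmagical that is, here’s a toy sketch in Python. The “features” (ear pointiness, fluffiness) and the numbers are entirely made up for illustration - a real neural net learns millions of pixel patterns instead - but the principle is the same: average up the training examples, then say “cat” for anything that sits closer to the cat averages.

```python
# Toy "cat vs not-cat" scorer: no understanding, just statistics.
# Each training example is a pair of made-up measurements
# (ear_pointiness, fluffiness) - hypothetical features standing in
# for the millions of pixel patterns a real neural net would learn.
cats = [(0.9, 0.8), (0.8, 0.9), (0.95, 0.7)]
not_cats = [(0.2, 0.1), (0.3, 0.2), (0.1, 0.3)]

def centroid(examples):
    """Average of all the training examples - the 'statistics' it builds up."""
    n = len(examples)
    return tuple(sum(values) / n for values in zip(*examples))

def classify(sample):
    """Say 'cat' if the sample sits closer to the average cat than not."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    cat_c, other_c = centroid(cats), centroid(not_cats)
    return "cat" if dist(sample, cat_c) < dist(sample, other_c) else "not cat"

print(classify((0.85, 0.75)))  # close to the cat averages, so: "cat"
```

Six examples is obviously laughable - the whole point is that a real system needs millions before those averages become reliable.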
Every AI that can do anything has been taught by this brute-force method - ChatGPT has been fed billions of examples of good English sentences, Midjourney was given millions of images of cats (other animals are available), your phone’s personal assistant has been trained on millions of examples of spoken words.
And when an AI chatbot answers you, each word in the sentence is just statistically the most likely one to come next, based on everything that has come before. It’s predictive text raised to the next level.
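You can build a laughably small version of that predictive text yourself. This sketch just counts which word follows which in a few training sentences, then always picks the most frequent follower - a real LLM uses a neural network over billions of sentences, but “statistically the most likely word to come next” is genuinely the game being played:

```python
from collections import Counter, defaultdict

# Toy next-word predictor: count which word follows which in the
# training text, then always pick the most frequent follower.
training_text = (
    "the cat sat on the mat "
    "the cat chased the mouse "
    "the dog sat on the rug"
)

followers = defaultdict(Counter)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    followers[current][nxt] += 1

def predict_next(word):
    """Return the statistically most likely word to come next."""
    return followers[word].most_common(1)[0][0]

print(predict_next("the"))  # "the" is most often followed by "cat"
print(predict_next("sat"))  # "sat" is always followed by "on"
```

Feed it three sentences and it parrots them back; feed the same idea billions of sentences and enough processing power, and you get something that can hold a conversation.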
It seems human, but it’s mechanical, and it is only possible because those mechanisms can now store a huge amount of training data, and process it very quickly.
This is not a bad thing, and it actually means AI is starting to outperform us at tasks that require high speed and perfect memory.
Got three minutes? Here’s how AI can help detect cancer…
The bottom line is that in order to learn a particular task, an AI has to expend much more effort than a human.
And just above that bottom line is my warning that if you are considering using AI, you’ll need to make sure you have a lot a lot a lot of training data available. Because that’s what it needs.
Which brings me to my new idea. Which almost certainly won’t live up to the hype of all that preamble, but there you go.
The Heath Robinson Brain Machine
How best to illustrate the idea that, though the end result may look human on the outside, on the inside, there’s a whole different thing going on? One which actually takes a hell of a lot of work to do something that we naturally find simple?
At some point (can’t remember when or where, but it’s usually on the loo), Heath Robinson popped into my mind. His cartoons of over-complicated machinery doing simple tasks might not be a perfect metaphor, but I think it’s close. And it just so happens that my kids are huge fans of the modern Heath Robinson: Joseph’s Machines on YouTube.
The next time you’re tempted to think AI might have a human side, or even the next time you’re considering the amount of effort, and therefore energy, that might be needed to train an AI to do a new task, it might be worth remembering how hard it is to get a machine to serve you a slice of cake:
I’m still working on the exact on-stage delivery, but I think it has legs.
Or wheels and cogs.
But not a brain.