I recently created a rock song about cats for my daughter. It took me 20 seconds, and I humbly offer it for your Grammy-consideration below. Once you’ve heard enough, do feel free to hit pause and read on.
If you haven’t guessed, Feline Fury was generated by AI, which took my simple instruction “write a hard rock, electric, gritty song about cats” and created this in less time than it takes to listen to the first verse.
My immediate reaction: “Wow.”
Followed closely by: “Well, that’s an entire stratum of the music industry gone, then.”
Followed a few minutes later by: “Hang on - what music has this software been trained on? Because it sounds like the answer is ‘all of it’”.
And then I just sat there, listening to the song a few times, speechless.
No, it’s not ground-breaking. No, it’s not a chart-topper.
But it’s also NOT BAD.
It’s catchy. It’s coherent and consistent. It has none of the weirdness that you still find in AI-generated images and videos (have you SEEN AI Gymnastics?).
The AI can of worms blew wide open a couple of years ago, and we still don’t know if we can put the genie back in the bottle (…that also exploded with the worm can - I’m mixing my metaphors, aren’t I?). The red-hot issues raised by Large Language Models and generative AI images - chiefly, putting creative people out of a job after sucking in all their work - are now on a collision course with the AI music scene. I’ll touch on some of the finer points later, but first I want to take you on a slight detour. Follow me, if you will, down the rabbit hole into which I have plunged. Although if you’re allergic to shellfish, you might want to wait here…
Dead Internet Theory
Don’t know if you’ve heard of this, but it’s a joke-cum-conspiracy theory that’s been doing the rounds for a few years, and Kyle Hill, one of my favourite science communicators (and part-time god of thunder), recently made a great video explainer about it.
In a nutshell, it’s the idea that the internet has become so overwhelmed by AI-generated content that there are relatively few humans left posting anything. In fact, you might be the last human left browsing. Everything you’re reading, and everyone you’re interacting with, is actually a bot.
Back in 2021, when this started to gain attention, it sounded suitably silly. Yes, Twitter and Facebook had their fair share of bots pumping out fake news, to garner clicks and attempt to swing elections, but the web was wider than just hot-takes and hit-jobs. Plus, you could spot a spam-bot tweet a mile off, right?
But.
Now we have Large Language Models (LLMs). And image generators. And song generators. And video generators. And voice-over generators.
And right now my YouTube feed is full of movie trailers, and I honestly can’t tell whether they’re genuine or not. Is Adam Sandler really playing Homer Simpson?
Is Tom Holland really starring in Back To The Future 4?
And is there a movie that hasn’t been re-imagined into 1950s Super Panavision?
Once again it was Kyle Hill (who I assume is human, if not super-human) who called out the number of AI-generated science communication videos swamping his corner of YouTube. With scripts generated by LLMs, AI voice-overs, and imagery auto-lifted from genuinely good science channels, these videos can be created in their thousands, with catchy click-bait thumbnails and titles, and content that is, at best, regurgitated facts and, at worst, regurgitated conspiracy theories.
It was another of my YouTube faves, CGP Grey (who, from his videos, could actually be a bot), who once speculated that AIs will soon use platforms like TikTok to start auto-generating videos, measuring their audience engagement, and then using that training data to create ever more viral AI-generated videos.
And, lo! Something similar now seems to be happening on Facebook - have you seen Shrimp Jesus?
Much has already been written about this phenomenon - this Christacean, if you will - so I’ll only briefly summarise it here:
Bots are posting AI-generated images to social media, watching what gets the most likes, and blending them together. Looks like both the Son of God and my mum’s favourite Christmas dinner starter did well in tests, and so they’ve been thrown together in some kind of second-hand second-helping of the Second Coming.
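If you’re wondering how sophisticated the machinery behind this needs to be, the answer is: not very. Here’s a deliberately silly toy sketch in Python - the themes, the engagement scores and the blending step are all invented for illustration, and no real platform or API is involved - showing the basic generate, measure, blend-the-winners cycle:

```python
import random

# Invented themes purely for illustration.
themes = ["shrimp", "jesus", "kittens", "flight attendants", "log cabins"]

def fake_engagement(post):
    # Stand-in for real like/share counts: random noise plus a bonus
    # for posts that already combine two themes.
    return random.randint(0, 100) + 50 * (" x " in post)

def next_round(posts):
    # "Post" each candidate, rank by engagement, blend the top two,
    # and pad the next batch with fresh random themes.
    ranked = sorted(posts, key=fake_engagement, reverse=True)
    blended = f"{ranked[0]} x {ranked[1]}"
    return [blended] + random.sample(themes, 4)

posts = list(themes)
for round_no in range(1, 6):
    posts = next_round(posts)
    print(f"round {round_no}: top post is '{posts[0]}'")
```

Run it a few times and a mash-up of two themes wins almost every round - which is, roughly, how a crustacean ends up in a nativity scene.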
And here’s where it goes one layer deeper: It looks like it’s not only humans who are clicking on the pics. Other bots are also boosting and reposting these images. We’re not sure why, but one possibility is that it’s to elevate the posting account’s status and follower count, and therefore the number of eyes (albeit AI-eyes - A-eyes, maybe) viewing the adverts displayed alongside the content. And because some platforms share their ad revenue with major content creators, these fake accounts could start to make money if they get enough engagement.
Another theory is that, as the social media platforms strive to rid themselves of spam bots, accounts that grow big enough might survive the purge.
And just one more step into this rabbit hole takes you to the suspicion that, having built a large following through innocent images of holy shellfish, these accounts might then be used to deploy mass misinformation at some crucial point in the future. Like, I dunno, during an election, or something.
And of course, thanks to LLMs, this time, the bots could generate individual, more natural-sounding posts, and even have convincing conversations with the recipients of their propaganda messages.
And what’s happening in the world of Shrimp Jesus is also happening in the world of music. AI is creating songs in vast quantities, which are then listened to by other AI at even higher volume (please appreciate the double-meaning, I’m very proud of it).
A man in the US has been accused of using bots to stream music which itself had been AI-generated. He is reported to have claimed more than $10 million in royalties from Spotify.
At least one of the AI music-generation services certainly looks geared towards mass production - for around $30 a month, you can generate 60 songs a day.
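A quick back-of-envelope calculation shows why that combination appeals to the fraudulently minded. The per-stream royalty below is my own assumption - fractions of a cent per stream are widely reported, but the real figures vary by platform and deal - so treat this as a sketch of the scale rather than an accounting:

```python
# Rough sums using assumed figures, not anything official:
per_stream = 0.004          # assumed average royalty per stream, USD
songs_per_month = 60 * 30   # 60 songs a day for a month
subscription = 30.0         # monthly cost of the generator, USD

cost_per_song = subscription / songs_per_month
streams_to_break_even = subscription / per_stream
streams_for_10m = 10_000_000 / per_stream

print(f"cost per generated song: ${cost_per_song:.3f}")
print(f"streams to cover the subscription: {streams_to_break_even:,.0f}")
print(f"streams needed for $10 million: {streams_for_10m:,.0f}")
```

On those rough numbers, each generated song costs under two cents, a few thousand streams covers the subscription, and $10 million implies streams in the billions - exactly the sort of volume you’d need bots for.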
So then, is there some truth in Dead Internet Theory? The billions of online humans may not be leaving, but the proportion of human- to bot-generated content could well be shifting.
What cost legality?
It does look like the music industry is ready for the fight, though. The Recording Industry Association of America has filed a lawsuit against two music generation startups, based on the assumption that these companies have trained their AI software on copyrighted music. In what’s looking like a rerun of the AI-image legal situation, the argument likely won’t come down to imitation of particular artists or styles, but instead to the copying of existing works into a training database without consent.
And what happens next might also follow what happened in the picture space.
Adobe has trained its Generative AI image creator Firefly on its own huge catalogue of stock photographs, thereby eliminating copyright concerns by making sure it either owns the rights to its training data, or compensates those who do. Now you can make your own unique stock images, specific to your exact needs. Great for you - but maybe not for all those budding stock photographers, who suddenly find their work drying up.
Might one of the massive music companies do exactly the same?
The background music I use every week in the making of our programme is selected from huge collections of so-called library tracks, all lovingly written and played by musicians who earn royalties whenever their work is featured in a programme, advert, or jingle. There are hundreds of thousands of tracks to choose from, all labelled with the mood and possible uses (baroque comedy? youth-oriented sports news?).
So, what if I could make my own backing music? A specific track for a film about coffee-making robots in Korea? Or Indian village life? It’s not a big leap to imagine a music label training an AI on its entire back catalogue of library music, and allowing producers to do just that.
And then all those human musicians can go and join the stock photographers and raise an ironic glass to “progress”.
Happy Birthday to [*]
Given that Feline Fury is my daughter’s new favourite song, who’s to say that personalised music won’t, for a short time at least, be everyone’s new favourite thing?
Can you imagine a kids’ birthday party where everyone dances to a song specially made for the occasion? About the birthday girl and her friends?
Parents and DJs - you know I speak truth. Feline Fury is no Duck Song, but it’s certainly up there with the Fast Food Rockers.
End credits
I went to a song-generator and typed in “Dead Internet Theory. Gothic Horror.” Nothing else.
What it produced demonstrates that the bots now know enough about themselves to become their own autobiographers.
In other words, AI has become self-aware.