#32 – On Stochastic Parrots, Anthropomorphising AI, David Attenborough, our lack of imagination, and fixing briefs
It’s a sunny Friday in London.
I don’t know about you, but this week my feeds were full of people talking about AI (again) and building their own GPTs (new), fine-tuning them into the imperfect perfect “copilots” they can be, for creatives, strategists, etc. So, unsurprisingly, this issue has a bit of an AI lens. It explores the differences between toddlers (“small language models”) and GPTs (“large language models”), why humans talk to animals (and AIs), how AI helps you to get David Attenborough narrating your life, and why our lack of imagination currently keeps AI (and deep fakes) so harmless. To finish up, we’re looking at some trend lines for 2024, and a handy resource for everyone who is responsible for making the output of marketing better.
Enjoy the read.
Here are your six links of inspiration:
Is my toddler a stochastic parrot? Not a week goes by without some sort of (minor) quantum leap in AI. This article juxtaposes those leaps with the bumbling advances of natural intelligence: small language models, the size of a toddler.
Why Do Humans Talk to Animals If They Can’t Understand? We have a dog. She’s a small rescue dog, called Minnow, and some of you have met her. Like a lot of pet owners, we talk to her. From the common commands (“Stop pulling!”, “Come here!”, “Down!”, etc.) to less obvious utterances (“What have you been up to?”, “What are you doing up here?”, etc.) The latter don’t make any rational sense considering we all know full well that animals don’t understand us. We do this, though, because “humans are natural anthropomorphizers, meaning we naturally tend to [ascribe] all kinds of thoughts and meanings to other things in our lives. Humans can do this with just about anything […]. But that impulse is especially strong for things that are or seem animate, like animals and AI—and when it comes to pets, people often think of them as little members of the family.”
The article is from 2017. Back then most of us had no idea how prevalent AIs and GPTs would be in our lives by now. And here we are, anthropomorphising machines in the way that comes so naturally to us. Even the article above mentions that tendency – and how it might actually get in the way of us seeing the difference between artificial intelligence and our intelligence. Research suggests there are two main reasons to anthropomorphise: “first, that someone lacking social interaction needs to “create” a human to hang out with; second, that someone lacking control wants to feel more secure in uncertain circumstances, and anthropomorphizing allows him to predict an animal’s action based on interpersonal experience.” This makes intuitive sense when you think about interacting with animals. But it even seems to hold up when you think about our interactions with AI these days. There seems to be ample proof that we’re collectively lacking “real” social interaction (thanks to our smartphones and social media), and we live in a time of heightened uncertainty – certainly when it comes to understanding how AI actually works. It seems only natural, then, that we’re treating AI as a human-like entity – potentially giving us a false sense of control and connection.

Your Life Narrated by David Attenborough. Talking of AI, here’s something a little more lighthearted (even though you can see how something like this could quickly get scary…) Charlie Holtz has created a demo that uses GPT-4 vision and ElevenLabs to create a narrator that retells your life (or what it sees of it) in the tone and style of David Attenborough. It’s funny, it’s weird, it’s a sign of things to come. It may also be an exception to what the next article laments.
What the Doomsayers Get Wrong About Deepfakes. As deepfake-video technology has improved to the point where it rivals reality, everyone, including academic researchers and politicians, has warned about an immediate future in which the public will be unable to discern fact from fiction. Daniel Immerwahr, however, argues in this essay that “it’s easy to find videos that demonstrate the terrifying possibilities of A.I. It’s just hard to point to a convincing deepfake that has misled people in any consequential way.” That moment may still come, but for now the technology has mostly demonstrated the limits of the human imagination. “Able to depict anything imaginable, people just want to see famous women having sex,” he writes. “A review of nearly fifteen thousand deepfake videos online revealed that ninety-six per cent were pornographic.”
Metrics to keep an eye on in 2024, from solar cells to superhero movies. The Economist has published the latest issue of “The World Ahead” and it’s packed with dense reading about a challenging year. This article is a bit lighter, and shows all the trend lines that are worth keeping an eye on. Some questions this piece raises: when will China take the lead in car exports? Will robotaxis turn the corner? Will your cup of coffee get more expensive? (The answer might be: yes.) Is enthusiasm for AI chatbots in decline? (The answer might be: maybe.) And, most importantly: will superhero films make a comeback?
The issues with briefs. I’ve written about briefs before (creative briefs, that is) and how hard they are to get right. If you agree (and lots of people seem to agree), this is a resource for you: the IPA, Tom Fishburne, and BetterBriefs have published this handy booklet on making briefs better. Please go and read it, your clients and/or your organisation will thank you for it. And the cartoons alone are worth it.
I’ll leave you with some of my favourite quotes from the PDF:

“Many hands make impeccably mediocre work” (Daryl Fielding)
“Briefs end up as a portmanteau of convoluted ‘buck shot’ requirements which at best obscure the job to be done and often miss it entirely” (Wiemer Snijders)
“For all our talk and posturing, two thirds of brands cannot even communicate who they are for (and who they are not for) in their strategic planning” (Mark Ritson)
“Endless rounds of exhausting revisions are a symptom of not making decisions upstream.” (Rosie and Faris Yakob)
This is it for this week. If you enjoyed the read, please consider subscribing and sharing this issue.
See you all next week,
Maximilian