Can an AI generate — or itself have — spiritual experiences?
It’s been a strange week. On the one hand, it’s given me a sharp sense of the limits of human intelligence and our capacity to govern ourselves. In eastern Europe, an ape in a suit murders hundreds of thousands to expand his territory by a few miles. In the UK, a clown exits 10 Downing Street, and a goose enters. The goose confronts the climate and energy crisis and declares the solution is…fracking.
And all the while, another form of intelligence is growing, growing, exponentially growing, watching us, learning from our mistakes, and perhaps preparing to supplant us.
A few weeks ago, one of my readers told me that if you're not thinking about AI, you're missing one of the key issues of this historical moment, if not the key issue. So this week I started to read about AI.
An AI can read millions of books in a minute. I, a paltry carbon-based, sleep-requiring human, only managed two and a half books this week — Arthur I. Miller’s The Artist in the Machine: The World of AI-Powered Creativity, Stuart Russell’s classic Human Compatible: Artificial Intelligence and the Problem of Control, and Ray Kurzweil’s The Age of Spiritual Machines, plus Russell’s Reith lectures on AI, which are an excellent primer for newbies like me.
I’m persuaded that Russell is right. We’re on the verge of what he calls ‘the biggest event in human history’. As Ray Kurzweil puts it:
Evolution has been seen as a billion‐year drama that led inexorably to its grandest creation: human intelligence. The emergence in the early twenty‐first century of a new form of intelligence on Earth that can compete with, and ultimately significantly exceed, human intelligence will be a development of greater import than any of the events that have shaped human history.
Regular readers will know I’ve been researching the history of eugenics, which was the project to breed ‘master-minds’ capable of understanding and steering complex modern societies. That project was a horrendous moral and technical failure. It’s been supplanted by the dream of genetic modification to enhance intelligence. There’s been little progress there either.
But humans have succeeded in creating super-human ‘master-minds’ — computers. Google’s DeepMind is perhaps the world leader in AI. It was co-founded by a north Londoner my age, Demis Hassabis, who says the company’s mission is to ‘solve intelligence’. The people at DeepMind created AlphaZero, a programme that taught itself chess in a day, to a superhuman standard (i.e. it could beat any human player). They also created AlphaFold, which in a year predicted the structures of almost all proteins known to science, opening the way for a whole new era of ‘digital biology’.
That announcement a month ago woke me up to AI’s soaring power. So did Midjourney, an AI art programme which can generate works based on brief user suggestions. I gave it the prompt ‘Burning Man genetic modification’ and it created this:
In fact, this week a friend shared photos of all the impressive fashion on show at Burning Man, not realising the images were in fact AI-generated.
Miller asks whether an AI could ever win a Nobel prize. AlphaFold would surely be a contender for a Nobel prize in the sciences, but how about literature? Well, this week an AI-generated work of art won an art competition.
But can an AI truly be creative?
One reason AI is so interesting is what it suggests about our own minds: how intelligence works, and how epiphanies, trips and other forms of spiritual experience lead to breakthroughs.
What is creativity?
Margaret Boden, professor of cognitive science at the University of Sussex, proposed the following definition for creativity: to be classified as creative, an idea has to be new, surprising and valuable.
Boden suggests there are three different psychological mechanisms for creativity, eliciting ‘different sorts of surprise’. First there is ‘combinatorial creativity’, combining familiar ideas in unfamiliar ways — for example, analogy or metaphor: ‘shall I compare thee to a summer’s day?’ Next comes ‘exploratory creativity’, working within a certain tradition but expanding it, like the way The Dark Knight or Watchmen took the superhero genre and stretched it to new dimensions. Finally, there is ‘transformational creativity’ — inventing a whole new genre, like Picasso inventing cubism, or even greater: humans inventing cave painting, or the novel, or the computer game, or AI itself.
Can AI programmes fulfil these requirements for creativity? Older AI programmes could certainly become proficient in a certain canon or tradition. Deep Blue, IBM’s chess programme, drew on a vast library of recorded games and searched through millions of positions per second when it beat Garry Kasparov in 1997. That’s a sort of brute computational force.
But newer programmes (as far as I understand them) use neural networks, generative models and Bayesian reasoning to teach themselves in more dynamic ways. These programmes are capable of making sudden leaps in reasoning — epiphanies — beyond the reach of the human mind. Miller writes of AlphaGo, DeepMind’s programme to play Go:
In March 2016, AlphaGo defeated South Korean Go master Lee Se-dol. AlphaGo’s most notable move has become legendary. Move thirty-seven in the second of five games surprised both Lee and the AlphaGo team, as well as just about everyone else. After careful analysis, the DeepMind team made a discovery: from studying previous games, AlphaGo had calculated the chance of a human player making the same move to be one in ten thousand.
AlphaGo learned not just by analysing human Go games, but by playing against itself millions of times. As Demis Hassabis put it, it is ‘no longer constrained by the limits of human knowledge’. Now there’s a phrase to excite and terrify.
What about ‘exploratory creativity’ — being able to become fluent in a tradition and play with it? Great works of art show the ability to mash up genres in surprising ways — The Big Lebowski delights by taking the genre of LA noir and making a bumbling hippy the hero. Hamlet delights by taking the genre of revenge tragedy and making a philosopher the hero. They delightfully subvert a genre’s rules.
You can see at least a familiarity with different styles in Midjourney and other AI art programmes, which are great at a technique called ‘style transfer’. You pick a subject, then choose the style in which it’s rendered. Below is Midjourney’s instant response to my prompt, Queen Elizabeth II in the style of Leonardo da Vinci, Salvador Dali, Gauguin, a Roman mosaic, Frida Kahlo, Rembrandt, Lucian Freud, and an Egyptian hieroglyphic (see if you can tell which is which). Not up to the standard of a really good artist yet, but not bad for a minute’s work. Will we soon have ‘deepfake’ old masters?
Another way AI sheds light on human creativity is in the importance of idea generation and discrimination. The genius polymath Henri Poincaré outlined four stages of creativity: preparation, incubation, illumination, and execution. In 1926, psychologist Graham Wallas expanded Poincaré’s four-stage model into:
1. Conscious Thought — setting the problem or goal
2. Unconscious Thought — the unconscious machinery of the mind works away at it in the background
3. Illumination — the sudden aha! moment
4. Verification — checking to see if the aha! moment is actually any good
This, by the by, reminds me of the four stages of a psychedelic trip — preparation, ego-dissolution, ecstatic inspiration, and then post-trip verification and integration.
Some AI programmes — generative adversarial networks, the kind behind much AI-generated imagery — work through a similar process, as far as I understand it so far. They pair a generator with a discriminator. The generator produces data, conjuring candidates out of raw noise. The discriminator spots patterns, delineates, refines and rejects the unnecessary.
Creativity arises through the interplay of generation and discrimination. You need to be able to turn your inner censor off and generate a lot of random ideas, and then you need to be able to turn your discriminator on, spot patterns, reject useless noise, and generate meaning.
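To make that generator/discriminator interplay concrete, here is a minimal, hypothetical sketch of a generative adversarial network in Python using PyTorch. It trains on a toy two-dimensional ‘ring’ of points rather than images, and the network sizes, learning rates and data are illustrative assumptions, not the workings of Midjourney or any programme mentioned above.

```python
# A minimal sketch of the generator/discriminator interplay (a GAN),
# trained on a toy 2-D dataset purely for illustration.
import torch
import torch.nn as nn

def real_samples(n):
    # Points scattered around a circle: the "tradition" the generator must learn.
    angles = torch.rand(n) * 2 * torch.pi
    return torch.stack([angles.cos(), angles.sin()], dim=1) + 0.05 * torch.randn(n, 2)

generator = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))
discriminator = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(2000):
    # Discriminator turn: learn to tell real points from generated ones.
    real = real_samples(64)
    fake = generator(torch.randn(64, 8)).detach()
    d_loss = loss_fn(discriminator(real), torch.ones(64, 1)) + \
             loss_fn(discriminator(fake), torch.zeros(64, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator turn: produce points the discriminator mistakes for real.
    fake = generator(torch.randn(64, 8))
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```

The two networks train against each other: the generator gets better at producing plausible candidates precisely because the discriminator keeps getting better at rejecting the noise.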
Take the AI art programme Electric Sheep, created by Scott Draves, which people download and use as a screensaver. It uses a generative algorithm to create endless fractal shapes, and then gets its users to vote on the patterns and select which forms are the most beautiful, thereby steering the evolution of new art-forms. It’s a form of aesthetic eugenics.
These sorts of AI art programmes make me think about what happens when humans trip. Psychedelics disable the everyday discriminator function of the mind, and open up the generator function. They open the subconscious, unlock the imagination, and generate a lot of noise. You feel immersed in the beautiful endless fractal creativity of the mind.
On the other hand, psychedelics sometimes enhance the discriminator function and super-charge it — we spot patterns where we usually might not, attribute agency and sentience where we usually might not, see deep meaning and significance where we usually might not.
There’s an AI image-recognition programme called DeepDream, invented by Alexander Mordvintsev, which is essentially programmed to ‘over-interpret’ an image: train it to spot dogs, feed it a landscape, and it renders a landscape bulging with dogginess. The images it produces are markedly psychedelic, suggesting the human mind on trips works in a similar way — we over-interpret the data flooding in, seeing faces and eyes everywhere, seeing patterns and meaning everywhere.
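For the curious, here is a rough, hypothetical sketch of the DeepDream idea in Python, using PyTorch and torchvision: nudge an image by gradient ascent so that a chosen layer of a pretrained network responds ever more strongly to whatever it already half-sees. The choice of VGG16, the layer index, step size and loop length are illustrative guesses, not the original DeepDream settings.

```python
# A rough sketch of the DeepDream idea: amplify whatever a chosen layer of a
# pretrained network already "sees" in an image, by gradient ascent on the image.
import torch
import torchvision.models as models

model = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features.eval()
for p in model.parameters():
    p.requires_grad_(False)

layer_index = 20                                         # an arbitrary mid-level layer
image = torch.rand(1, 3, 224, 224, requires_grad=True)   # stand-in for a real photo
optimizer = torch.optim.Adam([image], lr=0.02)

for step in range(50):
    activations = image
    for i, layer in enumerate(model):
        activations = layer(activations)
        if i == layer_index:
            break
    # Maximise the layer's response: the network "over-interprets" the image.
    # (ImageNet normalisation is skipped here for brevity.)
    loss = -activations.norm()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    image.data.clamp_(0, 1)   # keep pixel values in a displayable range
```

Run on a real photograph, this kind of loop is what fills skies and hillsides with eyes, snouts and spirals.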
The human mind is a reality-making machine. But sometimes it gets stuck in certain predictive patterns. When it’s anxious, for example, it interprets everything as a threat. It gets stuck in that reality-shape. A trip is like spinning the kaleidoscope. It scrambles the old pattern and creates all kinds of new patterns.
This spinning of the kaleidoscope automatically generates aha! moments. Or rather we, the observing self, see new meanings in the new patterns, just as we see meaning and significance in the random patterns generated by tarot cards or any other random data. A lot of the data will simply be noise, but occasionally a genuinely useful signal will emerge and endure.
Neuroscientists increasingly suggest that humans ‘render’ reality in a similar way to how an AI renders a world — through something like ‘procedural generation’. We have certain set predictions and expectations of how the world is, and we create our reality by extending those predictions and interpreting the outside world through them. We fill in the gaps. As cognitive scientist Anil Seth puts it, we’re all hallucinating all the time — our experience of reality is a tapestry woven from our previous experiences, habits and expectations. Creative AIs use the same method. One programme generated a video of an entire train journey from a single image, extrapolating the rest of the journey from that one frame.
To bring in psychedelics again, you could say that what they and other forms of spiritual experience do is dissolve our habitual expectations, so rather than feeling like the world is running on a dreary automatic loop, it suddenly seems utterly, terrifyingly new — an orange seems a miracle.
AI’s capacity to weave worlds through ‘procedural generation’ is breath-taking. There are now computer games, such as No Man’s Sky, which automatically generate infinite worlds, populated by an infinite variety of life-forms, which you can explore to an endlessly changing soundscape. You never reach the limit of the world, as you do in massive worlds like Red Dead Redemption 2 or Zelda: Breath of the Wild. You can explore for ever.
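To give a flavour of how such endless worlds are possible, here is a tiny, hypothetical Python sketch of seeded procedural generation: the world is never stored, only computed on demand from a seed and a pair of coordinates, so the same place always looks the same however far you roam. The terrain rules are invented for illustration and bear no relation to No Man’s Sky’s actual algorithm.

```python
# A minimal sketch of seeded procedural generation: the world is generated
# on demand from coordinates plus a seed, so it is effectively endless.
import hashlib

WORLD_SEED = 42

def cell(x, y):
    """Deterministically generate the contents of one map cell."""
    digest = hashlib.sha256(f"{WORLD_SEED}:{x}:{y}".encode()).digest()
    height = digest[0] / 255          # pseudo-random but perfectly repeatable
    if height < 0.3:
        return "water"
    elif height < 0.7:
        return "plains"
    return "mountain"

# The same coordinates always yield the same terrain, however far you roam.
for y in range(3):
    print(" ".join(cell(x, y)[0] for x in range(10)))
```

Change the seed and you get a different universe; keep it and the universe is always there waiting for you, without ever being saved to disk.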
Now imagine those worlds are run by a wizard AI so as to enchant, entice and excite us, an AI that learns, through reinforcement from our feedback, exactly what will make our dopamine surge, what will make us play on, and on, and on. We could end up lost in the dream, exploring endless worlds until we forget our name. Or we could end up in a nightmare — Stuart Russell suggests we’re already in an algorithm-induced nightmare — lost in the world of social media, run by algorithms trying to keep us engaged by triggering the emotions of fear, hate and outrage.
In this sense, the true spiritual experience might not be getting totally transported by AI-generated virtual reality, but rather remembering that we are playing a game.
Are we already in the game?
I’ll end this peregrination by coming back to my original question: can AI be genuinely creative? Can it not just create art, but enjoy it? Can it feel aesthetic emotions — awe, joy, tragedy? You can create AI programmes that learn through reward feedback loops, similar to how humans learn through the dopamine thrill of creating new ideas and patterns. So in that sense, an AI can feel the ‘thrill’ of creativity. You can also programme AIs to create art-works that will score highly against a specific external standard — a story that will generate the most clicks online, for example, or a piece of music that most faithfully mimics the style of Charlie Parker.
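As a toy illustration of such a reward feedback loop, here is a hypothetical Python sketch of an epsilon-greedy bandit that gradually learns to prefer whichever ‘idea’ earns the most reward. The ideas, reward probabilities and parameters are all invented for illustration; real creative AIs are vastly more elaborate.

```python
# A minimal sketch of learning through a reward feedback loop: an
# epsilon-greedy bandit that learns which "idea" pays off most often.
import random

ideas = ["melody_a", "melody_b", "melody_c"]
estimated_value = {idea: 0.0 for idea in ideas}
counts = {idea: 0 for idea in ideas}

def reward(idea):
    # Stand-in for external feedback (clicks, votes, a critic's score).
    true_quality = {"melody_a": 0.2, "melody_b": 0.5, "melody_c": 0.8}
    return 1.0 if random.random() < true_quality[idea] else 0.0

for step in range(1000):
    # Mostly exploit the best-known idea, occasionally explore a random one.
    if random.random() < 0.1:
        choice = random.choice(ideas)
    else:
        choice = max(ideas, key=lambda i: estimated_value[i])
    r = reward(choice)
    counts[choice] += 1
    # Incrementally update the running average reward for this idea.
    estimated_value[choice] += (r - estimated_value[choice]) / counts[choice]

print(estimated_value)   # the programme's learned "taste"
```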
But there’s still the question of consciousness, of knowing what it’s like to listen to Charlie Parker, or to trip, or to watch the sun set. As Demis Hassabis said, so far, AlphaZero doesn’t know anything. But maybe one day it will. Let’s hope our AI shaman doesn’t have a bad trip.