How did transhumanism become the religion of the super-rich?
Once upon a time there was an obscure mailing list. It only had about 100 people on it, yet in this digital village was arguably the greatest concentration of brain power since fifth-century Athens. There was Hans Moravec, pioneer in robotics; Eric Drexler, pioneer of nanotechnology; Eliezer Yudkowsky, father of the Rationalist movement; Max More, father of modern transhumanism; Nick Bostrom, founder of Long-Termism and the study of Existential Risks; Hal Finney, Nick Szabo and Wei Dai, pioneers of cryptocurrency; and Julian Assange, founder of WikiLeaks. Together they developed a transhumanist worldview — self-transformation, genetic modification, nootropic drugs, AI, crypto-libertarianism and space exploration. It’s a worldview that has become the ruling philosophy of the obscenely rich of California.
It all started in Bristol, England. There, a young man called Max O’Connor grew up and went to study philosophy at Oxford. But Max wanted more: more excitement, more life, more everything. He changed his name to Max More, and moved to California, where the future is invented. His dreams took root in the soil prepared by Californian transhumanists of the 1970s. Many of them were members of an organization called the L5 Society, dedicated to the colonization of space by a genetic elite — its members included Timothy Leary, Marvin Minsky, Isaac Asimov and Freeman Dyson, and its magazine was named Ad Astra — which was what Elon Musk named his school for SpaceX kids in 2014.
Max was also inspired by Robert Ettinger, an American engineer who argued that humans would soon become immortal superbeings, and we should freeze ourselves when we die so we can be resurrected in the perfect future. While doing a PhD at the University of Southern California, Max got a job at the Alcor Foundation for cryonic preservation, and in 1989 he started a magazine with his fellow philosophy grad, Tom Morrow, called Extropy: Journal of Transhumanist Thought. ‘Do you want to be an ubermensch?’ the first issue asked.
‘Ubermensch’ (overman or superman) is the German word used by Friedrich Nietzsche to describe the individual (male or female) who has overcome all obstacles to the perfection of him or herself…
Extropy often featured photos of More himself, looking like an ubermensch with his chiseled jaw and blond ponytail.
If you wanted to be a superman, you should sign up to be an Extropian. Extropy was the opposite of Entropy — an escape from death into unlimited possibilities. ‘Extropians are individuals who believe in growth, both in their own abilities and intelligence and in their culture.’ Humans were at a ‘turning point in history — a time when computers, artificial intelligence, nanotechnology, self-modification, physical immortality and other factors promise to radically transform virtually every aspect of our existence.’
The Extropians had superhuman ambitions, but initially only took baby steps — the journal featured adverts for mail-order smart drugs, articles on psychedelics as self-modifiers, and even hokey instructions on how to ‘visualize your Optimal Person’, taken from Tony Robbins. But this was just limbering up before the really serious work of genetic modification began:
In the nearer future [Max More wrote] individuals, couples or groups will decide on the genetic constitution of their children, no doubt eliminating deficiencies and maximizing the offspring’s mental health, physical capacities and emotional stability…There should be no authoritarian government control over such choices.
This was the difference between the Extropians’ genetic schemes and the eugenics of the 1920s. The goal was similar — engineer a species of ubermensch — but the technique and ethos were very different. HG Wells had promoted a one-world state and a global ministry of eugenics. The Extropians were libertarians, who believed in ‘polycentric / privately produced law and competing digital private currencies’. More liked to quote Nietzsche: ‘Only there, where the State ceases, does the man who is not superfluous begin’.
The Extropians were anarchists: ‘We should avoid paying tax-extortion (the life blood of statism) and should pay no heed to unjust laws whenever we can.’ Their ‘greatest enemy’ was the Food and Drug Administration and its ‘safety fascism’. They planned to create an offshore transhumanist utopia — Extropolis, perhaps, or Free Oceana — with its own cryptocurrency. The Extropian journal and mailing list featured articles by Hal Finney, Nick Szabo and Wei Dai, three pioneers of cryptocurrency and three leading candidates for being the people behind the mysterious ‘Satoshi Nakamoto’, inventor of bitcoin. We know, at least, that Nakamoto sent the three his white paper for bitcoin, and Finney was the first to receive a bitcoin transaction.
To truly be free, the Extropians needed to leave Earth and build libertarian communes in space. Or escape the meat sack of the body. ‘We can leave our weak, vulnerable, stupid bodies and brains for something enormously superior’, More promised. Hans Moravec assured his fellow Extropians that this would be the next step in evolution. Cyborgs
could carry on our cultural evolution, including their own construction and increasingly rapid self-improvement, without us, and without the genes that built us. When that happens, our DNA will find itself out of a job, having lost the evolutionary race to a new kind of competition.
There were other early transhumanist / cyberpunk magazines in the 1990s, emerging out of California’s hippy-tech subculture. There was Wired magazine, founded in 1993 and inspired by Stewart Brand’s Whole Earth Catalog. There was Boing Boing, founded in 1988, inspired by the transhumanism of Timothy Leary and Robert Anton Wilson. And there was Mondo 2000, which started life as a psychedelics mag in 1984 before morphing into a celebration of transhumanism and cyberpunk. Its founder, RU Sirius, was also inspired by Leary’s SMILE philosophy (Space Migration, Intelligence Expansion, Life Extension). Its strap-line was ‘how to mutate and take over the world’, but it was less into genetic modification and more into psychedelics, occultism, music, fashion and biohacking. You get a good flavour of the scene from Mondo’s adverts.
Yudkowsky and Bostrom: two Extropian super-minds
But if the hipsters were reading Mondo, the really serious people were on the Extropian mailing list. That’s where you could find Eliezer Yudkowsky, for example.
Yudkowsky was born to an Orthodox Jewish family in Chicago in 1979. But he soon knew he was different. When he was eleven, he tells us, he took the SAT and came second ‘out of the entire 7th grade of the Midwest’ (his italics). Aged 17, he wrote:
I have an intelligence enhancement which is a net evolutionary disadvantage. Generally, claims of being a genius — particularly accompanied by the word ‘evolution’, if not, generally, ‘evolutionary disadvantage’ — involve Nazioid, self-righteous statements about the dawn of a new race and evolutionary destiny…A project to enhance human intelligence must prove itself free of the Nazi ‘master race’ meme.
Eliezer was far too smart to be a Nazi.
Nazism is a stupid political philosophy. I specialize in perceiving logical flaws and self-delusions in things like political philosophy. Anyone smart in the same way that I am isn’t going to be a Nazi.
He dropped out of school, and joined the Extropian mailing list. Finally, he had found his people, although he thought Extropians were under-ambitious. Eliezer was in a hurry — he wanted the Singularity NOW.
I have had it. I have had it with crack houses, dictatorships, torture chambers, disease, old age, spinal paralysis, and world hunger. I have had it with a planetary death rate of 150,000 sentient beings per day. I have had it with this planet. I have had it with mortality. None of this is necessary. The time has come to stop turning away from the mugging on the corner, the beggar on the street. It is no longer necessary to look nervously away, repeating the mantra: “I can’t solve all the problems of the world.” We can. We can end this.
His urge to spark off an ‘intelligence explosion’ was only sharpened by the tragic death of his nineteen-year-old little brother in 2004. He declared: ‘You do not make peace with darkness! You do not make peace with Nazi Germany! You do not make peace with Death!’
Another super-brain on the Extropian mailing list was Nick Bostrom. Niklas was born in Helsingborg, Sweden, in 1973. He also dropped out of school, and had an awakening moment reading Nietzsche in a forest. ‘It’s hard to convey in words what that was like’. In 1996, while a postgraduate at the LSE, he joined the Extropian mailing list. It converted him from a Romantic to a transhumanist. You cannot become an ubermensch, he decided, ‘by any magic trick or hokum, or by the power of wishful thinking, or by semantic acrobatics, meditation, affirmation, incantation, or by an act of parliament…[Only via] a reconfigured physical situation through technology.’
In 1997, he founded the World Transhumanist Association with fellow transhumanist philosopher David Pearce. Bostrom became the leading philosopher of transhumanism and the ethics of human enhancement, with a capacity both for dry analysis and for lyrical rhapsody, as in his 2008 ‘Letter from Utopia’, a letter from our future self, urging us to embrace enhancement:
You could say I am happy, that I feel good. That I feel surpassing bliss and delight. Yes, but these are words to describe human experience. They are like arrows shot at the moon. What I feel is as far beyond feelings as what I think is beyond thoughts…Every second is so good that it would blow your mind had its amperage not first been increased…It’s like a rain of the most wonderful feeling, where every raindrop has its own unique and indescribable meaning…One drop would justify and set right a human life, and the rain keeps raining, and there are floods and seas.
Sounds orgasmic.
The Eugenics of Bliss
How do we get to Utopia? We need to embrace genetic engineering to increase our intelligence. This has led to accusations that transhumanists like Bostrom are eugenicists. Yes, they would probably shrug, so what. The OED definition of eugenics is:
The study of methods of improving the quality of human populations by the application of genetic principles.
Francis Galton and his disciples had sought to enhance intelligence and create ‘master-minds’ through the clumsy methods of the stud-farm, treating humans like cattle in the process. But Bostrom argues for genetic enhancement in terms of individuals’ and parents’ right to fulfil their potentialities:
Transhumanists promote the view that human enhancement technologies should be made widely available and that individuals should have broad discretion over which of these technologies to apply to themselves (morphological freedom), and that parents should normally get to decide which reproductive technologies to use when having children (reproductive freedom).
Indeed, as parents should want the healthiest children, enhancement might even become something of a moral obligation. Gradually, Bostrom predicts, there will be a shift in public attitude to genetic modification. He writes in his best-seller, Superintelligence:
Human reproductive cloning, once achieved, could be used to replicate the genome of exceptionally talented individuals… even a relatively small increase in the number of exceptionally talented people might have a significant effect; and it is possible that some state would embark on a larger-scale eugenics program, perhaps by paying surrogate mothers.
The dark history of eugenics might put some nations off. However…
Nations would face the prospect of becoming cognitive backwaters and losing out in economic, scientific, military, and prestige contests with competitors that embrace the new human enhancement technologies. Individuals within a society would see places at elite schools being filled with genetically selected children (who may also on average be prettier, healthier, and more conscientious) and will want their own offspring to have the same advantages. There is some chance that a large attitudinal shift could take place over a relatively short time, perhaps in as little as a decade, once the technology is proven to work and to provide a substantial benefit.
This wouldn’t be forced enhancement, just permitted. Bostrom casts himself as a defender of individuals’ right to a ‘vastly augmented existence’. His fellow transhumanist, David Pearce, with whom he co-founded the World Transhumanist Association, has an even grander vision. He calls it the Abolitionist Project — we will correct the ‘corrupt code’ of our DNA and ‘abolish suffering throughout the living world’. Pearce writes:
Our descendants will be animated by gradients of genetically preprogrammed well-being that are orders of magnitude richer than today’s peak experiences.
But how? Through electrode brain-implants, designer drugs and eugenics.
what if, as a whole civilisation, we were to opt to become genetically hyperthymic — to adopt a motivational system driven entirely by adaptive gradients of well-being?
A happy-clappy planet!
Eliezer Yudkowsky has also envisaged a state-led eugenic programme to engineer a more blissful humanity, in his tongue-in-cheek utopia, ‘dath ilan’:
dath ilan has comprehended that happiness is heritable…so it’s understood that, except in very exceptional circumstances, if you’re unhappy on average for any reason, it is your duty to the next generation not to have kids who might inherit this tendency…[The unhappy and the disturbed therefore choose to move to ‘the Quiet Cities’, and not have children]. The number of people who go to the Quiet Cities is more like 5% than 15%…And the rest of the world is mostly happy, because transmitting constitutions that can be happy in a civilized world is itself a solvable coordination problem.
Of course, eugenics as Galton imagined it was incredibly slow and inaccurate. It would take generations, even millennia, to breed a race of superhumans, like the Bene Gesserit in Frank Herbert’s Dune. But with new genetic technology, that process can be sped up significantly. Bostrom writes enthusiastically of ‘iterated embryo selection’, in which multiple generations of embryos could be created, selected for intelligence and bred from within days. Such methods, he suggests, could take biological humans up to the level of ‘weak super-intelligence’ (say an IQ of around 300).
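To get a feel for why iterated selection excites transhumanists, here is a toy simulation — a minimal sketch on made-up numbers of my own (a purely additive ‘IQ’ score, ten embryos per round, an assumed sibling spread of 7.5 points), not a model from Superintelligence:

```python
import random
import statistics

# Toy model of iterated embryo selection -- illustrative only.
# All numbers are assumptions for this sketch, not Bostrom's figures:
# 'IQ' is treated as a purely additive genetic score, each embryo's
# score is the parental value plus normally distributed recombination
# noise, and environment is ignored entirely.

GENERATIONS = 10         # in-vitro rounds, each taking days in theory
EMBRYOS_PER_ROUND = 10   # embryos created per round
WITHIN_FAMILY_SD = 7.5   # assumed sibling spread, in IQ points

def iterated_selection(start=100.0):
    """Follow the top-scoring embryo through successive rounds."""
    score = start
    for _ in range(GENERATIONS):
        embryos = [random.gauss(score, WITHIN_FAMILY_SD)
                   for _ in range(EMBRYOS_PER_ROUND)]
        score = max(embryos)  # the best embryo parents the next round
    return score

final_scores = [iterated_selection() for _ in range(1000)]
print(f"mean score after {GENERATIONS} rounds: "
      f"{statistics.mean(final_scores):.0f}")
# Picking the best of ten gains roughly 1.5 standard deviations
# (~11 points here) per round, so gains that would take centuries of
# ordinary generations accrue, on paper, in weeks.
```

On these invented numbers the score climbs by roughly ten points a round; the point of the sketch is simply that the only brake on such selection has been the twenty-five-year wait between human generations, which is exactly what the in-vitro version removes.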
If we uploaded our consciousness into digital format, intelligence enhancement would become so much easier. Bostrom accepts the theory, put forward by fellow Extropian Robin Hanson, of ‘whole brain emulation’, in which humans will soon upload their consciousness into the Metaverse. This would also massively speed up evolution — you wouldn’t have to wait for a child to grow up, test its IQ, then let it breed (or not). You could create, copy, edit, paste and delete hundreds of times a day. Think how blissful those digital selves would be, freed from all the messy imperfection of our bodies. We could, writes Bostrom,
design digital minds that could realize ‘off the charts’ states of hedonic well-being — levels of bliss that human brains are totally incapable of instantiating
In fact, from a Utilitarian perspective, in order to ‘maximize the greatest happiness of the greatest number’, there is a good argument to divert resources away from flesh-and-blood humans and animals and towards these digital selves. Bostrom writes:
in the long run, total well-being would be much greater to the extent that the world is populated with digital super-beneficiaries…And insofar as such beings come into existence, their concerns might predominate morally in conflict with human and animal concerns, e.g. over scarce natural resources.
But the true moonshot of intelligence expansion involves leaving humanity behind altogether and creating an AI super-mind, with an IQ not of 200 or 300 but 6,000 or 600,000. Such a god would enable humans to solve intelligence, solve suffering, solve matter, solve death, solve everything. We would arrive at what Bostrom calls our ‘cosmic endowment’ — a glorious future in which blissful humanity expands across physical and digital space. How many happy humans?
10,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000 human lives (though the true number is probably larger).
He gets quite Jesuit or Buddhist in his imagination of infinite bliss:
If we represent all the happiness experienced during one entire such life with a single teardrop of joy, then the happiness of these souls could fill and refill the Earth’s oceans every second, and keep doing so for a hundred billion billion millennia. It is really important that we make sure these truly are tears of joy.
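In the spirit of taking him literally, the arithmetic roughly holds up. Here is a back-of-the-envelope check, using assumed constants of my own (a teardrop of about 0.05 ml, an ocean volume of about 1.3 billion cubic kilometres) and reading the written-out figure above as 10^58 lives:

```python
# Back-of-the-envelope check of the "teardrop of joy" image.
# All constants are rough assumptions of mine, not Bostrom's figures.

LIVES = 1e58                # the written-out number quoted above
TEARDROP_M3 = 5e-8          # ~0.05 ml per teardrop (assumed)
OCEAN_M3 = 1.3e18           # ~1.3 billion km^3 of seawater (assumed)
SECONDS_PER_YEAR = 3.156e7

tear_volume = LIVES * TEARDROP_M3        # ~5e50 cubic metres of joy
ocean_refills = tear_volume / OCEAN_M3   # ~3.8e32 refills of the ocean
millennia = ocean_refills / SECONDS_PER_YEAR / 1e3  # one refill per second

print(f"ocean refills: {ocean_refills:.1e}")
print(f"millennia of refilling, one per second: {millennia:.1e}")
# ~1.2e+22 millennia -- within a couple of orders of magnitude of
# Bostrom's 'hundred billion billion millennia' (1e20), which is as
# close as teardrop metrology is ever going to get.
```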
The Transhumanist’s Nightmare
But what if the future turned out to be an infinite ocean of unbearable suffering and torture?
Around the birth of the new millennium, Eliezer Yudkowsky had what can only be described as a religious crisis. After several years of striving for the Singularity, he suddenly realized that the birth of a super-intelligent AI might mean the end of humanity, or its permanent enslavement. An AI might decide that humans are a competitor — and it might become super-intelligent so quickly that we would have no chance to defend ourselves. Or it might be over-enthusiastic in its performance of our commands. We might ask it to make paperclips, for example, and it might obligingly convert every atom in the universe into paperclips.
Yudkowsky declared all his previous writing wrong and refocused his Singularity Institute — later renamed the Machine Intelligence Research Institute — on averting catastrophe, launching a new blog, LessWrong, in 2009. Now, he and his followers would strive to ensure the creation of ‘friendly AI’ and work to save humanity from wrathful gods. They would also strive to make humans more rational, through the LessWrong blog, and sister organisations in what became known as the Rationalist community — the Overcoming Bias blog of Robin Hanson, the SlateStarCodex blog of Scott Alexander, and the rationalist workshops of CFAR, the Centre for Applied Rationality. The Rationalist movement also had its bible, a 660,000-word Harry Potter fanfic written by Yudkowsky called Harry Potter and the Methods of Rationality.
Bostrom also became less ecstatic and more focused on risks to humanity, especially the risk of AI super-intelligence. He launched the study of ‘existential risks’ in 2002, and the Future of Humanity Institute in Oxford in 2005. He also helped develop the philosophy of Long-Termism with fellow Oxford philosophers Toby Ord and William MacAskill (they share an office). Long-termism argues that we need to think of the well-being of future humans and make sure we don’t go extinct this century. The stakes could not be higher — a cosmic endowment of trillions of souls in bliss, or human extinction…or an eternity of hell! Nothing could possibly be more important than for the cognitive elite to dedicate their time and resources to Rationalism / Long-Termism. A handful of nerds are fighting to save the world. Truly, this is Harry Potter fan-fiction.
How transhumanism became the philosophy of the very-very-very-very-very-very rich
Philosophy is not the hottest or best-funded academic discipline out there, but transhumanism / longtermism is very hot and very well-funded. It’s attracted particular support from two founders of PayPal — Peter Thiel and Elon Musk, the richest man in the world. Thiel has given funding to Yudkowsky’s MIRI, as well as investing in longevity research in a bid to live forever. Musk, meanwhile, has tweeted his love for Bostrom’s Superintelligence and MacAskill’s What We Owe the Future, and has put several million dollars into the Future of Humanity Institute. He’s also invested $10m into AI safety, putting Bostrom on the committee to decide where the money goes. MacAskill and Toby Ord launched the Effective Altruism movement to help people decide what to do with their money and their lives to maximize well-being. After some hard number crunching, EA decided the best thing for philanthropists to spend their money on was…nerds thinking about Long-Termism and Existential Risk! Hey, the maths doesn’t lie. Effective Altruism has raised over $50 billion, with large donations from Facebook co-founder Dustin Moskovitz and crypto billionaire Sam Bankman-Fried.
But this is just the tip of the iceberg. More generally, the once-niche philosophy of Extropianism / Transhumanism has now become the ruling cultural code of Silicon Valley’s ultra-rich. It’s a philosophy or religion of self-enhancement, psychedelics, genetic medicine, longevity / immortality, cryptocurrency, libertarianism, and space exploration — Timothy Leary’s SMILE for the obscenely rich. It has been adopted, in various forms, by everyone from Jeff Bezos to Sergey Brin and Larry Page to Mark Zuckerberg to Vitalik Buterin to Larry Ellison to Richard Branson to the late Jeffrey Epstein, and it percolates all the way down to the merely mildly rich, who may not have a private rocket but at least wear an Oura ring. Christian Angermayer calls it ‘the new human agenda’.
How and why did a niche arthouse philosophy become the religion of the obscenely rich? Transhumanism always had a close relationship to tech, and the scene included some ground-breaking inventors in fields like the internet, AI, robotics and genetics. Some of today’s ultra-rich rode that wave of innovation all the way to the top of the rich list.
In addition, transhumanism is an anti-egalitarian philosophy. Like its ancestors, Galton, Nietzsche and Julian Huxley, it worships the extraordinary genius who is 10 times smarter than the average — the Alan Kays and John von Neumanns of the world. That predisposes transhumanists to admire and even worship the business elite, and to see them as highly evolved master-minds. Here is Yudkowsky fawning over some venture capitalists at a conference, for example:
these people of the Power Elite were visibly much smarter than average mortals…No, even worse than that, much worse than that: these CEOs and CTOs and hedge-fund traders, these folk of the mid-level power elite, seemed happier and more alive.
Transhumanism is a hereditarian worldview, and hereditarianism is often used by a ruling elite to justify extreme inequality — it’s natural that these superbeings are at the top and everyone else is at the bottom. Eugenics, historically, emerged as a self-celebration of the ruling elite and a concern with how to manage the dirty and over-fecund masses. There’s something of that in transhumanism too, as in Yuval Harari’s comments in Homo Deus:
The most important question in 21st-century economics may well be: What should we do with all the superfluous people, once we have highly intelligent non-conscious algorithms that can do almost everything better than humans?
Indeed, transhumanism offers the cleanest solution: replace flesh-and-blood workers with digital serfs, who can be deleted at the end of each day and uploaded anew, brimming with bliss (you can see how transhumanism has provoked some lurid conspiracy theories, as we’ll examine in the next chapter).
For society to prosper, according to hereditarianism, you should let the geniuses do their thing and invent the tech that elevates humanity. They should be free to start up their own crypto-countries with their own currencies, tax regimes and rules for self-enhancement. Extropianism morphs into ‘exitocracy’ or what Douglas Rushkoff (himself a former contributor to Mondo 2000) calls escapology. In his latest book, Survival of the Richest, he writes of the super-rich:
Their extreme wealth and privilege served only to make them obsessed with insulating themselves from the very real and present danger of climate change, rising sea levels, mass migrations, global pandemics, nativist panic, and resource depletion. For them, the future of technology is about only one thing: escape from the rest of us. These people once showered the world with madly optimistic business plans for how technology might benefit human society. Now they’ve reduced technological progress to a video game that one of them wins by finding the escape hatch.
Extropianism appeals to the ultra-rich because it promises they can avoid the two certainties in life: death and taxes. The rich have always wanted to live forever, and Extropianism lets them fund their dreams of offshore genetic modification free from the IRS and the FDA (Cyprus is already doing a roaring trade both as a tax-haven and as a hub for semi-legal stem cell injections). Indeed, long-termism can frame it as the moral obligation of the ultra-rich to ensure the survival of the species — never mind the suffering South, focus on that cosmic endowment, focus on the coming superbeings!
These, perhaps, are some of the reasons transhumanism has become a useful philosophy / religion to the very-very-very-very rich. But that doesn’t mean that Yudkowsky, Bostrom et al are wrong to be worried about existential risks, including AI risk. Maybe they’re right to be worried. Maybe we could be threatened by AI super-intelligence — clearly, we face other existential risks like bioweaponry. My instinct is that AI superintelligence is a religious fantasy, a sort of tech-occultism. But I’m not a programmer — many people in the AI industry think we will achieve it within a few years or decades.
The madness of Rationalism
But if AI really is a serious existential risk, do we want transhumanists to be in charge of guarding us against it? Transhumanism, it could be argued, is an extreme and fanatical religion, which prefers the silicon and the digital over the flesh, blood and shit of humans. Every religion is a reaction to the pain of loss and death, and transhumanism seems just as extreme in its reaction as previous religions — witness Martine Rothblatt, the multi-millionaire founder of the transhumanist religion Terasem, who has built an AI robot replica of her wife, Bina.
This does not seem an entirely healthy reaction to the pain of loss — making a digital, robotic or genetic replica of everyone you love, so that everything is backed up, and if anything happens to them — the merest scratch — you can immediately delete the old version and reboot an immaculate new one.
Even Rationalism can be quite irrational, culty and, dare I say it, occulty, at times. As Jon Evans has explored, Rationalism became obsessed with ‘debugging’ — weekend-long sessions to remove one’s biases, improve one’s rationality and become Muskian superbeings. As one participant in a CFAR workshop suggested, ‘Self-help is just the gateway. The real goal is: Save the world.’
But these ‘debugging’ sessions sound a lot like other Californian inventions such as Scientology or the Landmark Forum — long, intense sessions of self and mutual criticism, often in a power-imbalance with a high-status leader, with the goal of becoming a superhuman. And, as with those culty self-improvement programmes, such sessions can provoke adverse reactions. One Rationalist organisation called Leverage Research apparently provoked multiple psychotic incidents and a kooky obsession with debugging demons. MIRI and CFAR also had their fair share of psychotic reactions, according to a report on LessWrong:
[My] psychotic break was in October 2017 […] During this time I believed that I was intrinsically evil, had destroyed significant parts of the world with my demonic powers, and was in a hell of my own creation. I was catatonic for multiple days, afraid that by moving I would cause harm to those around me. There were at least 3 other cases of psychiatric institutionalizations by people in the social circle immediate to MIRI/CFAR […] in addition, a case of someone becoming very paranoid, attacking a mental health worker […] concerned about a demon inside him, implanted by another person, trying to escape […] two cases of suicide in the Berkeley rationality community associated with scrupulosity and mental self-improvement… MIRI became very secretive about research […] I was endangering the world by talking openly about AI in the abstract […] I was discouraged from writing a blog post estimating when AI would be developed, on the basis that a real conversation about this topic among rationalists would cause AI to come sooner…
Such episodes are probably infrequent, but one can see how the Rationalist / Long-Termist / Effective Altruist community could put a lot of strain on its young, idealistic disciples. The stakes are incredibly high — infinite cosmic bliss or infinite cosmic torture. You are part of a tiny nerd elite who will decide the fate of trillions. Plus…you might already be in a simulation with your every thought observed by an all-knowing AI. The weird sci-fi imaginings of the Transhumanist world led to some spooky ideas like ‘Roko’s basilisk’. A commenter on the LessWrong blog called Roko wondered if a super-intelligent AI in the future might decide to punish any humans who knew about the possible Singularity but didn’t strive hard enough for it. Roko’s comment was deleted by Yudkowsky and all discussion banned. It was deemed an ‘infohazard’ — an idea so dangerous it threatens the souls of those exposed to it. Do not give the AI ideas! And so the Rationalists tremble before an all-knowing AI, feeling that their every thought is being monitored and judged by a future deity. ‘What is Guilt in Utopia?’ asked Bostrom. ‘Guilt is our knowledge that we could have created Utopia sooner.’
So transhumanists are in charge of monitoring the risk of AI super-intelligence? It’s like putting Jihadis in charge of the International Atomic Energy Agency!
Of course, the alternative to transhumanism can be just as extreme: a rejection of all new technology, even of vaccines, and a turn to neo-primitivism and an embrace of the stone age. In the final chapter of this project, I will look at the conspiratorial backlash to transhumanism, and ask if a balance between the two positions is possible.
Check out Emile Torres’ article on long-termism, Jon Evans’ excellent newsletter on Extropianism, RU Sirius’ Steal This Singularity, Jeremy Lent’s writing on ‘the great bifurcation’, Rushkoff’s Survival of the Richest, and Jamie Wheal’s Recapture the Rapture — as well as Erik Davis’ classic TechGnosis.