Where is this all heading?
You hear this question more and more these days, whether in the context of AI or technological capitalism more generally. What is the end game, our final destiny, the point of it all? Does human civilization just keep growing and expanding indefinitely, or do we live in a vulnerable world that’s teetering on the edge of extinction? Or maybe it’s a bit of both, as we inevitably hand off civilization to intelligent machines that go on colonizing the universe without us?
According to the Effective Accelerationist or e/acc worldview — the movement of rationalists and transhumanists who favor accelerating technology for its own sake — we may not have much choice in the matter. As the founders of e/acc, Beff Jezos and Bayeslord, explain in their Notes on e/acc principles and tenets, life emerged “from an out-of-equilibrium thermodynamic process known as dissipative adaptation” in which configurations of matter that are better at converting free energy into entropy are favored over time. This same principle reappears at multiple scales, from the earliest biological replicators to the evolution of intelligent agents that model the future. Even capitalism can be thought of as a “meta-organism” for aligning individuals “towards the maintenance and growth of civilization” as a whole. The advent of superhuman AI is thus a thermodynamic inevitability — an attractor that any sufficiently advanced civilization is pulled towards by a series of positive feedback loops. We can either choose to accept this as the universe’s true purpose and accelerate the creation of our successor species, or we can attempt to freeze technology in amber and guarantee civilization’s collapse. In short, expand or die.
While e/acc has a growing number of online adherents, it’s not clear how many are true believers. For most, e/acc seems to be a declaration of techno-optimism — that AI will be a tool for humanity rather than the other way around. Yet in the true accelerationist analysis, human wants and preferences are already subordinated to the goals of the techno-capitalist meta-organism, making rank-and-file e/accs mere hosts to a memetic ideology. Why should an economy of superintelligences be any different? If superintelligent AIs inadvertently kill off humanity in the process of building a Dyson Sphere to power trillions of self-replicating robot automata, so much the better! Humans would just get in the way of harnessing all that irresistible negentropy.
The godfather of techno-accelerationism, Nick Land, is crystal clear on this point: “Nothing human makes it out of the near-future.” Whether this is good or bad is beside the point. The committed rationalist understands their own values and sense of individuation as illusory: a byproduct of amoral Darwinian processes. Humans have no special place in the universe. Just as the plague demonstrated the self-replicating superiority of rat swarms, “what appears to humanity as the history of capitalism,” writes Land, “is an invasion from the future by an artificial intelligent space that must assemble itself entirely from its enemy's resources.”
As famed AI researcher Rich Sutton put it in a recent and somewhat ominous talk, the time has come for humans to begin succession planning:
Barring cataclysms, I consider the development of intelligent machines a near-term inevitability. Rather quickly, they could displace us from existence. I'm not as alarmed as many, since I consider these future machines our progeny, “mind children” built in our image and likeness, ourselves in more potent form.
Properly understood, e/accs and AI doomers are two sides of the same coin. Both anticipate an imminent “intelligence explosion,” and both understand that this could mean humanity’s days are numbered. The accelerationists are simply resigned to this fact, if not outright ecstatic for The Merge. As Sam Altman wrote in 2017, “I believe the merge has already started, and we are a few years in”:
Our phones control us and tell us what to do when; social media feeds determine how we feel; search engines decide what we think. The algorithms that make all this happen are no longer understood by any one person. … This probably cannot be stopped. As we have learned, scientific advancement eventually happens if the laws of physics do not prevent it. … We will be the first species ever to design our own descendants. My guess is that we can either be the biological bootloader for digital intelligence and then fade into an evolutionary tree branch, or we can figure out what a successful merge looks like.
I therefore take AI x-risk seriously, not in spite of, but because of my philosophical affinity for Effective Accelerationism. Like e/acc co-founder Guillaume Verdon, aka Beff Jezos, I’m a rationalist and materialist who aspires to understand the universe from first principles. That includes appreciating how statistical mechanics underlies the origin and growth of any self-organizing complex system, and the importance of technological dynamism in fighting the entropic forces of institutional decay.
At the same time, I reject the idea that we can extract an “ought” from this “is,” as if reconfiguring matter into lower entropy states were an end in itself. Free energy represents a gradient for doing useful work, but what counts as “useful” is observer-dependent. AIs could reassemble all the atoms in our solar system into an intricately structured crystal — a phenomenally unlikely and thus low entropy state of matter — but to what end? An accelerationist still needs some conception of what we’re accelerating to, which returns us to the original question.
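To put the “useful work” claim in its standard textbook form (offered here only as a gloss, not as anything from the e/acc texts): for a system held at constant temperature, the work extractable in a transformation is bounded by the drop in its Helmholtz free energy,

$$
W_{\text{extracted}} \;\leq\; -\Delta F \;=\; -\Delta(U - TS).
$$

The inequality fixes how much work is available, but it says nothing about what the work is for; the split between “work” and “waste heat” only exists relative to the degrees of freedom an observer has decided to care about.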
AI x-risk as a phase transition
Predictions are hard, especially about the future. But if the history of past technological revolutions is any guide, the intelligence explosion and its associated risks will manifest as a societal phase transition, rather than through the arrival of a singularly powerful superintelligence.
Phase transitions are the result of many local, bottom-up interactions. As the temperature drops, water doesn’t turn to ice all at once, but instead forms crystals at discrete nucleation sites that propagate outward. More generally, phase transitions happen whenever there is a critical point or discontinuity in the free energy of a system. Advanced AI systems are diffusing in a similar fashion, from the local to the global.
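In the standard thermodynamic picture, that “critical point or discontinuity” means a point where the free energy stops being analytic. At a first-order transition like freezing, the entropy, which is the first derivative of the free energy with respect to temperature, jumps by the latent heat; at a continuous critical point, the correlation length diverges instead:

$$
S = -\left(\frac{\partial F}{\partial T}\right)_{V}, \qquad
\Delta S = \frac{L}{T_c} \;\; \text{(first order)}, \qquad
\xi \sim |T - T_c|^{-\nu} \;\; \text{(continuous)}.
$$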
We can apply the language of phase transitions to social phenomena because networks of interacting people embody the very same statistical dynamics. This has become obvious in the era of social media, where viral events spontaneously align millions of people to the latest controversy like little magnetic dipoles aligning to their neighbors. In essence, the internet and social networks expanded the correlation length of society, enabling people separated by great distances to become synchronized in their beliefs and actions. With the Arab Spring, for example, synchronized outrage spontaneously transitioned seemingly dormant societies into revolutionary ones, inducing varying degrees of state collapse and regime change.
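The dipole analogy can be made concrete with a toy model. The sketch below (a minimal illustration of my own, not anything from the e/acc literature) runs a 2D Ising model with Metropolis updates: every site interacts only with its four nearest neighbors, yet below the critical temperature those purely local interactions are enough to align the whole lattice into a single orientation. Lattice size, temperatures, and sweep counts are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_ising(n=32, temperature=2.0, sweeps=500):
    """Metropolis dynamics on an n x n lattice of +/-1 spins; returns |mean magnetization|."""
    spins = rng.choice([-1, 1], size=(n, n))
    for _ in range(sweeps * n * n):
        i, j = rng.integers(0, n, size=2)
        # Sum of the four nearest neighbors (periodic boundary conditions)
        neighbors = (spins[(i + 1) % n, j] + spins[(i - 1) % n, j]
                     + spins[i, (j + 1) % n] + spins[i, (j - 1) % n])
        # Energy cost of flipping spin (i, j), with coupling J = 1 and k_B = 1
        delta_e = 2 * spins[i, j] * neighbors
        if delta_e <= 0 or rng.random() < np.exp(-delta_e / temperature):
            spins[i, j] *= -1
    return abs(spins.mean())

# Below the critical temperature (~2.27 in these units) the lattice typically
# snaps into near-total alignment; above it, no global orientation emerges.
print("T = 2.0:", simulate_ising(temperature=2.0))  # usually well above 0.5
print("T = 3.5:", simulate_ising(temperature=3.5))  # close to 0
```

Nothing in the update rule refers to the lattice as a whole; the global order, like a viral cascade, emerges from local alignment once the correlation length blows up.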
So while there may be some first order risks from AI capabilities (say, the ability to synthesize novel pathogens), the second order effects from the diffusion of AI will tend to dominate in the long run. The danger is that these new correlations and feedback loops will pull civilization toward a post-human equilibrium that no one individually intended, just as the agricultural revolution devoured hunter-gatherer man, or the Protestant Reformation and the diffusion of the printing press made premodern man WEIRD. Indeed, the rule from all past such transitions is for irreversible second order effects to displace previous modes of social organization and, ultimately, create a new kind of human. Why would the AI revolution be any different?
Nor do the laws of thermodynamics give any guarantee that things will go well. While it may be inevitable that civilization is pulled into a new, coherent phase, the exact orientation of that phase isn’t predetermined. There may be an entire landscape of physically consistent outcomes, with some friendlier to human flourishing than others. This is at least hopeful, as it suggests the e/acc framework leaves degrees of freedom for influencing the future, levers that humans, acting collectively, still have some hope of controlling, even if the transition is already well underway.
What are those degrees of freedom, and how should we choose? No one knows for sure, but what the philosophy behind accelerationism has to say about these issues is a topic I will certainly be coming back to. So subscribe to stay tuned!
Cats rule the world. But let's come back to this in a second.
While I am fully on board with the self-organising negentropy argument and would also put Gaia super-systems and human social structures, from enterprises to nation states, in the same bag (since I read the "Web of Life" by Fritjof Capra 20 years ago, it is not a new line of thought), let's put into context what it means to participate in such systems as a free agent.
The spontaneous emergence of higher complexity states is, of course, inevitable given sufficient scale, energy and mutation, all of which exist. Hence, yes, progress to AI merging, or whatever the next-level system is, is inevitable. However, let's not confuse evolutionary inevitability with the individual free will of human agents within the system.
Back to cats. They lie around and tell us what to do, get fed, etc. To all intents and purposes they are the bosses, humans the servants. But we have the choice and free will to act as their servants. More or less the same argument works with smartphones. It is a mistake to misread relationships and assign free agency to inanimate technology.
Yes, I know, one can argue that the universe is deterministic and so human free will is an illusion, but that is a whole new discussion. For now, enough to say that AI is still rather deterministic (even if we can no longer follow the details) and so human agency within the system gives us plenty of local control over events.
Excellent take as always. I’d like to read something by you that gives rough Bayesian estimates for the likelihood of various second order effects from AI occurring over the next century.
E.g. “30% chance in any given decade that XYZ event will occur in relation to the AI revolution”
Scott Alexander’s recent blog post on why we shouldn’t update too much from catastrophes makes me want to hear thoughtful analysis on this topic from you. I’d like to see what you have to say about the likelihood of catastrophes but also the probability of various second order effects from AI occurring.