The Singularity Is Near made a significant impact on me as a kid. Kurzweil catches a lot of flak for the boldness of his predictions and his high miss rate, but that criticism doesn’t account for the relative dumbness of his forecasting methodology. He simply extrapolated exponential trends far out of sample and still got an awful lot right. So what if he claimed we’d have real-time translation by the 2010s and we instead got pretty good translation that’s less than real time? The error bars around the timing of an exponential trend get exponentially wider the farther out you forecast, given the sensitivity to initial conditions. We’ll have real-time translation through AR glasses this decade. His linear-thinking detractors, meanwhile, could barely see past dial-up.
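That point about widening error bars can be made concrete with a toy calculation (the growth rates here are illustrative, not real trend data): a small uncertainty in an exponential growth rate compounds multiplicatively, so the spread of plausible forecasts itself grows exponentially with the horizon.

```python
# Toy illustration: a modest uncertainty in an exponential growth
# rate produces a forecast range that widens exponentially with the
# forecast horizon. Numbers are made up for illustration.

def forecast(value_now, annual_growth, years):
    """Extrapolate an exponential trend `years` ahead."""
    return value_now * (1 + annual_growth) ** years

for years in (5, 10, 20, 40):
    low = forecast(1.0, 0.25, years)   # growth rate estimated a bit low
    high = forecast(1.0, 0.35, years)  # growth rate estimated a bit high
    print(f"{years:2d} years out: forecast spans {low:10.1f}x to "
          f"{high:10.1f}x (high/low ratio {high / low:6.1f})")
```

The high/low ratio keeps growing with the horizon, which is why an exponential forecast can nail the trend while missing the date by a decade.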
Between Stable Diffusion and the recent release of ChatGPT, no one can now deny that deep learning is capable of borderline magic. Kvetching about bias and the models’ tendency to confabulate incorrect outputs aside, progress in ML could grind to a halt tomorrow and we’d still have a decade or more of new commercial applications to explore. It’s crazy to think it’s just getting started.
Futurist statecraft
I got into my line of work, US public policy, because I didn’t see anyone else working on the nuts and bolts of futurist statecraft. Sure, there are people paid to research “AI safety” and “the future of work,” but in my experience those efforts tend to fall prey to the “horseless carriage” fallacy of acting like a new technology will change one big thing but leave everything else the same. The automobile didn’t simply replace horse-drawn carriages, spurring a wave of equine technological unemployment. The automobile changed everything, including our institutions.
Likewise, I sense that the second order implications of near-term AI have not been fully grokked, at least not by America’s political classes. Metaculus now predicts that the first AGI will become publicly known by 2036 — six years sooner than previous estimates. To put that in perspective, we may well achieve AGI before Democrats win their next trifecta. And yet Congress is still fighting over the debt ceiling. For Christ’s sake! Plan accordingly.
(Correction: That's old news. Metaculus now predicts weak AGI by 2027 & strong AGI with robotic capabilities by 2038).
For my part, I started this Substack because I felt an urgency to push out my ideas while there’s still time to credibly claim them as my own, not fully knowing what comes next.
In particular, I suspect near-term AI will break a lot of things, starting with our legacy institutions. The firmware of the US government is 70+ years old. We validate people’s identity with a nine digit numbering system created in 1936. The Administrative Procedure Act, which governs all regulatory process, came only ten years later. The IRS Master File runs on assembly from the 1960s. Our labor laws are from the assembly line era. Unemployment Insurance — the safety-net for helping people adjust to employment shocks from AI or otherwise — is so broken that Congress found it easier to give everyone an extra $600 a week and live with $150 billion worth of fraud than to recruit the retired Cobol engineers necessary to simply update the code. There is a great deal of ruin in this nation.
Contrast that with Estonia, which has the most sophisticated e-government in the world. Everything from taxes and transfers to incorporating a business, paying bus fare, applying for citizenship, and certifying a marriage, along with tens of thousands of other services, can be performed electronically. It’s all saved on a distributed, cryptographically secured ledger developed in the 2000s as a precursor to the blockchain. Many processes are thus automated, like auto-enrolling your kid in school four or so years after filing their birth certificate.
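A rough sketch of the idea behind such a ledger (a toy, not Estonia’s actual KSI system, and the records below are invented): each entry carries a hash of the one before it, so altering any past record invalidates every hash that follows and the tampering is immediately detectable.

```python
import hashlib

def entry_hash(prev_hash, record):
    """Chain a record to its predecessor by hashing both together."""
    return hashlib.sha256((prev_hash + record).encode()).hexdigest()

# A tiny ledger of hypothetical government records.
records = [
    "birth: Mari Tamm 2019-03-01",
    "school enrollment: Mari Tamm 2023-09-01",
    "address change: Tamm family 2024-05-12",
]

chain = ["0" * 64]  # genesis hash
for record in records:
    chain.append(entry_hash(chain[-1], record))

# Rewriting an early record yields a different hash, which would
# cascade through every later entry in the chain.
tampered = entry_hash("0" * 64, "birth: Mari Tamm 2018-03-01")
assert tampered != chain[1]
```

The point of the design is that no single database copy is authoritative: anyone holding the chain of hashes can verify that history hasn’t been quietly rewritten.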
Estonia’s e-government arose out of the “hacker ethic” of the young civil servants who filled the government after the Soviet Union collapsed, leaving behind a relatively blank slate. There was a demand component too. Given the espionage threat from neighboring Russia, Estonians felt a need to fortify their institutions from within.
Policymakers in the US government have neither the hacker ethic nor requisite sense of threat to motivate such deep structural reforms. On the contrary: we feel invincible, separated by oceans and believing, as Churchill said, that “America will always do the right thing, only after they have tried everything else.” Yet sometimes the order of operations matters. Sometimes, in fact, you need to do the right thing before it’s obvious, or else you lose the ability to do much at all.
Unstable Diffusion
Certainly, plenty of people took seriously the claims of Russian disinformation in the 2016 election. Perhaps too seriously. But their solution, per the horseless carriage fallacy, was to propose a new disinformation body that would squash the problem while leaving the basic structures essentially untouched. America’s political class is thus not merely resistant to reform; it will actively adopt drastic reforms to prevent technology from disrupting 20th-century incumbents.
AI will be different. There’s no keeping it in a box. While the biggest models are expensive to train, the marginal cost of running one is pennies. And whenever a new model is released, it’s a matter of months before it’s open source and stripped of any licensing guardrails. Take “Unstable Diffusion,” a fork of Stable Diffusion optimized for generating pornography, using special training sets that ensure anatomical correctness. Since chatbots can sustain a bajillion parasocial relationships at once, the days of human OnlyFans creators are surely numbered.
Peter Thiel has a saying: “Crypto is libertarian, AI is communist.” Nothing could be farther from the truth. Central bank digital currencies will eventually displace shitcoins and give the government an eye into every transaction. AI may end up being a centralizing force in China, where the technology for Big Brother is already at scale.1 Elsewhere, however, having a personal super assistant in every person’s pocket will do far more to empower the network edge.
Indeed, within a decade, ordinary people will have more capabilities than a CIA agent does today. You’ll be able to listen in on a conversation in an apartment across the street using the sound vibrations off a chip bag. You’ll be able to replace your face and voice with those of someone else in real time, allowing anyone to socially engineer their way into anything. Bots will slide into your DMs and have long, engaging conversations with you until they sense the best moment to send their phishing link. Games like chess and poker will have to be played naked and in the presence of (currently illegal) RF signal blockers to guarantee no one’s cheating. Relationships will fall apart when the AI lets you know, via microexpressions, that he didn’t really mean it when he said he loved you. Copyright will be as obsolete as sodomy laws, as thousands of new Taylor Swift albums come into being with a single click. Public comments on new regulations will overflow with millions of cogent and entirely unique submissions that the regulator must, by law, individually read and respond to. Death-by-kamikaze-drone will surpass mass shootings as the best way to enact a lurid revenge. The courts, meanwhile, will be flooded with lawsuits, because who needs to pay attorney fees when your phone can file an airtight motion for you?
The resulting miasma will be enough to make the stablest genius feel schizophrenic. All the while, your Vanguard ETF will be skyrocketing, while those prescient enough to capitalize on the moment will be filthy rich, living and working in settings designed to do what our government can’t.
It doesn’t have to be this way. We can fight AI fire with AI fire and adapt our practices along the way. But there are an awful lot of laws that will need changing. So it’d just help if our leaders understood what’s at stake and seized the first mover advantage.
According to Acemoglu and Robinson, liberal democratic institutions exist within a “narrow corridor” between anarchy and authoritarianism. Whenever technology changes the power balance between society and the state, institutions must adapt to keep the two in harmony. That’s where our 70-year-old institutions came from in the first place: as an update to 19th-century liberalism given the new challenges created by the second industrial revolution. The so-called “fourth industrial revolution” will force institutional change just as sweeping, lest Snow Crashian anarchy or a Chinese-style panopticon become paths of least resistance. As the Red Queen said to Alice, it takes all the running you can do just to keep in the same place.
Read the next in this series: How to profit off AI
Reading 1984, I always wondered how Big Brother could see everyone at once. Wouldn’t you need as many people monitoring the screens as there are being monitored? That’s a solved problem now.
Unfortunately "AI risk" has come to mean "an AI might desire very much to make us all into paper clips" rather than the much more plausible scenarios you mention above. There are too few people thinking about the opportunities and risks of AI in the middle ground.
Thanks for an illuminating article.
I completely agree that our institutions will need to adapt, but history suggests that institutional change will be slow and reactionary. I think we should take that as an assumption.
Luckily, there is resilience built into society that I think is often overlooked when we’re talking about the initial impacts of AGI (long-term impacts are too weird to bother guessing). Parents concerned about purchasing “meaningless” children’s books written by AI will go out of their way to purchase from an author they believe is human, and AGI won’t be able to fully circumvent our empirical tools for determining who is human, at least not right away. Courts will require that lawsuits be filed in person and require significant lawyer time (interestingly, that’s wasted time!). Music lovers will pay extra for a service that delivers real music from real musicians, etc.