AI and Leviathan: Part III
A timeline of our techno-feudalist future
This series is about the potential near-term impact of Artificial Intelligence on our government and institutions.
My null hypothesis is that the democratization of powerful AI capabilities will be at least as destabilizing as the printing press. The printing press was also a mere information technology, and yet it led to civil wars and uprisings against the established order, and ultimately drove the consolidation of the modern nation-state.
Institutions are shaped by the transaction costs associated with bargaining and coordination, search and information, and monitoring and enforcement. While the internet impacted these cost structures to an extent, near-term AI will likely alter them dramatically, dislodging us from the basic institutional structures we inherited from the early 20th century.
As the saying goes, “there’s no way out but through,” but which through-path we take isn’t predetermined. Liberal democracy exists within a “narrow corridor” between despotism and anarchy. In an ideal world, our political leaders would rapidly co-evolve our institutions with AI, striking a new balance between centralism and decentralization — a constrained AI Leviathan. But that’s a vision for Part IV.
This essay, Part III, explores the default path in which our government moves with the slowness and incompetence that we’ve grown accustomed to.
In the default scenario, the technology shock from AI will cause slower governments to either fragment or recede to a few core competencies, pushing the provision of various public goods (including security against AI misuse) into private hands. Call this the techno-feudalist timeline.
Take airports, which operate as mini, opt-in surveillance states complete with face scanners and awkward pat-downs. As a kind of company town, airports are essentially landlords. They’re largely staffed through outside contracts, and the food is always overpriced. And while most U.S. airports are owned by public entities, the wealthy country norm is to privatize them, as privately owned airports tend to be much nicer. Airports thus illustrate the many advantages of delegating security and other public goods to quasi-feudal organizations, as private companies:
aren’t obligated to respect your rights in the same way as governments;
are easier to trust due to reputation mechanisms, market competition, and explicit contracts that tie their hands; and
can use their “right to exclude” to create vertically integrated, technologically sophisticated user experiences.
Indeed, we voluntarily cede our rights to private organizations on a daily basis. We have nitpicky Homeowners Associations, casinos that track how much you drink, comedy clubs that confiscate your phone at the door, companies that make you sign an NDA before a lunch meeting, and employers that digitally monitor your productivity second-by-second. If you don’t like trading off these natural liberties, you can always move, quit, or leave.
As AI democratizes capabilities with significant negative externalities, it will simultaneously unlock new institutional forms for dealing with those externalities. More and more of social and economic life will thus be driven behind walled gardens. Different organizations will cater to different preferences by offering different bundles of rights and services, and compete on their ability to keep up in the defensive-offensive AI arms race. Examples include:
Location-based bans on devices that can read private thoughts from facial expressions; extract keystrokes from background audio; cheat at games undetected, etc.
Private schools that maximize student outcomes with the help of AI tutors and disciplinarians;
Ultra personalized healthcare services that arbitrage around medical privacy laws to access genetic and biometric data;
New forms of identity verification to mitigate the proliferation of deepfakes and catfish bots, a la Worldcoin;
Monitoring and compliance platforms for underwriting goods and services in lieu of static regulatory frameworks;
AI-based arbitration mechanisms for rapidly adjudicating disputes;
Gated communities and privately owned “smart cities” that offer all of the above in addition to predictive policing, secured infrastructure, and a variety of AI-based amenities.
Versions of these already exist, but are either low-tech or highly concierge. As AI lowers costs and expands wealth, access to superior, privately-provided services will blossom and drive substitution away from legacy public goods. The cumulative effect will be to pick away at many basic government functions.
The rapid rise of ride-sharing apps like Uber and Lyft is an example of this dynamic in miniature. Prior to ride-sharing, every city regulated its own taxi service, quality was poor, and only the wealthy could afford “black car”-style options. Then the internet and mobile revolutions arrived and dramatically reconfigured the structure of transaction costs. Suddenly riders and drivers could be connected directly, slashing search, information and bargaining frictions. Quality could be enforced through reputation and monitoring mechanisms rather than formal licensing regimes. And price discovery could be supplemented with machine learning to improve safety, reduce wait times, and optimize travel routes. Today, Uber is even exploring the use of predictive AI to “pre-match” ride requests based on users’ contextual data.
Despite often violent resistance, the benefits from ride-sharing ultimately forced a microcosmic regime change in cities worldwide. Traditional taxi services still exist, but often only because of latent regulatory privileges, like special access to the airport. Regardless, in markets like New York City, the percentage of trips done by taxis flipped from ~90% to ~10% in just five years, shifting the market’s governance from a public commission to competing private platforms with inbuilt social credit scores. The next step is Level 5 self-driving. Robotaxis are already on the road in several jurisdictions, and will surely be equipped with intelligent monitoring systems to keep riders on their best behavior.
As AIs reach human-level capabilities and beyond, the organizational economies that drove centralized forms of regulation will collapse in other areas of life as well. In some ways, this process will look like the 19th and 20th centuries running in reverse. Communities were once much more gated, for instance, if only by distance. Local school houses were the norm. Social insurance was based in mutual aid. Fake news was commonplace, so information flowed through trusted networks. Law enforcement was sparse and supplemented by private security. And regulation, to the extent it existed, was often supplied privately through exclusive clubs and associations.
The Default Future
To see how we get from here to there, let’s extrapolate AI progress on its current trajectory under the default scenario where the U.S. government evolves minimally or not at all. Predictions are hard, especially about the future, but informed speculation is better than nothing, if only for scenario planning.
2024 - 2027
Based on current trends, in a few years the vast majority of content on the internet becomes synthetic. Shared narratives break down and the public becomes some mix of confused, panicked and entertained by the pace of change. Between the twilight of copyright and the disruption to centralized content distribution, traditional news media and Hollywood are the first to Jihad against AI, like the techlash of the 2010s turned up to eleven.
LLMs and multimodal models start hitting enterprise, automating substantial amounts of make-work, data collection, and regulatory compliance. By nature, the technology easily integrates into legacy processes, at least compared to the bespoke automations companies were used to buying from the IBMs of the world. The economic impact resembles the downsizing wave of the late 1990s.
Several weakly agentic AIs leak onto the internet, infecting computers with intelligent malware that reminds security experts of the famous Morris worm. While they aren’t about to destroy the world, the internet starts to balkanize as the value of AI and the proliferation of cyberattacks spur a global rush to nationalize compute and telecommunications infrastructure. Open platforms begin to gate access to counter against bots, and online discussion shifts hard into secure channels, a la Signal or Telegram, with zero-knowledge protocols for verifying users as human.
Science starts accelerating, but society as a whole feels infinitely less legible, as if hidden behind digital hills.
2028 - 2031
Based on the Direct Approach forecast produced by the researchers at EpochAI, it becomes possible to brute-force an AGI that is indistinguishable from humans on most tasks by 2029. At first, these gigantic models are grossly inefficient, so there is still demand for narrower, distilled forms of AI that require thoughtful integrations. Yet as the inference cost from truly general models comes down, unified AI systems are able to simply shadow human workers and learn to emulate their workflow in-context, causing implementation frictions to collapse.
Job losses in cognitive sectors ramp up. Many businesses go the way of Blockbuster, but most knowledge sectors undergo an accelerated version of what the internet did to media and publishing. That is, rather than vanish overnight, a subset of incumbents consolidate in the background of rolling bankruptcies, the rise of new business models, last-ditch efforts at regulatory capture, and a long-tail of amateur creators.
By the early 2030s, the knowledge jobs that remain are highly bimodal. A subset of entrepreneurs are highly remunerated, while the best-paid jobs involve co-piloting large teams of AIs. This looks like a hypertrophied version of what the internet did to the income distribution of lawyers, only extended to many other sectors.
Most other knowledge jobs either feature intense monitoring and performance management; are rooted in personal relationships and other sources of economic rent; or are intrinsically identity or celebrity driven, just as “Youtuber” and “Twitch streamer” are today. Regardless, cognitive labor markets are now increasingly characterized by highly skewed returns and an often explicit reliance on patronage. The division of cognitive labor matters much less than it used to, and so the extent of the market — and thus the need for common legal and regulatory frameworks — begins to contract.
While the stock market as a whole is booming, the Great Repricing is well underway — a kind of Napster moment for everything. Many asset prices go to zero while a handful of companies blow past trillion dollar valuations. The limiting factor is energy and capital. Most compute infrastructure now goes to inference, and new datacenters can’t be built fast enough.
Congress is in a panic. Member offices are flooded by emails from AI lobbyists and robo-callers that affect their constituents’ local dialect. Every lawmaker has a special interest for whom the AI wave is existential, spurring a rash of ad hoc and reactionary proposals that go nowhere. Over time, however, a broader reshuffling of public choice constraints is afoot, eventually unlocking a flurry of reforms on issues that used to be stalemated, but which still aren’t radical enough.
By now, the White House and Congress have taken steps to regulate frontier AI companies. While the most powerful models must undergo safety evaluations that assess for bias and their vulnerability to jailbreaks, the classic alignment problem turns out to get easier with scale, as the biggest models prove eminently controllable. Nor does AGI immediately cause a superintelligence hard take-off, as data and compute bottlenecks still limit the amount of cross-entropy that bigger models can feasibly harvest.
While the open source ecosystem is thriving, the gap between open source and proprietary models has widened, in part because of the logarithmic nature of neural scaling laws, and in part because regulatory and liability risk have pushed the most ambitious open source efforts underground. Attention thus shifts to the broader proliferation risk from powerful AI agents as the compute requirements to train and run them trickle down.
2032 - 2035
Multi-billion dollar startups are now created by as few as 3 people designing clever workflows around teams of interacting AIs. AIs don’t shirk and work diligently 24/7, making even a single human in the loop a potential bottleneck. As agency and monitoring costs collapse, AI-native organizations begin to interface with each other at inference speeds through a nexus of genuinely smart contracts, blurring the boundaries between one AI firm and the next. The owners of the AI companies with the deepest moats start to resemble The Power Elite described by sociologist C. Wright Mills — the horizontal network of military, economic and political elites that sat atop the corporate giants of the mid-20th century.
The institutional infrastructure created in the New Deal and Great Society eras begins to crack. Aggregate economic activity is taking off, but regulatory agencies simply lack the capacity to track it all, and in some cases suffer de facto Denial of Service attacks. Indeed, sensor technologies now generate more than 10^20 bits of data per second, surpassing the collective sensing throughput of humanity. This demands a paradigm shift in the way governments extract relevant information, but the technical debt from generations of process accumulation and kludgeocracy is a binding constraint. While high-trust countries with ministerial systems embrace sweeping civil service reforms, the analogous reforms in the U.S. are caught up in interagency process, judicial review, the Senate filibuster, procurement and talent acquisition issues, and protests from public sector unions.
An explosion of lifesaving drugs and medical devices is stuck in the FDA pipeline, spurring gray markets and state-level “right to try” laws that end-run the approval process. Gated industries like medicine and law try in vain to maintain their regulatory privileges, but are ultimately pushed to embrace AI and cannibalize their older business models. Enforcement agencies, from the NLRB to the FTC, can now only enforce a sliver of their increasingly anachronistic jurisdictions.
Tax revenues decline and the IRS’s audit ratio collapses as income shifts from labor to capital and AI tax accountants work to complexify everyone’s liability, such as through convoluted partnerships. The court system is overwhelmed by an explosion in AI-assisted lawsuits and is forced to triage disputes based on type. This pushes more and more civil and commercial law into private arbitration, as AI judges can digest terabytes of evidence to render provably neutral decisions in an afternoon.
Private forms of regulation begin to emerge. While the likes of the USDA, OSHA, and CPSC are still sending humans to inspect commercial farms, do workplace visits, and issue product recalls, their de facto regulatory aperture is increasingly narrow. Consumers begin to put more trust into AI underwriters and multi-sided platforms that assure food, workplace and product safety through automated compliance and reputation systems, eliminating the problem of asymmetric information outright.
Many other federal responsibilities are simply rendered obsolete. Now that most of the cars on the road are fully autonomous, for instance, the National Highway Traffic Safety Administration feels lost for purpose. The democratization of autonomous sensors and commercial satellite networks has even displaced the value of the National Weather Service. The response to natural disasters is now primarily mediated by private initiatives, including through parallel early warning systems.
2036 - 2039
Strong AGI comes for motor control and robotics. Just as LLMs supplanted a dozen distinct subdisciplines in natural language processing, general purpose motor-action feedback models supplant the dozens of ad hoc planning and control algorithms used in today’s robotics. That is, a pre-trained model can now plug into robots with arbitrary shapes, sensors and actuators, and find an optimal control loop with a bit of play-like practice.
General purpose robots begin to be manufactured at scale, driving down costs. The dynamics that played out in the knowledge sector thus begin to affect goods production and manual forms of labor. Service innovation was already pushing GDP growth above 5%, but now physical productivity really takes off, albeit unevenly, as lingering bottlenecks make the gains highly differential across sectors. Labor-intensive human services, such as nursing, education and policing, suffer a severe version of Baumol’s cost disease. State and local governments are thus forced to either absorb accelerating labor costs or embrace AI alternatives.
Given the highly uneven quality of local governance, more people begin opting out of municipal services. Cheap and customizable AI tutors spur a mass exodus from the public education system in favor of high-tech boarding schools and home- and community-based education collectives. Neighborhoods purchase their own drones and use the equivalent of hundreds of doorbell cameras with facial recognition to form their own private surveillance network. Package thieves and burglars don’t bother entering these neighborhoods, as local residents receive push alerts the moment a street camera recognizes the gait of a crook known to a proprietary database.
Pockets of state capacity still exist, but in a way that is alarmingly derivative of the private sector. While the U.S. government of the 1940s-70s did the Manhattan and Apollo Projects in-house, such initiatives are now outsourced to the likes of Amazon, Google, Microsoft, Palantir and SpaceX. The U.S. government couldn’t even build its own cloud infrastructure if it wanted to. In the face of system failure, more and more administrative functions are thus offloaded onto private providers, turning the federal government into a glorified nexus of competitive contracts.
2040 and beyond
Moore’s Law hits the Landauer limit, but is carried forward thanks to advancements in parallel computing and low energy memristors. Exascale computers are now commonplace, causing the AI safety regime from the decade prior to break down. In practice, however, the permissions required to deploy new AI systems simply shift from governments to private infrastructure providers.
The World Wide Web is a wild west of deepfakes and intelligent malware, reminiscent of the early days when one mis-click would unleash a flood of popups and .exe downloads. This has forced the development of new protocols, certificate authorities, and access lists that use AI to monitor network traffic for security threats and deny routing to unvetted users and algorithms. Telecom providers contract directly with neighborhoods and private cities, using geofencing and network firewalls to strictly control the traffic in and out of each local area network. The richer of these neighborhoods even have their own GPU clusters to insure against network outages and help heat the community pool.
There are now individuals as powerful as today’s large corporations, and large corporations as powerful as today’s nation-states. Many city governments thus abandon their historic charters and reincorporate as Singapore-esque company towns. The corporate structure provides a means for cities to pool investors’ capital and finance public goods through land rents, the most important of which is security. AVs entering city limits must pass through checkpoints that automatically scan for contraband and log their passengers’ identities; waste water is continuously monitored for genetically engineered pathogens; and EM pulse guns scan the airspace for unauthorized drone swarms. Unless you’re rich enough to afford private security, few venture beyond city limits except for travel between secure zones. The agglomeration externalities to AI are simply too great, as rural holdouts run the risk of being ransacked by roving militias or the agents of the synthetic drug cartels.
It’s an increasingly post-scarcity world in everything except land and capital. Yet between fusion, solar and advanced geothermal, energy is not only cheap and abundant but also locally generated. Paired with robotic labor, this enables a radical re-localization of supply chains, putting globalization in reverse.
Countries now divide into three broad categories: Chinese-style police state / Gulf-style monarchy; anarchic failed state; or high-tech open society with an AI-fortified e-government on the Estonia model. The world map is thus redrawn as strong states use AI to conquer their failed neighbors and reestablish regional security.
America would be a failed state but for its archipelago of micro-jurisdictions with varying degrees of flourishing. The U.S. military thus focuses almost exclusively on internal security threats, from the growing number of sovereignty movements, to the anarchic conditions of large swaths of the country. The situation is unstable, however, as AI has resurrected dead ideologies like communism and cybernetic fascism, devolving national politics into a referendum on competing utopian movements.
It’s almost 2045, eerily close to Ray Kurzweil’s date for the technological singularity. In the background of all this political instability, research and development toward superintelligence has continued from the safety and comfort of Solano County — the free city of choice for ML engineers and other post-AGI trillionaires.
The city is home to a fusion-powered supercluster with billions of times more computational power than every human brain combined. It just completed its first big training run and the new model is ready to be tested. The engineers have read The Sequences and know the danger, but their pride, curiosity and Benthamite expected value calculations all scream “turn it on!”
Besides, who’s going to stop them?
And that is why the order of AI risks matters. Even if the intermediate stages of AI don’t kill us all, they may indirectly affect x-risk by upending the very institutions we'll need during the stage that does.