The order of AI risks matters
How well we adapt to AGI will determine our response to ASI
The main risk from AI isn't that it wakes up and incinerates humanity, but rather the societal destabilization and potential state collapse that will occur when 5,000 years of economic history get compressed into a couple of months.
AI only needs to match the best human experts (not be superhuman) before duplicating AI agents into billions of artificial workers becomes possible. In the Solow model, doubling the stock of both capital and labor doubles total output. With labor suddenly abundant, the binding constraint will be capital.
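To see why capital becomes the constraint, consider the standard Cobb-Douglas form of the Solow production function (a textbook illustration, not original to this post):

$$Y = A K^{\alpha} L^{1-\alpha}, \qquad 0 < \alpha < 1$$

Constant returns to scale mean doubling both inputs doubles output:

$$A (2K)^{\alpha} (2L)^{1-\alpha} = 2\,A K^{\alpha} L^{1-\alpha} = 2Y$$

But if only labor doubles while capital stays fixed, output grows by just $2^{1-\alpha} < 2$. So even with billions of duplicated AI workers, output scales sublinearly until the capital stock catches up.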
Indeed, AI doesn’t even need to surpass human level to begin steamrolling existing institutions — the very institutions we’ll rely on to manage whatever other AI risks lie ahead. Thus, whatever the specific risks associated with superintelligence, it won't be developed until this economic phase transition is already well underway.
The order of events matters. If we bungle the invention of superintelligence, it will probably be because we bungled the adaptation to sub-superintelligence.