The main risk from AI isn't that it wakes up and incinerates humanity, but rather the societal destabilization and potential state collapse that will occur when 5,000 years of economic history get compressed into a couple of months.
AI only needs to match the best human experts, not surpass them, before it becomes possible to duplicate AI agents into billions of artificial workers. In the Solow model, output has constant returns to scale: doubling the stock of capital and labor doubles total output. With labor effectively unlimited, the binding constraint becomes capital.
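As a quick sanity check, here is the constant-returns algebra using the textbook Cobb-Douglas form of the Solow production function (the specific functional form and the share parameter α are standard textbook assumptions, not claims made in this post):

\[
\begin{aligned}
Y &= A K^{\alpha} L^{1-\alpha}, \qquad 0 < \alpha < 1 \\
F(2K, 2L) &= A (2K)^{\alpha} (2L)^{1-\alpha} = 2\,A K^{\alpha} L^{1-\alpha} = 2Y \\
F(K, 2L) &= 2^{1-\alpha}\, Y \;<\; 2Y \quad \text{(capital held fixed)}
\end{aligned}
\]

Doubling capital and labor together doubles output, but doubling labor alone, which is roughly what duplicating AI agents amounts to, runs into diminishing returns: with the conventional capital share α ≈ 1/3, doubling labor raises output by only about 60%. That is the sense in which capital becomes the limiter.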
Indeed, AI doesn't even need to surpass human level to begin steamrolling existing institutions, the very institutions we'll rely on to manage whatever other AI risks lie ahead. So whatever the specific risks of superintelligence, it won't be developed until this economic phase transition is already well underway.
The order of events matters. If we bungle the invention of superintelligence, it will probably be because we bungled the adaptation to sub-superintelligent AI.
Yes, and also: it takes time to build the political consensus needed to stand up regulatory institutions. Some people worry about what happens if AI gets smart enough to talk people into letting it out of a box, but maybe we should first worry that we haven't gotten around to putting it in a box at all.