The order of AI risks matters

How well we adapt to AGI will determine our response to ASI

Samuel Hammond
Apr 26, 2023

The main risk from AI isn't that it wakes up and incinerates humanity, but rather the societal destabilization and potential state collapse that will occur when 5,000 years of economic history are compressed into a couple of months.

AI only needs to be better than every human expert (not superhuman) for it to become possible to duplicate AI agents into billions of artificial workers. In the Solow model, doubling the stock of capital and labor doubles total output; with digital labor effectively unlimited, the binding constraint becomes capital.
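
As a rough illustration of the Solow point (not from the original post), here is a minimal Python sketch assuming a standard Cobb-Douglas production function with constant returns to scale; the functional form and parameter values (A = 1, alpha = 0.3) are illustrative assumptions, not the author's:

```python
# Illustrative Cobb-Douglas production function: Y = A * K^alpha * L^(1 - alpha).
# Constant returns to scale means doubling both inputs doubles output;
# doubling labor alone runs into diminishing returns, so once labor (AI copies)
# is abundant, further growth hinges on accumulating capital.

def output(K, L, A=1.0, alpha=0.3):
    """Total output given capital K and labor L (parameters are arbitrary)."""
    return A * K**alpha * L**(1 - alpha)

Y0 = output(K=100, L=100)
print(output(K=200, L=200) / Y0)  # = 2.0: doubling both inputs doubles output
print(output(K=100, L=200) / Y0)  # ~1.62: doubling labor alone yields less than 2x
print(output(K=200, L=100) / Y0)  # ~1.23: with labor abundant, capital is the scarce input
```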

Indeed, AI doesn't even need to surpass human level to begin steamrolling existing institutions, the very institutions we'll rely on to manage whatever other AI risks lie ahead. Thus, whatever the specific risks associated with superintelligence, it won't be developed until this economic phase transition is already well underway.

The order of events matters. If we bungle the invention of superintelligence it will probably be because we bungled the adaptation to sub-superintelligence.

1 Comment
skybrian
May 4

Yes, and also, it takes time to build the political consensus to start up regulatory institutions. Some people worry about what happens if AI gets smart enough to talk people into letting it out of a box, but maybe we should first worry that we haven't even gotten around to putting it in a box?
