Unfortunately "AI risk" has come to mean "an AI might desire very much to make us all into paper clips" rather than the much more plausible scenarios you mention above. There are too few people thinking about the opportunities and risks of AI in the middle ground.
I completely agree that our institutions will need to adapt, but history suggests that institutional change will be slow and reactionary. I think we should take that as an assumption.
Luckily, there is resilience built into society that I think is often overlooked when we’re taking about the initial impacts of AGI (long term impacts are too weird to bother guessing). Parents concerned about purchasing “meaningless” children’s books written by AI will go out of their way to purchase from an author they believe is human, and AGI won’t be able to fully circumvent our empirical tools for determining who is human, and least not right away. Courts will require that lawsuits be filed in person and require significant lawyer time (interestingly, that’s wasted time!). Music lovers will pay extra for a service that delivers real music from real musicians, etc.
If I am reading the Metaculus page right, the latest prediction for publicly revealed AGI is 2027. This may well be over indexing on very recent events, of course.
Great synopsis, thank you. Dumb question if someone has a moment for it: Grad school taught me that research outcomes are often pre-determined by the way the research question is asked, which is often pre-determined by the funder of the project or the researcher themselves. How does AI training avoid this outcome? I see it as being particularly vulnerable in areas of subjective issues and policy solutions based on causality linkages. Please forgive my ignorance if this is, as previously stated, a dumb question.
Enjoyable writeup. Disagree with your refutation of Thiel however. The beauty of crypto is that even with CBDCs and the limitation of on/off ramps, crypto transactions can run independently. AI is communist in that you depend on centralized entities (i.e. OpenAI) for your most capable models - no single individual has the compute to train something of that nature. All "prompt-engineers" are at the will of this company and the parameter weights it chooses. I am doubtful open-source will be the de-facto with such great potential for profitability.
Great writeup. Btw, there's no way the government's going to change between now and then, more likely figure out how to abuse AGI to their own benefits. Shouldn't the goal be to just get rich then, by your argument?
So, by implication, we’re gonna need a “RealID” to have any contact with any element of government or other credential granting entity (think academia, philanthropies and the like).
The AI and machine learning systems are gamechangers, disruptive and can contribute to more decentralization. This means that even public institutions need to become more decentralized if the want to function better and be part of a better democracy
Interesting read. Are there any concerns about how AI is acting recently with the unproven but obvious meddling by progressives? There are examples floating around where asking for a joke about a man yields some joke, while asking for a joke about women yields a chastisement.
AI art, porn, etc. will dominate for one main reason: the user "creates" it. Sure, the AI is doing 99.99% of the work, but the creator still feels ownership. AI companionship cannot be far off.
To restate the issue: the likelihood of AI ruining people would be more akin to careless teens doing chores, before it even has a chance to be at risk of bot-farming as average intelligence pseudo-humans or hyper-intelligent machine "god". The IQ gap for this to happen is about 18 points (or 3-4 years of formal schooling). https://archive.ph/IkEBqhttps://www.secretorum.life/p/eponymous-laws-part-3-miscellaneous
The reason is that idiots in workplaces and subcultures are more detrimental than talented individuals with malice. This is even more true if one gets into middle management in corporate structure or community moderation in subcultural contexts. People that are "easy" are not necessarily "good", same goes for AI. https://archive.ph/rFPenhttps://alexdanco.com/2021/01/22/the-michael-scott-theory-of-social-class/
Unfortunately "AI risk" has come to mean "an AI might desire very much to make us all into paper clips" rather than the much more plausible scenarios you mention above. There are too few people thinking about the opportunities and risks of AI in the middle ground.
Thanks for an illuminating article.
I completely agree that our institutions will need to adapt, but history suggests that institutional change will be slow and reactionary. I think we should take that as an assumption.
Luckily, there is resilience built into society that I think is often overlooked when we’re talking about the initial impacts of AGI (long-term impacts are too weird to bother guessing). Parents concerned about purchasing “meaningless” children’s books written by AI will go out of their way to purchase from an author they believe is human, and AGI won’t be able to fully circumvent our empirical tools for determining who is human, at least not right away. Courts will require that lawsuits be filed in person and require significant lawyer time (interestingly, that’s wasted time!). Music lovers will pay extra for a service that delivers real music from real musicians, etc.
I'm not too interested in handwavy futurism, but it would be interesting to learn about some of these trends in more detail.
If I am reading the Metaculus page right, the latest prediction for publicly revealed AGI is 2027. This may well be over-indexing on very recent events, of course.
Wow, you're right! Updated.
Fun read, thanks for this.
Well done
Great synopsis, thank you. Dumb question if someone has a moment for it: grad school taught me that research outcomes are often pre-determined by the way the research question is asked, which is often pre-determined by the funder of the project or by the researchers themselves. How does AI training avoid this outcome? I see it as particularly vulnerable in areas involving subjective issues and policy solutions based on causal linkages. Please forgive my ignorance if this is, as previously stated, a dumb question.
Enjoyable writeup. I disagree with your refutation of Thiel, however. The beauty of crypto is that even with CBDCs and the limitation of on/off ramps, crypto transactions can run independently. AI is communist in that you depend on centralized entities (e.g., OpenAI) for your most capable models; no single individual has the compute to train something of that nature. All "prompt engineers" are at the mercy of such a company and the parameter weights it chooses. I am doubtful open source will become the de facto standard when there is such great potential for profitability.
Great writeup. Btw, there's no way the government is going to change between now and then; it's more likely to figure out how to abuse AGI for its own benefit. Shouldn't the goal then, by your argument, be to just get rich?
So, by implication, we’re gonna need a “RealID” to have any contact with any element of government or other credential-granting entity (think academia, philanthropies, and the like).
Good luck with that…
AI and machine learning systems are gamechangers: disruptive, and capable of contributing to more decentralization. This means that even public institutions need to become more decentralized if they want to function better and be part of a better democracy.
Your footnote scares me and I hope it scares others too tbh
Interesting read. Are there any concerns about how AI has been acting recently, given the unproven but obvious meddling by progressives? There are examples floating around where asking for a joke about a man yields some joke, while asking for a joke about a woman yields a chastisement.
AI art, porn, etc. will dominate for one main reason: the user "creates" it. Sure, the AI is doing 99.99% of the work, but the creator still feels ownership. AI companionship cannot be far off.
One of these AI risks was previously conjectured in Jan Hendrik Kirchner's blog post. https://universalprior.substack.com/p/making-of-ian/comment/6861049
To restate the issue: AI is more likely to ruin people the way careless teens ruin chores, long before it poses the risk of bot farms of average-intelligence pseudo-humans or a hyper-intelligent machine "god". The IQ gap for this to happen is about 18 points (or 3-4 years of formal schooling). https://archive.ph/IkEBq https://www.secretorum.life/p/eponymous-laws-part-3-miscellaneous
The reason is that idiots in workplaces and subcultures are more detrimental than talented individuals with malice. This is even more true once one gets into middle management in corporate structures or community moderation in subcultural contexts. People who are "easy" are not necessarily "good"; the same goes for AI. https://archive.ph/rFPen https://alexdanco.com/2021/01/22/the-michael-scott-theory-of-social-class/
The only way to fight against human displacement in the job market is to make humans less automatable. "Bugmen" and "midwits" (as insults) have a laundry list of behaviors characteristic of culturally stagnant service and white-collar workers. A lack of diversity in thought and of collaborative creation are two major things that make AI attractive. If one is not of a creative kind but of a handicraft kind, then Etsy-fication would be a wholesome alternative. https://archive.ph/J3ICg https://archive.ph/YXHQk https://sachink.substack.com/p/midwits-and-meta-contrarianism https://alima.substack.com/p/midwits-and-the-office
Big Brother doesn't need as many watchers as watched, just enough to make you afraid of being watched. See: panopticon