21 Comments
Dec 6, 2022 · Liked by Samuel Hammond

Unfortunately "AI risk" has come to mean "an AI might desire very much to make us all into paper clips" rather than the much more plausible scenarios you mention above. There are too few people thinking about the opportunities and risks of AI in the middle ground.

Dec 12, 2022 · edited Dec 12, 2022 · Liked by Samuel Hammond

Thanks for an illuminating article.

I completely agree that our institutions will need to adapt, but history suggests that institutional change will be slow and reactionary. I think we should take that as an assumption.

Luckily, there is resilience built into society that I think is often overlooked when we’re talking about the initial impacts of AGI (long-term impacts are too weird to bother guessing). Parents concerned about purchasing “meaningless” children’s books written by AI will go out of their way to purchase from an author they believe is human, and AGI won’t be able to fully circumvent our empirical tools for determining who is human, at least not right away. Courts will require that lawsuits be filed in person and require significant lawyer time (interestingly, that’s wasted time!). Music lovers will pay extra for a service that delivers real music from real musicians, etc.


I'm not too interested in handwavy futurism, but it would be interesting to learn about some of these trends in more detail.


If I am reading the Metaculus page right, the latest prediction for publicly revealed AGI is 2027. This may well be over-indexing on very recent events, of course.

Dec 12, 2022 · Liked by Samuel Hammond

Fun read. Thanks for this.


Well done

Dec 10, 2022 · Liked by Samuel Hammond

Great synopsis, thank you. Dumb question if someone has a moment for it: grad school taught me that research outcomes are often pre-determined by the way the research question is asked, which is often pre-determined by the funder of the project or the researchers themselves. How does AI training avoid this outcome? I see it as being particularly vulnerable on subjective issues and on policy solutions that rest on causal linkages. Please forgive my ignorance if this is, as previously stated, a dumb question.

Dec 10, 2022 · Liked by Samuel Hammond

Enjoyable write-up. Disagree with your refutation of Thiel, however. The beauty of crypto is that even with CBDCs and the limitation of on/off ramps, crypto transactions can run independently. AI is communist in that you depend on centralized entities (e.g., OpenAI) for your most capable models; no single individual has the compute to train something of that nature. All "prompt engineers" are at the mercy of this company and the parameter weights it chooses. I am doubtful open source will become the de facto standard given such great potential for profitability.

Dec 10, 2022 · edited Dec 10, 2022 · Liked by Samuel Hammond

Great write-up. Btw, there's no way the government is going to change between now and then; more likely, it will figure out how to abuse AGI for its own benefit. Shouldn't the goal then just be to get rich, by your argument?

Dec 7, 2022 · Liked by Samuel Hammond

So, by implication, we’re gonna need a “RealID” to have any contact with any element of government or other credential-granting entity (think academia, philanthropies, and the like).

Good luck with that…


AI and machine learning systems are game-changers: disruptive, and capable of contributing to more decentralization. This means that even public institutions need to become more decentralized if they want to function better and be part of a better democracy.


Your footnote scares me and I hope it scares others too tbh


Interesting read. Are there any concerns about how AI is acting recently with the unproven but obvious meddling by progressives? There are examples floating around where asking for a joke about a man yields some joke, while asking for a joke about women yields a chastisement.


AI art, porn, etc. will dominate for one main reason: the user "creates" it. Sure, the AI is doing 99.99% of the work, but the creator still feels ownership. AI companionship cannot be far off.


One of the AI risks was previously conjectured in Jan Hendrik Kirchner's blog post. https://universalprior.substack.com/p/making-of-ian/comment/6861049

To restate the issue: AI is more likely to ruin people the way careless teens ruin chores, well before it poses the risk of bot-farming by average-intelligence pseudo-humans or a hyper-intelligent machine "god". The IQ gap for this to happen is about 18 points (or 3-4 years of formal schooling). https://archive.ph/IkEBq https://www.secretorum.life/p/eponymous-laws-part-3-miscellaneous

The reason is that idiots in workplaces and subcultures are more detrimental than talented individuals with malice. This is even more true once one gets into middle management in a corporate structure or community moderation in subcultural contexts. People who are "easy" are not necessarily "good"; the same goes for AI. https://archive.ph/rFPen https://alexdanco.com/2021/01/22/the-michael-scott-theory-of-social-class/

The only way to fight human displacement in the job market is to make humans less automatable. "Bugmen" and "midwits" (as insults) have a laundry list of behaviors characteristic of culturally stagnant service and white-collar workers. Lack of diversity in thought and in collaborative creation are two major things that make AI attractive. If one is not of a creative kind but a handicraft kind, then Etsy-fication would be a wholesome alternative. https://archive.ph/J3ICg https://archive.ph/YXHQk https://sachink.substack.com/p/midwits-and-meta-contrarianism https://alima.substack.com/p/midwits-and-the-office


Big Brother doesn't need as many watchers as watched, just enough to make you afraid of being watched. See: panopticon
