13 Comments

This is a really compelling synthesis of interdisciplinary ideas. Like Jason, though, I think using evolution as a precedent is tricky and nuanced. Evolution operates on large populations, while AI training is performed repeatedly on a single "individual". The claim that both processes are at heart the same form of optimization is a type of ergodic hypothesis* - perhaps this is true, but it is not self-evidently so.

* https://en.wikipedia.org/wiki/Ergodic_hypothesis


Amazing post, thanks for pulling all those links together into a coherent whole.

Sep 22, 2023 · Liked by Samuel Hammond

Your article offers a compelling timeline for the emergence of AGI within the next 10 to 15 years. However, it doesn't touch upon the role of memory consolidation processes like sleep, which are essential in advanced biological cognition. How do you envision AGI addressing the complexities of memory consolidation, learning integration, and synaptic homeostasis? And how might this factor into your projected timeline for AGI?

Sep 22, 2023 · Liked by Samuel Hammond

Thanks for an excellent post! I'd had some gut feelings about these issues, but this puts some "wood on the ball". I especially appreciate that you tie seemingly complicated issues back to theories and models that are intuitively grasped by the layman.

I am wondering, however, about innate behaviors and/or the predisposition of animals to learn certain things with little effort (e.g., a fawn walking within a day versus a human taking a year). I wonder if we won't adopt some modular approaches in training that can either improve or speed up particularly desired attributes. I'm not of the neurology world, but I've read a bit about the "modular mind" hypothesis and can easily imagine AI moving toward many specialized nets generating options for an executive-function AI. This seems especially promising to me, since narrow AI has bested humans with regularity.

Sep 23, 2023 · Liked by Samuel Hammond

"AGI" already exists in the big LLMs, it's just not human in nature. Where we are now, is on our way to "superintelligence", and there's no reason why it should take 20 years to get there.


> The same information-theoretic arguments that make near-term AGI plausible also put bounds on the plausibility of a runaway superintelligence, as a system with a given compute budget can only extract so much entropy from its training data and environment. That doesn’t make a “god-like” superintelligence impossible, but it does mean that we won’t get the true singularity until the requisite computing infrastructure is online, perhaps sometime in the 2040s.

I think this argument runs into problems because ASI/AGIs don't have to work by emulating brains -- they can have more efficient algorithms. They can also use more efficient algorithms than what current scaling laws predict. If you assume that a human childhood is around 1e24 FLOP, we're already using 1-2 OOMs more compute on frontier AIs than we do on humans, suggesting that if we were to match human-level algorithmic performance per FLOP, we'd have dramatically superhuman AIs, even just using existing hardware. Not to mention that we're using around 1e-4 or 1e-3 of the global compute supply on individual training runs right now: we could get several OOMs by just consolidating existing hardware.
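
For concreteness, here is a minimal sketch of that back-of-envelope arithmetic, assuming the commenter's figures (1e24 FLOP for a human childhood, roughly 1e25 FLOP for a frontier training run, and a single run consuming about 1e-4 of global compute); these are illustrative assumptions, not measured values.

```python
# Back-of-envelope OOM arithmetic for the comment above.
# All three inputs are assumptions from the comment, not measurements.
import math

human_childhood_flop = 1e24        # assumed lifetime-learning compute of a human child
frontier_training_flop = 1e25      # assumed compute for a current frontier training run
fraction_of_global_supply = 1e-4   # assumed share of global compute used by one run

# OOMs by which a frontier run already exceeds the assumed human budget
ooms_over_human = math.log10(frontier_training_flop / human_childhood_flop)

# Additional OOMs available from consolidating the rest of existing hardware
ooms_from_consolidation = math.log10(1 / fraction_of_global_supply)

print(f"Frontier run exceeds assumed human budget by ~{ooms_over_human:.0f} OOM(s)")
print(f"Consolidating existing hardware adds ~{ooms_from_consolidation:.0f} OOM(s)")
```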

I don't think this can give us high confidence in ASI soon because of uncertainty over the difficulty in finding algorithmic improvements. In general, I think that all arguments that bound AGI or ASI timelines need to factor in the difficulty of algorithmic improvement.


Great post!

Even as a layman, it is obvious to me that the surprising ability of LLMs to make sense indicates that the human sensation of making sense is an illusion layered on similar algorithms. Hence it is not productive to think of AI and HI as being different; mathematically they are already clearly similar. Physically, the jump from billions of elemental neurons firing away to so-called conscious thought cannot and should not be seen as some sort of magical obstacle to silicon-based intelligence achieving AGI.

"If it walks like a duck and quacks like a duck then it is a duck". Indistinguishability (is there such a word?) is the only way you and I can hypothesise that the other partly is actually thinking. Hence if an AI algorithm demonstrates the same behaviour as a "thinking" brain then well it is conscious. The logical hypothesis is that AI algorithm development will in any case probably come up with some sort of internal language playback mechanism (aka "thinking") as a strategy optimisation step as it seems to work well for us.


I read it all carefully, with a lot of fact-checking. It was not an easy read, but it was worth the time. It is hard to find such well-filtered and condensed texts these days, with so much white noise around. Thank you for the great job. About me: I am a VC analyst and a former quantum physicist.


It's bizarre to ask what "special sauce" human brains have that silicon chips don't have - they're totally different systems. We don't even have any solid evidence to believe that human brains think in discrete packets of data like computers do. No evidence at all.

Like every other AI maximalist, you're just incredibly far out in front of your skis, and you are there because you want to imagine that AI is going to come rescue you from the mundane existence of human life. But there is no rescue. Human beings have been living in deep and accelerating technological stagnation for 60 or 70 years, and we're all going to live and die in a world that's not substantially technologically different from the one we live in now.


I'm always interested to find thoughtful analysis of AGI timelines, and you present some ideas I hadn't encountered before (thank you!).

There are, of course, a wide range of estimates for time-to-AGI (even setting aside differences in what definition of AGI is used). I'm starting to reach out to people like yourself to see whether it's possible, by exploring the different approaches and assumptions being used, to reconcile some of the different estimates. Would you be open to a conversation (whether over email, video call, or otherwise)? If so, please drop me a line at the email address listed in the About page of my Substack (linked below).

Brief background on me: I'm a software engineer, [co-]founder of a number of successful startups, most notably Writely (aka Google Docs). More recently, I've been diving into AI capabilities and likely future trajectories, blogging at https://amistrongeryet.substack.com/.


This is an excellent article. I agree with and support almost all of it, and recommend caution not to fall into the "evolutionary optimality fallacy": no matter how long it runs, evolution does not necessarily arrive at the optimal solution, because selection pressure falls off once something is "good enough for now."

Sep 22, 2023 · edited Sep 22, 2023

Evolution didn't just scale up the sub-cortex to get the cortex. It also tested it over countless iterations and selected for the scaled-up versions that worked. What's the filtering mechanism in the case of AGI? Is it some sort of human feedback, i.e., building off of our neocortex? Is this sufficient to produce an AGI that works?

In other words, is today's neocortex as effective as biological evolution when it comes to selecting AI? I lean towards the idea that our wisdom has not kept up with our technology, so I'm pessimistic that we're as close to AGI as you suggest, but I'm interested in hearing your thoughts on this. It sounds like you've thought about this longer than I have.
