Podcast: AI and Institutional Disruption
My chat with Gus Docker of the Future of Life Institute
Special thanks to Gus Docker for inviting me on the Future of Life Institute Podcast to discuss my recent thinking on AI and institutional disruption. Give it a watch below!
I was also on this week’s episode of Macro Musings with David Beckworth discussing many of these same themes. The full podcast and transcript are available here.
The one area David asked about that Gus and I didn’t explore is the question of whether AIs will ever be conscious. Here’s that part from the transcript:
Beckworth: Beyond AGI, there's going to be superintelligence, that's the goal. That raises a question in my mind about what it means to be sentient or to be aware. Do you think AIs will ever be aware? Maybe we have to define the terms first, but what is the trajectory of AIs? At some point, will they be peers to us, colleagues to us? What is your sense of where this is going?
Hammond: This is probably more of a debated area. My perspective is sometimes called computational functionalism: that our brain is itself a kind of deep reinforcement learning model. The way we learn is through prediction. We're constantly making predictions, and when we walk into a room and see something we didn't expect, our neurons are firing and rewiring constantly. There are also some incredible parallels between the artificial neural networks that we're building and the way our brain works, even to the extent that some image models trained to, say, detect faces or classify cats versus dogs learn certain feature detectors, like detectors for edges or for a cat's whiskers and so forth. There's been feedback with neuroscience, where neuroscientists have actually looked into the visual centers of our brain and found circuits similar to ones that were first observed in artificial neural networks. So, our brain and these neural networks are discovering similar features and encoding things in similar ways. Even if they're not identical, because obviously our brain is wet and self-organizing and always on, there are striking echoes between the two.
So, this raises the question, will these machines be sentient? My position is that there's nothing in principle stopping us from building machines that do have some kind of inner experience. Sentience could just be being intelligent, being intentional, but I think what people are really interested in is what it is like to have an inner experience. One theory is that what our brain is doing is keeping us in a constant dream state, a waking dream: we have this video game engine in our brain, and for evolutionary reasons, it's useful to have an agent in our brain that's observing this video game model and making decisions.
As we build AIs that are more multimodal, with multiple sensory inputs like images and audio, and as we architect them to be always on rather than just taking in a prompt, producing an output, and going back to sleep, and as we add this component of self-reflection, which may be useful for developing autonomous systems that can reflect on their observations and change their decisions, I think we'll approach something that, even if it isn't conscious, at least some people will think is conscious, and it will be an active area of debate. Already, I think, there are researchers trying to develop objective tests for subjectivity, anticipating that this day will come.
Beckworth: So, it is possible. There's a chance we're going to have AIs in the future that are aware of themselves and that they exist.
Hammond: Yes, because presumably, we evolved consciousness because it had some utility. There's this thought experiment in philosophy called philosophical zombies: could you imagine somebody whose behavior is identical to a human's, but there's no light on inside, so to speak? I think the right way to address that is to ask, "Well, is a philosophical zombie possible?" Because if they didn't have this ability for self-reflection and the experience of living within the video game engine, so to speak, would they have the same behavior in the first place? I think the philosophical zombie thought experiment begs the question. I tend to think that we won't have to build models that are sentient, but as researchers struggle to make agent-like models, things that have a certain degree of autonomy, building in some self-reflective loop may end up being something that they stumble on as useful.
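An aside on the feature-detector parallel from the transcript: you can see it for yourself. Below is a minimal sketch, assuming PyTorch and torchvision are installed; the choice of ResNet-18 and the plotting layout are illustrative on my part, not anything discussed in the episode. It plots the first-layer convolutional filters of a pretrained image model, many of which resemble the oriented edge and color-contrast detectors neuroscientists describe in the early visual cortex.

```python
# Sketch: visualize first-layer conv filters of a pretrained ResNet-18.
# Many of these learned filters look like oriented edge and color detectors.
import matplotlib.pyplot as plt
from torchvision.models import resnet18, ResNet18_Weights

model = resnet18(weights=ResNet18_Weights.DEFAULT)
filters = model.conv1.weight.detach().clone()  # shape: (64, 3, 7, 7)

# Normalize the filters to [0, 1] so they can be shown as RGB images.
f_min, f_max = filters.min(), filters.max()
filters = (filters - f_min) / (f_max - f_min)

fig, axes = plt.subplots(8, 8, figsize=(8, 8))
for i, ax in enumerate(axes.flat):
    # Each filter is 3x7x7; permute to 7x7x3 for imshow.
    ax.imshow(filters[i].permute(1, 2, 0).numpy())
    ax.axis("off")
fig.suptitle("First-layer filters of a pretrained ResNet-18")
plt.show()
```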
For more on AI and consciousness, I highly recommend this recent talk by Joscha Bach titled “Consciousness as a coherence-inducing operator.”