Will AI make us more religious?
Excerpts from my podcast with Brian Chau
Over the holidays, I had the pleasure of recording a podcast with the wonderful Brian Chau. It just came out and is available for listening here:
Brian and I covered a lot of ground, but if you don’t have time for a three-hour conversation, I’ve excerpted two of my favorite sections below, using OpenAI’s Whisper to generate the transcript. Enjoy!
AI and Religion
Brian: Will the future be more or less religious?
Sam: That is the question.
Brian: It is a question.
Sam: Well, so you know, I have this way of looking at religion. For background, my Master's thesis was on the economics of religion, trying to model religions as providers of club goods. It's the very profane way economists approach the issue, because there's this sort of obvious functional role of religion. It’s not all that religion does, but you know, especially in organized religion, where people congregate in one community, there's a lot of implicit social insurance going on. People watch out for each other, they pool their resources through the collection plate. And there was this open question about why the US secularized later than Europe, and what drives secularization in the first place.
So my thesis was looking at the idea that social insurance is really the key thing, the key functional role of religion. And one of the reasons the US secularized later was because of the arrested development of our welfare state. And if you look cross-nationally, places like Sweden and the Nordic countries are the least religious, at least in terms of active theistic belief. And they also tend to have the most comprehensive welfare systems. And so I did a bunch of empirical work trying to defend this in the US context and connecting it to Medicaid and so on.
So I think for there to be a rebound of organized religion, there needs to be a new functional role for people meeting in common places and subscribing to particular belief systems, right? In the classical econ conception, the weird beliefs that you have to subscribe to or commit to in order to be part of a religion are in part a kind of selection and commitment device. If Jehovah's Witnesses have to forgo blood transfusions, that's a very strong, real costly signal of commitment, because it's not just cheap talk; there are actual costs associated. And that's one reason why stricter religions tend to grow faster.
If you take labor unions as an analogy, labor unions have declined for many reasons. But one of the reasons is because they were kind of defunctionalized. We moved a lot of labor regulation into OSHA and the Department of Labor and these statutory laws. And so there is less and less for labor unions to negotiate over. And in countries where they still have strong labor unions, again, like the Nordics, it's because they have delegated a lot of that regulatory power to collective bargaining, in some cases letting the union run workforce development programs and unemployment insurance and so forth.
So I think, in the big picture, a lot of America's dysfunctional culture is really a defunctional culture. We've kind of removed the functional purpose behind common cultures, including religious cultures. So the way I'd answer this question is: are there in the future going to be new kinds of loci to organize around, where something that resembles religion becomes functional again?
“…a lot of America's dysfunctional culture is really a defunctional culture. We've kind of removed the functional purpose behind common cultures, including religious cultures.”
This sort of gets a little bit into AI because there's this work by Joe Henrich, the evolutionary anthropologist, who traces the rise of monotheism to the development of nation states and city states in the agricultural revolution. These big gods, these sort of abstract monotheistic gods paralleled the development of centralized states. And so there could be a future where technology leads to a much more radical decentralization and with that brings a more sort of polytheistic culture.
But then there's also this question around AI where you could easily see, and you already start to see it with that guy from Google who was convinced that the Google LaMDA chatbot was sentient, that there's a sort of bell curve of gullibility. And even though AI hasn't yet passed the Turing Test, it's close enough for people on the left-hand side of that bell curve. And as it steadily creeps up and becomes much more convincing and customizable, you could easily see people start forming cargo cults around their preferred AI. It's sort of a return to forager, pre-modern religions. It won't be so monotheistic, it'll be much more animistic, right? Because a lot of our physical world will suddenly have intelligence imbued in it, and this sharp dichotomy between spiritual and physical won't be as real. But because of customizability, you also won't have people having to, you know, homogenize around some lowest-common-denominator god. They could each have their own personal god, or a community god that's optimized or trained specifically for their needs and would serve as a kind of Cassandra that they could go to in the Holy of Holies and ask for advice.
So I do think secularization has sort of reached its peak. But what comes next won't look like 18th or 19th century religion, it will look quite different I think.
Brian: Right. So in one of your pieces, you mentioned Peter Thiel saying “AI is communist, crypto is libertarian,” and you said the opposite is the case, right? You already mentioned this a little bit, I think this is a good way to get into it. First of all, why does Thiel think that and why do you disagree with him?
Sam: Well, to give Thiel his due, prima facie it looks plausible, right? Because crypto, Bitcoin especially, sort of has this ethic of agorist economics. We're going to build a counter financial system that's going to disrupt the central banking monopolies and so forth. And there's a long tradition in libertarian thinking that money, the hardness or softness of money drives everything around us, setting aside how plausible that is. There's at least a case that crypto enables not just criminal activity and dark money to move around, but at some point potentially just everyday normal economic life in a way that's shielded from government oversight. So there's a story there.
Likewise, with AI, you already see this in China where they have surveillance cameras everywhere and if there's a warrant out for your arrest, they can identify you by the way you walk with gait recognition technology. And so there's definitely a potential for AI to be a very centralizing force.
The question I have is what is likely to happen in countries that aren't as far along toward totalitarianism as China, places where there's still lots of open access and where these AI tools will inevitably diffuse to the network edge, right? Right now, some of these larger models require a bunch of AWS compute time to really run. But we're on a cost curve where eventually optimized models will fit on your cell phone. And when everyone and their grandma has the capabilities of a CIA agent in terms of intelligence gathering, being able to spoof anything they want, being able to have an army of AI laborers essentially running their court cases and so on and so forth, I think it's going to be much more decentralizing than centralizing. There will always be people at the cutting edge who have the 100 trillion parameter models that ordinary people can't access, but there will be a layer of app development below that, and then a layer of completely decentralized open source below that. And I think it strongly favors decentralization for that reason.
Brian: Right. Have you ever read the Emma Goldman quote about majorities?
Sam: Maybe. What is it?
Brian: Okay, I'm just pulling it up now. “If I were to give a summary of the tendency of our times, I would say, Quantity. The multitude, the mass spirit, dominates everywhere, destroying quality. Our entire life, production, politics, and education, rests on quantity, on numbers. The worker who once took pride in the thoroughness and quality of his work has been replaced by brainless, incompetent automatons, who turn out enormous quantities of things, valueless to themselves, and generally injurious to the rest of mankind. Thus quantity, instead of adding to life's comforts and peace, has merely increased man's burden.”
Sam: Yeah. No, that's getting at a sense that was very common, especially in the Frankfurt School that prefigured the New Left, of the critique of mass production. And obviously, in their case, very influenced by Auschwitz and the Holocaust as a sort of, “here's industrial capitalism taken to its logical conclusion.” We can have mass assembly lines for cars, but the same technology can produce mass terror and death, and that ended up being extended to this critique of consumerism and mass society; that these capitalist institutions were homogenizing culture, that everyone was going to learn English and all your local cuisine would be replaced by McDonald's and so forth.
My critique of that is: there is some of that, obviously, but it's not really a critique of capitalism per se. It's a critique of a particular era of transaction costs, where in the second industrial revolution, the late 19th and early 20th century, the way technology developed made it much easier to build High Modernist institutions, large state-building projects, FDR and so forth. That was paralleled in the old progressive era by really big companies scaling up. That was designed to capture certain economies of scale and to economize on transaction costs. We have NIST, which sets standards for the US and for the world on what a kilogram is and all these other different things that enable a degree of harmonization and coordination. We have the ability to pump out millions of Toyotas that all look relatively the same. Maybe you can pick the interior or something like that, but you're economizing on a certain transaction cost and efficiency of scale. It's not an intrinsic feature of capitalism. It's really a contingent one, and if AI does lead to a world of mass customization and DIY, you could see that turn back the clock quite dramatically.
Balaji Srinivasan has this analogy to laminar flow. If you have a very viscous liquid and you put in some food coloring, you can turn a crank that rotates the liquid and it looks like the food coloring is being mixed in, but if it has the right viscous properties, you can unturn the crank and get back to your original state. There's a sense in which I think the next stage of technological development will be unwinding certain features of the mid-20th century High Modernism. We already start to see that in media and the things that are more exposed to information technology, where the Walter Cronkites of the world are giving way to the Rush Limbaughs, a throwback to when news was much more decentralized and open access. That's one reason why in my blog, I also do a lot of thinking about the development of these institutions beginning in the 1800s to draw lessons about how if you reverse the chronology, what things could arise.
Brian: I think the interesting thing about Emma Goldman's sentiment is that she was referring to this era of industrial change, which only ultimately trickled down to media habits, whereas AI bears on media habits directly, with these large language models that are able to compose text: poems, plays, articles, of course. Really the question, I think, is what is the practical effect of driving the cost of that to zero? Is that a good thing? Does it, as you say or as Balaji says, enable people to explore elsewhere, to do better things with their lives? Or does it just flood the market with this kind of thing? Does it make it so that, because the cost of this kind of thing is so low, alternatives that relatively rise in value might still not be worth producing, because their cost is just so much higher, even though they are more unique and valuable compared to the AI-produced pablum?
Sam: It's also useful to read the classical economists like Ricardo and Henry George and these guys, because they were solving for the long-term equilibrium. When all these other rents get driven to zero, the last thing that will accrue rent will be land.
In my job at a think tank, I have a team I've had to hire, and in my experience, one of the scarcest factors is writing ability. There are lots of people with great ideas, maybe the right connections, the right attitudes; they're even highly motivated, but they can't write to save their life. Or they can put something down on paper, but it's not persuasive in the right way, or isn't bilingual across different ideologies. Now that's a solved problem, because you could take a set of bullet points that contain all the good ideas and have a chatbot write the script around them. Then if it's not good enough, you could say, well, do it in the style of the Economist, or do it for a progressive audience, or what have you.
I think this is going to be really perturbing to the professional managerial class, for lack of a better word, because they've really been benefiting from a certain rent. They've been deriving a rent from this kind of wordcel activity being a scarce, lucrative commodity. If you control something scarce, you can extract rent from it. I think in the medium term, AI is going to be incredibly egalitarian in its effects because it will take people who are just as intelligent on many dimensions, but maybe lack a certain articulacy and put them on a level playing field. Existing sources of status will decline, and even looking back at early 20th century intellectuals, the socialists and so forth like Emma Goldman and others, they were really engaged in a kind of leisure class activity. To be able to sit around in a salon smoking cigarettes and talking about late capitalism is a luxury, and being able to do that and publish manuscripts and so forth signals to everybody that you have all the free time in the world to do this kind of stuff. So ironically, there's a bit of a class critique here where AI could be very, very positive from a class war dimension because these scarce commodities that create and select for the upper class are no longer scarce.
Progressivism and Rationality
Brian: Something that I've noticed is that there are, like, no woke apologia, right? For the audience, apologia are the Christian texts that give, basically, reasons to believe in God if you're an atheist, right? They're trying to convince people who are outside of the window. And to me, or at least so far (I talked a little about this with Freddie DeBoer as well), there's been literally none of this. Right? It just doesn't exist.
Sam: Well, it's because it's an anti-enlightenment movement in a sense. On the one hand, to the extent it's sort of Protestantism working itself out through history, it's being propelled by language, and in that sense, it's based on reason. But if you look at how the discourse has evolved, it's not through persuasion. It's not through better arguments. It's through a kind of moral blackmail and forced normative conversion. Like the norms have been updated. You didn't get the memo? Did you read the room? Right?
And in that sense, it's deeply non-cognitive. It's deeply, you know, a-rational. And this is also why if you go to progressive meetings, and in my job, I have progressive funders and stuff like that. I go to their conferences and so on. It's always about like, how do we shape the narrative? You know, our policies aren't working. How can we like have better framings? And it becomes very, again, Marshall McLuhan-y, because it's like we're not worried about changing the substance. We just need to package it right. And you even see this with the Ezra Kleins of the world. At one point, he lamented the social psych research that says people are basically unpersuadable. It's all affect. And so don't even bother trying.
So you know, if you really believe that, and I think it's dangerous to believe that, right? Because if you really believe that, then it just becomes warring psyops. It's like who has the better psyop wins.
Brian: Wait, but I don't know. I kind of believe that, though. Like the caricature version; I do say this quite often, that people don't have beliefs, they have reactions. If you model people's behavior in the world especially, but also their opinions, a better starting point, a better average predictor of people's behavior, really is this kind of evolutionary psychology, this status interest, basically. It really is not rationality, right? Rationality is a very poor predictor of people's decision making.
Sam: But it's the wrong level of analysis, right? Because it's true that individuals don't have the Cartesian “I” with capital-R Reason, where we can just sit and work through issues. You know, I think that was one of the big mistakes of Enlightenment 1.0, if you want to put it that way: this idea that reason is just a matter of thinking harder. And you know, we know from tons of research that smarter people, if anything, are more prone to motivated reasoning. Look at Sam Bankman-Fried, for example. The guy clearly has a 140 IQ, he's super smart, but he's also using that intelligence to rationalize everything he's doing.
I think the way you modify the First Enlightenment project is to recognize that reason was never actually situated in individual minds. Reason is a social phenomenon. It's an institutional phenomenon. It's having a courtroom where the prosecutor and defense attorneys battle it out and the jury has to decide. It's having, you know, companies that are more rational than individuals in a profit-maximizing sense because they have accounting departments and strategy meetings and people who are, you know, running Excel macros to make sure that they're optimized, right?
So you really need assistance, you need systems, to make people rational. You could just try to exercise self-control all the time, but first of all, you'll get tired really quickly. And second of all, if you look at the people who exercise regularly, they may have a touch more willpower, but what they also do is put systems in place, like setting out their running clothes before they go to bed. So when they wake up and have weakness of will, running is easy for them because they can just throw on their clothes.
So I think that we massively discount the value of rationality and we've been pulled into this sense of moral and epistemological skepticism because all the neuroscience and psychology research is telling us that people are basically irrational. But it was never about the individuals in the first place. It's about the social settings and social scaffolding that gives people the foundation to be rational as a collective.
I think one of the things that's kind of scary to me about going down that rabbit hole of “oh, everything's just reactive; people don't form beliefs based on evidence, they just confirm their biases” is that it's true on an individual basis. But then the question is: what kind of institutional settings and social scaffolding are needed such that there are checks and balances and people are guided towards rational decision making?
Brian: I mean, first of all, I want you to clarify what you mean by it was never about the individual in the first place. Because from my reading of it, historically, people really did believe it was about the individual.
Sam: Yeah, people did think that.
Brian: Okay, okay. But the reason it succeeded is kind of different. It's just kind of like the Hansonian argument where the reason it succeeded is different from the explicit reasoning or the explicit justification for it.
Sam: Well, I reviewed Hanson and Simler's book for Quillette when it came out. So you can find that and I offer a bit of a critique on this point. I have a similar critique for like Jonathan Haidt and stuff like that where it's just all sort of precognitive moral foundations or something like that. That is true. It's just incomplete.
The way I would think of it is it's the transition from Kant to Hegel, right? Kant has this kind of super formalist architecture where we're autonomous reasoning agents and then people like Fichte or the kind of post-Kantian anarchists like Stirner turn into total egoists. But then Hegel takes Kant and kind of naturalizes it into social practices and says, you know, reason still exists, but reason works through practices and culture and people as a group, not as this like Robinson Crusoe figure that's just like standing and stroking his chin and solving problems as an individual. It's always way more social than that.
Thanks for reading Second Best! Subscribe for free to receive new posts and support my work.