Robin Hanson

The sacred, humanity's descendants, social rot


00:31 - Changing careers late in life

01:29 - Philosophy

04:33 - AI

15:20 - The sacred

29:56 - Humans exploring the universe

38:02 - Social rot

46:52 - The Elephant in the Brain



Dan Schulz:

I am joined today by Robin Hanson. Robin is a professor of economics at George Mason University and the author of The Age of Em and The Elephant in the Brain. He's blogged at Overcoming Bias since the early days of the web, where he has popularized many ideas such as the great filter, grabby aliens, prediction markets for policy, and the human tendency toward hypocrisy.

His name has even achieved the status of adjective -- ideas that are sufficiently unconventional are said to be Hansonian. Robin, welcome.

Robin Hanson:

Glad to be here. I think.

[00:31] - Changing careers late in life

Dan Schulz:

Should more people change careers in their thirties, forties, or even later?

Robin Hanson:

I don't know. There's this fascinating experiment done sometime in the last decade where they had a website where they said:

If you have a difficult decision, come here and we'll flip a coin and tell you what to do. By randomly controlling which decision people made, they could then test whether the people who made the change they were considering ended up happier than the ones who didn't, and it turned out the ones who made the change were happier.

So on average, people in a situation where they said, "I don't know what to do, should I make this big change?" and some website flipped a coin and they went with that, were better off. So I guess that suggests we don't make enough big changes. This career choice could be one of them. But there are a lot of things that should go into whether you make a career change.

So I don't know if I can say more generally than that.

[01:29] - Philosophy

Dan Schulz:

You've mentioned that philosophy is mainly useful for inoculating you against other philosophies. You've also recently started a podcast with Agnes Callard, who's a philosopher at the University of Chicago. Have your views changed on this since the podcast?

Robin Hanson:

I still think it's true that philosophy inoculates you against other philosophy, and I think that's a big, important benefit. I guess I've come to see philosophy, in more detail, as a discipline that engages a wide range of topics without systems. That is, most other disciplines have collected systems, and they use those systems to attack questions, and they limit their scope of attention and limit themselves to a certain set of systems. Philosophy covers a much wider range of topics and also just doesn't use these systems. So it's, interestingly, a sort of independent source of opinion on these topics. That is, philosophers are willing to go into areas where other people have had systems and opinions and just come up with their own different opinions on those same topics, waving these systems away and saying, who cares about that? They're just gonna do it from first principles, and it's somewhat healthy to have that independent check on the other disciplines. I think on average I would typically go with the disciplinary, systems approach.

But I've come to appreciate that intellectual competition. Basically, I think it's healthy in academia, in the world of ideas, if multiple disciplines compete and try to approach the same topics in different ways and don't coordinate to agree. Within a discipline, what they often do is decide who's on top and who's in charge and what the official answer is, and make sure they have a unified front to the rest of the world so that they can maintain their respectability and prestige by not disagreeing with each other.

And that has all the problems of, you know, lack of competition. So I kind of appreciate the philosopher's competition. They sort of go into other people's fields and say, eh, I don't know about that. What about this?

Dan Schulz:

Well, in a way, is this not what you do? You seem to take economics and enter all sorts of other fields and apply economic principles.

So are you a philosopher of economics entering other people's fields?

Robin Hanson:

I usually do it with systems, so in some sense I'm applying systems, but I'm definitely not, like, staying in my lane. If you think people should stick with wherever they were trained and the kinds of places they started, and leave the other things to the people who were initially doing them, I'm not respecting that.

[04:33] - AI

Dan Schulz:

Let's talk a little bit about AI. You've had several debates over the last couple of months on your own YouTube channel, which everyone should go check out, with Scott Aaronson, Katja Grace, Zvi Mowshowitz, and a few others. It seems like it's been a topic you've been going deep into lately.

Do you mind just briefly summarizing your take on whether or not we should be worried about this?

Robin Hanson:

You know, the thing that happened recently was that we had these large language models and they gave the impression that they were near human level intelligence. That is, you could get the impression from reading a few of their responses, "Oh wow, all of a sudden we have human level AI."

Now, they aren't really there yet, but they give you that impression on first reading, and that allowed people to look farther into the future and ask, "What will happen when that happens?" Usually we just deal with the world as it is, or the world as it's about to be, that is, the near-term versions. There are some people who focus on the long-term changes in society and where it might go, and most of us just ignore that. We don't just ignore it, we dismiss it; we say, eh, that's crazy. And here we had a moment where people would go: oh, human-level AI, that could be a thing soon.

And then they freaked out because in general people do not like big change. And most of the time we just dismiss these long-term future things even though the long-term futurists are roughly right, that the world's gonna be pretty strange and different in a while. But we set that aside because it's not now.

And I think if we ever really understood how different the future is gonna be, and we put it to a vote, we'd vote no, we don't want all that really big change. The only reason the world changes a lot is because we're focused on short-term things, and we don't look at the long run, and we don't think about it.

So this was a rare moment where a large fraction of the population was confronted with actually seeing an image of how big a change it could be, and they freaked out. Now, there have been some people specializing in talking about the long-term future of AI, and some of those people had been warning about how AI could go wrong, and they have arguments, and we can discuss those.

But I think the main thing that happened was all these other people said, "Ah!" and then they had these authorities over there saying, "You should be warning about this." And they were naturally tied together. The ordinary people and the journalists etc., said, "Oh, somebody should be like, scared about this and here are these people scared about it, so let's quote them."

So that's not directly a knock on the people who specialize in it, but that's my summary of the overall situation: at the moment we have a world going "Ah!" because they saw this vision of where it could go in the long run.

Now, this is quite different from what I think has been the consensus over the last few decades among tech specialists thinking about the future of AI. They've been relatively optimistic and relatively hopeful, so this was quite a change. And so I was curious: what's going on here? That's why I did these conversations. I did about a dozen conversations of an hour or two each, where I talked to people and asked, "What are you worried about?"

And my conclusion from those conversations is that the main thing going on here is revulsion at, or a reaction to, a very different other. That is, they're seeing the AI (I mean, it's not there yet), but the AI they imagine is very other. It's just very different, and that makes the hair stand up on the back of their neck, basically.

It's just intuitively scary to imagine others who are powerful and who could then contend with us. It's just that logical possibility that is basically the argument. People give you the impression there are technical arguments, some complicated things where you work through the math, but you realize, no, it's just the idea that there could be these AIs, and they could be powerful, and they would be other, and they would have other motives, other goals, other allegiances, and they could have a conflict with us. That's it. That's the whole thing.

And so I tried to think more about that. The framing I would suggest is that this instinctive revulsion at the other is an instinct we have, and it roughly makes sense as an evolved instinct when you're thinking of dealing with actual things around you in the world. That is, evolution, natural selection of both genes and culture, plausibly would imprint in us the habit of being wary of things that are more different from us compared to things that are more similar to us.

Because plausibly the things more similar to us share more of our genes and if we ally with them and promote them at the expense of the things that are more different, that will promote our genes. And so that's my, you know, basic explanation for why we have this revulsion of the other. But if you actually think about what natural selection would promote, this is only a heuristic.

It doesn't get it right a lot of the time. So that's the thing to realize that this heuristic can just go wrong, and in the case of AI, I would say it is going wrong. So my argument would be you are thinking of these AIs, you're imagining them in your head, and you're imagining them standing across the street from you and being big and powerful and hostile, and you're going, "Ugh, I'm at risk."

And they are in fact, your descendants. They don't exist yet. And when they exist, they will have arisen from us. They will be generated by us, caused by us, and they will be caused through a process that makes them similar to us. In many ways, they are our descendants. They won't be our descendants through DNA, but they will still be our descendants.

And evolution should make you want to promote your descendants even when they're different from you. That is, your children are often more different from you than your friends are, and your grandchildren more so, and you should expect evolution over time to produce difference and change. But evolution would promote your encouraging and supporting your descendants, and you already expected that you and your descendants might have conflicts.

And you already expected that your descendants would eventually be more powerful than you. So this is what you've already been expecting about your descendants. What your ancestors had to deal with, with you, was that each generation is largely replaced by following generations who become more powerful than them, who potentially can have conflicts with them, and who will choose differently. That is, they will have their own priorities and their own goals, and they will not inherit everything about their ancestors. They will reject some things about their ancestors and choose other things.

So that would be my story on AI: basically, what's going on now is this revulsion at the other, a very deep instinct, based on this very vivid image of this eventual AI that's more powerful than us and may have differing priorities and may have conflicts with us. If you think of that as an alien coming from another star, invading the solar system, then you have that level of caution and revulsion and fear. If you think of them as your descendants and frame them that way, then you should be okay with a substantial degree of difference and even conflict.

Dan Schulz:

So if it's really just othering our future descendants, do you think there's some possibility that people who are concerned with AI risk are more likely to have a deep belief in objective moral truth? An ironic pair of groups that tend to have this belief would be effective altruists, for whom it's one of the core tenets of the movement, but also highly conservative or right-wing or religious ideologies, which are also very concerned with moral truth. And both of them seem to generally be a little skeptical of drastic change from AI.

Do you think there's a correlation there?

Robin Hanson:

So we've had this long history of science fiction depicting technical change, and usually the villains are religious people opposing technical change, and the heroes are more liberal, open-minded folk. And we've had decades of that sort of science fiction leading many people to believe that they too would therefore be more encompassing and accepting of various transhuman descendants, including AIs. The thing to notice here is that when you're looking at this far away, in the abstract, you tend to believe you would be that encompassing and generous. When it's closer and you actually feel the threat, all that goes away. You're right back to being like everybody else.

So I think, in fact, in our society at the current moment, the woke point of view, if you will, is this idea that we have these innate hostilities, say about race or gender, and that we are culturally overcoming them: we're going out of our way to be generous and inclusive because that's our better nature, and we feel we can manage that. But there's a limit, and this is past that limit. That is, even if we try to be very generous about our innate racial animosity or suspicion, we're not gonna try to do that with this. We're gonna go, "Oh no, let's draw this line, and we must defend this line against that other."

[15:20] - The sacred

Dan Schulz:

Maybe that's a good segue into this more recent idea you've had on the sacred. So you did a deep dive to try and understand what the sacred is. Do you mind giving a brief description?

Robin Hanson:

I got it into my head that I've been proposing many institutional changes and meeting opposition that's somewhat puzzling, and sometimes what people say is: you are violating the sacred here with your proposals; your proposals are not treating the sacred as you should. And so I thought, let's figure this thing out. What is this thing, what is the sacred, how does it work? So I set myself the project of figuring it out in the hope that I could then better deal with how it seems to be in the way of things I wanna do.

So the first order of business, in my way of thinking, is usually to collect a bunch of stylized facts, just the sort of puzzles that you're trying to explain, and then search for a theory to make sense of those puzzles. I did a fair bit of reading and tried to collect what people claimed were correlates of the sacred: whatever the sacred is, here are things that go along with it. I now have a published paper with a list of, I think, 168 correlates. Anyway, I collected those correlates into themes and asked which of these themes could be explained.

So the themes I summarized are: One, the sacred is valuable. Two, we show that it's valuable; that is, we sacrifice and have strong feelings as ways to show that we value it. Three, groups of people are often bound together by a shared view of the sacred. Four, we tend to idealize the sacred: we simplify it and take away its blemishes in order to see it as sacred. Five, we set apart the sacred: we try to distinguish it from other things, we don't like it to mix with them, and we want it to be a separate, distinctive range of things and behavior. Six, we have a norm or style that we should emote and feel the sacred rather than calculate and reason about the sacred. And seven, sacred things are often abstract, but concrete things become sacred by association with the sacred, and that's an important part of our connection with the sacred. So a sacred holiday is a concrete time, and a flag or a love letter is a concrete object; that is, these abstract sacred things become concretely sacred in a striking way. This is an important feature: somehow we manage to make concrete things like flags or crosses sacred by association.

So those are the seven themes, the seven sort of bundles of correlates, and the challenge is to explain them. Now, a guy named Émile Durkheim, who a century ago founded the field of sociology, had a story about religion in which the essence of religion was the sacred, and his essence of the sacred was that the sacred binds groups together. That was number three on my list. And you can see that number three, if we postulate it as the core concept, explains one and two as well. That is, if we are binding together by seeing the sacred the same, that will make us want to value the sacred highly and show that we do, so that we can see that we are bound.

But the other four are less clearly implied by that theory, the other four being: idealizing, setting apart, feeling rather than thinking, and touching making concrete things sacred. So the question is, what theory could explain those other four themes? And I came up with a simple story that draws on a phenomenon from psychology called construal level theory. Let me pause to explain that.

Construal level theory says that whenever we see or think about anything, we have a range of seeing it near versus far. If you look visually at a scene, you will see a small number of big things up close; you are seeing those in a near mode. And you see a lot of little things far away; you are seeing those in a far mode. The idea is that our brains just think about things differently near versus far: near things we see more concretely, and far things we see more abstractly. So if I'm looking at a tree at the moment, there are a lot of little leaves, and in my mind those leaves are described very abstractly. They are little green blobs, and there's not much detail known about them in my mind, other than that they are a certain shade of green and a certain size of blob. Not even a shape to the blob. And that's typical of seeing things in a far mode: it's far away, so you see it more abstractly. That is, you have a small number of descriptors, each of which is more abstract and less detailed. And near versus far applies not just to visual things. Things in time are near versus far; things at a social distance are near versus far. In hypotheticals, something you're confident in is near, while something speculative or unlikely is far. In planning, you have high-level goals that are far, and specific constraints and practical considerations that are near. Across a wide range of our thinking, we have near versus far.

And there is a robust literature in psychology showing this actually maps onto our brain structures, in the sense that our brains are organized, to a substantial degree, in terms of structures that think abstractly versus concretely. Basically, when stuff comes in through your eye, it goes through concrete layers, which build up higher and higher, more abstract structures, and the back layers are thinking in terms of large abstract structures of what you see.

So that's near versus far, and the key observation is that this habit of seeing things near versus far is an obstacle to a group seeing something the same. If we want to see medicine as sacred, which we do in our society, we want to see it the same and agree on it. But if I am sick and about to undergo surgery and you are not, I might see it in a near mode and you'd see it in a far mode, and then we would see it differently and we wouldn't agree on it.

And that's an obstacle to our treating this as sacred because the point of treating as sacred is so that we can bind together by seeing it the same. So the hypothesis is when we have something that we want to use to bind us together, by seeing it the same, we change our habit of seeing it so that even when we're close, we see it as if from afar.

So take the example of sex versus love. Sex is something that you see differently up close versus far away. It's very noticeable, if you've ever paid attention, that your focus of attention and everything else is very different up close versus far away. So someone talking about sex from a distance may well disapprove of sex that someone up close would approve of, because they just see it differently. Love, however, is something our society treats as more sacred, and we see it more the same: we all agree that love is great. But we see love as if from afar; we're less clear whether any one situation counts as love. Is this particular relationship love, or is it not? We're not very clear on that. We're pretty clear on what sex is; we're not so clear on what love is. So even if you're in a relationship for a long time, you could still not be sure if it's love, but you're sure that love is good.

So love is an example of something we treat as sacred: we see it from afar, and that makes it harder for us to tell in any one case whether the label applies, and harder for us to reason abstractly about it, but it unites us more in our shared view that love is great.

Dan Schulz:

So is the sacred good? It seems like for institutional things like marriage to some degree it's required, right? Do you treat anything as sacred and does it improve your life?

Robin Hanson:

I think about this more as something that's not really an option. We have a strong urge and habit of treating something as sacred. Almost all of us do; I do too. So it's not really going to be an option for you to not treat anything as sacred. You could pretend you don't, but then you'd just be looking away from the things you're actually treating as sacred and not reflecting on them. So I would say the question is more: do we have a choice about which things we treat as sacred, and how sacred? And then what is the basis for making that choice, which things would be better to treat as sacred versus not? So then I can use this theory to identify the trade-offs. We could say that by treating medicine as sacred, we put more energy and effort into medicine, we devote more resources to it, and we agree that we are bound together and feel more bound to the people we share this view about medicine with. Those are positives.

The negatives are: well, whoever we're binding together against, we are distancing ourselves from those other people who don't value medicine so highly; we feel more distant from them and hostile and suspicious of them. And medicine itself we are not very good at thinking about, reasoning about, calculating about, so we don't do a very good job of distinguishing effective from ineffective medicines, or making better institutions for medicine, or even calculating the value of medicine. Those are all things that treating it as sacred prevents or makes harder. So there's the trade-off about medicine.

So for example, we actually do way too much medicine, and so this benefit of putting more energy into it is not really a benefit, at least on the margin where we're already doing way too much, and we do a really bad job of creating institutions and incentives for the medical institutions to give us good medicine and not bad. So we're paying some pretty high costs for treating medicine as sacred. But that gives you a sense of how we'd want to look at those same costs and benefits for every other thing we might treat as sacred and ask: will that go better there?

For example, I notice that I treat intellectual inquiry, and the honest pursuit of it, as somewhat sacred. That is, if you ask me to lie and offer to pay me a modest amount to do so, I am somewhat horrified that I would even consider such a thing. I don't want to be greatly influenced by financial payment or status in terms of which answers I choose in an intellectual inquiry, or even which topics I pursue.

Those are all signs that I set it apart. That is, I make a strict line between high, abstract, good thinking and practical reasoning about things I would be willing to compromise more on. I idealize it in many ways, so I can see I'm treating it as sacred. And then I can use this theory to realize that I'm gonna do a bad job of making these trade-offs. Sometimes I should take the money and lie; actually, in the abstract, yeah, it would be practical and helpful, but this thing is an obstacle to that. And I should be more flexible, mixing other kinds of reasoning and this sort of abstract reasoning all together in a big mixed-up bunch, where I do some things a little one way or the other, and treating it as sacred gets in the way of that. I'm making this strict separation, and it's gonna get in the way of a lot of abstract reasoning about which variations on this are a good idea and when we should do them. So again, we can see the trade-offs here. One way to think about it would be to ask: which things, if we treated them more as sacred, would be the least distorted by it? Which things more naturally have the features that we attribute to the sacred and force onto other things we treat as sacred? Which things are, in some sense, naturally more sacred?

And one thing that comes to mind there is math. Math is idealized in a very literal sense. Math is set apart in a very strict, definite sense. And these distortions have less effect on math, because as long as we stick to the idea of a mathematical proof, and insist that people provide proofs for math claims, these other sorts of biases we might have about it are kept in check. And actually, our educational institutions do treat math as substantially sacred. It's pretty obvious that math is useful, but it's not useful in proportion to the amount of effort we put into teaching people math and the amount of effort people put into using math in various academic disciplines. It does seem like we're going overboard with respect to math. But plausibly that's because it has these advantages as the sort of thing you treat as sacred: it can hold up to that sort of pressure and not buckle the way medicine buckles.

[29:56] - Humans exploring the universe

Dan Schulz:

It seems like at some point, according to your ideas, we'll come to a path where two roads diverge: we can either stay a quiet society, or we can evolve to become a grabby society.

So for humans, what size of organization do you think actually needs to inherit these grabby values for us to become grabby? Will we need a world government that says everyone is on board? Could we have just one country that says, we're gonna go become grabby and you guys all stay here?

Could just a startup do it, like some kids in a garage? Who needs to adopt these values for us to achieve grabbiness?

Robin Hanson:

So to be clear, we're talking about whether our descendants expand out into the universe. And the key idea, I think, is that in a large civilization it just takes any one small part that is inclined to become grabby, and has the tech and resources to do so, for the descendants to be grabby. So the opponents of grabbiness, those who would prefer that not happen, are at a disadvantage: they have to coordinate to prevent anyone from doing it. Now, that's a tall order, but in the last half century our world has in fact been coordinating more at a global level. Say, a century ago, the world was roughly described as a set of competing countries, and within each country there were elites who were mainly oriented around that country, promoting its interests or competing within it. Since then, the world has switched to one where the elites within each country are largely mixed in with, and identify with, the elites of other countries. There's more of a world elite class, and this world elite class mainly cares about its reputation and standing with that world elite community. And that's created an enormous convergence of policy around the world.

As we saw with Covid: at the very beginning of the pandemic, there were all the usual experts who had their usual stances on masks and travel restrictions and whatnot, and then elites around the world talked for a month about what to do, and they came to a different opinion: masks were good, travel restrictions were good, lockdowns were good. And then the whole world did it that way. The whole world did what these elites said, pretty much everywhere, because the elites in each community wanted to be in good standing with the elites everywhere else in the world. They basically went along with this worldwide elite consensus.

And we see that same level of convergence with lots of other kinds of regulation, like nuclear energy, genetic engineering, airline safety, lots of other things. Basically, the whole world does it pretty similarly. And this sort of worldwide regulation is often there to prevent deviations from the worldwide consensus. So for example, the only country in the world that allows any sort of organ sales is Iran. And the world of bioethicists is still irate about that and trying to make sure that somehow we can make those Iranians change their minds. So we are in a world where people converge on regulation worldwide, and they're wary of any one place allowing changes that would then disrupt the rest of the world.

And that's what you're seeing, perhaps, in the AI space here, as we talked about earlier. You might have argued: unless you've got a worldwide agreement on restricting AI, it's just not gonna work. But we already restrict many things worldwide without official worldwide agreements, because of this elite community. And so people are trying to persuade this worldwide elite community to restrict AI, and they think they have a shot at that. In some sense, that's based on the perception that if anybody allows this, it'll disrupt the whole world. So as we move into the future, any way in which competition might produce evolution that would create disruption worldwide might be limited, including AI, or genetic engineering, or nuclear energy.

These are all things that, if they had been pursued substantially in any local place, would have created competitive pressures to disrupt the whole world, and they have been shut down and stopped so far. And once the possibility of interstellar colonization is open, everybody will know that allowing anyone to leave will be the end of this era of global coordination. That is, once some colonists leave and go colonize somewhere, and then new colonists go from there, that colonization wave is out of control. It will evolve and change in ways that the center won't even be aware of for a long time, and certainly can't limit or control. And those colonists would then have waves of descendants who come back here and contest with us, and we would be largely at their mercy. That's the consequence of letting anyone leave.

Dan Schulz:

So it sounds like we're on a trend right now of, you take nuclear power, you take genetic engineering, and what seems like potentially AI as well. You have the world government saying or not, it's not the world government.

It sounds like in your concept it's a more abstract idea: a world elite community that wants to impress each other and show each other that they're in the club setting these rules. And so under this framework, it may not be that the whole world needs to agree, but it's gonna be challenging for smaller localities to cause us to become grabby.

Robin Hanson:

So initially the cost of mounting an interstellar colony will be very large, so you'll only need to restrict the organizations that could possibly afford it, right? As time goes on, that cost will fall, and so you'll have to be more strict about how many more organizations you limit. But that's been true of, say, genetic engineering or disease engineering or nuclear engineering. Even in the last half century, as the cost of these things falls, we have to be more intrusive with our regulation and surveillance in order to prevent all the people who could do it from doing it. But we may just grow into that. As the cost of things falls, we may just have more surveillance and more intrusive regulation that covers more and more people, exactly with the rationale that if we don't, then it'll get out of hand.

Dan Schulz:

This actually brings up a question: you're really unusually comfortable with descendants being very different from us. Your book The Age of Em talks about what most people would not consider humans at all, but in your view they are descendants of humans and therefore we can be comfortable with a deeply weird world.

Do you worry at all that AI or some of these new technologies could create sort of the opposite risk, where we are stuck in the 2025 or 2030 timeline indefinitely, for hundreds or thousands of years, and this moral evolution actually does not happen? We get locked in. Is that a concern that you've considered?

Robin Hanson:

Honestly, those are the two options. There isn't really much in between: either we allow evolution and competition to continue, and then our descendants become as different from us as we are from our ancestors, which is really different, or we don't allow that sort of change, in which case we lock in the current preferences and styles for a long time.

Those are the two options. There really isn't another one, and neither one looks very pretty. I don't see that much in the middle though.

[38:02] - Social rot

Dan Schulz:

It does seem like, though, that if you were pro grabby, pro us exploring beyond the earth, that's a sort of trap-door decision, right? Once you go, you're not going back. So if we do have lock-in for even the next hundred or so years, and it only takes one uprising or event or change for us to go and become grabby, then presumably the chances of us becoming grabby could actually be quite large, and so lock-in may not be such a big deal.

Robin Hanson:

So I think about this in terms of social rot. That is, all civilizations in the past have risen and then fallen again, suggesting that ours might do so too. But now we have a global civilization, so it might rise, as it's doing now, but then fall again later. This is just an empirical observation, but if we think about what we know about the way software rots, the way firms rot, and the way other sorts of adaptive systems rot, we should see that it's a pretty strong tendency.

The only solution to rot has always been some larger field of competition, where rotting things rise and then fall, but new things rise to replace them. But that larger world of competition would produce this long-run change and the grabby expansion. So the attempt to prevent that grabby expansion and long-run change would typically take the form of some global system you create that is not primarily an arena of competition, but a system that limits that competition. But that system itself would then rot, and now you face a big problem for the long run: you don't like big change, you've created this system to prevent big change, but this system rots. So the big question is how slowly or fast does it rot, and can you make it last indefinitely, or is there some time limit after which it would rot so far that it just couldn't manage to sustain this prevention of change? That's a fundamental question about social rot that I hope to start thinking more about soon. But it seems to me like a really important question.

I mean, even without this thing, we are in a civilization that's plausibly rotting. And that raises the question of how this will play out. Will there be a fall, and how bad will it be, and what will be the disruption in the transition to the next rise?

So one of the most dramatic parameters on which our world seems to be rotting is fertility. That is, fertility has been falling quite consistently with wealth, and it's below replacement in half the world now, and not obviously stopping its fall. Fertility is still falling in most of the world, even the ones below replacement. And that looks like a kind of rot in the sense of, you know, long-term civilization growth. Now, many people say, "Ah, but evolution surely will create a subset of humans with higher fertility that overcomes this." But the question is, when and how?

So the key problem is this: we've had some subcultures in our world so far that have had unusually high fertility, say Orthodox Jews or Mormons. But if you look at the fertility of those subcultures, it's falling at the same rate; it's just delayed compared to the rest. So I would summarize that as saying they are not sufficiently insular. They're still under the influence of the larger culture, and behavior in those subcultures is still sufficiently influenced by the larger culture that they are not succeeding in creating a self-sustaining higher-fertility subculture.

Dan Schulz:

To increase fertility rates, it seems like the notion of the sacred obviously helps, since certain religions, like Mormons and, to your point, Orthodox Jews, have some of the highest fertility numbers. Is it a little bit ironic, almost, that we require the sacred for an evolutionary process, which should be a little more Darwinian and require less of this human-centric idea?

Robin Hanson:

The sacred presumably evolved from a Darwinian process. It's part of human cultural evolution that allows cultures to reproduce and succeed. So it's not anti-Darwinian at all. In people's concept of the sacred, they think of it as something that rises above Darwinian selection. That is, part of the norm of the sacred is that you are to assume that sacredly driven behaviors are not constrained by, or subservient to, selfish genetic incentives. But in fact the sacred arose from Darwinian cultural evolution, and that's plausibly where it came from.

I mean, orthodox religious people are not more sacred than other people. They just have different things treated as sacred. So it's more about how much a subculture can maintain a different concept of the sacred compared to a larger culture. And, you know, it's possible for a subculture to value fertility highly enough that, if it were isolated from the rest of the world, it could manage to support higher fertility even in the face of other pressures that would promote lower fertility. The problem is that the world as a whole is a culture that supports elements discouraging fertility, and these subcultures are not insulated enough from that. They still watch TV and movies and hear the news and go to school with other people, etc., and work with other people enough that that larger culture's lower value on fertility passes on to enough of their children that they are not actually making a large, growing subpopulation.

So this is an example of rot, you see. The key point is that when systems rot, what it takes is a different system that's insulated enough from the original system to grow again. It's like in the past when empires fell: an empire might be composed of five provinces, all of which rotted together. No one of the provinces could resist the rot of the rest of the empire, because they were so interconnected within the empire. It had to be a whole other empire, pretty disconnected from the first one, that would then rise and replace the first empire, because it was not being very influenced by all the rotting features of the old civilization. So in our world, the problem is that if our civilization rots as a unit, there'd have to be a pretty insular part somewhere that could grow and resist the previous thing. So the question is how much insularity is enough.

So for example, again, the Mormons and the Orthodox Jews seem not sufficiently insular. Now you could say, how about China? China is somewhat different from the rest of the world, and China has somewhat different institutions that arose more recently, and so maybe China would rise while the rest of the world falls, and it'll work that way.

My guess is China's more integrated into the rest of the world economy than will work for that scenario, but I would be happy if that were true. My Age of Em scenario is a scenario of sufficient difference, in the sense that if the world of brain emulations is left alone and allowed to form its own rules and regulations, etc., without too much constraint by the rest of the world, then it could form this whole new fresh civilization that would then grow. But as we're seeing with AI, people may not be very willing to allow this new sphere to grow and have its own rules. But that's something of what happened with, say, computers in the last half century or so. Basically, we had a lot of other industries that were relatively heavily regulated, but with computers we said, oh, who cares about that? And then it was just largely unregulated, and it's been the fountain of growth for the half century. And now we're finally getting around to asking: why aren't we regulating that? It looks like it has a lot of influence. Shouldn't we be regulating it? And we're going, yeah, it seems like we should. And so we may be about to clamp down much more on computer-based innovation, because we're realizing we've been treating it differently from other industries, and maybe we don't want to.

[46:52] - The Elephant in the Brain

Dan Schulz:

Let's shift over lastly here to your book The Elephant in the Brain. Do you wanna give just a real quick overview of the main point the book tries to get across?

Robin Hanson:

We are all doing a lot of things in our lives, and for most of those things we tell ourselves a story about why we're doing them. And we social scientists, when we come in to study areas of life, we listen to those usual stories and we usually assume they're right. And we go on with modeling areas based on those claimed motives. So, for example, people say they go to school to learn the material so they can get better jobs and be more productive on their jobs. And we go, well, yeah, that makes sense, and then we try to make models and analyses of school based on that sort of assumption. Or people say they go to the doctor to get well because they're sick, or they vote in order to make better policy for the nation.

These are things we all tell ourselves about why we do things and for the most part, social scientists have just accepted our usual explanations. And then in all these areas there's a lot of puzzles, a lot of weird things going on. Like in medicine, it turns out there's no correlation between health and medicine. People who get more medicine are not on average healthier, strangely. And we scratch our heads about these puzzles and we come up with epicycles to try to make sense of them.

And I realized at some point that you could hypothesize that we're just wrong about our motives. The first place I did that was in medicine, as a postdoc long ago, and I said, "What if we're just wrong about our motives for medicine? What if it's just for a different reason than we say?" I said, "What if medicine is about showing we care and letting other people show they care about you?" If that was your motive, it might make a lot more sense of these details. It would make sense of the fact that medicine isn't on average helpful, but you're still able to show you care. And a number of other features of medicine: how it's a luxury good, in terms of how spending goes up with wealth, or how we don't seem very interested in lots of other things that affect health much more than medicine. And you could say, well, this is what's going on; we're not aware of it consciously, we are thinking in terms of other motives, but this is the real motive, and this motive then makes more sense of our behavior there. And so The Elephant in the Brain is a book saying this is generally true of a lot of things in our life. There are a lot of ways in which we're wrong about our motives. The first third of the book sets up the argument for why we might be wrong and why it might make sense to be wrong. And the last two thirds goes over ten areas of life, saying for each one: here's the motive you think you have, here are some puzzles that don't make sense with that, and here's a motive that, if it were your real motive, could make more sense of those puzzles.

And so we suggest that this is in fact your motive. And by motive, what we mean is the larger force that structured your behavior and institutions altogether over time, the function it's serving, not what's in your head and what you're thinking about when you're choosing. That's the elephant in the brain: the idea that we have all these hidden motives, illustrated by ten different areas where we say, in this area, you think you're doing it for this reason, but this is more probably why you're doing it.

Dan Schulz:

So to what extent do you think this is advantageous for us? Maybe for a thought experiment, let's say that we found a way to genetically engineer future people to be much more aware of their hidden motives and their tendency for signaling, and we could dial it up. If we say, today, 80 or 90% of all behavior is motivated by signaling, we could dial it all the way up to 100, or we could drop it down to zero.

What do you think would be optimal for our society?

Robin Hanson:

The first question to ask is, given the way the world is, what's optimal for you? That is, in an equilibrium sense. So the story here has to be that, at least until recently, this was the equilibrium because this is what it was optimal for you to do. That is, it was optimal for your ancestors, in the world they lived in, to have these false beliefs about their motives, and to act the way they did without understanding why they were doing things. That was actually more effective for them at achieving their evolutionary goals of reproducing and succeeding and getting respect and things like that.

Then there are three variations we could imagine. We could say, well, the world's changed and maybe it's no longer optimal for individuals to be so ignorant. In which case we're out of equilibrium, and then we all need to learn to behave in a new way; we might need to learn to be more aware of these things, and then we'll be in a new equilibrium because the world's changed and the old equilibrium doesn't apply. That doesn't seem very believable to me. It seems to me this is probably still the equilibrium.

Or we could say, look, this is the equilibrium for most people, most of the time. But you don't have to be most people most of the time; it might not always be the best thing to do. So we could ask, "For whom might this be an exception?" And I think the strongest case might be: if you're a social scientist whose job is to understand what's going on in the world, and you're wrong about what everybody's doing, you're just wrong about your job, which is to find out what people are doing. So as a policymaker or social scientist, you should just try to look at what's actually happening, even if that's not very natural for you as an individual.

But it might also be true that for some specialists, like managers or salespeople, it's especially important to be able to understand what's going on in the world. They need to consciously think about their marketing and management plans, and for them it'll be worth the personal awkwardness of not smoothly interacting with other people in the way they otherwise would, in order to consciously know what is actually happening, because they need to know that for their job. So you might be somebody who exceptionally needs a better conscious understanding of what you're doing. Related might be nerds like me. That is, most people glide through the social world smoothly, intuitively, without knowing why they do things, and they don't realize the words they say don't really match their actions, but they're smooth enough that nobody notices and everything goes fine. For nerds, the intuitive machinery that tells you what to do doesn't work so well, and when you just do the intuitive thing, it doesn't tend to impress everybody and make them all feel comfortable. So maybe for you, you need to more consciously think through what you're doing in order to work smoothly in the world. And for that, you may need to confront your motives, to understand them better, to consciously reason about them.

So if we're in an equilibrium where this is still optimal for most people most of the time, you might ask, will we ever be in another equilibrium? And in the long run, plausibly. As I said, long-run change could be pretty large, and so I could very much believe that in the long run this will change. I don't have any particular reason to think it'll happen soon or fast. But I have made the long-run guess that our descendants will know that what they want primarily is to reproduce. You don't know that. That is, evolution produced you primarily to reproduce, and that was evolution's goal in setting up your mental structures and your intuitions and your feelings, etc. But it didn't choose to just give you the abstract concept "I want to reproduce" and have you plan all your activity out based on that abstract concept. That's not how it worked, because initially your mind really wasn't capable of supporting that abstract concept or applying it to very many things, so that just wasn't an option. So long before that was an option, evolution gave you this complex mixture of all these different heuristics and feelings and habits that, on average, in ancestral environments, produced reproduction, but apparently recently is failing, as we talked about with the fertility fall.

Somehow those heuristics are just going wrong lately. But evolution's kind of stuck; it didn't really know how to tell you abstractly how to deal with new circumstances by giving you an abstract goal, it just had all these concrete habits and feelings. In the long run, though, it seems more robust to just have creatures who know in the abstract: I want to reproduce in the long run, so let me calculate my great-grandchildren and figure out what life strategies will do that. They just know that's what they want, they calculate it, and that's straightforward. That seems like a more robust way, in a changing world and changing environments, to achieve the end of having more descendants. So that's a way in which they would be, say, less misled about their motives, but they could still be misled about other parts of their thoughts.

Dan Schulz:

Robin, this has been an absolute pleasure. Thank you for coming on the show today.

Robin Hanson:

Nice to talk to you, Dan.