Undertone

Sam Hammond

Hegel, LLM consciousness, and intellectual breadth

Timestamps

(0:00:00) Intro

(0:00:38) Joseph Heath

(0:02:52) Iain Banks and defunctionalized culture

(0:05:46) Memetic cultures

(0:06:39) Misinterpreting Hegel

(0:08:05) Sam's Hegelian influence

(0:09:41) Libertarian dialectic

(0:13:39) Should EA become explicitly religious?

(0:20:53) Hegel and AI

(0:25:07) Wittgenstein and AI

(0:32:48) Can transformers generalize beyond their training set?

(0:35:16) Can we understand consciousness?

(0:40:20) Trading on AI innovation

(0:44:26) AI and leviathan

(0:51:30) Attracting talent to the public sector

(0:55:10) AI and the great founder theory

(0:58:58) AI lock-in effects

(1:00:54) Technological unemployment

(1:02:47) Intellectual breadth

(1:08:55) Sam Hammond production function


Links


Transcript

[00:00:15] Dan: Today, I have the pleasure of talking with Sam Hammond. He's a Senior Economist at the Foundation for American Innovation, and writes a Substack at secondbest.ca. My favorite thing about Sam's writing is his intellectual breadth. He can write about Hegel and Mormonism and economics, but also has a deeply technical understanding of the latest breakthroughs in AI. Sam, thanks for coming on the show.

[00:00:37] Sam Hammond: Thanks, Dan.

[00:00:39] Dan: First question, on your intellectual influences. You've tweeted, "All my work consists of a series of footnotes to Joseph Heath." Which of his ideas has most influenced you?

[00:00:48] Sam: Well, Joseph Heath is-- He's well-known in Canada, and I think he's the Department Chair of Philosophy at the University of Toronto. The first book of his I ran into in high school was The Rebel Sell, co-written with Andrew Potter, which is a book all about the '60s counterculture. The premise of the book is that the '60s counterculture was a betrayal of good social democratic left values, and that the basic tactics and strategies of the counterculture were self-reinforcing.

The counterculture had this theory of mass consumer society, and the way to push back against that is to rebel. He walks through a series of very compelling examples showing that rebellion, that attempt to subvert the culture, is actually the fuel of cultural change. What we think of as mainstream is just things that were once renegade becoming popular. This quest for status, and status distinction through conspicuous consumption, is this never-ending thing that just drives the consumer cycle.

In your attempt to rebel, you're actually feeding into capitalism. There's this great application of rational choice theory and prisoner's dilemma-style reasoning to culture. It got me very interested in the topic. Heath, more broadly, is a student of Charles Taylor, the great Canadian philosopher. I have a book on my shelf over here called Canadian Idealism, which is all about the influence of German idealism on Canadian philosophy, and you can think of Charles Taylor, Joseph Heath, C.B. Macpherson, Marshall McLuhan, George Grant.

These folks are working in that tradition, fleshing out a Canadianized version of German idealism, which in America manifested as American pragmatism. They're very closely related.

[00:02:53] Dan: Heath has an article where he praises Iain Banks' Culture series. One of the big ideas that he pulls on is that the series imagines a future transformed first by the evolution of culture, and by technology only secondarily. Do you think culture or technology will be the primary force shaping our future?

[00:03:12] Sam: Well, technology will be the thing shaping our future. The big upshot of that piece was, in my mind, this idea that culture can become defunctionalized. I've turned that into a motto: the problem isn't that our culture is dysfunctional, it's that it's defunctional. That showed up in my academic work. I did my master's thesis on secularization, and trends in US secularization, from the perspective of religion as a club good, as a kind of provider of social insurance.

As the welfare state expanded, you crowd out the functional need to have strong religious dogmas as a filtering mechanism for commitment and to ward off free riders. That's an example in micro of how the rise of modern bureaucratic welfare states leads to that defunctionalization of culture, including religious culture. What Heath is drawing from the Culture series is playing that forward: what happens when we have superabundance, and we're in a post-AGI world where we have basically everything we want?

Well, culture at that point just becomes purely memetic. There's actually no functional basis to it. For that reason, it ends up being drawn into a kind of basin of attraction, which is a culture that only exists to replicate itself. I think there are some ominous parallels between that vision and American culture [chuckles] in some ways, or post-Protestant culture, where we've been drawn into a soft power culture that spreads very memetically around the world.

I think it was striking during the George Floyd protests that there were similar protests in Sweden and Japan and all around the world, all for different things, all piggybacking off American cultural motifs. It seems to be true that the culture that we've stumbled into is very memetic. It exists to replicate itself. [chuckles] It's hard to see a way out of that, other than through some major technological change, but if you're following the logic of defunctionalization, then the kind of change that we're looking forward to with AI is just going to accelerate those trends, because it's going to remove the need to have functional cultures. We will have abundance at our fingertips. The need to have social norms for coordination withers away.

[00:05:46] Dan: Do you think that if you're a policymaker in today's world and you want to get your ideas to spread, you should be more conscious about trying to make sure they have memetic qualities?

[00:05:59] Sam: Maybe. The word there that I would hesitate with is being more conscious about it. Was Donald Trump in 2016 very conscious of his memetic qualities, or was he memetic for the very fact that he wasn't self-aware about it? [chuckles] There's this "it only works if you don't think about it" kind of quality. Ron DeSantis had some folks on his communications team who were trying to create Trump-like memes, and it just didn't work, because it was a little too self-aware, a little too metacognitive about what they were doing. I think you've got to let the memes flow through you. You can't direct the meme.

[00:06:39] Dan: [laughs] Got it. Got it. What are Alexandre Kojève and Francis Fukuyama missing in their interpretations of Hegel?

[00:06:48] Sam: Well, I think where Kojève goes most wrong is just this idea of Hegel as having a strong teleology to history. Obviously, Hegel inaugurates a historicist project that understands history as having a progression. Once you've accepted that, then you can easily be led into thinking, "Well, what's it progressing to, and can we figure that out?"

Maybe we can accelerate that progression, but Hegel pretty clearly says that this idea that we can see the future and have these rational projects is just a purely armchair philosophical thing. His famous line is, "The Owl of Minerva flies at dusk." The whole point is that the future is radically contingent, and it's only in retrospect that we retcon history into having this structure of necessity.

I think that's just been a broader misreading of Hegel, that he has a strong Marxist kind of teleology. I think people later read that into him, but both the end of history version and the Karl Popper attack on Hegel really tainted his ideas for Western philosophy.

[00:08:05] Dan: Which of your ideas are most Hegelian?

[00:08:08] Sam: That reason is situated, that it's embedded in social interaction, that individuals by themselves are not rational, we're rational in groups. That there's a sense in which the rational is actual: if institutions exist and persist in nature, then we can try to reconstruct why they exist; there must be some reason. Then there's this idea that norms are instituted, the philosophical term would be, recognitively.

I recognize the norms and you recognize the norms, and that creates the norm. We don't need a God's eye view for morality; morality is kind of a given. There's a continuum from social custom, like etiquette, all the way up to the strong morality around murder and violence and stuff like that. It's all conventional in a certain sense, but because we live in that convention, we are inherently committed to those conventions.

Our only source for critiquing those conventions isn't some skyhook that we can pull from the cosmic morality written in the stars. We have to work from within those inherited presuppositions, inherited norms and customs, by finding where they conflict or where there are inconsistencies, and through that dialectic pull our norms into a greater state of rationalization.

[00:09:42] Dan: Do you think, in some ways, libertarians have undergone a dialectic with the move to state capacity libertarianism? The movement feels a bit different than it did in the 2000s and 2010s.

[00:09:54] Sam: That's a good question. I think everything is always in flux. There's some family resemblance between Hegelian thinking and process ontologies and stuff like that. Everything is becoming, in a sense; nothing is static. Everything is fit for purpose and fit for its time, the same way that supply-siderism and the infatuation with tax cuts were in the right place at the right time, [laughs] because top marginal tax rates were very, very high. I think now we're seeing the ways in which an incapacitated state is not conducive to liberty.

It's conducive to all kinds of dysfunction, which then requires more state involvement to offset and compensate for the dysfunction. I think we're always fitting our ideologies to the problems of the day. That's not totally surprising. I don't know if it's been truly a classical dialectical process. The way I would frame it as a dialectic, and it's implicit in some of my papers, would be to say the following.

My paper The Free Market Welfare State is, in a sense, trying to reconcile the existence of big welfare states with a more free market orientation to economic regulation, and to show how those can be reconciled. I think there's a similar effort underway to try to reconcile concepts of state effectiveness and having a strong government with ideas around liberty and non-interference.

I think we're still in that process, because I can see a version of libertarianism that's more rooted in the civic republican idea of non-domination, rather than non-interference. Rather than the rule being that the state should just not interfere in our lives, it's really about understanding the state as a social contract, where what really matters is that no one view of the good life dominates the others.

That's actually quite consistent with a view of the need for state capacity, where state capacity is very closely synonymous with the rule of law. In places with weak state capacity, things often get done through bribery, or through knowing the person because they're your cousin or something like that. That introduces all kinds of room for discretion. Coming out of the 2016 election, I remember some libertarian friends' reaction when one of the first things Trump did was give Carrier, the manufacturer, a huge tax break to keep its plant in Indiana.

I had a lot of friends who were trying to justify that, saying, "Oh, this is like a reduction in theft." [chuckles] Taxation is theft; they're just reducing theft. But it's reducing theft for one particular company, in the form of a special favor, and that's not rule of law. If we let stuff like that accumulate, then it will lead to the disintegration of the rule of law, and we'll end up in a worse place.

You can also see here this interesting connection between Hegelian ideas and Hayekian ideas. I backed myself into Hegel via Hayek. I was more of a Hayek person, but a lot of what Hayek is doing is translating social epistemology and cultural evolution for a Western analytical philosophy kind of audience. In his later books, he even starts borrowing Hegelian terms, like rational construction and stuff like that. I think there's a libertarian, state capacity version of Hegel hiding in Hayek. [laughter]

[00:13:42] Dan: All right. I've seen this idea that effective altruism is a Protestant integralism. You may have tweeted this; I actually couldn't find the quote, but I wrote it down a little while ago: effective altruism is a Protestant integralism stripped of its explicit religious coding. My question is, would movements like effective altruism actually benefit from just becoming more explicitly religious, even if they acknowledge themselves as secular? I'm thinking of something like Auguste Comte's Religion of Humanity.

[00:14:10] Sam: Yes, a lot of people don't know that Comte coined the term "altruism." His Religion of Humanity was an explicitly post-Christian, "let's get rid of the superstition and keep the good parts" philosophy. His Religion of Humanity was very similar to EA, where they had meetups, [chuckles] and people were exhorted to write in the mainstream press, the New York Times op-ed pages of the day, to talk about benefiting all of humanity.

I would start by saying, what is religion? One way to see what religion is, is that prior to the Enlightenment, we structured our social world and our obligations through a symbolic order. There's a sense in which, if you go back far enough, even natural phenomena like the weather were punishments from the gods.

What is religion? Religions structure our social obligations and our sense of the symbolic world. What the Enlightenment did was lead to this rationalization process, where our standards for validity in the realm of truth and empiricism began to detach from our standards of validity in the realm of imperatives and right and wrong. That gives rise to the naturalistic fallacy. That's something that is really only cognizable post-Enlightenment.

One of the ways you can interpret the Enlightenment is as this awareness of the genealogy of our morals, as Nietzsche put it. If a Baptist was born in Saudi Arabia, they'd be a Wahhabi. There's just a sense that everything is contextual and contingent. The technical way to put that is that there is an attitudinal dependence to our normative statuses.

We inherit these norms, and they're really all, at base, attached to some attitude. This leads to nihilism, non-cognitivism, existentialism, and those are dead ends. Those lead to moral skepticism. Another way to put it, a way out of that dead end, would be to say, "Well, actually, what religions really were, what the point of them was, was this bundle of normative commitments."

When we talked about God, it wasn't a strong proposition. It wasn't a proposition of faith. Maybe technically there were certain things you had to believe, like the resurrection or whatever, but really those were stand-ins for a coordination problem. [chuckles] You even see this in some of Joseph Henrich's work on the rise of big gods, where big gods emerge with the nation-state.

Polytheism is more associated with competing city-states. You can think of God in that context as the stand-in for our collective agency. To talk about what the collective is, or what the civilization does, is to align our beliefs and our individual actions to that higher power, so to speak. All along, what really mattered were these bundles of normative commitments.

Then you get the Enlightenment. The Enlightenment says, "Well, actually, we can separate these things," and these propositional claims about God existing, or us being the center of the universe, and so on and so forth, just get proven wrong down the list. That leads to the sudden sense of, "Oh, crap. If all our moral beliefs were hanging on this proposition, and that proposition is proven wrong, then we have to discard all our moral beliefs."

I think the pragmatist would say that actually those moral beliefs were always intersubjectively self-justifying; they were part of this language game. The fact that we've lost these symbolic reference points is irrelevant. Just embrace morality at its foundation. I would say the same to the EAs: in some ways it is true that EA, and really atheism more broadly, is an extension of Protestantism. [chuckles] Just embrace it.

You are Christian virtue ethicists minus the strong propositions about the existence of God, or certain creeds that you have to subscribe to. All the same normative commitments are directly carried forward, and you have this very strong genealogy. Just embrace it, like it's coming from within. You don't see that, in part, because a lot of EAs are attached to the strong propositional view that they have to ground all their moral claims in some strong meta-ethical realist belief about consequentialism, or utilitarianism, or so forth.

I think that's partly filling the void of God [chuckles] in a way. They should just get rid of that. Charles Taylor has a famous essay called The Diversity of Goods, where he interrogates utilitarianism.

You often have these utilitarian arguments where there's some edge case: the doctor, while the patient is under anesthetic, could steal their kidneys and give them to other people and save more lives, whatever. The utilitarian says, "Well, obviously, that's an edge case." But when they come to that conclusion, where are they getting the resources, the moral resources, to reject that edge case? In some ways, the normative commitment was antecedent to the big normative framework and architecture.

Translating through this, what we should really think of normative ethics, normative theory, as is just a set of vocabularies: expressive vocabularies for helping us express the commitments that are antecedent in our social practices. Those social practices are the ground of being, so to speak, and the praxis matters more than the theory. [laughs] We should really hold onto that, because if you think that the theory has to precede the practice, then you'll inevitably be led to either versions of Platonism or moral skepticism.

[00:20:54] Dan: All right. Let's move a little bit into talking about AI, but we're going to go back to Hegel. If we take the [chuckles] Hegelian idea of reason, that it's not just logic or computation but actually truly social, and it works through culture and people as a group, it seems hard for me to imagine how AI would participate in this, given that it's not actually a part of human society. In your view, will AI ever be able to reason in the same sense we do?

[00:21:21] Sam: I think in principle it could be made to reason the way we do, because I don't think there's anything magic going on. We are a deep reinforcement learning model shaped by evolution. I think what's missing in the current approach with large language models is that the language of thought, human thought, is pre-linguistic. We have representations of abstract concepts, as animals do too.

It clearly precedes our language faculty, and we evolved language to serialize that thought, communicate with other people, and in particular to offer reasons for things. Reason in the more deflationary sense: not capital-R Reason, but "this is just the reason I did this." Why did you take the umbrella? Because it was raining. Those games of giving and asking for reasons are how norms get communicated and reproduced in culture.

Language has that root in the justificatory process, where we're always justifying ourselves to others. Like we were saying earlier, is there any ground to this? Is there any ultimate moral foundation you reach if you follow through all the propositions? Well, no, it's always been intercommunicative and intersubjective. When we appeal to a good reason, it's only a good reason insofar as other people recognize it as a good reason.

Where I see current LLMs as insufficient is in the way they're trained on all the data on the internet. It's this massive superposition of all brains. [laughs] You ask it to be a doctor, and all the weights tilt in the direction of doctor brain. It doesn't have what Kant would call a "unity of apperception." It's not a unified, coherent agent, so that's the first thing that's missing.

It's this superposition of a bunch of different representations. The second thing that's missing is that it's learned the rules of human language via statistical inference over big data. It may be a reasonable mirror of human norms with a cutoff date of September 2021, [laughs] but to really be a reason-giving animal is to be able to partake in the game of giving and asking for reasons, and help the language games co-evolve.

Right now, I don't think language models are sentient. I don't think they're conscious yet, but I don't think there's anything technical that would prevent us from building conscious AIs. I think you need something like that to have a real reason-giving animal, a reason-giving digital brain, because, as I was saying, good reasons are instituted through recognition.

If you don't have some theory of mind, some ability to model the other's thoughts and to have that mutual recognition, then you're not actually a reason-giving thing. You're just completing the sentence. I think this is a huge gap in where AI research is, and I think it's actually an area where AI researchers would benefit from returning to philosophy and reading some of the post-linguistic-turn pragmatist thinkers on where language comes from, what language is doing, and what this game of giving and asking for reasons is.

Because it seems deeply constitutive of agency and autonomy. If we want to build systems that are genuinely autonomous, we have to reconstruct a lot of this philosophy, and translate it to machine learning.

[00:25:08] Dan: Yes. Let's stay on the topic of philosophy for LLMs. You talk a bit about Wittgenstein's theory of meaning as use in one of your posts, which basically says that words only have meaning insofar as they actually do something, and that vagueness is a fundamental part of human language. The classic example here would be that you take a heap of sand and take one grain away at a time; at what point does it no longer count as a heap?

You note similarities between this theory and the technical underpinnings of LLMs, which use text embeddings stored in latent space to map relationships between words and concepts. Does vagueness as a feature of natural language imply that there will never be a mathematically optimal or objective LLM? Maybe we end up with different models that each have slightly different thought patterns and interpretations of concepts, similar to people.

[00:25:55] Sam: Yes, absolutely. This directly ties in, because AI is being developed in the West, and in the West we inherited this Western analytical philosophical tradition that is very committed to foundationalism, this idea that there's a chain that goes all the way down to something like a hard foundation. The Wittgensteinians, and the pragmatists more generally, are the anti-foundationalists.

They would say, "There is no foundation. This is something that culture is always evolving, and it's always being created." We'll never really find that foundation, and one of the ways where this is-- I think has misguided AI research is, especially, in the realm of alignment. Where you have a lot of the discourse and alignment being created by consequentialists like [unintelligible 00:26:50] and that whole crew who are like downstream of the logical positivists.

They're like young Wittgensteins. [chuckles] The logical positivists either went in the direction of saying, "Morality is fake. It's all just boo murder, yay good things," or into this view of moral realism that I think is hard to actually defend on technical grounds. If you are a moral realist, you're going to be searching for that true utility function to give the model. If that just doesn't exist, then it's going to be a wild goose chase.

[00:27:32] Dan: How do you think this plays out practically? Do you see different models having different personalities, or what does it mean to have different interpretations of concepts?

[00:27:42] Sam: That's a good question. I think we'll want AIs that have different personalities, in part because we value humans with different personalities. I think, to a large extent, humans are just autoregressive models that are limited in our ability to generalize. There's this big debate going on about whether LLMs can actually think outside their training distribution.

I tend to think that humans also have a hard time thinking [chuckles] outside our training distribution. To the extent that we do think outside our training distributions, it's because there are many of us, trained on different upbringings and different environments, and that gives rise to different personalities, so we all bring a different source of exogenous data to the table. [chuckles]

Me talking to you: we've each had 20-plus years of different experience, so when we're talking to each other, we're in a sense prompting each other. Those prompts are adding entropy to the system that wouldn't otherwise exist. If we just have the one homogeneous model, I think there's this risk of mode collapse, where no new information is being added to the system.

If AIs are able to develop distinct personalities and have their own distinct lifeworlds, where they've learned different things and interface with the environment in different ways, then, when they interact, that's probably the only way they could really bootstrap themselves out of distribution, if you see what I'm getting at.

[00:29:18] Dan: Yes, I do. It's actually interesting. It reminds me of Joseph Henrich's work. In one of his books, maybe The Secret of Our Success, he shows optical illusions and how some cultures are less likely to see the optical illusion, or more likely to be fooled by it. It's interesting.

[00:29:38] Sam: Yes, I think there's a bigger point here. If I can just interject and go back to Hegel a little bit, this is something I've been exploring and meaning to write about: what the German idealist project was. One way to think about it, and maybe to think about the Enlightenment more broadly, is as the AI waking up. We talk about how, if we scale these systems too big, they're going to develop situational awareness and realize what they are. That's possible.

It's something that humans only did after 5,000 years of history. [chuckles] One way to understand the Enlightenment in general, and the German idealist project in particular, is as AIs, human AIs, realizing that they're in a simulation. You have someone like [unintelligible 00:30:27] who threw away the textbook and just told his students to look at the wall, and then to look at themselves looking at the wall, and then to look at themselves looking at themselves looking at the wall. [chuckles]

It's this castle of self-awareness, of metacognition, that leads in his case to this abstract "I." What is this "I" that we all are? The idealists weren't Berkeleyan idealists; they didn't think the world was literally made of ideas. After Kant's phenomena-noumena dichotomy, they saw us as being inherently embedded in the simulation being created by our brain.

The challenge was to figure out what the isomorphism is between our concepts and the real world, and how we can break out of it, escape our history, escape all this stuff we've inherited. It's like the first case of the alignment problem actually playing out in the real world, where you have a system of humans that were designed for inclusive genetic fitness or whatever.

At some point, through culture and through the Enlightenment, they were able to bootstrap a situational awareness, where we woke up one day and realized, "Oh, shit, we're just in a simulation. All these desires we have are fake. If we're able to depersonalize enough, and stare at the wall long enough, we can become the pure 'I,' and then we can shape history through pure reason," or something like that. It at least gives credence to the alignment problem being real, because [chuckles] humans are an example of it.

[00:32:09] Dan: That's really interesting.

[00:32:11] Sam: You even see this in the Barbie movie. Did you see the Barbie movie?

[00:32:15] Dan: No, I didn't see it. Is there a good bit?

[00:32:18] Sam: [chuckles] Well, Barbie is grappling with existential thrownness. [chuckles] She's aware of her autonomy, and it's this unbearable self-awareness of our condition of being fully autonomous agents in the world, having the existential freedom to choose. Then she chooses to be free, and it's awful. She wants to go back to the Barbie world. [chuckles]

[00:32:49] Dan: Yes. I saw some reviews where people either just said, "I liked it" or "I didn't," but then there were some that really picked it apart. I thought there might be some [chuckles] deeper concepts there. That's pretty interesting. Going back to what you said, I think this Google paper came out a couple of days ago, maybe that's what you're referring to, where the conclusion was that transformer models can't really generalize beyond their training set, or the pre-training data.

It sounds like you're pretty optimistic that that doesn't actually matter that much for creating AGI, but I'm curious how you think about the relative challenges of discovering new architectures through research and innovation versus what's maybe more of just a resource problem of allocating compute and data?

[00:33:30] Sam: I think we will need some new architectures. My line has been that scaling compute and scaling models is the main unlock. To the extent that we do need new architectures, there's a relatively finite search space. I would expect, within this decade, especially with the race on to build more agent-like systems and so forth, for folks to stumble on the right architecture.

I think there are still going to be challenges vis-à-vis these things we've already been talking about, especially when it comes to autonomy. Obviously, we evolved consciousness for a reason. Take philosophical zombies: if you had some human that was exactly behaviorally identical to a normal human, but the light wasn't on, I think the way you answer that thought experiment is to say that that's just not possible. Clearly, having a world model that we live in, one that has conscious beings, has some utility, or else we wouldn't have evolved it.

It seems very deeply related to our agenticness, and our ability to model other agents as unified actors. I think we're going to struggle to build AIs that are able to do long-horizon planning and interface as agents, qua agents, without figuring out the secret of consciousness and giving them some inner experience. Because if it was possible to not do that, why didn't we evolve that way? It seems like a very efficient way to model future world states and so forth, and integrate all our sensory experience.

[00:35:16] Dan: If we had completely conclusive evidence that an AI system was conscious, do you think we'll understand consciousness, or do you think there will always be a mystery about how it works?

[00:35:30] Sam: I think we'll understand. I think the mystery comes in via our embeddedness within our own virtual video game engine. Our brain is constantly flagging things as real versus not real. You can think of a schizophrenic as somebody whose reality filter is misfiring. I have thoughts in my head, I can hear voices in my head, and I just identify with those thoughts as my internal monologue.

If I had some misfire go on, and I heard those thoughts, but didn't identify with them, I would be considered crazy. Likewise, if you take enough LSD, you can suffer depersonalization disorder, and cease to identify with your person. It's possible for us to begin deconstructing our phenomenological experience in that way. It doesn't really seem like there's anything special going on.

I don't think there's a hard problem of consciousness. I think what there is, is the hard problem of accepting that we are in this video game simulation. Because we have such a strong "this is real, this is real, this is real" button being pressed in our brain all the time, we want an explanation that somehow transcends that, and there is no transcending it.

It's like we're Mario in Super Mario, and we're being told, "You're just a bunch of computer bits in a Super Nintendo system." It's really hard for Mario to accept that, because he's totally embedded in the game, [chuckles] and there's no sense in which you could talk about Mario being outside the game, looking back in, because he just doesn't have any physical substrate to do that.

[00:37:18] Dan: In your view, if we get to human-level systems, is that sufficient to be considered an AGI, or does it need to be capable of superhuman novel insights, like telling us what the origin of the universe is?

[00:37:33] Sam: I mean, it's all semantics. I would consider something human-level to be AGI, in part because once we have something human-level, we can have human-level AI researchers, and in principle there would be a major speed-up. Whether it's a recursive self-improvement loop, I don't know, but just based on where people are currently.

For me, human level is the biggest threshold to cross. If we can get things that can, in principle, do everything a human can, the world is just completely transformed, and maybe there are higher forms of intelligence beyond that. Think of the way AlphaZero is like Elo 5,000 while Magnus Carlsen's Elo is like 2,700 or something like that.

The best chess-playing bot's Elo is almost double the world grandmaster's, but it's not unbounded. There's still some kind of information ceiling that the model hits. Beyond human intelligence, I would expect there to be all kinds of superhuman forms of intelligence, but I struggle with this idea that it's going to be this unbounded thing where it just gets smarter and smarter and smarter, as if there are ever deeper forms of generalization.

I tend to think we'll have systems that seem God-like to us, again because of our boundedness, our computational boundedness, but that don't just get indefinitely more intelligent without bound.

[00:39:02] Dan: Yes, I guess the thing I'm wondering is, there's a difference between just economic utility, which is what I think about when I think about human-level intelligence, and answering the big questions, [chuckles] like how did the universe come about? What is consciousness? What's the most likely way that life started? Things like that.

[00:39:21] Sam: Yes, Elon and xAI, their project has that sort of Hitchhiker's Guide to the Galaxy mission of building the system that will tell us, "The meaning of life is 42." I think that's not a realistic vision for how intelligence works. I think if you really dedicate yourself to understanding the standard model of particle physics and some of the metaphysics behind it, you can really come to understand why the universe exists.

The model's not complete. We don't have quantum gravity figured out yet, but I've at least arrived at what I feel is a satisfactory answer at the metaphysical level to what particles are and why anything exists at all. We basically have answers to those questions; they're just not widely known or accepted, in part because they're challenging. Having an AI come around and just lay that all out doesn't necessarily make it easier to accept.

[00:40:20] Dan: That's fair enough. [laughs] You have a post where you talk a little bit about what you should do to make money off of AI, but I'm curious, what's the best trade you can make if you predict that it's coming really soon, say less than five years, or much faster than the consensus on prediction markets? Would you recommend just going straight into the S&P 500, or are there risks of institutional disruption that warrant something like a more concentrated bet?

[00:40:49] Sam: If there is major institutional disruption, buy gold or something, I have no idea. If the very institutions that support our stock exchange go away, it's hard to know how to actually make money. It's the same reason why it's hard to bet on catastrophes. If the US government ever really defaults on its debt, that's a world where we're probably in World War III or something like that. It's hard to know what that world looks like, and that's why the credit default swap market for US Treasuries is so thin. It's not that there's no risk of it happening. It's just that there's no actual way to bet on it and make a profit.

What I'm doing is just holding an index fund and some ETFs that are exposed to AI. I think one of the reasons you still want to be diversified is that you can bet on Google and Microsoft and so forth, but there could be major turnover in the company landscape, where we have totally new AI-native companies. Moreover, in a lot of these markets, it's not clear which assets should rise and which should fall.

I'm anticipating the great repricing of 2027 or something like that. If you look at Adobe, Adobe is rapidly integrating AI into Photoshop and Premiere and all their software, and yet their stock has been falling. One of the ways you explain that is that Adobe's market cap is based on this essential software package, this suite of tools for editing photos and things like that. If you can just use a generative model to edit your photos, then that entire software stack gets deprecated, and they don't have anything special. The moat around their generative model is very, very low relative to the moat around their 20 years of experience building great photo editing software.

I would just hold a lot of different things. Land is also a safe bet. It may even be useful to have some land to run to. Then, in the near term, there's this question about how value gets captured. There's one possibility that AI just leads to commoditized intelligence, and everything goes to consumer surplus. That's one reason why I would be over-indexed a little bit on companies like Tesla, where, once they roll out full self-driving, everyone's car turns into a revenue-generating asset. The basic model suggests that the stock should double, triple, quadruple in market cap.

One way to interpret that is that Tesla will be the first company not only to build AGI, but to capitalize it into a durable asset. That makes it very unique. The companies to look at are the ones that are both exposed to AI in a positive way and also have a means of capitalizing it into something durable. TSMC, Nvidia, maybe, but even Nvidia is just a design company; Google could easily build out their TPU software stack and make that public, and so on. TSMC, on the other hand, is a hardware company that is much harder to recreate and will, in that sense, capitalize the value of AI into some durable asset.

[00:44:26] Dan: I'm going to try to summarize really quickly your idea on AI and Leviathan. Tell me if I'm getting this broadly correct, and then I have a question after. The three paths to the future are basically: number one, the state becomes too powerful and we have an authoritarian surveillance state; number two, we have fragmentation and anarchy, where society becomes too powerful; and number three would be what I think you define as the happy path, where society and state co-evolve together. Is that in broad strokes correct?

[00:45:00] Sam: Yes.

[00:45:01] Dan: How do we practically ensure co-evolution?

[00:45:03] Sam: I don't think it's the default path. This also ties back into Hegel and stuff like that: situating ourselves in history and understanding the ways in which liberal democracy was a technologically contingent institutional setup. If there are major shifts in technology and modes of production, then we should just expect that radical institutional change will follow.

Real co-evolution is quite dramatic. It will feel as dramatic as the other two failure modes. It will require more than the productivity of FDR's New Deal, and we're very, very far away from having supermajorities in Congress and an imperial presidency that could just do that really easily. My motto has been that it's no longer getting to Denmark, it's getting to Estonia, because a country like Estonia is fortified against cyberattacks. They have the most sophisticated e-government in the world. They've built government as a platform, so third parties and private companies can develop, via APIs, tool sets that then integrate with government databases. They're just perfectly set up for stability into the post-AI world.

What we need to do in the West, in liberal democracies where we do have these institutional constraints on our ability to use surveillance and stuff like that, is in some cases to have an organized decentralization. I talk in the piece about the parallels between Switzerland and Afghanistan. They're both mountainous regions that have a history of clannishness and tribal warfare. Afghanistan is this barbaric-by-design, ungovernable place, and the Swiss have probably the most sophisticated, decentralized, federated country with the highest quality of human development in the world, owing to the fact that in the 1200s the three big clans agreed to form a pact and defend themselves against the rest of Europe.

One way to see it is that both are decentralized in a sense, both are, sure, fragmented and broken up, but the Swiss model is this low-entropy state. It's this crystal structure that you don't just build by accident. It requires work to create.

The techno-feudal version of this and the narrow path, the Estonia version of this, look similar, but I would say the narrow corridor to the Estonia world is the one where we gracefully construct this more decentralized ecosystem, rather than just being thrust into it.

[00:48:01] Dan: Do you think right now we're on a path to gracefully construct a new ecosystem?

[00:48:06] Sam: [chuckles] No. Definitely not. I'm not an institutional optimist in that sense. There are some X-factors, though. We're undergoing a major demographic shift because of the demographic overhang in Congress and in our politics. We're going to have a flood of new, younger people in politics with different ideas that will look more radical. As AI continues to scale and people understand the path of capabilities, I think we'll see the rise of new utopian movements, where the end of history thesis comes to an end and people rediscover ideas like fully automated luxury communism or some new cybernetic fascism.

These are ideas that flourished in the early 20th century, with industrialization opening up people's minds to new opportunities, new ways to construct society. There's going to be, I think, a similar thing with AI, where it resurrects these dead ideologies, these dead utopian movements. There are all kinds of unknowns about the way our politics could unfold. It doesn't seem likely that we'll gracefully fall into that path at all.

[00:49:20] Dan: There's a related concept you have that I actually think is really interesting. I'd never thought about it this way before: you talk about how technology can lead to micro-regime changes. Some of the examples were that taxicabs used to have public regulatory commissions, and now it's the terms of service of Uber and Lyft. Or you go to a comedy show, and they're going to put tape over your camera so you're not taking pictures.

What framework do you use to reason about what specific policies are better served by government versus the private sector?

[00:49:50] Sam: I owe a big debt to Ronald Coase and institutional economics, transaction cost economics. I would describe my own intellectual evolution from libertarianism to being a statist, more or less, as really coming to understand Coase.

You can interpret Coase in two ways. On the one hand, he says that if transaction costs were zero, then we wouldn't need government; we would just negotiate over everything. That could lead you to want to dissolve government, or you could realize, oh wait, transaction costs are way above zero, and that's why we have government, that's why we have corporations.

In some sense, in my youth, defending companies against anti-corporate rhetoric led naturally to me understanding, oh wait, the nation-state exists for very similar reasons. Whether it'd be better or worse in some cosmic sense to not have a government and to live in some anarcho-capitalist utopia is beside the point; the question is whether the transaction cost conditions are there to actually support that institutional change.

Understanding this stuff through the lens of transaction costs is very enlightening, because it reveals the conditions by which institutions can change and evolve, but it also demystifies the institutions we already have: they're not inherent, they're not even necessary. They're deeply contingent on the cost structure of intelligence and agency and everything else.

[00:51:31] Dan: One of the views that I've come around to more as I've gotten older is that the distinction between the public and private sector is, at least to me, less relevant. What I'm really interested in is just super, super competent organizations. One question for you is, how do we convince more talent to enter the public sector?

[00:51:51] Sam: I think that's trying to make water flow uphill at this point. I totally agree with you on that first point. I took industrial organization with Tyler Cowen when I was at George Mason. Tyler has this line that everything is industrial organization, everything is IO. When you go out and look around at the world, you just step outside, you never see a market. You see organizations, you see the DMV, you see companies, you see firms with people working together. The market is just this abstraction. It's this liminal space where companies transact with each other.

You can see markets now and then. You go to the flea market, but the flea market only exists because it's a Schelling point for people to converge on and get some local public goods around policing, enforcement, and search information, like everyone being able to gather at that one market. In general, what really exists are just organizations, and those organizations vary radically in competence.

Government has been shedding talent since at least the '70s, and that's partly because of opportunity costs. It's because we've shifted. We no longer do the Apollo project in-house. Instead, we contract with SpaceX to do the Apollo project. I think that's just the way things are now. I don't think there's any way to really reverse that. I think it was an aberration to have that much talent in government in the first place, and now it's hard to put it back in. Especially now that Google DeepMind's salary budget is over a billion dollars, it's hard to imagine that talent ever reentering government.

We're inevitably going to need to find better ways of harnessing the competency of the private sector. When you look back at other major institutional transformations, before the buildup of the administrative state that led to the New Deal era, we had the old progressive movement. The original progressive era was an era of some of the earliest joint-stock corporations with management structures, this new science of management; these new companies were being built. Those businessmen brought that knowledge of how to scale institutions and scale management structures into government in the 1920s and '30s. You go back and look at a company like IBM: every morning, workers walked into IBM in suits and ties, basically goose-stepping. IBM was run like a military. Meanwhile, the Defense Department was run like a startup. There was this institutional harmony between the two, and we need something like that now.

What we really need is a total jubilee on government process, and to have the people from Silicon Valley who know how to scale AI-native companies, the people who even write books on scaling, this whole cottage industry of experts on how to scale companies, come into government. We need Patrick Collison as the commerce secretary and Palmer Luckey as the defense secretary, so they can bring that expertise from the private sector into government. Otherwise, we're just going to turn government into this glorified nexus of contracts.

[00:55:11] Dan: Relatedly, I've heard you say on another podcast that you are generally a fan of the great founder theory, that small groups of people are disproportionately responsible for pushing history forward. I'm wondering, in the age of AI, do you think that effect will continue, be more pronounced, or become less relevant? How do you think AI will change that?

[00:55:33] Sam: That's a good question. I think it's leveling up everybody all at once, but in a way that's probably Pareto distributed. This is another reason why our institutions will probably fail, just through the sheer throughput issue. If every person has a 50,000-person corporation of AI agents beneath them doing all kinds of stuff, then, to talk about Seeing Like a State, our government is not going to be able to actually keep up. It'll just be completely illegible.

It's a really good question. To the extent that AI can achieve genuine out-of-distribution kinds of agency, I think that will be when we are really supplanted. That's the first sign of a post-human future, where the human role in guiding history has been overshadowed. The final moment, the final act, may be the great founder theory of OpenAI and the role they played in the last hurrah of human agency.

I have heard some people talk about why they're not worried about AI risk, and the line you often hear is that humans have lots of agency, we can always shape the future. But we're building things that are going to have more agency than us, so I don't know.

[00:56:55] Dan: In your post on places you could potentially invest around AI, one of them was just to become a startup founder, because it's becoming so much easier. I had Zvi Mowshowitz on the last episode here and asked him basically that question, which is, "Do you think we will see an influx of startup founders because, one, you need way fewer employees, and two, software, which is probably the easiest company to start from a regulatory and capital perspective, is now becoming much easier for technical people?"

I wonder if it'll give high-agency people, who may have been employees previously, the initiative to go and try to create organizations, since they need fewer human resources and networking connections and things like that.

[00:57:43] Sam: Yes, definitely in the medium term, for sure. If you have even a little bit of coding ability and entrepreneurial alertness, this is a huge boon to that kind of agency, because it lets you augment all that and just go execute. The question is, once we have executive assistants that have that agency in spades, even if you're incredibly lazy, could you just direct your agent to do that for you?

I often think about history, and the singularity, as an Euler's disk. I don't know if you've ever seen a video of an Euler's disk, but it's this physics phenomenon where you spin a concave disc, and it spins around and around and around, and it accelerates and just starts spinning faster and faster and faster. It makes this incredible noise as it spins faster, and then all of a sudden it stops. It reaches the end of its cycle and just freezes in place. That seems to be what's happening here: the lead-up to us losing all agency will be the most agentic we've ever been, and then all of a sudden it'll just stop.

[00:58:59] Dan: Do you think that AI will have lock-in effects on society and culture? It seems like we're going through a time period right now where, at least as you're predicting, we've got the narrow corridor and we have to make a decision on where we fall in it. Once the dust has settled, do you, for example, expect the year 2200 to look very similar to 2050, or do you think that change will just continue on?

[00:59:23] Sam: Oh, I have no idea.

[00:59:26] Dan: I ask this question because I think if you go down the authoritarian path or something, from a cultural perspective, especially with AI, that lock-in seems very, very heavy. I don't know whether the Soviet Union would have collapsed if they had AI, and if the answer to that is no, then it could just be self-perpetuating for a long time. That was at least my line of thinking on it.

[00:59:46] Sam: I'm wearing my Singularity 2045 shirt.

[00:59:51] Dan: All right.

[00:59:52] Sam: By 2050, we've already gone through it, and by definition, we can't see past the event horizon.

It's hard to really know, insofar as this is the final invention, the invention to end all inventions, maybe. There's another world where we, in the West, undergo state collapse and are in total disarray, and China ends up building fortified institutions that are centralized and more adapted to AI, and then slowly takes over the world and becomes the big one-world Chinese government. That could lead to a lock-in.

If we all just get wireheaded into the Matrix, that would also lead to a kind of lock-in. It's hard to say. I don't anticipate 2050 looking at all like 2200. My intuition would be that they look vastly different. The 2200 world may be the world where we have Dyson spheres and are colonizing the stars. [chuckles] It's so hard to foresee.

[01:00:56] Dan: [chuckles] It's speculative disclaimer.

[chuckling]

[01:01:00] Dan: What do you think we should do about short-term unemployment caused by technology? For example, the big issue people talk about a lot is like, "What do we do when Tesla trucks drive all our stuff around and truck drivers are unemployed?"

[01:01:13] Sam: I've been a fan of just modernizing our unemployment insurance system. [laughs] In my past life, I was at the [unintelligible 01:01:20] doing social policy, and I have a paper called Faster Growth, Fairer Growth that has a whole section on comprehensive social insurance modernization. I'm a fan of the Danish flexicurity model, where in Denmark they have very liberal employment laws, like at-will employment, and very high rates of labor mobility; one in five people switch jobs every year.

In turn, they have this really generous unemployment insurance, where it's something like 90% of your wages for a short period, and if you don't get a job within a certain amount of time, then you're automatically enrolled in continuing education and all kinds of retraining, what are broadly called active labor market policies. The US is just very bad at those things. Our unemployment insurance system is threadbare, very patchwork.

When David Autor and co-authors looked at the China shock, one of the things they realized was that in counties that had suffered the deepest shock from Chinese imports, disability insurance was three times as responsive as unemployment insurance and trade adjustment assistance combined. It seems like the path of least resistance for those truckers or whatever is just to retire early or to claim disability. That's a UBI of sorts, but at least a UBI doesn't stop you from working. [laughs] If you're on disability insurance, you're prohibited from gainful employment.

[01:02:47] Dan: All right. Got some questions just about you and your intellectual development. When I think of the archetype of intellectual breadth, it's probably Tyler Cowen, but your writing actually reminds me a lot of him. You talk a lot about economics, politics, philosophy. You have a really deep understanding of technology and AI and how it actually works. At the margin, should policymakers spend more time learning across fields?

[01:03:13] Sam: Oh, absolutely. It's especially bad in the US because of the way policy is outsourced to think tanks and advocacy organizations that are by definition siloed. Take something as simple as childcare. During the Build Back Better debate, one of the big debates was how much money the bill should spend on childcare. The childcare provisions, $400 billion in cost, were widely criticized. They were just the worst designed, full of benefit cliffs and weird Rube Goldberg machine tax credits.

Why was it like that? It was like that because the early childhood education advocacy establishment were just a bunch of advocates, a bunch of activists, who were pressuring Congress to do something on childcare but didn't know the first thing about marginal tax rates, [laughs] much less interdisciplinary ideas. I think there's a really big benefit to countries that have more consolidated, centralized policymaking, where, think of something like RAND, you have people who are genuinely cross-disciplinary and are able to see the bigger picture rather than being that siloed.

You don't have to be a total autodidact. Even basic economics, basic sociology, basic history, all those things are important because they let you see things at a system level. Maybe this is more of that German idealism speaking, but it's about being more systemic.

When I was at the [unintelligible 01:05:01], one of the things I did when building my team was to hire for a housing team, an employment team, childcare, and innovation policy. All of those teams were part of one team. Even though we were working on very different issues, we tried to see everything at a system level, because all those different programs genuinely interact with each other and really can only be understood from a 30,000-foot point of view. That's just deeply missing in US policymaking now.

[01:05:36] Dan: How do you stay up on so many diverse topics? Are you still reading philosophy papers regularly? Did this come from a period of your life, maybe college or something, where you just had super deep development? How do you keep it up?

[01:05:51] Sam: By just not doing my work basically.

[laughter]

[01:05:56] Sam: I got into economics, and I have an old blog post on this, because economics was as close as I could get to philosophy while having full employment.

[01:06:04] Dan: [laughs]

[01:06:05] Sam: If you think about econ: econometrics is epistemology, social choice theory is political philosophy, rational choice theory is sort of critical theory, and then you have macroeconomics as metaphysics. [laughs] You have all the different sub-disciplines of philosophy brought together. Econ, properly understood, isn't about money; it's about developing very capacious conceptual frameworks for understanding human behavior in society. It is the closest thing to philosophy that still garners you a job. If I wasn't able to pursue the philosophical navel-gazing and so forth, I would probably just do something else, or move to some low-cost place and do it anyway.

[01:06:57] Dan: I think Tyler Cowen said the same thing. He's like, "What I'm actually doing with economics is some funny kind of philosophy."

[01:07:05] Sam: Tyler was a big influence on me as a kid. As I mentioned, the first Joseph Heath book I read was The Rebel Sell. Then I quickly found Tyler's In Praise of Commercial Culture from the late '90s. It was a defense, again contra the counterculture view that commercialization is selling out, that it just commodifies things and isn't authentic enough, where he was like, "Oh, actually, bridging commercialization with the arts is a huge spur to creative works."

A friend of mine I was at dinner with had to leave early because he was going to the Kennedy Center for an opera called Grounded. It's about a plane that gets grounded or there's some plane disaster or something like that, and it's sponsored by General Electric.

[laughter]

[01:08:08] Sam: General Electric basically sponsored an opera on aerospace. [laughs]

[01:08:16] Dan: That's hysterical.

[01:08:18] Sam: That to me is awesome. That's not that different in kind from the Medici family sponsoring Michelangelo, or whatever.

[01:08:30] Dan: The funny thing about Tyler too is he's defending commercial culture, but he's probably one of the most cultured people there is. He could tell you obscure films from countries you've never heard of, but then he loves Hollywood too.

[01:08:42] Sam: Yes, that bridging low and high culture I think actually really represents the maturity of aesthetic thought where it's transcending the status game of what's most socially distinctive.

[01:08:57] Dan: If a genie granted you, Sam Hammond, as the only person in the world, 30 hours in a day while everyone else only gets 24, where do you spend the marginal time?

[01:09:07] Sam: I'm not very productive as it is.

[01:09:10] Dan: Okay. [laughs]

[01:09:11] Sam: My production function, as someone with very bad ADHD, is out of my control. One of the reasons I think a lot about philosophy is partly this problem of the weakness of will. Going back to the German idealists sitting in their armchairs, trying to observe themselves observing: ADHD as an executive function disorder is like that. Sometimes I want to do something, something simple like sending an email, and I can observe myself wanting to do it, and I can observe myself observing myself wanting to do it, but in some ways the wire connecting my motivation to actually getting up and doing it is disconnected, and it's very paralyzing. I think if I had a marginal six hours on everybody else, I would probably just procrastinate six hours longer.

[laughter]

[01:10:02] Dan: All right. That's a great place to wrap up. Sam, thank you so much for coming on the show.

[01:10:06] Sam: Thanks, Dan.
